Showing posts with label algorithm.

Thursday, May 18, 2023

Did OpenAI Open Pandora's Box?


Pandora opening the box that releases harm into the world. Image at: https://nypl.getarchive.net/media/pandora-opens-the-box-ca3915



OpenAI not only democratized access to AI but popularized it by inviting people to use it for free. Many of us have opened that box, but some of us have been dismayed by results that read as if they were objective truth, with no accountability for sources of information or explanation of how ChatGPT arrives at its conclusions.

Relying on AI as an objective source of information ignores the fact that it reflects the bias embedded by its human programmers and can reinforce discriminatory effects. The consequences of that can range from biased beauty standards to reinforcing illegal discriminatory practices.  

Now that it costs just $20 a month, or is even available for free at off-peak times, everyone can -- and many do -- use ChatGPT instead of doing research across a variety of sources with documentation that offers at least some of the accountability essential for explainable AI.

Read more here:  https://www.linkedin.com/pulse/did-open-ai-pandoras-box-write-way-pro/?trackingId=E3yor3MrHAi2B2HwIdqlVQ%3D%3D


 Related:


An A/B Test of Generative AI


AI's Got Some Explaining to Do

AI's early attempts at screenwriting

The Pros and Cons of Generative AI

11 Quotes About AI: http://uncommoncontent.blogspot.com/2021/01/the-original-selection-of-11-ai-quotes.html

AI Informs Personalization for Starbucks

AI Accessibility: The Next Spreadsheet Revolution for Modern Business? 

 



 

Wednesday, March 9, 2022

When automated messages make your brand look stupid


Marketers love using emails and texts to stay in contact with customers. It's so cheap and easy to get messages out that some abuse the channels and send daily messages. Even worse, some send multiple messages a day, which just crowd a customer's inbox and make them start tuning those messages out.

One of the biggest offenders on this front is the Gap family of brands. As the umbrella organization comprises not just Gap but also Banana Republic, the "Factory" versions of both those brands, Old Navy, and Athleta, it sends me a minimum of three and sometimes even five emails each and every day. So, yes, I tune most of them out now.

But the one pictured above caught my eye. Can you guess why?

Are you motivated to make a purchase because a brand lets you know that you have free money to spend that amounts to just $0 in rewards? In other words, your purchasing power is unchanged from what you thought it was before.

It's all too obvious that Old Navy is attempting to personalize the offer, not just by using my name but by tempting me to make a purchase that will be discounted by my rewards. Because the algorithm is not programmed to suppress that message for customers with no reward balance, the result demonstrates that not all personalization serves your marketing message.
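As a thought experiment, here is a minimal sketch of the guard that seems to be missing. The field names and data are hypothetical, not Old Navy's actual system; the point is simply that a rewards-themed campaign should check the balance before personalizing around it.

```python
# Hypothetical sketch: suppress the "spend your rewards" campaign for any
# customer whose reward balance is zero. Field names are invented for
# illustration, not taken from any real marketing platform.

def should_send_rewards_email(customer: dict) -> bool:
    """Only send a rewards-themed message if there is something to spend."""
    return customer.get("rewards_balance", 0) > 0

customers = [
    {"name": "Ariella", "rewards_balance": 0},
    {"name": "Jordan", "rewards_balance": 15},
]

for c in customers:
    if should_send_rewards_email(c):
        print(f"Send to {c['name']}: you have ${c['rewards_balance']} in rewards to spend!")
    else:
        print(f"Skip {c['name']}: no reward balance to promote.")
```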

A bit later I got this email that made a similar mistake in a PR pitch. Notice how the personalization is worked in without regard for understanding how we address people in real life:

"Setting up your business remotely during Great Resignation

Inbox

KJ Helms via prnewswire.com 


to me

Hi Brown, Ariella​ Team,

 

I have a story I think Brown, Ariella​ would want to cover about a firm that can help businesses 

affected by “The Great Resignation,” which is continuing with 4.3 million resignations in 

December 2021 alone (1).




One other nitpick I have is that it refers to the Great Resignation continuing by citing numbers from December 2021. As we are now in March, that is a non sequitur. Instead of presenting the sentence in this order, the text should have started with the December stat and then said that the trend continues in 2022, possibly with a sentence set up this way: In December 2021 alone, 4.3 million people resigned from their jobs, and "The Great Resignation" is continuing in 2022, raising concerns for businesses that want to retain their employees.
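For illustration only, here is a tiny sketch of the normalization step the pitch's mail merge apparently skipped: turning a "Last, First" contact field into a natural salutation. The function and field format are assumptions, not anything the sender's platform actually exposes.

```python
# Hypothetical sketch: the greeting "Hi Brown, Ariella Team" suggests a
# "Last, First" database field was merged straight into the salutation.
# A small normalization step avoids that.

def salutation(contact_field: str) -> str:
    """Turn 'Brown, Ariella' into 'Hi Ariella,'; otherwise use the field as-is."""
    if "," in contact_field:
        last, first = (part.strip() for part in contact_field.split(",", 1))
        return f"Hi {first},"
    return f"Hi {contact_field},"

print(salutation("Brown, Ariella"))  # Hi Ariella,
```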



 Related:  


Major Marketing Missteps from Adidas, M&M's and Coke

Today's Targeted Marketing Is Powered by Data and Automation

Wednesday, December 13, 2017

Can Facebook Prevent Suicide? Ethical Questions Arising from AI

In today’s hyperconnected world, we are generating and collecting so much data that it is beyond human capability to sift through it all. Indeed, one application of artificial intelligence is identifying patterns and deviations in posts that signal intent. Facebook is using AI in this way to extract value from its own Big Data trove. While that may be applied to a good purpose, it also raises ethical concerns.
Where might one get insight into this issue? In my own search, I found an organization called PERVADE (Pervasive Data Ethics for Computational Research). With the cooperation of six universities and the funding it received this September, it is working to frame the questions and move toward the answers.
I reached out to the organization for some expert views on the ethical questions related to Facebook’s announcement that it was incorporating AI in its expanded suicide-signal detection effort. That led to a call with one of the group’s members, Matthew Bietz.
Bietz told me the people involved in PERVADE are researching the ramifications of pervasive data, which encompasses continuous data collection — not just from what we post to social media, but also from the “digital traces that we leave behind anytime we’re online,” such as when we Google or email. New connections from the Internet of Things (IoT) and wearables further contribute to the growing body of “data about spaces we’re in,” he said. As this phenomenon is “relatively new,” it opens up new questions to explore with respect to “data ethics.”

Read more in 

The Ethics of AI for Suicide Prevention

Monday, December 11, 2017

AI Raises Awareness of Fake News

The proliferation of fake news couldn't happen without technology. The internet allows anyone, anywhere to spread information -- whether or not it is true. But technology could also help serve as a tool that makes people more aware of which stories are not trustworthy.
(Image: Mega Pixel/Shutterstock)
True story: one of my social media connections asked for recommendations for reliable news sources and got a few outlets named. Some of us -- myself included -- said that you simply cannot rely wholly on any single source; you have to check multiple sources to be sure you get the full picture of the facts in context and find where the truth lies.
But not everyone is sophisticated enough to be aware that reports they see -- even from outlets with solid reputations -- need to be taken with a grain of salt. That's why Valentinos Tzekas founded FightHoax. Its AI-powered algorithm empowers anyone to ascertain whether an article is fake in just seconds, without Googling the story.

Read more in 

How AI Can Help You Decide What to Trust in Online News

Wednesday, September 13, 2017

Got sarcasm?

🧐🚂🤖🤓🙃
"I'm being sarcastic." We've all had at least one exchange in which we either had to explain or have someone else explain that what was said was not intended to be taken straight. Generally, you need to know something about both the context and the speaker to grasp when to take a statement at face value and when to interpret it as sarcasm.
That's why it's particularly challenging to get a handle on intent when attempting sentiment analysis on social media. For artificial intelligence to truly understand what humans mean, it needs emotional intelligence as well. Iyad Rahwan, an associate professor at the MIT Media Lab, and one of his students, Bjarke Felbo, who developed the algorithm, worked on just that.
The result is what they call DeepMoji. Described as "artificial emotional intelligence," DeepMoji was trained on millions of emojis "to understand emotions and sarcasm." Rahwan explained to MIT's Technology Review that, in the context of online communication, emojis take on the function of body language or tone, offering nonverbal cues to meaning.
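To make the idea concrete, here is a minimal sketch (not the DeepMoji code or model) of emoji-based distant supervision: the emoji a writer chooses serves as a noisy label for the tone of the surrounding text, so millions of posts can be labeled without human annotators. The emoji-to-label mapping below is invented for illustration.

```python
# Minimal sketch of distant supervision with emojis: treat the emoji in a post
# as a noisy emotion label for the remaining text. Mapping is illustrative only.
EMOJI_LABELS = {
    "😂": "joy",
    "😢": "sadness",
    "😡": "anger",
    "🙄": "sarcasm/irony",
}

def label_by_emoji(post: str):
    """Return (text_without_emoji, label) pairs for each known emoji in the post."""
    pairs = []
    for emoji, label in EMOJI_LABELS.items():
        if emoji in post:
            pairs.append((post.replace(emoji, "").strip(), label))
    return pairs

# The emoji, not a human annotator, supplies the training label.
print(label_by_emoji("Sure, another Monday meeting. Can't wait 🙄"))
# [("Sure, another Monday meeting. Can't wait", 'sarcasm/irony')]
```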

Read more in Emojis Train AI to Recognize Sarcasm

Thursday, July 27, 2017

Google Feed = Massive Marketing Opportunities

credit: https://upload.wikimedia.org/wikipedia/commons/8/83/Google_wordmark.gif
Google has always dominated search, but it has not done so well with social, as evidenced by the perceived failure of Google+. So, capitalizing on its strengths, it set up a feed that serves users items of interest based on their own signals, rather than on what their friends shared or their Twitter connections posted online.

Back in December, Google introduced an app update that promised “load your life's interests and updates” with just “a single tap” that can bring up “useful cards.”  Seven months later, Google proclaimed “Feed Your Need to Know,” announcing that — thanks to machine learning advances — the algorithms that direct the feed can “better anticipate” the type of content that an individual would want to see.

Monday, June 19, 2017

Wait, what?

This is not a post on the popular book that bears that title. (I did write about that here: uncommoncontent.blogspot.com.) This is my reaction to the number one billion, which sounds impressive but is really completely meaningless without context.*

When I shared a link today on LI, it offered me three other links to read. Among them was a FastCompany article, "Six Ways YouTube Is Primed For The Future (And Four Areas That Need Work)." Now read what it says for the fifth item and see if you have the same reaction I do:
5. YouTube’s rebuilt algorithms have led viewers to watch 1 billion hours of video a day. YouTube is optimized for what it calls “watch time,” which encompasses what users view, how long they tune in, the length of their overall YouTube session, and so forth. Together, these signals help YouTube algorithms decide which videos a user is most likely to watch shortly after they’re posted and which will lead to the longest overall viewing period.
Do you get what's missing here? How many viewers are there? How many hours did they watch before the algorithms were rebuilt?

Without those two pieces of information, we really have no way of knowing how much of an advance one billion hours of video a day represents. Sure, it sounds like a lot, but we don't know whether it represents two billion people watching an average of half an hour a day, one billion watching an average of an hour, or half a billion watching an average of two hours.
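A quick back-of-the-envelope check shows why the headline figure alone says so little: very different audiences multiply out to the same one billion viewer-hours per day.

```python
# Each scenario yields the same 1 billion viewer-hours per day, which is why
# the total alone tells us nothing about audience size or viewing habits.
scenarios = [
    (2_000_000_000, 0.5),  # 2 billion viewers, half an hour each
    (1_000_000_000, 1.0),  # 1 billion viewers, one hour each
    (500_000_000, 2.0),    # half a billion viewers, two hours each
]

for viewers, avg_hours in scenarios:
    total = viewers * avg_hours
    print(f"{viewers:,} viewers x {avg_hours} h = {total:,.0f} viewer-hours/day")
```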

We also don't know whether the actual goal was to bring in more viewers or to keep the ones already watching on the platform for longer. That's a pretty important piece of context as well, if one is to judge whether the algorithms are accomplishing what the company intended. The article does refer to 800 million YouTube consumers of music, but it doesn't clarify whether that represents total viewers or whether the number is an increase over the figure before the algorithms were adjusted.

The bottom line is this: Don't be dazzled by numbers, no matter how large, that are presented without the relevant context.


*Related post http://writewaypro.blogspot.com/2016/10/data-visualization-you-have-to-c-it-to.html
http://uncommoncontent.blogspot.com/2017/09/missingness-at-museum.html

Monday, July 25, 2016

A supercomputer for more efficient oil extraction

The low oil prices we've seen lately present a challenge for the energy industry. To maximize output, global energy company Total upgraded its supercomputer.

Total draws on the power of supercomputers for advanced 4D modeling to locate oil reserves and simulate their behavior under the surface. 4D seismic consists of repeating 3D seismic surveys over time across the same area. Total's geophysicists and reservoir engineers develop models based on complex physics by working with advanced algorithms that require a great deal of computational power.
See more at: http://www.baselinemag.com/infrastructure/supercomputer-delivers-for-energy-sector.html

Thursday, May 12, 2016

Data for better job matches

How do you know that a new hire will work out? Even a perfect resume doesn’t guarantee it, because many other factors determine whether an individual will be happy and productive at a particular organization. That’s the premise of a job-matching startup called Ideal.com. It takes in a lot more data from both employee and employer to predict compatibility for sales positions.

Credit: iStock
If that sounds rather like online dating, it should, because that's the model that Somen Mondal, Ideal.com’s CEO, invokes. I spoke to him on the phone about how his approach works. He also revealed what made him realize that there is a need for a better way to match candidates with companies.
Read more in 

Data-Driven Hiring Takes Command

Sunday, April 10, 2016

Trading places the high tech way

A high tech approach to barter promises to make getaways more affordable. That’s the concept behind Nightswapping.com. It allows you to offer your home in exchange for staying at someone else's without limiting you to staying in the town of the specific person who wants to come to yours.
The business was founded in Lyon, France in 2012, though it also has offices in New York, London, and Sydney. The listings on the service extend much further, with accommodations in 160 countries.

In a way, the service mirrors the monetary solution to the problem of barter. What if you don’t want the eggs your neighbor offered in exchange for your wheat? Likewise, perhaps you don’t want to go to London on the same dates the person in London wishes to come to your hometown. Through Nightswapping, all parties get a consistent medium of exchange, measure of value, and store of value through points. Points are earned by giving nights in your home, and redeemed by staying at another’s place. The service brings the two together and provides some information in the form of reviews from visitors and its own scale of ranking.


You go here...
Credit: Pixabay
The price for each night’s stay is determined by Nightswapping’s scale that ranges from 1 to 7. The number on that scale is based on the Nightswapping algorithm, which takes into account the popularity of the area, the square footage, the number of bedrooms, the comfort level, and the type of accommodation -- there’s more value in having a whole apartment than a bedroom within a house. A shorter stay at a place with a higher standard can cost the same number of points as a longer stay at a place closer to the bottom of the scale.
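As an illustration of how such a rating might be computed, here is a minimal sketch with made-up weights. It is not Nightswapping's actual algorithm; it just combines the factors the service says it considers into a 1-to-7 score.

```python
# Illustrative only: made-up weights combining the factors Nightswapping says
# it considers (area popularity, size, bedrooms, comfort, whole unit vs. room)
# into a 1-7 score. Not the company's actual algorithm.

def listing_score(popularity, sq_meters, bedrooms, comfort, whole_unit):
    """popularity and comfort are on a 0.0-1.0 scale; returns an integer from 1 to 7."""
    raw = (
        2.0 * popularity                # demand for the area
        + min(sq_meters / 100, 1.0)     # size, capped so mansions don't dominate
        + 0.5 * bedrooms
        + 2.0 * comfort
        + (1.0 if whole_unit else 0.0)  # whole apartment worth more than a room
    )
    return max(1, min(7, round(raw)))

print(listing_score(popularity=0.9, sq_meters=80, bedrooms=2,
                    comfort=0.8, whole_unit=True))  # 6
```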

Read more in Swipe to Swap and Go

 

Friday, February 19, 2016

Shopping with Watson

Over the past decade and a half, successful web retailers have been able to tailor their marketing to the individual consumer, providing a level of personalization all shoppers -- both offline and on -- have come to expect as standard across the industry.
Shoppers have come to expect in physical stores the same tailored marketing that algorithms deliver to them when they shop online. However, that kind of in-store personalization is only possible with sales staff who know the customer and the merchandise very well. Even in the e-commerce space, customers are often frustrated by an overwhelming number of irrelevant search results that steer them away from their intended purchase.

Tuesday, December 1, 2015

Data Mining for Legislative Influence

If you want to learn about the process of getting a proposed bill passed, you can read the official explanation on a state senate site. It’s remarkably similar to the steps involved for federal legislation, according to the explanation offered to the protagonist of Mr. Smith Goes to Washington.

 What the explanations don’t reveal, however, are the entities behind the proposed legislation.
The actual authors of proposed legislation don’t sign their names, but they do leave signatures of a sort: the signals of individual style that can be found throughout their written work. All it takes is reading through thousands of proposed bills to find the textual clues that link bills to the same source. The only drawback is the time it would take for humans to read through it all. But this is one problem that technology can solve.
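The article describes the real project; purely as a sketch of the general technique, the snippet below compares bills by textual similarity so near-duplicates (likely drafted from the same model legislation) can be flagged for a human to review. The bill texts and threshold are placeholders.

```python
# Sketch of the general technique, not the actual tool: flag pairs of bills
# whose wording overlaps heavily, a hint that they share a common source.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bills = {
    "state_A_hb101": "An act relating to consumer data privacy and disclosure requirements ...",
    "state_B_sb220": "An act relating to consumer data privacy and disclosure requirements ...",
    "state_C_hb007": "An act concerning highway funding allocations ...",
}

names = list(bills)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(bills.values())
similarities = cosine_similarity(tfidf)

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if similarities[i, j] > 0.8:  # placeholder threshold
            print(f"{names[i]} and {names[j]} look like they share a source "
                  f"(similarity {similarities[i, j]:.2f})")
```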

Read more in

Data for Good: Tracking Legislative Influence

Tuesday, August 18, 2015

What would Spock do?

from https://lurentis.com/blog/driverless-cars-pandoras-box-now-wheels/
Is there an ethical algorithm for driverless cars?

Say you’re driving at 30 miles an hour when a child suddenly chases a ball right into the path of your car. You would brake if you could stop in time. If you couldn’t, you’d swerve to avoid hitting the child. But what if swerving forces you either to hit another car with passengers in it or a truck that would cause harm to those in your car? Does self-preservation override all other considerations? Would we be driven by the emotional pull of saving a child over all else? Or would we be paralyzed into doing nothing because we can’t bring ourselves to take part in any action that causes harm?
These are the types of questions that bring ethics specialists and engineers together in addressing the challenge of directing driverless cars. 

Does Spock offer a solution to the problem? He may, if people would accept Vulcan logic. Learn more in  

Driverless Cars Present Ethical Challenges


Wednesday, July 8, 2015

Got rhythm? This algorithm does.

from https://commons.wikimedia.org/wiki/File:Rap-logo-persian-wiki.png
Most of us have heard of Deep Blue, the computer that harnessed artificial intelligence to beat a chess champion back in 1997. Now there’s DeepBeat, a machine learning algorithm that raps.