Showing posts with label algorithms. Show all posts

Thursday, August 15, 2024

How to increase traffic 16,500%: clickbait vs. reality

[Illustration: fish on a computer monitor, caught on a hook]


by Ariella Brown

I just attended an event with the title "Information Gain Content: How to Increase Traffic 16,500% by Going Above & Beyond with Bernard Huang." Did the session live up to its clickbait title?


Not at all. 


This wasn't necessarily Bernard Huang's fault. The presentation was hosted by the Top of the Funnel group, which favors these types of large numbers: figures that sound just specific enough that people may believe they are real.



Earlier this month, it offered "Social Copywriting Secrets: Building an Audience of 114,985 with Eddie Shleyner." I attended that one, too, and there was nothing in it that justified that number as the guaranteed result of some tactic you could apply. Shleyner just emphasized sticking to good, authentic storytelling to keep your audience engaged.


There were no easy-to-apply tricks in this session. If anything, it was about the reason the old tricks no longer work. Huang explained that Google is currently applying its stated standard of Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) in the context of its AI Overviews.


My main takeaway from this session was not that it's easy to increase your traffic on Google but that Google has simply swapped one set of algorithmic rules for another, now including its own AI's read on content, and that this is not necessarily a good thing.

What's bad about this?


The bottom line is that Google is relying very heavily on consensus as well as pre-established authority. That means it is very easy for sites that have already built up big audiences and strong rankings on Google to leverage that standing to put out proclamations that will be accepted as true, especially when they are inevitably echoed by wannabe followers and by all those who pose as thought leaders by parroting what influencers already say.


In other words, truly original thoughts from those who are not just reinforcing groupthink will likely be buried. I did raise this question and wasn't wholly reassured by the answer, which was that, yes, Google expects all points of consensus on a particular topic to be represented. If they are not present, your content will be treated as an outcast. The only chance you have of being noticed is to play the contrarian game and refute the consensus views point by point. Failing to give the consensus that nod is SEO suicide.

And so the current state of search is one that aims to homogenize information according to parameters preset by those already granted expert status, while those not in line with the consensus are buried in the obscurity of high-numbered results pages.

This is truly the opposite of a democratic platform in which understanding of history and current events is allowed to rise or fall based solely on its own merit rather than on a pre-established narrative. George Patton would be appalled. He's the one who said: "If everyone is thinking alike, then somebody isn't thinking."

Google's algorithms are designed to reward those who follow in the paths preset by others rather than those who really think for themselves.


P.S. For the story behind the illustration above, see my LinkedIn post.


Related:

Aim higher than SEO for your marketing

The 6 step plan that fails

What Edison can teach us about SEO

Put SEO in the picture


You can also follow Ariella Brown.  

Wednesday, February 22, 2017

Shining light on the dark side of big data

Does the shift toward more data and algorithmic direction for our business decisions assure us that organizations and businesses are operating to everyone's advantage? There are a number of issues involved that some people feel need to be addressed going forward.
Numbers don't lie, or do they? Perhaps the fact that they are perceived to be absolutely objective is what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome.
That's the argument Cathy O'Neil makes in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While we tend to think of big data as a counterforce to biased, unjust decisions, O'Neil finds that in practice, algorithms can reinforce biases even while claiming unassailable objectivity.
As O'Neil puts it: “The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong.” The math destruction posed by algorithms is the result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, etc.

Now the organizations and businesses that make those decisions can point to the authority of the algorithm and so shut down any possible discussion that questions the decision. In that way, big data can be misused to increase inequality. As algorithms are not created in a vacuum but are born of minds operating in a human context that already has some set assumptions, they can actually extend the reach of human biases rather than counteract them.
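The mechanism O'Neil describes can be made concrete with a toy sketch. The scenario, names, and numbers below are entirely invented for illustration: a "neutral" model that learns only from past decisions will faithfully reproduce whatever bias those decisions contain, while presenting its output as objective.

```python
# Hypothetical illustration (not any real system): a model trained on
# historical decisions inherits the bias baked into that history.

# Invented loan history, already skewed against applicants from zone "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(records):
    """Learn an approval rate per zone from past decisions."""
    counts = {}
    for zone, approved in records:
        ok, total = counts.get(zone, (0, 0))
        counts[zone] = (ok + approved, total + 1)
    return {zone: ok / total for zone, (ok, total) in counts.items()}

model = train(history)  # {"A": 0.75, "B": 0.25}

def decide(zone):
    # The seemingly objective rule: approve if the learned rate clears 0.5.
    return model[zone] >= 0.5

print(decide("A"))  # True:  past favoritism becomes future approval
print(decide("B"))  # False: past disadvantage becomes future denial
```

Nothing in the code mentions any protected attribute, yet the outcome is systematically unequal; that is exactly the kind of opaque, uncontestable barrier the book describes.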

“Even algorithms have parents, and those parents are computer programmers, with their values and assumptions,” wrote Alberto Ibargüen (https://www.knightfoundation.org/articles/ethics-and-governance-of-artificial-intelligence-fund), president and CEO of the John S. and James L. Knight Foundation. “As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more.”

I spoke with the foundation’s VP of Technology Innovation, John Bracken, about its partnership with the MIT Media Lab and the Berkman Klein Center for Internet & Society, as well as other individuals and organizations, to create a $27 million fund for research in this area.
The idea is to open the way to “bridging” together “people across fields and nations” to pull together a range of experiences and perspectives on the “social impact” of the development of artificial intelligence. As AI is on the road “to impact every aspect of human life,” it is important to think about shaping policies for the “tools to be built” and how they are to be implemented.
Read more in 

Algorithms' Dark Side: Embedding Bias into Code

Tuesday, May 19, 2015

When efficiency, algorithms, and labor laws collide

Timeclock Wikipedia Commons
Flexibility is considered a virtue and an essential component of an agile organization, one that can respond to changing needs in real time. However, when that type of flexibility comes at the expense of employees, the company may be crossing not only ethical lines but legal ones.

On April 10, New York Attorney General Eric Schneiderman directed his office to send a letter (posted by the Wall Street Journal) to 13 major retailers. Gap Inc., Abercrombie & Fitch, J. Crew Group Inc., L. Brands, Burlington Coat Factory, TJX Companies, Urban Outfitters, Target Corp., Sears Holding Corp., Williams Sonoma Inc., Crocs, Ann Inc., and J.C. Penney Co. Inc. were all asked to account for questionable scheduling practices known as “on-call” shifts.


Read more in 

The Legal Limits for On-Call Shifts

Thursday, May 7, 2015

Big data alone is not enough for an agile enterprise

Ever get a promotional email or ad that has no relevance to you? We all have, and it’s usually due to the marketing algorithms used to analyze big data inputs responding to the wrong signal. For example, eBay started applying algorithms to the tags used to track customers in 2007 to measure the relevance of search results on its site. After a couple of years of success, the results became less accurate and seemed more random and arbitrary. The algorithms no longer worked because one of the tags had shifted. Events like that one resulted in customers seeing search results or receiving marketing emails that made no sense to them.
“The algorithm is not a human brain and doesn’t realize that the parameters have changed when tags change,” Oliver Ratzesberger observed. If a change is made to a variable, everything “downstream” from that variable must change, too, or the complex results can backfire.

The solution to this entire problem of achieving agility at scale is the Sentient Enterprise, a concept that Ratzesberger developed with Dr. Mohan Sawhney, a professor at Kellogg School of Management at Northwestern University. 
Read more here