Showing posts with label LLM. Show all posts

Monday, May 4, 2026

Fighting the Attack of the Clones

 

It's May 4th, the perfect time to bring up Star Wars.

But I'm not doing the standard post. Instead, I'm waging my own war on cloning, which is why I selected the poster for Star Wars Episode II: Attack of the Clones as my image.

If you haven't seen it, the clones of the title are duplicates of Jango Fett, who requests one clone for himself that grows at a normal pace rather than at the expedited one set for the clones made for the Republic to serve as clone troopers.

In the foreground you see Jango Fett (or as much as you can see of someone in Mandalorian armor and helmet) along with the other main characters. But the clones themselves, though named in the title, practically fade into the background.

Why? It's because clones are not interesting. We're interested in characters who have their own personality and story -- not interchangeable copies.

Have you guessed yet where I am going with this?

AI slop is a real-life attack of the clones


What we've come to see on this platform, as on many others, is the equivalent of an attack of the clones: AI slop that has no character and holds no real interest. Often it's used in clickbait or engagement posts like this one from Technology Advice:

But you see it also in standard ads like this one.



However, it also turns up in standard posts, as I now see hundreds -- if not thousands -- of people allowing LLMs to talk for them on LinkedIn, as in the examples below that all sound exactly alike.


It makes the platform sound completely repetitive and everyone who uses the formula utterly forgettable.
By adopting the quick and lazy approach of allowing generative AI to write your posts, you turn yourself and your brand into a clone, robbing yourself of the opportunity to establish a unique identity and voice.





If you hire a content marketer whose posts have the earmarks of AI -- such as this kind of construction: "Most businesses don't have a marketing problem. They have a sameness problem." (So ironic that this person delivered that message in the generative AI-speak that reinforces sameness!) -- then you're paying for cloned content rather than content that is specific to what you're about.

And why would you do that? It's like paying a chef to cook you a meal from scratch only to be served a microwaved TV dinner.

It's possible to cook up something good when you actually like the challenge of cooking with the ingredients available to you. If you don't, you shouldn't be a cook. Heating up TV dinners is not cooking, and pushing out AI slop is not creative content marketing.


It takes a bit of effort to differentiate yourself 


It is possible to do better with just a bit of effort from someone who cares about the craft of writing and marketing. I'll give you a quick example based on one of the many posts that used the formula in a promoted ad.


It says: "Your BD person isn't failing. You set them up to."

Notice that the second sentence doesn't logically follow the first at all. In fact, it contradicts the first because it acknowledges that the BD person is -- in fact -- failing, though it blames the "you" rather than "them" in this case.

This kind of illogical nonsense, said with a straight face by people who think they sound smart, is the result of being so used to seeing this pattern of posts that you don't even pause to think about whether it makes any sense.

Here's what they could have written instead, with a logical progression:
"Is your BD team coming up short?
It's not their fault.
You set them up for failure."

There is an affordable alternative to AI 

Perhaps one of the reasons people let the opportunity to truly connect with customers through their communications slip through their fingers is that they think they can't afford to pay for quality content. That is as ridiculous as saying you'll just eat food out of vending machines to save money on groceries. That is not sustainable and can cause real harm like vitamin deficiency, high blood pressure, and even diabetes. Compromising on content quality is the same thing. When you rely on LLM output, you can get cheap and fast, but not good enough to contribute to your overall health.

So here's my affordable plan to help you avoid turning into a clone that just fades into the background: a special package offered in May and June 2026 only! Get a consultation and a plan for social media posts that attract organic traffic through differentiated messaging and a distinctly human voice for just $1500. Contact me for details.

And for proof of my writing ability, check out the many publications assembled in my portfolio, where you can search by topic.


Sunday, August 25, 2024

Ouroboros, an apt symbol for AI model collapse

Engraving of a wyvern-type ouroboros by Lucas Jennis, in the 1625 alchemical tract De Lapide Philosophico

by Ariella Brown


AI has hit the ouroboros (sometimes written uroboros) stage. You've likely seen the symbol in the form of a snake in a circle, eating its own tail. The ancient symbol also sometimes showed dragons or a wyvern, so for my illustration I chose this engraving by Lucas Jennis, intended to represent mercury, from the 1625 alchemical tract "De Lapide Philosophico," instead of just going with something as prosaic as "model collapse."


To get a bit meta and bring generative AI into the picture (pun intended, I'm afraid), here's an ouroboros image made with generative AI.

Ouroboros image generated by Google Gemini



Model collapse is what the researchers who published their findings in Nature called the phenomenon of large language models (LLMs) doing the equivalent of eating their own tails by ingesting LLM output for new generations. They insist that the models should be limited to "data collected about genuine human interactions."

From the abstract:
"Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet."

Shumailov, I., Shumaylov, Z., Zhao, Y. et al. AI models collapse when trained on recursively generated data. Nature 631, 755–759 (2024).
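The dynamic the paper describes can be seen in a toy simulation of my own devising (a sketch for intuition, not the paper's actual experiment; it assumes NumPy is installed): fit a one-dimensional Gaussian "model" to data, then train each new generation only on samples drawn from the previous generation's model. Finite sampling error compounds across generations, and the fitted distribution's spread shrinks until the tails disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real data" distribution.
mu, sigma = 0.0, 1.0
n = 100  # training samples available per generation

sigmas = [sigma]
for generation in range(2000):
    # Each new model trains only on samples from the previous model,
    # then refits its parameters to that synthetic data.
    samples = rng.normal(mu, sigma, size=n)
    mu, sigma = samples.mean(), samples.std()
    sigmas.append(sigma)

# The fitted spread collapses: the tails of the original
# distribution are no longer represented at all.
print(f"std at generation 0:    {sigmas[0]:.4f}")
print(f"std at generation 2000: {sigmas[-1]:.4f}")
```

Each refit keeps only the spread it happens to observe in a finite sample, so rare tail values gradually stop being represented; the Nature authors show the same basic dynamic in LLMs, VAEs, and Gaussian mixture models.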


Friday, June 28, 2024

An Apology to Generative AI

ChatGPT spelled out in Scrabble tiles

By Ariella Brown


I'm not a generative AI fangirl. If anything, I'd consider myself more of a skeptic, because people tend not to use it merely as a tool to improve their writing but as a replacement for the work of research, composition, and revision that is essential to good writing.

It is generally embraced by people who consider online research too much work and who believe that anything coming out of a machine that charges them no more than $20 a month for writing is too good a deal to pass up.

For those of us who actually read, the output of ChatGPT and similar LLMs is not exactly something to write home about. Unless you know how to prompt it and train it to write in a truly readable style, it will default to the worst of wordy, opaque corporate style text. 

But this isn't the fault of the technology. It's the fault of the mediocre content that dominates the internet and trained it. Below is one example that I pulled off the "About" section of a real LinkedIn profile (first name Kerri maintained in the screenshot that proves this is real and not something I made up):

As a strategic thinker, problem-solver, and mediator, I thrive in managing multiple, sometimes differing inputs to achieve optimal messaging and positioning. My proactive nature drives me to partner with leaders across marketing teams and internal business units, aligning efforts, connecting dots, and adding context to enable flawless execution of communication strategies and tactics.


In fast-paced, fluid environments, I excel in effectively prioritizing tasks and ensuring they are completed efficiently. I have a proven track record of setting and meeting strict deadlines and budgets, leveraging my ability to navigate dynamic landscapes seamlessly.

Driven by natural curiosity, I am constantly seeking to understand and implement the latest trends, technologies, and tactics essential for driving B2B sales opportunities. My keen interest in exploring new channels for messaging and content distribution fuels my passion for innovation and continuous improvement to not just meet but exceed expectations.

Let’s connect to explore how we can drive success together.

You know what sounds exactly like this? Cover letters you ask ChatGPT to compose for you. 

I've tried those out a few times and have never been happy with the results because they always sound like the text above. Telling it to sound less stiff doesn't make it sound any less canned, and forget about getting it to copy my own writing style.

It's possible that Kerri used ChatGPT to create her "About" section. Given that she's been in the marketing biz for some time, though, I'd think she had something filled out for years before ChatGPT was available, and it likely sounded very much like this even if she did let some LLM or a tool like Grammarly tweak it for her.

People like Kerri, who ignore all writing advice from masters like Orwell and White (watch for an upcoming blog post about him), made this the public face of corporate communication. They are the ones to blame for the bombastic and soulless style that LLMs replicate at scale.


That's the reason for this apology to ChatGPT for mocking its output. You're not the one at fault. You had no way of knowing better. Humans do, and they should have provided you with better models for writing.

Note on the title: I thought of giving this post the title "Apology," intended in the classical sense of a defense or justification of something others take as wrong, with a hint of an apology to AI. Knowing that wouldn't be clear to some readers, I opted to make this just a straight apology instead.

Related:

A new generative AI comparison