
Showing posts with label generative AI. Show all posts

Thursday, December 11, 2025

"Learnings" explained by ChatGPT, Claude, and Gemini

I'm not at all a fan of the modern use of the word "learnings" in place of what could be called lessons or takeaways. I wondered what led to its showing up all too often in business communication, so I decided to see what I could glean about it from generative AI.

ChatGPT

I started with ChatGPT and got the following response: 

📜 1. Historical origin of “learnings”

  • Old English already had leornung (“learning, knowledge obtained by study”).

  • “Learnings” appears intermittently in Middle English and Early Modern English, but mostly in the sense of knowledge or scholarship, not “takeaways.”

So the plural form is historically attested, but the modern meaning is new.


📈 2. When did “learnings” (meaning “takeaways”) take off?

Linguists and corpus researchers note that:

  • The usage begins to appear in the 1970s–80s in fields like education, psychology, organizational development, and management consulting.

  • It becomes much more widespread after the 1990s, especially in:

    • Corporate training

    • Agile/software development

    • Post-mortem analyses (“What were the key learnings?”)

By the early 2000s, it was common in business English; by the 2010s, it had firmly entered corporate jargon.


💬 3. Why was this term coined (or revived)?

Several reasons:

1. Jargon need in organizational development

Fields like management consulting, OD, and “lessons learned” methodology wanted a compact noun for:

“knowledge gained from experience, not just from instruction.”

“Lesson” felt too teacher-student oriented; “takeaway” too informal.

2. Influence of corporate and academic training culture

Training programs often talk about:

  • learnings,

  • key learnings,

  • actionable learnings.

The plural makes it feel like a measurable, list-friendly deliverable.

3. Analogy with words like “findings”

Consulting and research language uses many “-ings” nouns:

  • findings

  • earnings

  • savings

  • outcomes (a parallel)

“Learnings” fit the pattern of a concrete output after an event or analysis.

4. Global business English spread

Because English is used by many non-native speakers in international companies, simplified or regularized forms often spread.
“Learnings” works grammatically for learners who think:

verb → noun with “-ing” → plural is okay.


🚫 4. Why do some people dislike it?

  • For many native speakers, “learnings” sounds ungrammatical or like jargon.

  • They prefer:

    • lessons

    • takeaways

    • insights

Style guides (The Economist, AP, etc.) often recommend avoiding “learnings” unless quoting business jargon.

Claude's take

I then put the same question to Claude. Its response is written in paragraph form rather than bullet format, with two citations, albeit from the same source. Also note that it doesn't throw in emojis the way the other two do:

Based on my search, "learnings" has an interesting history. While the word itself dates back to the 14th century and was commonly used through the 18th century (Shakespeare even used it in Cymbeline), it surged in popularity during the 1950s in business jargon and educational contexts CSMonitor.com.

The term wasn't really "coined" in the modern sense—it was actually revived. From the 1400s through the 1700s, "learnings" meant "lessons" or "instructions" and was perfectly acceptable. However, it fell out of common use in the 19th and 20th centuries.

When it resurged in the 1950s, it took on a slightly different meaning in corporate and academic settings, essentially as a synonym for "findings," "conclusions," "results," or "takeaways." This revival happened as business jargon and corporate speak were developing their own vocabulary.

Today, learnings is widely viewed as pretentious and useless business jargon CSMonitor.com, appearing on countless lists of annoying corporate terminology. Despite this criticism, it has become firmly established in business, academic, and scientific contexts. The irony is that people generally have no problem with "teachings" (what is taught), but "learnings" (what is learned) somehow grates on many ears.

So it wasn't so much coined as it was resurrected from older English and repurposed for modern business speak—which is perhaps why it feels both familiar and oddly wrong to many people.

Google Gemini

To round out the experiment, I also posed the question to Google Gemini, which is closer to ChatGPT in its style of presentation. However, it fails to grasp how much some of us hate the term. This is its response:

The word "learnings," particularly as a plural noun replacing terms like "lessons" or "takeaways," saw a significant increase in use as a corporate and educational buzzword starting around the year 2000.

However, it's important to note that the word itself is not new.

🗓️ Timeline of Usage

  • Middle/Early Modern English (14th - 17th Century): The noun "learning" meaning "a thing learned" (a countable sense) and its plural, "learnings," were in occasional use. Shakespeare, for example, used the term.

  • 1950s: The term saw an increase in use within academic and educational circles (often referred to as "edu-speak").

  • Circa 2000: It became a widespread buzzword in business-speak (corporate jargon), commonly appearing in phrases like "key learnings" or "apply these learnings." This is when it really started replacing "lessons learned" or "takeaways" in project management and professional contexts.


🤔 Why Was the Term Coined/Revived?

While "learnings" is often considered unnecessary jargon by language purists, its resurgence and adoption in business contexts appear to stem from a perceived need for a different nuance than existing words:

  • To Distinguish from "Lessons": The word "lessons" can carry connotations of being taught (as in a classroom lesson) or of something learned from a failure (as in "we learned our lesson"). "Learnings" is sometimes used to refer to insights or knowledge gained through experience or a process, often in a less painful or formal way.

  • To Merge Concepts: In corporate and project environments, "learnings" acts as a shorthand to combine the idea of "things we've learned as a team" and "action items" or "next steps." It is intended to imply a list of actionable insights or points of wisdom distilled from an experience.

  • Corporate Trend: It is part of a broader trend of turning verbs into nouns in corporate language ("What's the ask?" or "What's the solve?"). The plural "learnings" parallels other accepted plural nouns derived from verbal actions, such as findings and teachings.

Essentially, its modern rise is largely attributed to it becoming a recognizable piece of corporate jargon that differentiates the knowledge gained from the traditional, sometimes rigid, sense of a "lesson."

-----------------------------------------------------------------------------------------------------------

The information from all three is consistent, so we don't have a problem of AI hallucinations here. However, Gemini seems to ignore the fact that many people hate this jargon, considering it pretentious, as Claude points out. It also omits that the term is not recommended for use in publications, per ChatGPT's citation. Perhaps that's a reflection of Google's own corporate culture and style of communication.

Related: https://writewaypro.blogspot.com/2024/06/a-new-generative-ai-comparison.html


Thursday, November 27, 2025

How Thankful Should We Be for Generative AI?

I thought I'd generate a picture of Jane Austen in the context of an American Thanksgiving celebration for a holiday greeting post on the social media accounts I have associated with my Totally Jane Austen brand. I opted to use Google Gemini.

Here's how it went: 

First prompt: Can you generate an image of Jane Austen celebrating an American Thanksgiving

Here's what it delivered with this message: 

"Here is an image of Jane Austen celebrating American Thanksgiving."



Second prompt included the portrait of Jane Austen that I referenced: Let's have fewer people, no signs, and a woman who is dressed like Jane Austen is in her portrait 



Second iteration came back with this: "Here is a Thanksgiving celebration with fewer people, no signs, and a woman dressed like Jane Austen in her portrait."




I attached the Jane Austen portrait again and wrote: "Nope you didn't do it. That's almost the same picture. Try doing it with Jane Austen herself looking like this."

Gemini then delivered this with the note, "My apologies, I misunderstood the previous request. Here is an image of Jane Austen herself, looking like in her portrait, celebrating Thanksgiving with fewer people and no signs." 


My response was to again attach the portrait with this prompt:  "That's the same picture again. Delete that and just put this picture of Jane Austen in a context of a table with turkey, etc."


Gemini then offered this monstrosity with this phrase "You are absolutely right, I apologize for the repeated image. I will generate a new image for you, placing Jane Austen from the provided portrait into a Thanksgiving setting. "


I then wrote:  "She's out of proportion to the people. Just take them out and get the proportions right."


Gemini responded with "My apologies for the scale issue. Let me correct that for you. Here is Jane Austen, in proper proportion, at a Thanksgiving table by herself." and delivered this: 


I wasn't thrilled with the results for several reasons but was tired of the back-and-forth, so I said, "It'll have to do."

Gemini's response: "I understand. I apologize that the image generations were not meeting your expectations and appreciate your patience with the process.

Is there anything else I can help you generate or create today?"


Related:

Aim higher than SEO for your marketing content



Sunday, August 25, 2024

Ouroboros, an apt symbol for AI model collapse

Engraving of a wyvern-type ouroboros by Lucas Jennis, in the 1625 alchemical tract De Lapide Philosophico

by Ariella Brown


AI has hit the ouroboros (sometimes written uroboros) stage. You've likely seen the symbol in the form of a snake in a circle, eating its own tail. The ancient symbol also sometimes featured dragons or a wyvern, so I chose this engraving by Lucas Jennis, intended to represent mercury, from the 1625 alchemical tract "De Lapide Philosophico" for my illustration, instead of just going with something as prosaic as "model collapse."


To get a bit meta and bring generative AI into the picture (pun intended, I'm afraid), here's an ouroboros image made with generative AI. I asked Google Gemini to generate it.

Ouroboros image generated by Google Gemini



Model collapse is what the researchers who published their take on this in Nature called the phenomenon of large language models (LLMs) doing the equivalent of eating their own tails by ingesting LLM output to generate new output. They insist that training should be limited to "data collected about genuine human interactions."

From the abstract:
"Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet."

Shumailov, I., Shumaylov, Z., Zhao, Y. et al. AI models collapse when trained on recursively generated data. Nature 631, 755–759 (2024).
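The mechanism can be illustrated with a toy sketch of my own (not the paper's code): repeatedly fit a simple Gaussian model to samples drawn from the previous generation's model, with no fresh human data, and the estimated spread of the distribution shrinks generation over generation; the tails of the original distribution disappear, just as the abstract describes.

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a Gaussian to the data (MLE), then draw n fresh samples from that fit."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # maximum-likelihood (biased) spread estimate
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 20
data = [random.gauss(0, 1) for _ in range(n)]  # generation 0: "genuine human" data

widths = []
for generation in range(200):  # each generation trains only on the previous one's output
    data = fit_and_resample(data, n)
    widths.append(statistics.pstdev(data))

print(f"spread after generation 1:   {widths[0]:.4f}")
print(f"spread after generation 200: {widths[-1]:.6f}")
```

With small samples, each refit slightly underestimates the spread, and the errors compound: after a couple of hundred generations the "model" has collapsed to a near-constant, which is the same dynamic the paper demonstrates at scale for LLMs, VAEs, and GMMs.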

Let me know in the comments which illustration you like more. 

Friday, June 28, 2024

An Apology to Generative AI

ChatGPT spelled out in Scrabble tiles

By Ariella Brown


I'm not a generative AI fangirl. If anything, I'd consider myself more of a skeptic, because people tend to use it not just as a tool to improve their writing but as a replacement for the work of research, composition, and revision that is essential to good writing.

It is generally embraced by people who consider online research to be too much work and who believe that anything that comes out of a machine charging them no more than $20 a month for writing is too good a deal to pass up.

For those of us who actually read, the output of ChatGPT and similar LLMs is not exactly something to write home about. Unless you know how to prompt it and train it to write in a truly readable style, it will default to the worst of wordy, opaque corporate style text. 

But this isn't the fault of the technology. It's the fault of the mediocre content that dominates the internet and trained it. Below is one example I pulled from the "About" section of a real LinkedIn profile (first name Kerri maintained in the screenshot that proves this is real and not something I made up):

LinkedIn screenshot

As a strategic thinker, problem-solver, and mediator, I thrive in managing multiple, sometimes differing inputs to achieve optimal messaging and positioning. My proactive nature drives me to partner with leaders across marketing teams and internal business units, aligning efforts, connecting dots, and adding context to enable flawless execution of communication strategies and tactics.


In fast-paced, fluid environments, I excel in effectively prioritizing tasks and ensuring they are completed efficiently. I have a proven track record of setting and meeting strict deadlines and budgets, leveraging my ability to navigate dynamic landscapes seamlessly.

Driven by natural curiosity, I am constantly seeking to understand and implement the latest trends, technologies, and tactics essential for driving B2B sales opportunities. My keen interest in exploring new channels for messaging and content distribution fuels my passion for innovation and continuous improvement to not just meet but exceed expectations.

Let’s connect to explore how we can drive success together.

You know what sounds exactly like this? Cover letters you ask ChatGPT to compose for you. 

I've tried those out a few times and have never been happy with the results, because they always sound like the text above. Telling it to sound less stiff doesn't make it sound any less canned, and forget about getting it to copy my own writing style.

It's possible that Kerri used ChatGPT to create her "About" section. Given that she's been in the marketing biz for some time, though, she must have had something filled out for years before ChatGPT was available, and it likely sounded very much like this even if she did let an LLM or something like Grammarly tweak it for her.

People like Kerri, who ignore all the writing advice of masters like Orwell, White (watch for an upcoming blog post about him), and others, made this the public face of corporate communication. They are the ones to blame for the bombastic, soulless style that LLMs replicate at scale.


That's the reason for this apology to ChatGPT for mocking its output. You're not the one at fault. You had no way of knowing better. Humans do, and they should have provided you with better models of writing.

Note on the title: I thought of giving this post the title "Apology," intended in the classical sense of a defense or justification of something others take as wrong, with a hint of an apology to AI. Knowing that wouldn't be clear to some readers, I opted to make this a straight apology instead.

Related:

A new generative AI comparison