
Thursday, December 11, 2025

"Learnings" explained by ChatGPT, Claude, and Gemini

I'm not at all a fan of the modern use of the word "learnings" in place of what could be called lessons or takeaways. I wondered how it came to appear so often in business communication, so I decided to see what I could glean about it from generative AI.

ChatGPT

I started with ChatGPT and got the following response: 

📜 1. Historical origin of “learnings”

  • Old English already had leornung (“learning, knowledge obtained by study”).

  • “Learnings” appears intermittently in Middle English and Early Modern English, but mostly in the sense of knowledge or scholarship, not “takeaways.”

So the plural form is historically attested, but the modern meaning is new.


📈 2. When did “learnings” (meaning “takeaways”) take off?

Linguists and corpus researchers note that:

  • The usage begins to appear in the 1970s–80s in fields like education, psychology, organizational development, and management consulting.

  • It becomes much more widespread after the 1990s, especially in:

    • Corporate training

    • Agile/software development

    • Post-mortem analyses (“What were the key learnings?”)

By the early 2000s, it was common in business English; by the 2010s, it had firmly entered corporate jargon.


💬 3. Why was this term coined (or revived)?

Several reasons:

1. Jargon need in organizational development

Fields like management consulting, OD, and “lessons learned” methodology wanted a compact noun for:

“knowledge gained from experience, not just from instruction.”

“Lesson” felt too teacher-student oriented; “takeaway” too informal.

2. Influence of corporate and academic training culture

Training programs often talk about:

  • learnings,

  • key learnings,

  • actionable learnings.

The plural makes it feel like a measurable, list-friendly deliverable.

3. Analogy with words like “findings”

Consulting and research language uses many “-ings” nouns:

  • findings

  • earnings

  • savings

  • outcomes (a parallel)

“Learnings” fit the pattern of a concrete output after an event or analysis.

4. Global business English spread

Because English is used by many non-native speakers in international companies, simplified or regularized forms often spread.
“Learnings” works grammatically for learners who think:

verb → noun with “-ing” → plural is okay.


🚫 4. Why do some people dislike it?

  • For many native speakers, “learnings” sounds ungrammatical or like jargon.

  • They prefer:

    • lessons

    • takeaways

    • insights

Style guides (The Economist, AP, etc.) often recommend avoiding “learnings” unless quoting business jargon.

Claude's take

I then put the same question to Claude. Its response was written out in paragraph form rather than bullet format, with two citations, albeit from the same source. Also note that it doesn't throw in emojis like the other two do:

Based on my search, "learnings" has an interesting history. While the word itself dates back to the 14th century and was commonly used through the 18th century (Shakespeare even used it in Cymbeline), it surged in popularity during the 1950s in business jargon and educational contexts CSMonitor.com.

The term wasn't really "coined" in the modern sense—it was actually revived. From the 1400s through the 1700s, "learnings" meant "lessons" or "instructions" and was perfectly acceptable. However, it fell out of common use in the 19th and 20th centuries.

When it resurged in the 1950s, it took on a slightly different meaning in corporate and academic settings, essentially as a synonym for "findings," "conclusions," "results," or "takeaways." This revival happened as business jargon and corporate speak were developing their own vocabulary.

Today, learnings is widely viewed as pretentious and useless business jargon CSMonitor.com, appearing on countless lists of annoying corporate terminology. Despite this criticism, it has become firmly established in business, academic, and scientific contexts. The irony is that people generally have no problem with "teachings" (what is taught), but "learnings" (what is learned) somehow grates on many ears.

So it wasn't so much coined as it was resurrected from older English and repurposed for modern business speak—which is perhaps why it feels both familiar and oddly wrong to many people.

Google Gemini

To round out the experiment, I also posed the question to Google Gemini, which is closer to ChatGPT in style of presentation. However, it fails to grasp how much some of us hate the term. This is its response:

The word "learnings," particularly as a plural noun replacing terms like "lessons" or "takeaways," saw a significant increase in use as a corporate and educational buzzword starting around the year 2000.

However, it's important to note that the word itself is not new.

🗓️ Timeline of Usage

  • Middle/Early Modern English (14th - 17th Century): The noun "learning" meaning "a thing learned" (a countable sense) and its plural, "learnings," were in occasional use. Shakespeare, for example, used the term.

  • 1950s: The term saw an increase in use within academic and educational circles (often referred to as "edu-speak").

  • Circa 2000: It became a widespread buzzword in business-speak (corporate jargon), commonly appearing in phrases like "key learnings" or "apply these learnings." This is when it really started replacing "lessons learned" or "takeaways" in project management and professional contexts.


🤔 Why Was the Term Coined/Revived?

While "learnings" is often considered unnecessary jargon by language purists, its resurgence and adoption in business contexts appear to stem from a perceived need for a different nuance than existing words:

  • To Distinguish from "Lessons": The word "lessons" can carry connotations of being taught (as in a classroom lesson) or of something learned from a failure (as in "we learned our lesson"). "Learnings" is sometimes used to refer to insights or knowledge gained through experience or a process, often in a less painful or formal way.

  • To Merge Concepts: In corporate and project environments, "learnings" acts as a shorthand to combine the idea of "things we've learned as a team" and "action items" or "next steps." It is intended to imply a list of actionable insights or points of wisdom distilled from an experience.

  • Corporate Trend: It is part of a broader trend of turning verbs into nouns in corporate language ("What's the ask?" or "What's the solve?"). The plural "learnings" parallels other accepted plural nouns derived from verbal actions, such as findings and teachings.

Essentially, its modern rise is largely attributed to it becoming a recognizable piece of corporate jargon that differentiates the knowledge gained from the traditional, sometimes rigid, sense of a "lesson."

-----------------------------------------------------------------------------------------------------------

The information from all three is consistent, so we don't have the problem of AI hallucinations here. However, Gemini seems to ignore the fact that many people hate this jargon and consider it pretentious, as Claude points out. It also omits that style guides recommend against using it in publications, a point ChatGPT raised. Perhaps that's a reflection of Google's own corporate culture and form of communication.

Related: https://writewaypro.blogspot.com/2024/06/a-new-generative-ai-comparison.html


Thursday, November 27, 2025

How Thankful Should We Be for Generative AI?

I thought I'd like to generate a picture of Jane Austen in the context of an American Thanksgiving celebration for a holiday greeting post on the social media accounts associated with my Totally Jane Austen brand. I opted to use Google Gemini.

Here's how it went: 

First prompt: Can you generate an image of Jane Austen celebrating an American Thanksgiving

Here's what it delivered with this message: 

"Here is an image of Jane Austen celebrating American Thanksgiving."



My second prompt included the portrait of Jane Austen for reference: "Let's have fewer people, no signs, and a woman who is dressed like Jane Austen is in her portrait."



The second iteration came back with this: "Here is a Thanksgiving celebration with fewer people, no signs, and a woman dressed like Jane Austen in her portrait."




I attached the Jane Austen portrait again and wrote: "Nope you didn't do it. That's almost the same picture. Try doing it with Jane Austen herself looking like this."

Gemini then delivered this with the note, "My apologies, I misunderstood the previous request. Here is an image of Jane Austen herself, looking like in her portrait, celebrating Thanksgiving with fewer people and no signs." 


My response was to again attach the portrait with this prompt:  "That's the same picture again. Delete that and just put this picture of Jane Austen in a context of a table with turkey, etc."


Gemini then offered this monstrosity along with this message: "You are absolutely right, I apologize for the repeated image. I will generate a new image for you, placing Jane Austen from the provided portrait into a Thanksgiving setting."


I then wrote:  "She's out of proportion to the people. Just take them out and get the proportions right."


Gemini responded with "My apologies for the scale issue. Let me correct that for you. Here is Jane Austen, in proper proportion, at a Thanksgiving table by herself." and delivered this: 


I wasn't thrilled with the results for several reasons but was tired of the back-and-forth, so I said, "It'll have to do."

Gemini's response: "I understand. I apologize that the image generations were not meeting your expectations and appreciate your patience with the process.

Is there anything else I can help you generate or create today?"


Related:

Aim higher than SEO for your marketing content



Sunday, June 23, 2024

A new generative AI comparison

Photo by Andrew Neel: https://www.pexels.com/photo/monitor-screen-showing-chatgpt-landing-page-15863066/



In the past, I've compared ChatGPT to Google Bard (now Gemini). For today's exercise, I put the same query to Claude, ChatGPT, Gemini, and Copilot. I entered the name of a real person and identified that individual's location as Queens, NY. None could give me the information I sought.
What's interesting, though, is noting the differences in how they contextualized their responses. Out of the four, Claude shows the greatest self-awareness, and I'd say that Copilot shows the least.  

Claude's response

As this appears to be asking about a relatively obscure individual, I want to remind you that while I try to be accurate, I may hallucinate or generate incorrect information when asked about very specific people who are not widely known public figures. If you're looking for factual information about this person, I'd recommend checking other reliable sources.

I award Claude points for warning me about the possibility of hallucinations occurring when generative AI attempts to answer questions in the absence of data.

ChatGPT's response

I don't have specific information about an individual named ____ from Queens, NY. It's possible that she is a private individual or someone not widely known beyond her personal circles. If she is known for something specific like public contributions, achievements, or notable activities, that information might be available through local news sources or community records.

ChatGPT's response failed to warn about hallucination, though it's interesting that it inferred the person in question was a woman (based on the first name, I suppose), a gender identification that Claude managed to avoid.

Google Gemini's response

Gemini just bowed out, practically chiding me for giving it this query that it could not answer:
 
I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?


Copilot's response

Copilot went in a completely different direction. I wouldn't call it a hallucination, but it is still inaccurate. It wanted to find a match, so in one case it named a business with the last name appearing twice, as in a partnership, though it admitted that the business was in Brooklyn rather than Queens. Its second suggestion was a male doctor in Queens with a similar last name, though spelled differently. Honestly, I find it hard to believe that these are the only two results for that last name in all of NYC; once you expand the parameters, there really should be far more than two possible matches.


Related:
An A/B test of generative AI
Bard brings in the clowns


You can also follow Ariella Brown.