Photo by Andrew Neel: https://www.pexels.com/photo/monitor-screen-showing-chatgpt-landing-page-15863066/
In the past, I've compared ChatGPT to Google Bard (now Gemini). For today's exercise, I put the same query to Claude, ChatGPT, Gemini, and Copilot: the name of a real person, identified as located in Queens, NY. None could give me the information I sought.
What's interesting, though, is the difference in how they contextualized their responses. Of the four, Claude shows the greatest self-awareness, and I'd say that Copilot shows the least.
Claude's response
As this appears to be asking about a relatively obscure individual, I want to remind you that while I try to be accurate, I may hallucinate or generate incorrect information when asked about very specific people who are not widely known public figures. If you're looking for factual information about this person, I'd recommend checking other reliable sources.
I award Claude points for warning me about the possibility of hallucinations occurring when generative AI attempts to answer questions in the absence of data.
ChatGPT's response
I don't have specific information about an individual named ____ from Queens, NY. It's possible that she is a private individual or someone not widely known beyond her personal circles. If she is known for something specific like public contributions, achievements, or notable activities, that information might be available through local news sources or community records.

ChatGPT's response failed to warn about hallucination, though it's interesting that it inferred the person in question was a she (based on the first name, I suppose), a gender identification that Claude managed to avoid.
Google Gemini's response
Gemini just bowed out, practically chiding me for giving it a query it could not answer:

I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?
Copilot's response
Copilot went in a completely different direction. I wouldn't call it a hallucination, but it was still inaccurate. It wanted to find a match, so in one case it named a business whose name featured the last name twice, as in a partnership, though it admitted that business was in Brooklyn rather than Queens. Its second suggestion was a male doctor in Queens with a similar last name, though spelled differently. Honestly, I find it hard to believe that these are the only two results for the last name in all of NYC. Once you're expanding the search parameters that far, there really should be a lot more than two possible results.
Follow Write Way Pro on LinkedIn and on Facebook.
You can also follow Ariella Brown.