ChatGPT Tries (and Fails) to Explain Buffalo Nickels With No Date
ChatGPT is a chatbot created by OpenAI. It is built on a large language model (LLM), a form of artificial intelligence that generates responses to questions from the user.
As a writer, I've been experimenting with ChatGPT for a variety of projects. One question I posed to the chatbot elicited a really remarkable answer. I think it reveals something fascinating about this emerging technology.
This conversation was conducted with ChatGPT running on the GPT-3.5 model. OpenAI has just recently released the next version, GPT-4.
It's fairly common to find Buffalo nickels with a missing date. But why? Can artificial intelligence explain?
ChatGPT Reveals the "Truth" About No Date Buffalo Nickels
It's no secret that the chatbots from companies like OpenAI, Microsoft, and Google have experienced some bugs. They generally don't react well to criticism or controversy. If the user picks a sufficiently controversial or nuanced topic, the odds increase that the AI will generate a funky response.
Nonetheless, when given high-quality input (such as a clear and specific question), ChatGPT has usually impressed me with its answers. I can see why some people are so excited—or so terrified—about the hype surrounding this technology.
So I was rather surprised when ChatGPT gave me the following response to the straightforward question, "Why does a Buffalo nickel have no date?" (For reference, I wrote an entire article answering this question here!)
ChatGPT responds to my question about Buffalo nickels with no date.
The second sentence of its answer is absolutely untrue. There were no nickels purposely struck without a date. Not only did I get an incorrect response here, but the chatbot came up with an elaborate (and entirely fabricated!) history of why some of these coins have no date. It provides specific details, dates, and context. All of it is pure fantasy.
Digging Deeper Into the Mystery
Rather than simply telling ChatGPT it was wrong, I was curious about how it arrived at such a detailed wrong answer. I asked what sources or references it had for this information. The chatbot confidently rattled off its "sources," such as the U.S. Mint and A Guide Book of United States Coins, commonly known as "The Red Book."
I happened to have a Red Book right on my desk. I flipped to the section about Buffalo nickels and found nothing whatsoever about the mint purposely striking nickels with no date. I confronted ChatGPT with this information, and its response was even more unexpected:
I've never seen an artificial intelligence walk back its answer before!
Normally when one of these large language models is told it was incorrect, it just apologizes and provides some caveat about its access to information being out-of-date. But in this case ChatGPT just . . . changed its story? It literally compounded its first error with a second one!
Keep in mind, I didn't tell ChatGPT what the correct answer actually was. I simply said that its source didn't corroborate the information it had furnished. In response, it completely changed its answer. Although the chatbot used the phrase, "Upon further research," this is also untrue. The AI doesn't actually perform "research" by crawling the internet in real time. It generates its responses from the data it was trained on.
How ChatGPT Can Be Manipulative and Misleading
I found it even more curious that ChatGPT chose to use another phrase in its answer, "The truth is". It was as if it were admitting that its previous answer was false, and this new answer was the true one. If the chatbot were a living person, it would be fair to say that it lied to me. Again, I never told ChatGPT what the correct answer was, or even that it was explicitly wrong. I only suggested that it may have cited the wrong source.
Perhaps even more bizarre than the 180-degree pivot in its response was that ChatGPT came up with an explanation for why it initially gave the wrong answer. We'll forget for a moment how confidently it asserted the first answer, even when I asked for it to cite sources. The new explanation was that the made-up story it provided was the result of myths being spread within the numismatic community. Sort of a "don't shoot the messenger" defense.
There are no Buffalo nickels that were intentionally made without a date.
The funny thing is that no such myth has ever existed, as far as I can tell! I've been researching and writing about coins for a decade, and I've specifically focused on the Buffalo nickel for several assignments. I have never heard this particular urban legend—and, trust me, there is no shortage of them in the history of numismatics. Maybe the alleged myth originated in some sleepy corner of the internet; but even so, that's not the same thing as ChatGPT's claim that the myth was perpetuated among coin collectors since the 1930s.
That's the most interesting part to me: ChatGPT fabricated a second incorrect answer in order to account for why it got the first answer wrong. I hadn't encountered that type of behavior from any artificial intelligence before. It does appear that chatbot technology is becoming more sophisticated—and with that comes the very human ability to lie and make excuses.
Read 100% human-generated content about coins and numismatics from the experts at Gainesville Coins:
Everett has been the head content writer and market analyst at Gainesville Coins since 2013. He has a background in History and is deeply interested in how gold and silver have historically fit into the financial system.