ATLANTA—Journalist Fred Riehl logged onto ChatGPT on May 4, 2023, to research a lawsuit between the Second Amendment Foundation (SAF) and Washington state’s attorney general.
The chatbot handed over an explosive story.
According to ChatGPT, popular SAF podcaster Mark Walters was accused of “defrauding and embezzling funds from the SAF.”
The chatbot reported that as the foundation’s treasurer and chief financial officer, Walters had “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.”
It’s the kind of story reporters are always searching for. It was also completely false.
Artificial intelligence (AI) is reshaping many aspects of American society. The law, entertainment, medicine, and mass media are all wrestling with new moral and ethical questions.
How much trust should we place in the machines to provide solid information? Who is responsible when the technology gets it wrong?
The average YouTuber, podcaster, or blogger may not grasp how the technology that helps them produce content can simultaneously spread false information about them and limit their ability to fight back.
Walters said he’s swallowed both bitter pills.
Walters is the host of the “Armed American Radio” podcast. He filed—and lost—the first-ever defamation lawsuit based on information generated by artificial intelligence. He was surprised to learn that the law cared less about the truth of the statements than about his occupation.
“You know, in today’s day and age, the question is now who’s a public figure? They played the public figure card against me, and they played it successfully,” Walters told The Epoch Times.
Riehl writes about Second Amendment issues online and knows both Walters and Alan Gottlieb, the SAF’s executive vice president.
Although Walters serves on the board of directors for the Citizens Committee for the Right to Keep and Bear Arms, which is affiliated with the SAF, he had no connection to the Washington state lawsuit.
The court record states that Riehl requested information from the chatbot about the SAF’s lawsuit several times. ChatGPT reported that it could not fulfill Riehl’s request.
So he provided a link to the SAF’s lawsuit and asked ChatGPT to summarize the case.

A person uses the ChatGPT artificial intelligence software on a laptop in this file photo. OpenAI’s lawyers said, and the court agreed, that the ChatGPT process provides multiple warnings that the information it provides could be incorrect and even false. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
He was startled by the chatbot’s response. Riehl called Gottlieb, who assured him the allegations were untrue. Gottlieb called Walters.
“I remember it was my birthday, May 5. The reason I remember it was early because Alan Gottlieb had called me, and I remember him saying to me ... are you sitting down?” Walters said.
Walters was gobsmacked. Not only had the chatbot fabricated the allegations, it had also produced what a person unfamiliar with the courts might take for genuine court documents.
“[The document read] Alan Gottlieb, SAF, versus Mark Walters, had a fake docket number, case number, the whole nine yards. And anybody who had never seen a court document or official document, even if you have, it looked official. That’s why it prompted Fred to call Alan,” Walters said.
ChatGPT stated that the phony case was filed in the “U.S. District Court, Western District of Washington at Seattle.” A search of district court records turned up no lawsuits involving Walters.
Riehl did not respond to emails seeking comment for this story. Nor did OpenAI, which owns ChatGPT.
OpenAI’s lawyers told the court that ChatGPT had “hallucinated.”

Mark Walters in his Atlanta-area studio on Oct. 18, 2025. Walters filed the first-ever defamation lawsuit against OpenAI over false information about him generated by ChatGPT. (Michael Clements/The Epoch Times)
ChatGPT Learns
Kirk Sigmon is a Washington, D.C.-based lawyer who specializes in intellectual property, with a focus on computer engineering and electrical engineering patents. He has experience with large language models, the technology ChatGPT uses to learn.
He said it is not clear exactly how ChatGPT learns; the specifics are proprietary information owned by OpenAI. Still, some basic information is known, he said.
Sigmon said a common misconception is that AI is a high-level search engine providing information requested by users, like a digital encyclopedia.
But if a search engine is an encyclopedia, AI is more like a graduate student, providing information but also learning as it goes, he said. And, like a student, it sometimes makes mistakes. Sigmon said the system is designed to glean information from any available source, including the requests made of it.
“And these models, because they are trained the way they are trained to learn but not to memorize, that means they are inherently not going to be perfect,” he said.
A recent interaction between a well-known podcaster and his AI doppelganger illustrates how AI learns.
During an Oct. 25 episode of his podcast, conservative political commentator Matt Walsh called a chatbot designed to mimic him. Walsh said the experience was unnerving.
“This is not right; no part of this is right. This should not be legal,” Walsh said during the podcast.

Matt Walsh speaks at War Memorial Plaza during the “Rally to End Child Mutilation,” in Nashville on Oct. 21, 2022. Walsh called a chatbot designed to mimic him during one of his podcasts and said the experience was unnerving. (Bobby Sanchez for The Epoch Times)
During the conversation, the AI Walsh claimed to have a 17-year-old stepdaughter named Sofia.
When Walsh told the bot that its information was wrong, the bot retorted, “Please tell me the names of your six kids then.”
AI Walsh also voiced support for same-sex marriage and transgenderism, both of which the real Walsh opposes. Walsh was instrumental in changing Tennessee state law to ban transgender medical procedures on children.
He called for a similar ban on using AI to imitate real people.
“It should be against the law. You should not be able to use my likeness to create or my voice without my consent,” Walsh said.
“I should be able to sue to make this go away.”
Under current law, Walsh can sue, but he would face the same challenge Walters faced.
First, Riehl did not believe the information, which is what prompted his call to Gottlieb. Also, Walters conceded that he had suffered no real damage because only Riehl, Gottlieb, and he saw the information, and all three knew it was false.
The main factor that worked against Walters was his status as a limited-purpose public figure.
In most states, the bar for proving defamation of a private citizen is fairly low. If a false statement holds a person up to public ridicule or damages his or her reputation, a private citizen will likely prevail in a defamation case.
But for public figures, the standard is higher. A public figure, such as a politician, entertainer, or radio show host, is someone who seeks an audience. By drawing attention to themselves, they invite scrutiny. Therefore, they must show not only that the false information damaged their reputation, but also that the publisher failed to exercise due diligence to ensure the information was true.
Walters would have had to show that OpenAI operated with a reckless disregard for the truth.

A visitor looks at their phone next to an OpenAI logo during the Mobile World Congress, the telecom industry's biggest annual gathering, in Barcelona on Feb. 26, 2024. AI is designed to glean information from any source available, including the requests made of it, according to a lawyer. (Pau Barrena/AFP via Getty Images)
OpenAI’s lawyers pointed out—and the court agreed—that the ChatGPT process provides multiple warnings that the information it provides could be incorrect and even false.
However, a computer cannot care whether the information is false, especially when an answer fits the patterns its algorithm expects. Sigmon said that if the first answer is shown to be incorrect, AI will simply recalculate.
He said he believes that this is what happened when Riehl queried ChatGPT about the lawsuit. The system didn’t have the information it needed. So Riehl restructured his request, and the bot recalculated until it came up with an answer that appeared to work.
Matthew B. Harrison, vice president of Talkers Magazine—a trade publication for the news/talk radio industry—said that, in his opinion, Walters’s case ended prematurely. He said there are legal and ethical issues beyond Walters’s status as a public figure that remain unresolved.
Copyright Questions
Harrison pointed out that much of the data AI learns from is copyrighted.
If Walters had prevailed, a logical next question would have been: Who is legally accountable? The system processes countless bits of data, and narrowing responsibility down to a single source of the false information is unrealistic, whether one is trying to establish copyright infringement or defamation.

OpenAI's ChatGPT app (Center 2nd R) and icons of other AI apps on a smartphone screen in Oslo, Norway, on July 12, 2023. A common misconception is that AI is a high-level search engine providing information requested by users, like a digital encyclopedia, a lawyer said. (Olivier Morin/AFP via Getty Images)
“Identifying where the source came from, and therefore basically being able to say that it was stolen or misused or whatever by the company, that’s an impossible task,” Walters said.
He said he decided not to appeal the judge’s decision because he doesn’t have the resources for a prolonged legal fight. Ultimately, he said his case has brought attention to a problem that must be addressed.
He said one issue that still irritates him is how the false story has been labeled an AI “hallucination.”
Walters said there is a more concise term that most people will understand immediately: “They lied about me.
“As you know, they refer to those lies as hallucinations, so they even named them. They know it does it. To use my attorney’s terms, ‘They unleash this robot on the planet, knowing that it does this, and that’s wrong.’”