George Washington University law professor Jonathan Turley doubled down Monday on his warnings about the dangers of artificial intelligence (AI) after he was falsely accused of sexual harassment by the chatbot ChatGPT, which cited a fabricated article to support the allegation.
Turley, a Fox News contributor, has been outspoken about the pitfalls of artificial intelligence and has publicly expressed concerns about the disinformation dangers of ChatGPT. Last week, a UCLA professor and friend of Turley’s notified him that his name had surfaced while the professor was conducting research on ChatGPT. The bot had been asked to cite “five examples” of “sexual harassment” by U.S. law professors, with “quotes from relevant newspaper articles” to support them.
“Five professors came up, three of those stories were clearly false, including my own,” Turley told “The Story” on Fox News Monday. “What was really menacing about this incident is that the AI system made up a Washington Post story and then made up a quote from that story and said that there was this allegation of harassment on a trip with students to Alaska. That trip never occurred. I’ve never gone on any trip with law students of any kind. It had me teaching at the wrong school, and I’ve never been accused of sexual harassment.”
In a widely shared Twitter thread last Thursday, the constitutional law scholar revealed that ChatGPT had defamed him by fabricating a 2018 incident in which a former female student accused him of sexual harassment during a school trip to Alaska. The bot went so far as to quote a phony Washington Post article claiming he made “sexually suggestive comments” and “attempted to touch her in a sexual manner,” Turley said.
“You had an AI system that made up entirely the story, but actually made up the cited article and the quote,” Turley said on “America Reports.” “And when the Washington Post looked at it, they were mystified and said we can’t even figure out how an AI would come up with this because there’s not even a story we can find that seems at all relevant or could be referenced.”
ChatGPT is an artificial intelligence chatbot whose core function is to mimic a human in conversation. Users across the world have used ChatGPT to write emails, debug computer programs, conduct research, write articles and song lyrics, and more.
Turley said his personal experience with the bot serves as a “cautionary tale” about the global embrace of artificial intelligence, and he urged news outlets to avoid relying on the software.
“I was fortunate to learn early on, in most cases this will be replicated a million times over on the internet and the trail will go cold. You won’t be able to figure out that this originated with an AI system,” he said. “And for an academic, there could be nothing as harmful to your career as people associating this type of allegation with you and your position. So I think this is a cautionary tale that AI often brings this patina of accuracy and neutrality.”
Like humans, the chatbot has an ideology and biases of its own, Turley argued.
“Like an algorithm, it’s only as good as those people who program it,” he said, adding that the company behind ChatGPT has yet to apologize for, or even address, the fabricated story that defamed him.
“I haven’t even heard from that company,” Turley continued. “That story, various news organizations reached out to them. They haven’t said a thing. And that’s also dangerous. Because when you’re defamed like this, in an article by a reporter, you know how to reach out. You know who to contact. With AI, there’s often no there, there. And ChatGPT looks like they just shrugged and left it at that.”