A few hours after James Whitbrook clocked into work at Gizmodo on Wednesday, he received a note from his editor in chief: Within 12 hours, the company would roll out articles written by artificial intelligence. Roughly 10 minutes later, a story by “Gizmodo Bot” posted on the site about the chronological order of Star Wars movies and television shows.
Whitbrook — a deputy editor at Gizmodo who writes and edits articles about science fiction — quickly read the story, which he said he had not asked for or seen before it was published. He catalogued 18 “concerns, corrections and comments” about the story in an email to Gizmodo’s editor in chief, Dan Ackerman, noting that the bot placed the TV series “Star Wars: The Clone Wars” in the wrong order, omitted any mention of shows such as “Star Wars: Andor” and of the 2008 film also titled “Star Wars: The Clone Wars,” formatted movie titles and the story’s headline inaccurately, repeated descriptions, and contained no “explicit disclaimer” that it was written by AI beyond the “Gizmodo Bot” byline.
The article quickly prompted an outcry among staffers who complained in the company’s internal Slack messaging system that the error-riddled story was “actively hurting our reputations and credibility,” showed “zero respect” for journalists and should be deleted immediately, according to messages obtained by The Washington Post. The story was written using a combination of Google Bard and ChatGPT, according to a G/O Media staff member familiar with the matter. (G/O Media owns several digital media sites including Gizmodo, Deadspin, The Root, Jezebel and The Onion.)
“I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with,” Whitbrook said in an interview. “If these AI [chatbots] can’t even do something as basic as put a Star Wars movie in order one after the other, I don’t think you can trust it to [report] any kind of accurate information.”
The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable. On June 29, Merrill Brown, the editorial director of G/O Media, had cited the organization’s editorial mission as a reason to embrace AI. Because G/O Media owns several sites that cover technology, he wrote, it has a responsibility to “do all we can to develop AI initiatives relatively early in the evolution of the technology.”
“These features aren’t replacing work currently being done by writers and editors,” Brown said in announcing to staffers that the company would roll out a trial to test “our editorial and technological thinking about use of AI.” “There will be errors, and they’ll be corrected as swiftly as possible,” he promised.
Gizmodo’s error-plagued test speaks to a larger debate about the role of AI in the news. Several reporters and editors said they don’t trust chatbots to produce well-reported, thoroughly fact-checked articles, and they fear that business leaders want to push the technology into newsrooms without sufficient caution. When trials go poorly, they argue, the damage extends to both employee morale and the outlet’s reputation.
Artificial intelligence experts said many large language models still have technological deficiencies that make them an untrustworthy source for journalism unless humans are deeply involved in the process. Left unchecked, they said, artificially generated news stories could spread disinformation, sow political discord and do significant damage to media organizations.
“The danger is to the trustworthiness of the news organization,” said Nick Diakopoulos, an associate professor of communication studies and computer science at Northwestern University. “If you’re going to publish content that is inaccurate, then I think that’s probably going to be a credibility hit to you over time.”
Well, that horse bolted from the barn some time ago.