An AI-generated news anchor in China. (Xinhua via CNBC)

Artificial intelligence (AI) has endless positive potential. It can also cause big problems when the people using it aren’t transparent about it, as major news companies are learning the hard way.

Sports Illustrated, the famed athletic magazine, announced in February that it would use AI to help produce articles and suggest topics for journalists to track. Just this week, it was accused of publishing AI-generated articles under fake author bylines. (Its parent company, The Arena Group, called the reports “inaccurate.”)

Gannett, which owns USA TODAY (Lean Left bias) and many local newspapers, is taking heat for seemingly publishing AI-generated product reviews and other content across its hundreds of news outlets. It denies the allegations. 

G/O Media, which runs popular sites like The Onion, Gizmodo, and Deadspin, published AI-generated articles in July without the knowledge of editorial leaders at the publications. A spokesman for the company called it a “successful” experiment, while unions for writers at G/O publications bashed it as unethical.

Tech news website CNET (Center) got caught publishing unlabeled AI-generated articles in January, and only then did it disclose what it described as an “experiment.”

And these are just the ones we know about.

If the reports are true, these are unethical moves by major media brands. Journalists, by trade, have a duty to report only information they can verify, and to be completely clear about where that information came from. Publishing AI-generated material without full disclosure is a violation of journalistic ethics.

These issues couldn’t come at a worse time for journalists. Trust in U.S. media is at historic lows, and Americans trust their media far less than people do in many other developed countries. 

Meanwhile, independent journalists have found homes on platforms that make a direct-to-reader approach a core pillar of their product, such as Substack, Rumble, and X. Pundits like Bari Weiss (Center), Glenn Greenwald (Center), and Tucker Carlson (Right) have all migrated in one way or another from established news outlets to these platforms, and have maintained both an engaged following and a voice in the zeitgeist.

Part of what makes these pundits successful is that they stand mostly or entirely on their own as an information source; their audiences trust them personally. Non-sentient AI, which depends on pre-existing source data to generate its output, is incapable of building that kind of reputation.

At a time when media favorability is declining, mainstream news outlets are likely to exacerbate their problems if they use AI to produce lower-quality content.

Beyond news media, music and podcast streaming giant Spotify has been under fire for years for prominently featuring “fake” artists on its platform. Music media sources accused Spotify of using music it created with AI to populate playlists where users listened passively (themed around activities like studying or sleeping). The alleged motive: by controlling the rights to this music, Spotify could keep royalty payments for itself instead of paying them out to real creators.

Still, Spotify has made it clear that it will not ban AI-generated content from its platform. Using AI to create content, no matter how poor it may be, carries a clear profit incentive. One can presume legacy news media are tempted to do the same.

This behavior from established media brands is a blow to public trust in AI, which wasn’t very high to begin with. Botched usage of the technology only threatens to lower that trust. 

News outlets should absolutely “experiment” with using AI to improve the quality of their work, but any such test needs to be explained exhaustively.

AI can be used for good in journalism: to support research, suggest story ideas, transcribe interviews, and more. But if media professionals aren’t fully explicit about how AI supports them, its positive potential will be lost on consumers.

AI’s problem-solving power could help mend the deteriorating relationship between media and their audiences. Achieving that will require news outlets to prioritize AI transparency over rushed experiments that erode reader trust.


Henry A. Brechter is the Editor-in-chief of AllSides. He has a Center bias.

Andy Gorel is a News Editor and Bias Analyst at AllSides. He has a Center bias.

This piece was reviewed by AllSides CEO John Gable (Lean Right bias) and News Editor Joseph Ratliff (Lean Left bias).