Remember that ‘drunk Pelosi’ video? AI-powered deepfakes are making disinformation much more toxic

Should we be worried about deepfake videos? Well, sure. But I’ve tended to think that some skepticism is warranted.

My leading example is a 6-year-old video of then-House Speaker Nancy Pelosi in which we are told that she appears to be drunk. I say “we are told” because the video was simply slowed down to 75%, and the right-wing audience for whom it was intended thought this crude alteration was proof that she was loaded. Who needs deepfakes when gullible viewers will be fooled by such crap? People believe what they want to believe.

Become a supporter of Media Nation for just $6 a month. You’ll receive a weekly newsletter with exclusive content, a roundup of the week’s posts, photography and a song of the week.

But the deepfakes are getting better. This morning I want to call your attention to a crucially important story in The New York Times (gift link) showing that deepfakes powered by artificial intelligence are causing toxic damage to the political and cultural environment around the world.

“The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal,” write reporters Steven Lee Myers and Stuart A. Thompson. A few examples:

  • Romania had to redo last year’s presidential election after a court ruled that AI-driven manipulation in favor of one of the candidates may have changed the result.
  • An AI-generated TikTok video falsely showed Donald Trump endorsing a far-right candidate in Poland.
  • Another fake video from last year’s U.S. election tied to Russia falsely showed Kamala Harris saying that Trump refused to “die with dignity.”

As with the Pelosi video, fakes have been polluting the media environment for a long time. So I was struck by something that Isabelle Frances-Wright of the Institute for Strategic Dialogue told the Times: Before AI, “you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low quality. Now, you can have both, and that’s really scary territory to be in.”

In other words, disinformation is expanding exponentially both in terms of quality and quantity. Given that, it’s unlikely we’ll see any more Russian-generated memes of a satanic Hillary Clinton boxing with Jesus, a particularly inept example of Russian propaganda from 2016. Next time, you’ll see a realistic video of a politician pledging their eternal soul to the Dark Lord.

And since I still have a few gift links to give out before the end of the month, here’s a Times quiz with 10 videos, some of which are AI fakes and some real. Can you tell the difference? I didn’t do very well.

So what can we do to protect our political discourse? I’m sure we can all agree that it’s already in shockingly bad shape, dominated by lies from Trump and his allies that are amplified on Fox News and social media. As I said, people are going to believe what they want to believe. But AI-generated deepfake videos are only going to make things that much worse.

How Sahan Journal is using AI to streamline its operations; plus, more on search, and screening pitches

Cynthia Tu of Sahan Journal. Photo (cc) 2025 by Lev Gringauz / MinnPost

Like it or not (and my own feelings are mixed), artificial intelligence is being used by news organizations, and there’s no turning back. The big question is how.

The worst possible use of AI is to write stories, especially without sufficient human intervention to make sure that what’s being spit out is accurate. Somewhat more defensible is using it to write headlines, summaries and social-media posts — again, with actual editors checking it over. The most promising, though, is using it to streamline certain internal operations that no one has the time to do.


That’s what’s happening at Sahan Journal, a 6-year-old digital nonprofit that covers immigrants and communities of color in Minnesota. It’s one of the projects that Ellen Clegg and I profiled in our book, “What Works in Community News.” And according to Lev Gringauz of MinnPost (one of the original nonprofit news pioneers), the Journal has embarked on a project to streamline some of its news and business functions with AI. (I learned about Gringauz’s story in Nieman Lab, where it was republished.)

Bolstered with $220,000 in grant money from the American Journalism Project and OpenAI, the creator of ChatGPT, the Journal has employed AI to help with such tasks as processing financial data of the state’s charter schools, generating story summaries for Instagram, and adding audio to some articles.

The real value, though, has come in bolstering the revenue side, as the Journal has experimented with using AI to retool its media kit and to understand its audience better, such as “pulling up how much of Sahan Journal’s audience cares about public transportation.”

“We’re less enthusiastic, more skeptical, about using AI to generate editorial content,” Cynthia Tu, the Journal’s data journalist and AI specialist, told Gringauz. Even on internal tasks, though, AI has proved to be a less than reliable partner, hallucinating data despite Tu explicitly giving it commands not to scour the broader internet.

And as Gringauz observes, OpenAI is bleeding money. How much of a commitment makes sense given that Sahan Journal may be building systems on top of a platform that may cease to exist at some point?

Two other AI-related notes:

➤ Quality matters. In his newsletter Second Rough Draft, Richard J. Tofel has some useful thoughts on the panic over Google’s AI search engine, which has been described as representing an existential threat to news organizations since it will deprive them of click-throughs to their websites.

Tofel writes that clickbait will be harmed more than high-quality journalism, noting that The New York Times and The Wall Street Journal have been hurt less than HuffPost, Business Insider and The Washington Post. “If there is one overriding lesson of publishing in the digital age,” Tofel writes, “it remains that distinctive content remains the most unassailable, the least vulnerable.”

Though Tofel doesn’t say so, I think there’s a lesson for local news publishers as well: hyperlocal journalism should be far less affected by AI search than national outlets, especially for those organizations that emphasize building a relationship with their communities.

➤ Here’s the pitch. Caleb Okereke, a Ph.D. student at Northeastern, is using AI to screen pitches for his digital publication Minority Africa. He writes that “we are receiving 10x more pitches than we did in our early days after launch,” adding: “With a lean editorial team, we faced a challenge familiar to many digital publications: how do you maintain depth, fairness, and attention when the volume scales but the staff doesn’t?”

He and his colleagues have built a customized tool called Iraka (which means “voice” in the Rutooro language) and put it to the test. As he writes, it’s far from perfect, though it’s getting better.

“As of now, editors are using Iraka individually to provide a first-pass on submissions, testing its utility alongside regular human review,” Okereke reports. “Every pitch is still manually read, and no editorial decisions are made solely based on the model’s output. This staged integration allows us to observe how the tool fits into existing workflows without disrupting the editorial process.”

ABC goes too far in pushing out Terry Moran; plus, Google’s AI assault, and Jay Rosen moves on

Terry Moran, right, interviews Donald Trump in April 2025. Public domain photo by Joyce N. Boghosian via the White House.

How to behave on social media has bedeviled journalists and confounded editors for years. Marty Baron clashed with reporters Wesley Lowery and Felicia Sonmez over their provocative Twitter comments back when he was executive editor of The Washington Post, and those are just two well-known examples.


The latest journalist to run afoul of his news organization’s social-media standards is Terry Moran, who was, until Tuesday, employed by ABC News. Moran was suspended on Sunday after he tweeted that White House official Stephen Miller and President Trump were each a “world-class hater.” The tweet is now gone, but I’ve included an image. On Tuesday, Moran’s employer announced that it was parting company with him, as NPR media reporter David Folkenflik writes.

I think ABC was right to suspend Moran but wrong to get rid of him, and that media critic Margaret Sullivan got the nuances perfectly when she wrote this for her newsletter, American Crisis:

I’m amazed that Moran posted what he did. It’s well outside the bounds of what straight-news reporters do. It’s more than just calling a lie a lie, or identifying a statement as racist — all of which I think is necessary. Moran is not a pundit or a columnist or any other kind of opinion journalist….

I would hate to see Moran — with his worthy career at ABC News, where he’s been for almost 30 years — lose his job over this. I hope that the honchos at ABC let a brief suspension serve its purpose, and put him back to work.

Unfortunately, this is ABC News, whose corporate owner, Disney, disgraced itself earlier this year by paying $15 million to settle a libel suit brought by Trump over a minor, non-substantive error: George Stephanopoulos said on the air that Trump had been found “liable for rape” in a civil case brought by E. Jean Carroll when, in fact, he’d been found liable for sexual abuse. The federal judge in the Carroll case even said in a ruling that the jury had found Trump “raped” Carroll in the ordinary meaning of the term. But Disney couldn’t wait to prostrate itself before our authoritarian ruler.

So when Moran violated ABC News’ social-media policy, as the organization claimed, he no doubt knew he could expect no mercy.


AI roundup: The WashPost eyes robot-edited op-ed pieces, while Chicago and Philly execs speak out

Jeff Bezos. Painting (cc) 2017 by thierry ehrmann

The Washington Post’s plan to bring in a plethora of outside opinion writers, edited by artificial intelligence, is being widely mocked, as it should be. But the idea is not new — at least the non-AI part.

A decade ago, the Post started publishing something called PostEverything, which the paper called “a digital daily magazine for voices from around the world.” Here’s how the 2014 rollout described it:

In PostEverything, outsiders will entertain and inform readers with fresh takes, personal essays, news analyses, and other innovative ways to tell the stories everyone is talking about — and the ones they haven’t yet heard.

PostEverything went PostNothing sometime in 2022, but now it’s back. According to Benjamin Mullin of The New York Times (gift link), the revived feature, known internally as Ripple, will comprise opinion writing from other newspapers, independent writers on Substack and, eventually, nonprofessional writers. Ripple will be digital-only and will be offered outside the Post’s paywall.


What’s hilarious is that Mullin contacted several of the partners the Post is considering, such as The Salt Lake Tribune and The Atlanta Journal-Constitution, and was told they’re not interested. Another potential partner was identified as Jennifer Rubin, who quit the Post over owner Jeff Bezos’ meddling and started her own publication called The Contrarian. Mullin writes: “When told that she had been under consideration at all, Ms. Rubin burst out in laughter. ‘Did they read my public resignation letter?’ she said.”


AI embarrassment aside, Business Insider faces huge challenges in the post-SEO environment

Former masters of the universe Henry Blodget, founder of Business Insider, and Nick Denton, founder of Gawker. Photo (cc) 2012 by the Financial Times.

There was a time when Business Insider’s digital strategy was among the most widely admired and emulated in publishing. But that was then.

Last week, the outlet announced it was laying off 21% of its staff and doubling down on artificial intelligence, a sign of how drastically the business model for digital news has changed over the past few years. I’ll get back to that. But first, an AI-related embarrassment.


On Sunday, Semafor media reporter Max Tani revealed that, last May, Business Insider management distributed to staff members a list of books it recommended so that its employees could learn about the vision and best practices of leading figures in business and technology. The list included such classics as “Jensen Huang: the Founder of Nvidia,” “Simply Target: A CEO’s Lessons in a Turbulent Time and Transforming an Iconic Brand” and “The Costco Experience: An Unofficial Survivor’s Guide.”

As it turned out, those books and several others either don’t exist or have slightly different titles and were written by authors other than the ones cited in what managers called “Beacon Books.” In all likelihood, Tani reports, the book titles were generated by AI. At least Business Insider didn’t recommend them to readers, as two daily newspapers did recently with a list of summer books generated by a third-party publisher.

Business Insider is owned by Axel Springer, a German-based conglomerate that also owns Politico and Morning Brew, neither of which faces layoffs, according to Corbin Bolies of The Daily Beast.

Henry Blodget founded Business Insider in 2007, and the publication quickly established itself as a success in the world of SEO, or search engine optimization. In 2016, I interviewed The Washington Post’s then-chief technologist, Shailesh Prakash, for my book “The Return of the Moguls.” He told me that BI was one of several outlets the Post studied to see how it used a variety of factors to get its journalism in front of as many eyeballs as possible. Here’s part of what he said:

We have built our own crawlers, so we have crawlers go and crawl a bunch of other sites — USA Today, New York Times, Business Insider — and we go and grab their content and bring it in-house, strip out all the branding, only have the headline, image and a blurb, and put it in front of 500-plus users every month as a test. And the question that’s asked is, “Would you read this story?” And you don’t know whether it’s a Business Insider story or a Washington Post story or a Huffington Post story or a USA Today story. All you see is an image, a headline and blurb. And based on the results of that, we compare our content to these different sites. Are we better than The Huffington Post in politics content for women? Are we better than Business Insider in business content for men?

Back then, Business Insider and HuffPost were offering their journalism for free and paying for it by building huge audiences and selling them to advertisers. The Times and The Washington Post were in the early stages of building their paywall strategies.

Eventually, the free model collapsed as Google drove the value of digital advertising through the floor. Today, HuffPost is a greatly diminished outlet owned by BuzzFeed, which itself is a shadow of what it used to be. And Business Insider has a paywall.

Now, I have nothing against for-profit news organizations charging for their journalism. But who would take out a paid subscription to Business Insider? That’s not a comment about the quality. But readers are dealing with subscription fatigue, and even the most hardcore news junkies might pay for one national paper (perhaps The Wall Street Journal in the case of BI’s target audience), one regional paper and a few newsletters.

BI isn’t going to make the cut for more than a handful of readers.

There’s an additional factor. BI still relies on Google to attract readers who might be enticed into buying a subscription — and now a Google search gives you an AI-generated result. There’s no need to click through, even though the AI summary might prove to be wildly inaccurate.

In an interview with Andy Meek of Forbes, Blodget said he was “very sad” to learn about the layoffs at BI, and he offered his thoughts on how digital publishers can survive in the current environment. “Direct distribution and subscriptions,” he said. “That model will support thousands of excellent publications, big and small. And audio and video are still growing as we move from TV/radio to digital.”

But Business Insider already has a paywall and newsletters. At best, the publication faces a smaller, less ambitious future. And turning over some of what it produces to AI is not going to help it maintain a relationship of trust with its readers.

What’s the Colorado angle in the NPR lawsuit?; plus, a Muzzle for Quincy’s mayor, and an AI LOL

Kevin Dale, executive editor of Colorado Public Radio. Photo (cc) 2021 by Dan Kennedy.

I haven’t seen any explanation for why three public radio outlets in Colorado joined NPR in suing the Trump administration over its threat to defund the Corporation for Public Broadcasting. I’m glad they did, but it seems to me that all 246 member stations ought to sign on, including GBH and WBUR in Boston.

The Colorado entities, according to Ben Markus of Colorado Public Radio, are CPR (which reaches 80% of the state through a network of transmitters and translators), Aspen Public Radio and KSUT Public Radio of Ignacio, a Native American station that serves the Southern Ute Tribe.


When I was in Colorado several years ago to interview people for the book that Ellen Clegg and I wrote, “What Works in Community News,” CPR was perhaps the largest news organization in the state, with a staff of 65 journalists. (I say “perhaps” because executive editor Kevin Dale thought one or two television stations might be bigger.) Some cuts were made last year as business challenges hit a number of public broadcasting outlets as well as NPR itself.

The basis of the lawsuit, writes NPR media reporter David Folkenflik, is that CPB is an independent, private nonprofit that is funded by Congress. The suit claims that the president has no right to rescind any money through an executive order; only Congress can do that. Moreover, the suit contends that this is pure viewpoint discrimination, as demonstrated by Trump’s own words — that NPR and PBS, which also relies on CPB funding, present “biased and partisan news coverage.”


That AI-generated list of fake books was published by a Hearst subsidiary, 404 Media reports

Illustration — of course! — by ChatGPT

We now know more about the AI-generated slop that was published in the Chicago Sun-Times and The Philadelphia Inquirer.

According to Jason Koebler of 404 Media, the 64-page summer guide called “Heat Index” was produced by King Features, part of the Hearst chain. As Koebler reported earlier, a freelancer named Marco Buscaglia used AI to write a guide to summer books. He admitted that he did not check his work, and it turned out that most of the books don’t exist.


Marina Dunbar reports in The Guardian that other articles in “Heat Index” may also contain AI hallucinations, including one on food and another on gardening. The Sun-Times addressed the fiasco on Tuesday but put its statement behind the paper’s paywall. That’s unacceptable, so here’s a link where you can find it. The paper says in part:

Our partner confirmed that a freelancer used an AI agent to write the article. This should be a learning moment for all of journalism that our work is valued because of the relationship our very real, human reporters and editors have with our audiences.

The Sun-Times statement also says that subscribers won’t be charged, that “Heat Index” is being removed from its e-paper version, and that various steps are being taken to improve transparency.

The Chicago Sun-Times News Guild issued a statement as well:

The Sun-Times Guild is aware of the third-party “summer guide” content in the Sunday, May 18 edition of the Chicago Sun-Times newspaper. This was a syndicated section produced externally without the knowledge of the members of our newsroom.

We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this “content” is very concerning — primarily for our relationship with our audience but also for our union’s jurisdiction.

Our members go to great lengths to build trust with our sources and communities and are horrified by this slop syndication. Our readers signed up for work that has been vigorously reported and fact-checked, and we hate the idea that our own paper could spread computer- or third-party-generated misinformation. We call on Chicago Public Media management to do everything it can to prevent repeating this disaster in the future.

It’s interesting that most of the focus has been on the Sun-Times rather than the Inquirer, even though “Heat Index” appeared in the Inquirer last Thursday, three days before the Sun-Times, according to Herb Scribner of The Washington Post (gift link). Axios reported that the Inquirer’s publisher and CEO, Lisa Hughes, called the screw-up “a violation of our own internal policies and a serious breach.” Mostly, though, the focus has been on Chicago, where the mistake was first caught.

It’s worth noting, too, that the Sun-Times and the Inquirer are both owned by mission-oriented nonprofits — the Sun-Times by Chicago Public Media and the Inquirer by the Lenfest Institute. It shows that anyone can get caught up in this. And I don’t really blame editors at either paper for not checking, since “Heat Index” is outside content produced by a respected media organization.

Speaking of Hearst, we have not yet heard from the company as to how this was allowed to happen. Because even if it was acceptable for the Sun-Times and the Inquirer not to edit the supplement, it certainly should have been thoroughly edited by King Features before it was sent out to client newspapers.

This is a story about the hazards of AI, but, even more, it’s a story about human failure.

How an AI-generated guide to summer books that don’t exist found its way into two newspapers

Illustration (cc) 2010 by Elfboy

Well, this is embarrassing. The Chicago Sun-Times and The Philadelphia Inquirer have been caught running an AI-generated guide to summer books that don’t exist. I saw some hilarious posts about it this morning on Bluesky, but I wanted to wait until there was news about what had happened.

Now we know. Jason Koebler reports for the tech site 404 Media that the feature was written (or, rather, not written) by someone named Marco Buscaglia as part of a 64-page summer guide. The section was not specific to the Sun-Times or the Inquirer but, rather, was intended for multiple client newspapers. “It’s supposed to be generic and national,” Buscaglia told Koebler. “We never get a list of where things ran.”

Buscaglia pleads guilty to using AI, too, saying, “I do use AI for background at times but always check out the material first. This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses. On me 100% and I’m completely embarrassed.”

And now it’s being reported that The Philadelphia Inquirer ran the supplement, too.

The Chicago Sun-Times, a tabloid, merged several years ago with Chicago Public Media, creating a nonprofit hybrid that could compete with the larger Chicago Tribune, which has labored under cuts imposed by its hedge-fund owner, Alden Global Capital.

The merger hasn’t gone particularly well, though. In March, the Sun-Times reported that it would lose 20% of its staff under buyouts imposed by Chicago Public Media, which is dealing with its own economic woes. According to an article by Sun-Times reporter David Roeder, the cuts were aimed at eliminating 23 positions in a newsroom of 107.

As for the AI fiasco, the Sun-Times said on Bluesky: “We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously.”

If it wasn’t approved by the newsroom, that suggests it was an advertising supplement.

The triumph of hope over experience: The latest on how AI is not solving the local news crisis

Illustration produced by AI using DALL-E

This past weekend I listened to a bracingly entertaining conversation that the public radio program “On the Media” conducted with tech journalist Ed Zitron. Co-host Brooke Gladstone had billed it as a chance for Zitron to make sense out of DeepSeek, the new Chinese artificial-intelligence software that purports to do what ChatGPT and its ilk can do for a fraction of the cost — and, presumably, while using a fraction of the electric power burned by American AI companies.

But it was so much more than that. Maybe you’re familiar with Zitron. I wasn’t. As I learned, he is a caustic skeptic of American AI in general. In fact, he doesn’t even regard the large language models (LLMs) that we’ve come to think of as AI as the real thing, saying they are nothing but an error-prone scam that is attracting vast sums of venture capital but will never make any money. Here’s a taste:

The real damage that DeepSeek’s done is they’ve proven that America doesn’t really want to innovate. America doesn’t compete. There is no AI arms race. There is no real killer app to any of this. ChatGPT has 200 million weekly users. People say that’s a sign of something. Yes, that’s what happens when literally every news outlet, all the time, for two years, has been saying that ChatGPT is the biggest thing without sitting down and saying, “What does this bloody thing do and why does it matter?” “Oh, great. It helps me cheat at my college papers.”

And this:

When you actually look at the products, like OpenAI’s operator, they suck. They’re crap. They don’t work. Even now the media is still like, “Well, theoretically this could work.” They can’t. Large language models are not built for distinct tasks. They don’t do things. They are language models. If you are going to make an agent work, you have to find rules for effectively the real world, which AI has proven itself. I mean real AI, not generative AI that isn’t even autonomous is quite difficult.

As you can tell, Zitron has a Brit’s gift for vitriol, which made the program all the more compelling. Now, I am absolutely no expert in AI, but I was intrigued by Zitron’s assertion that LLMs are not AI, and that real AI is already working well in things like autonomous cars. (Really?) But given that we just can’t keep AI — excuse me, LLMs — from infesting journalism, I regarded Gladstone’s interview with Zitron as a reason to be hopeful. Maybe the robots aren’t going to take over after all.


A flick of the mutant wrist

Adam Gaffin has posted a hilarious find at Universal Hub — an AI-generated X-ray of a wrist published by The Boston Globe that has all kinds of problems, including a third forearm bone and finger bones that don’t connect to anything. It has been online since Dec. 19, and it persists despite some mockery on Bluesky as well as Adam’s post.