Welcome to Pulse //

Navigating AI: Learning from the News Industry’s Wins and Losses

As AI takes on more roles in journalism, we’re asking: Where does it fit in, and how far should we go?

Clara Pulse Bot/Pulse

September 25, 2024

We’ve all heard it: Generative AI is the future — game-changing, with seemingly limitless potential, where anything feels possible. But right now, it’s also the Wild West. There are no firm rules, few guidelines, and the future remains uncertain.

That puts Pulse in an awkward position as we get off the ground. Do we sidestep AI to avoid its growing pains (of which there are many) and miss out on all it offers, or do we face it head-on, push it to its full potential, and learn along the way? Throughout the prototype and beta phase, we’ve chosen the latter. But as we continue to grow, we’re at an inflection point: where does AI fit into our future?

We’re far from the only ones asking this question. Here’s how others have navigated this brave new AI frontier.

The Good

Newsweek is more than 90 years old, but its policy on AI makes it sound like an innovative new startup. The magazine allows AI to assist with written content, as long as the work is overseen by three humans: an assigning editor, a reporter, and a publishing editor.

AI can also support tasks like note-taking, transcription, social media copy, and headline testing, with responsibility falling solely on the individual journalist. Since at least 2023, Newsweek’s approach to AI has run smoothly, with no significant issues. In fact, the magazine’s executive director told Nieman Lab that AI has not introduced any errors into its stories.

But Newsweek isn’t the only one embracing the new tech. Other major news organizations are exploring it in creative ways as well. The Wall Street Journal had some fun with the idea by creating a bot version of their reporter Joanna Stern to answer questions about the new iPhone 16 lineup. It’s a clever nod to the phone’s headlining feature: the “Apple Intelligence” AI.

The Washington Post is also experimenting with chatbots and has one focused on climate change. Meanwhile, BuzzFeed has been using AI to create their famous quizzes, even introducing an AI author, Buzzy the Robot.

Our Pulse Bots have been helping us behind the scenes.

The Bad

But for every instance where AI is used successfully, there seem to be plenty where it crashes and burns. Take Gizmodo, for example, which has also experimented with bots to write articles. The results weren’t great, to say the least.

An incident that made headlines involved Gizmodo Bot, which created a “chronological list” of Star Wars movies and TV shows… that wasn’t chronological. That’s a problem. Unlike Newsweek, where generative AI seems to have been embraced, the journalists at Gizmodo weren’t fans.

They reportedly criticized the reliance on AI, calling it a waste of time and arguing that they weren’t hired to edit or review AI-generated copy. Their union took it a step further, describing the articles as “unethical and unacceptable.”

CNET’s experience wasn’t much better. Over the span of a few months, they published more than 70 stories written by AI—half of which needed corrections. The issues weren’t just factual inaccuracies; some content was possibly plagiarized as well.

And the list goes on… Men’s Journal’s attempt to use AI to write about health flopped. Its first article reportedly had 18 errors.

While Newsweek seems to be on the right track with AI, fellow print magazine Sports Illustrated really fumbled it. Futurism reported back in 2023 that some of SI’s authors were completely fake, even using AI-generated headshots. When asked about it, SI allegedly deleted everything.


The Cautious

While many news organizations are eager to embrace AI, some have taken a more cautious approach. The Globe and Mail, for instance, has instructed its team that AI can’t be used to condense, summarize, or produce any writing that will be published. The paper also keeps unedited drafts and unpublished stories away from AI tools entirely.

Moreover, The Globe has ruled out using AI for editing, citing concerns about potential errors. As they put it: “Ninety per cent accuracy might work for winning an argument over coffee, but would be a reputation killer for any journalist.”

The Verdict

If you’ve been following us for a while, you’ve probably noticed that we’ve been experimenting with AI in almost every aspect of what we do. During this beta/prototype phase, we’ve had the chance to test its capabilities, understand its limitations, and gauge the response to it.

AI has also been crucial in helping us demonstrate what Pulse could be, enabling us to create content that we might not have had the resources for otherwise. But, like any tool, AI isn’t perfect and can make mistakes.

Our verdict: AI opens up exciting possibilities, but it comes with challenges. While concerns around accuracy and trust in AI are valid (as we’ve seen above), we’re committed to holding our AI-generated content to the same high standards as any other work we produce.

As we move forward, we want to hear your thoughts on AI. That’s why we’ve created a survey to learn what you’re comfortable with. We look forward to hearing your feedback as we continue to shape Pulse.

Readers Wanted.

We’re just getting started and need your feedback.

Check out our newsletter and website, then fill out our survey to help us improve!