AI in Journalism: Transforming News Reporting and Storytelling (For Better or For Worse)

Guidance from newsrooms, the law, plus tips for editing AI-generated content.

Sara Rosenthal //October 12, 2023//

The proliferation of generative artificial intelligence (AI) across industries — communications, law, medicine, coding, design — can be perceived as extremely exciting, deeply unsettling, or both at once. In a letter published earlier this year, Bill Gates said the technology is “as fundamental as the creation of the microprocessor, the personal computer, the Internet and the mobile phone.” This is especially true for AI in journalism.

Many companies are at the very least curious about how to integrate generative AI into their business practices, with some already using it. According to a recent survey of global executives by VentureBeat, more than half (55%) of organizations are experimenting with generative AI and 18% have actually implemented it into their operations. Notably, the largest use case was for tasks related to natural language processing, such as chat and messaging, followed by content creation.

Tools like ChatGPT and DALL-E hold the potential to transform work as we know it, but we’ve only just begun to scratch the surface of their use cases, parameters and risks. Fortunately, newly released editorial standards and emerging case law have started to build a framework for how to responsibly use AI in journalism.

For readers interested in using tools like ChatGPT, this article will explore guidance issued by newsrooms and the law on how to approach generative AI in journalism, as well as tips for editing AI-generated content.

Newsroom guidelines for generative AI

As it currently stands, generative AI cannot be trusted to produce objective, factually correct reports on its own.

These programs can have “hallucinations,” in which the AI confidently responds to a prompt with an answer that is not supported by its training data. They also reproduce the historical biases and societal perspectives embedded in that data. However, this hasn’t stopped newsrooms from dabbling in generative AI to help with things like research, analysis, brainstorming and proofreading.

Some news outlets have started issuing AI-focused guidelines to their staff to ensure the technology is used consistently and responsibly. The general consensus seems to be that AI can assist with routine tasks, but stories must still be written by humans. Below are a few examples of editorial standards from leading publications about AI-generated content.

  • The Associated Press says generative AI cannot be used to create publishable content and images but is encouraging staffers to become familiar with the technology.
  • The Guardian states that generative AI requires human oversight, and should only be used to contribute “to the creation and distribution of original journalism.”
  • For the Financial Times, their journalism “will continue to be reported and written by humans,” but they will give their team the space to experiment responsibly with the technology for tasks like data mining, analyzing text and images, and translation.
  • WIRED does not publish stories with text generated by or edited by AI, “except when the fact that it’s AI-generated is the whole point of the story.” They clarify that they may try using the technology to generate suggestions for headlines, text for short social media posts or story ideas. 
  • Insider tells its staff, “ChatGPT is not a journalist … your stories must be completely written by you.” On the other hand, their team is encouraged to experiment with ChatGPT for things like story outlines, proofreading and summarizing old stories. 

Legal implications of generative AI

While it may seem like AI conjures up text and images all on its own, these platforms are trained using data lakes and archives of images and text to recognize patterns, set rules and then draw conclusions. This process raises a number of legal questions around infringement, rights of use, intellectual property and ownership of AI-generated content.
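
To make the pattern-learning idea concrete, here is a toy sketch in Python. It is a drastic simplification (a simple Markov chain, nothing like the internals of a real large language model), but it shows the basic shape of the process the paragraph describes: ingest a corpus, record the patterns in it, then generate new text from those patterns. The corpus string is invented for illustration.

    # Toy illustration only: a tiny Markov-chain text generator showing the
    # core idea of learning patterns from text, then producing new text.
    # Real generative AI systems are vastly more complex.
    import random
    from collections import defaultdict

    corpus = "the reporter wrote the story and the editor checked the story".split()

    # "Training": record which words tend to follow each word in the corpus.
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    # "Generation": start from a word and repeatedly sample a likely successor.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)

    print(" ".join(output))

Because everything the model produces is recombined from its training material, the legal questions below follow directly: who owns the inputs, and who owns the outputs?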

The jury is still out on most fair use questions. In Andersen v. Stability AI et al. and Getty Images v. Stability AI, both filed earlier this year, the plaintiffs allege that generative AI platforms used their images to train their models without permission or compensation. The outcomes of these cases will likely have a significant impact on the generative AI landscape, one way or another.

On the question of authorship, the U.S. Copyright Office currently does not issue copyrights for AI-generated work. Colorado artist Jason M. Allen found himself at the center of this national conversation after it was discovered that his winning piece at the Colorado State Fair’s fine arts competition was created using AI. Since taking home the blue ribbon in 2022, Allen’s copyright request has been denied three times with the federal office finding he is not the “author” of the image. 

It’s important to stay vigilant when using this technology. If generative AI outputs are found to infringe on copyrights, both the AI user and AI company could be held liable under current doctrines, according to a recent report from the Congressional Research Service. 

Tips for editing AI-generated work 

As demonstrated, generative AI in journalism may take the hassle out of creating a rough draft, but it doesn’t produce a finished product. Interestingly, ChatGPT seems to agree. When I asked the bot, “Does AI writing need an editor?” it responded, “While AI can automate parts of the writing process, it often benefits from human oversight to produce high-quality, engaging and trustworthy content.”
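
For the curious, here is a minimal sketch of how that same question could be posed to the model programmatically. It assumes the OpenAI Python SDK (v1 or later) is installed and an OPENAI_API_KEY is set in the environment; the model name is an illustrative assumption, not something specified in this article.

    # Minimal sketch: asking a chat model whether AI writing needs an editor.
    # Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not from the article
        messages=[{"role": "user", "content": "Does AI writing need an editor?"}],
    )

    print(response.choices[0].message.content)  # the model's reply as plain text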

The main goal of editing AI-generated content is the same as editing human-produced content — to improve readability and accuracy. Here are some things to look out for when editing an AI’s work:

  • Avoid plagiarism issues by checking for originality and ensuring your piece doesn’t too closely resemble others (a rough automated check is sketched after this list).
  • Make adjustments to match the author’s voice or brand’s personality.
  • Always fact-check or risk ending up like the lawyer in Mata v. Avianca, Inc., who submitted AI-generated documents to the court that contained imaginary case citations.
  • Optimize your content for search engines with relevant keywords and proper formatting.
  • Reframe the piece to ensure it resonates with your target audience.
  • Refine awkward sentence structures.
  • Review for legal and ethical considerations.
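
To make the originality check above concrete, the sketch below compares a draft against a known source text using only Python’s standard library. The sample texts and the 0.8 threshold are arbitrary assumptions for illustration; real plagiarism screening relies on far more sophisticated tools.

    # Illustrative sketch: flag a draft that too closely resembles a source text.
    # Standard library only; the 0.8 threshold is an arbitrary assumption.
    from difflib import SequenceMatcher

    def similarity(draft: str, source: str) -> float:
        """Return a similarity ratio between 0.0 (unrelated) and 1.0 (identical)."""
        return SequenceMatcher(None, draft.lower(), source.lower()).ratio()

    draft = "AI can assist with routine tasks, but stories must be written by humans."
    source = "AI can assist with routine tasks, but stories must still be written by humans."

    score = similarity(draft, source)
    if score > 0.8:
        print(f"Warning: draft is {score:.0%} similar to the source. Revise before publishing.")

A check like this only catches near-verbatim overlap; editors should still review for paraphrased or structural similarity by hand.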

If you do use AI to generate content, understand that your conversations with generative AI platforms may be retained by the AI company, used to train future models and even surface in answers to other users. Therefore, it’s best to avoid sharing confidential information with these platforms.

Generative AI holds exciting possibilities for improving productivity, sparking creativity and potentially revolutionizing how we work. As the technology currently stands, though, there are still issues surrounding accuracy, rights of use, authorship and journalistic integrity. Inevitably, businesses are going to integrate generative AI in journalism, and those that learn how to responsibly harness its vast capabilities will come out ahead.

 

Sara Rosenthal is a freelance writer and communications consultant based in Denver. Learn more at saramrosenthal.com.