OpenAI has alleged that The New York Times misrepresented ChatGPT's behavior by suggesting it copied articles without appropriate attribution. OpenAI, which has emphasized its commitment to ethical AI development, has framed the dispute as an "opportunity to clarify its intentions and operations behind developing its technology."
"While we disagree with the claims in The New York Times lawsuit, we view it as an opportunity to clarify our business, our intent, and how we build our technology. Our position can be summed up in these four points, which we flesh out below."
In its blog post, OpenAI stated that it had been engaged in constructive discussions with The Times about a potential partnership in which ChatGPT would display real-time content from The New York Times, duly credited. Such a partnership would give The Times a broader audience, while OpenAI users would gain access to The Times' reporting. OpenAI also conveyed to The New York Times that, compared with the varied information used to train its AI, The Times' content did not play a significant role.
On December 27, OpenAI learned of the lawsuit The Times had filed against it through an article in the newspaper itself, and the company said it was both surprised and disappointed by the unexpected legal action.
During their discussions, The Times raised concerns that ChatGPT might repeat its content. Despite OpenAI's commitment to promptly investigate and fix any issues, The Times did not share specific examples. When OpenAI discovered in July that ChatGPT could inadvertently reproduce real-time content, it promptly disabled the feature while implementing the necessary fixes.
OpenAI found it noteworthy that the duplicated content observed by The Times appeared to come from years-old articles that had circulated on various external websites. OpenAI claims that The New York Times deliberately crafted prompts to induce ChatGPT to produce content resembling verbatim excerpts from the newspaper's articles. Even with such instructions, OpenAI asserts, its AI typically does not behave in the way The Times describes. This raises the possibility that The Times selectively chose its examples or steered the AI through multiple attempts until it produced the desired pattern.
Emphasizing that this form of manipulation contradicts the intended use of their AI, OpenAI underscores that their technology doesn’t replace the journalistic work of The Times. Nevertheless, OpenAI remains committed to continually improving its systems to prevent such issues, citing substantial progress in its recent models.
“We regard The New York Times’ lawsuit to be without merit. Still, we are hopeful for a constructive partnership with The New York Times and respect its long history, which includes reporting the first working neural network over 60 years ago and championing First Amendment freedoms.
We look forward to continued collaboration with news organizations, helping elevate their ability to produce quality journalism by realizing the transformative potential of AI,” the blog post concluded.