Did The New York Times ‘Hack’ ChatGPT? Copyright Lawsuit Takes a New Turn


The copyright lawsuit has taken a new turn as OpenAI has claimed that The New York Times ‘hacked’ ChatGPT to generate misleading evidence for the case. However, OpenAI has not accused The Times of breaking anti-hacking laws.

OpenAI filed a statement in a Manhattan federal court on Monday, contending that the Times induced the technology to replicate its content using “deceptive prompts” that openly breach OpenAI’s terms of use.

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards,” OpenAI said. “The truth, which will come out in this case, is that the Times paid someone to hack OpenAI’s products.”

OpenAI Claims The New York Times ‘Hacked’ ChatGPT

OpenAI has stated that The New York Times hired someone to tamper with the AI company’s systems. However, OpenAI has not named the ‘hired gun’ in the filing.

In response, the Times’ attorney argued that the actions were aimed at uncovering copyright infringement and were consistent with journalistic standards. OpenAI has yet to comment further on the matter when approached.

The legal dispute began in December when the Times sued OpenAI and its leading supporter, Microsoft, for allegedly using millions of its articles to train chatbots without permission.

Similar conflicts between copyright holders and tech firms have emerged over AI training practices, raising questions about fair use under copyright law. Courts haven’t definitively ruled on this issue, though some claims have been dismissed due to lack of evidence.

The Times’ complaint highlights instances where OpenAI and Microsoft chatbots reproduced content resembling Times articles, which the newspaper says amounts to free-riding on its investment in journalism.

OpenAI said in its filing that it took the Times “tens of thousands of attempts to generate the highly anomalous results.” “In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” the company added.

“The Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organization can prevent the Times itself from re-reporting stories it had no role in investigating,” OpenAI said.

WRITTEN BY

Team Eela
