Team Eela
The copyright lawsuit has taken a new turn as OpenAI has claimed that The New York Times ‘hacked’ ChatGPT to generate misleading evidence for the case. However, OpenAI has not accused The Times of breaking anti-hacking laws.
OpenAI filed a statement in a Manhattan federal court on Monday, contending that the Times induced the technology to replicate its content using “deceptive prompts” that openly breach OpenAI’s terms of use.
“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards,” OpenAI said. “The truth, which will come out in this case, is that the Times paid someone to hack OpenAI’s products.”
OpenAI has stated that The New York Times hired someone to tamper with the AI company’s systems. However, OpenAI has not named the ‘hired gun’ in the filing.
In response, the Times’ attorney argued that the prompting was intended to uncover evidence of copyright infringement and was consistent with journalistic standards. OpenAI did not comment further on the matter when approached.
The legal dispute began in December, when the Times sued OpenAI and its largest backer, Microsoft, for allegedly using millions of its articles to train chatbots without permission.
Similar conflicts between copyright holders and tech firms have emerged over AI training practices, raising the question of whether such training qualifies as fair use under copyright law. Courts have not yet ruled definitively on the issue, though some infringement claims have been dismissed for lack of evidence.
The Times’ complaint cites instances where OpenAI and Microsoft chatbots reproduced content closely resembling Times articles, which the newspaper says lets users bypass its journalism and free-ride on its investment in reporting.
OpenAI said in its filing that it took the Times “tens of thousands of attempts to generate the highly anomalous results.” “In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” the company added.
“The Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organization can prevent the Times itself from re-reporting stories it had no role in investigating,” OpenAI said.