OpenAI responds to The New York Times' allegation that ChatGPT copied its articles

OpenAI has addressed The New York Times' allegations that its model, ChatGPT, copied Times articles. OpenAI denies these claims and uses the controversy to clarify its operations and intentions.

by Vikash Kumawat

OpenAI, the company behind the AI tool ChatGPT, has responded to The New York Times' allegations that ChatGPT copied the newspaper's articles. OpenAI denies these claims and sees the controversy as an opportunity to clarify how its technology works and what it is intended to do.

OpenAI said in a blog post that it had been in constructive discussions with The New York Times about working together, including a partnership in which ChatGPT would display real-time content from The New York Times with appropriate attribution. This would help The Times connect with more readers and give OpenAI users access to The Times' reporting. OpenAI also explained to the Times that, compared with all the other information used to train its models, the Times' content did not play a significant role.

Then, unexpectedly, on December 27, OpenAI learned of The New York Times' lawsuit against it through an article in the Times itself. OpenAI said it was surprised and disappointed by this sudden action.

During those discussions, the Times mentioned that it had seen some of its content being duplicated by ChatGPT, but despite OpenAI's commitment to investigating and fixing any such problems, the Times did not share any specific examples. In July, when OpenAI discovered that ChatGPT could inadvertently reproduce real-time content, it immediately took the feature offline to fix it.

OpenAI found it notable that the repeated content cited by The Times appeared to come from much older articles that have spread across various other websites. The company suspects that the Times manipulated the prompts it gave to ChatGPT to get the model to replicate Times content, including lengthy excerpts from its articles. However, OpenAI claims that even with such prompts, its models generally do not behave the way the Times suggests, which implies that the Times either instructed the model to repeat the text or cherry-picked its examples from many attempts.
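To make the dispute concrete: testing for "regurgitation" of this kind generally means prompting a model with the opening of an article and checking how much of the continuation matches the original verbatim. The Python sketch below is purely illustrative; the function names, the dynamic-programming overlap check, and the 50-character threshold are all assumptions made for this example, not anything from OpenAI's or the Times' actual tooling.

```python
# Illustrative only: one simple way to quantify verbatim overlap between a
# model's output and a source article. Names and thresholds are hypothetical.

def longest_common_substring(generated: str, source: str) -> str:
    """Return the longest run of characters shared verbatim by both texts
    (classic dynamic-programming approach, O(len(a) * len(b)))."""
    best_end, best_len = 0, 0
    prev = [0] * (len(source) + 1)
    for i, ch_a in enumerate(generated, start=1):
        curr = [0] * (len(source) + 1)
        for j, ch_b in enumerate(source, start=1):
            if ch_a == ch_b:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len = curr[j]
                    best_end = i
        prev = curr
    return generated[best_end - best_len:best_end]

def looks_like_regurgitation(generated: str, source: str,
                             min_chars: int = 50) -> bool:
    """Flag outputs that share an unusually long verbatim span with the
    source. The 50-character cutoff is an arbitrary illustrative choice."""
    return len(longest_common_substring(generated, source)) >= min_chars
```

A tester would feed the model the start of an article and pass the generated continuation, together with the article body, to looks_like_regurgitation. OpenAI's argument is that producing a hit this way typically requires explicit instructions to repeat the text, or many attempts from which the successes are selected.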

OpenAI emphasizes that deliberately manipulating its AI in this way is not an intended use, and that ChatGPT is not a substitute for the work of The New York Times. Nevertheless, OpenAI says it is continually improving its systems to prevent such issues and has already made significant progress in its recent models.
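OpenAI does not say publicly what those improvements are. One generic, widely used guard against long verbatim reproduction is n-gram screening of model output against a protected corpus. The sketch below illustrates only that textbook idea, under the assumption of a small in-memory corpus, and should not be read as a description of OpenAI's actual safeguards.

```python
# Illustrative sketch of an n-gram output filter, a generic way to catch
# long verbatim reproductions before text reaches a user. This is a
# textbook technique, not a description of OpenAI's real systems.

def ngrams(text: str, n: int = 8):
    """Yield word-level n-grams (8 words is an arbitrary illustrative size)."""
    words = text.split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def build_index(protected_texts: list[str], n: int = 8) -> set[str]:
    """Precompute every n-gram appearing in the protected corpus."""
    return {gram for text in protected_texts for gram in ngrams(text, n)}

def contains_protected_span(output: str, index: set[str], n: int = 8) -> bool:
    """True if the model output repeats any n consecutive words verbatim
    from the protected corpus."""
    return any(gram in index for gram in ngrams(output, n))
```

At real corpus scale, such an index would be built with hashing or Bloom filters rather than a plain set; the set-based version here just keeps the idea readable.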
