OpenAI invests in AI safety research to prevent potential threats to humanity

OpenAI takes steps to safeguard humanity from the risks of AI

OpenAI, the creator of ChatGPT, has announced its plans to invest substantial resources and establish a new research team focused on ensuring the safety of artificial intelligence (AI) for humans. The organization aims to develop AI systems that can supervise themselves, addressing concerns about the potential risks of superintelligent AI.

In a recent blog post, OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike emphasized the immense power of superintelligence and the risks it could pose to humanity.

“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,” the pair wrote.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue.”

Experts debate OpenAI’s approach

According to the authors, superintelligent AI systems that surpass human intelligence may become a reality within this decade. Ensuring that such AI is developed safely will require breakthroughs in “alignment research,” the field concerned with keeping AI systems beneficial to humans and under human control.

Microsoft-backed OpenAI has committed 20 per cent of its compute power over the next four years to this problem. It is also forming a dedicated ‘Superalignment’ team to spearhead the effort.

The primary objective of the Superalignment team is to build a roughly “human-level” automated AI alignment researcher and then scale it up using vast amounts of compute. To get there, OpenAI plans to train AI systems using human feedback, train AI systems to assist in human evaluation, and ultimately have AI systems conduct alignment research themselves.
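To make the first step of that plan more concrete, the sketch below illustrates preference-based reward modeling, the general technique behind training AI systems with human feedback. It is a hypothetical toy example, not OpenAI’s code: the feature vectors, labels, and hyperparameters are invented, and a simple linear reward model stands in for a real language model. The model is fit so that responses human labelers preferred score higher than those they rejected.

```python
# Toy illustration only: learning a reward model from human preference
# comparisons. All data and parameter choices here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Each "response" is a small feature vector; a human labeler has compared
# pairs of responses and marked which one they preferred.
n_pairs, n_features = 200, 5
preferred = rng.normal(0.5, 1.0, size=(n_pairs, n_features))   # chosen responses
rejected = rng.normal(-0.5, 1.0, size=(n_pairs, n_features))   # rejected responses

w = np.zeros(n_features)  # linear reward model: reward(x) = w . x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry style objective: maximize the probability that the
# preferred response receives the higher reward.
lr = 0.1
for step in range(500):
    margin = preferred @ w - rejected @ w            # reward gap per pair
    p = sigmoid(margin)                              # P(labeler prefers "preferred")
    grad = ((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w += lr * grad                                   # gradient ascent on log-likelihood

accuracy = (preferred @ w > rejected @ w).mean()
print(f"reward model agrees with human labels on {accuracy:.0%} of pairs")
```

In practice the reward model is a large neural network scored on model-generated text rather than a linear function on synthetic vectors, and the learned reward is then used to fine-tune the AI system; the second and third steps of the plan, AI-assisted evaluation and AI-conducted alignment research, build on top of this kind of human-feedback loop.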

However, not everyone is convinced by this approach. AI safety advocate Connor Leahy expressed concern that creating human-level AI without solving alignment issues could lead to unintended consequences and potential havoc.

“You have to solve alignment before you build human-level intelligence; otherwise, by default, you won’t control it,” he said in an interview. “I personally do not think this is a particularly good or safe plan.”

The potential dangers of AI have been widely recognized within the AI research community and among the general public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing risks to society.

WRITTEN BY

Team Eela
