OpenAI launches ‘Preparedness team’ for AI safety, gives board final say

The artificial intelligence (AI) developer OpenAI has announced it will implement its “Preparedness Framework,” which includes creating a special team to evaluate and predict risks. 

On Dec. 18, the company released a blog post saying that its new “Preparedness team” will be the bridge that connects safety and policy teams working across OpenAI.

The company said these teams will provide a checks-and-balances-style system to help protect against "catastrophic risks" that could be posed by increasingly powerful models. OpenAI said it will only deploy its technology if it is deemed safe.

Under the new framework, the advisory team will review safety reports, which will then be sent to company executives and the OpenAI board.

While the executives are technically in charge of making the final decisions, the new plan grants the board the power to reverse safety decisions.

This comes after OpenAI experienced a whirlwind of changes in November, with the abrupt firing and subsequent reinstatement of Sam Altman as CEO. After Altman rejoined the company, it released a statement naming its new board, which includes Bret Taylor as chair, along with Larry Summers and Adam D'Angelo.

Related: Is OpenAI about to drop a new ChatGPT upgrade? Sam Altman says ‘nah’

OpenAI launched ChatGPT to the public in November 2022. Since then, interest in AI has surged, along with concerns over the dangers it may pose to society.

In July, leading AI developers, including OpenAI, Microsoft, Google and Anthropic, established the Frontier Model Forum, a body intended to oversee the industry's self-regulation and the responsible development of AI.

In October, United States President Joe Biden issued an executive order laying out new AI safety standards for companies developing and deploying high-level models.

Before Biden’s executive order, prominent AI developers, including OpenAI, were invited to the White House to commit to developing safe and transparent AI models.

Magazine: Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye
