Following an employee revolt, OpenAI introduces a new safety board.

OpenAI announced on Tuesday the establishment of a new committee tasked with advising the company’s board on safety and security matters, following the dissolution of a team dedicated to AI safety.

According to a blog post, the new committee will be chaired by CEO Sam Altman and will also include board chair Bret Taylor and board member Nicole Seligman.

This announcement comes in the wake of the departure of Jan Leike, a key OpenAI executive focused on safety, who criticized the company for insufficient investment in AI safety efforts and cited escalating tensions with leadership.

Additionally, Ilya Sutskever, who co-led OpenAI’s “superalignment” team with Leike, has also left the company. Sutskever was instrumental in Sam Altman’s removal as CEO last year but later supported Altman’s return.

Earlier this month, OpenAI revealed plans to disband the superalignment team and redistribute its members throughout the organization, saying the change would better integrate safety work across the company.

In addition to these changes, OpenAI disclosed plans to train a new AI model to succeed the one currently powering ChatGPT, a step it described as moving closer to artificial general intelligence.

The company emphasized its commitment to building and releasing models that are both advanced and safe, inviting constructive debate on the matter.

The newly formed Safety and Security Committee will initially focus on evaluating and enhancing OpenAI’s processes and safeguards over the next 90 days. Its recommendations will then be presented to the full board for review, and OpenAI has committed to sharing updates on adopted recommendations in a manner consistent with safety and security principles.
