OpenAI forms AI safety committee after key departures

OpenAI CEO Sam Altman speaks during the Microsoft Build conference in Seattle on May 21, 2024. Photo: Jason Redmond / AFP/File

OpenAI, the company behind ChatGPT, announced the formation of a new safety committee on Tuesday, weeks after the departures of key executives raised questions about the firm's commitment to mitigating the dangers of artificial intelligence.

The company said the committee, which will include CEO Sam Altman, is being established as OpenAI begins training its next AI model, expected to surpass the capabilities of the GPT-4 system powering ChatGPT.

"While we are proud to build and release industry-leading models on both capabilities and safety, we welcome a robust debate at this important juncture," OpenAI stated.

Composed of board members and executives, the committee will spend the next 90 days comprehensively evaluating and bolstering OpenAI's processes and safeguards around advanced AI development.

OpenAI stated it will also consult outside experts during this review period, including former US cybersecurity officials Rob Joyce, who previously led efforts at the National Security Agency, and John Carlin, a former senior Justice Department official.

Over the three-month span, the group will scrutinize OpenAI's current AI safety protocols and develop recommendations for potential enhancements or additions.

After this 90-day review, the committee's findings will be presented to the full OpenAI board before being publicly released.

The committee's formation comes on the heels of recent executive departures that stoked concerns about OpenAI's AI safety priorities.

Earlier this month, the company dissolved its "superalignment" team dedicated to mitigating long-term AI risks.

In announcing his exit, team co-lead Jan Leike criticized OpenAI for prioritizing "shiny new products" over vital safety work in a series of posts on X, the platform formerly known as Twitter.

"Over the past few months, my team has been sailing against the wind," Leike said.

OpenAI has also faced controversy over an AI voice some claimed closely mimicked actress Scarlett Johansson, though the company denied attempting to impersonate the Hollywood star.

Source: AFP
