American Artificial Intelligence (AI) research company OpenAI confirmed on Tuesday it has formed a new safety and security committee, which will be in charge of critical safety and security decisions.
The news comes as OpenAI has begun training the next model behind its famed chatbot ChatGPT. The company said, “We anticipate the resulting systems bring us to the next level of capabilities.”
OpenAI’s statement explained it will “retain and consult with other safety and technical experts to support this work.” The new safety committee will be led by existing senior members of OpenAI, including Bret Taylor, Adam D’Angelo, Nicole Seligman, and Sam Altman.
Upcoming tasks for the committee include developing OpenAI’s safeguards and safety processes. The committee’s formation coincides with countries and regions implementing stricter policies around the use of AI, in a bid to keep up with fast-paced technological advances and emerging security threats.
Earlier this week, the UK’s Department for Science, Innovation and Technology wrote that AI is a “vital technology for the UK economy.” Viscount Camrose, Minister for AI and Intellectual Property, said the UK’s AI market is expected to reach over one trillion dollars by 2035.
However, he added that as usage grows, “we must ensure that end-users are protected from cyber security risks.”
The latest version of OpenAI’s ChatGPT model has an advanced understanding of images, video, sound, and text. The technology is constantly evolving, with another version already pending release.
At least 16 countries have banned OpenAI’s ChatGPT, including Egypt, Syria, and Italy.
Some countries have said the decision to ban the chatbot service was over privacy concerns. Other countries, like China and Russia, claim the U.S., the creator of ChatGPT, uses the software to spread misinformation.