AI News
- Zara
- Mar 12, 2024
- 5 min read
Updated: Apr 10, 2024
OpenAI announces new board lineup and governance structure

OpenAI has announced a refreshed board of directors and new governance structure following recent turmoil that saw CEO Sam Altman ousted, briefly recruited by Microsoft, and then quickly reinstated at the AI research company.
In a statement, OpenAI said Altman will rejoin the board alongside three new independent directors: Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former executive vice president and general counsel at Sony Corporation; and Fidji Simo, CEO and chair of Instacart.
Bret Taylor, Chair of the OpenAI board, said: “I am excited to welcome Sue, Nicole, and Fidji to the OpenAI Board of Directors. Their experience and leadership will enable the Board to oversee OpenAI’s growth, and to ensure that we pursue OpenAI’s mission of ensuring artificial general intelligence benefits all of humanity.”
The previous board members who resigned amid the recent chaos were Helen Toner of Georgetown’s Center for Security and Emerging Technology, OpenAI chief scientist Ilya Sutskever, and entrepreneur Tasha McCauley.
Altman and Greg Brockman will continue to lead OpenAI as CEO and president respectively, working with the new board chaired by former Salesforce CEO Bret Taylor.
“We have unanimously concluded that Sam and Greg are the right leaders for OpenAI,” Taylor said.
Existing directors Adam D’Angelo of Quora and former US Treasury Secretary Larry Summers remain on the board.
An independent review by law firm WilmerHale found that while Altman’s termination was within the prior board’s discretion, his conduct did not necessitate removal.
The board shakeup follows a period of upheaval at OpenAI. Altman’s brief firing last November prompted an employee petition and public backlash over governance concerns.
In response, the board has announced the adoption of important improvements to OpenAI’s governance structure, including adopting a new set of corporate governance guidelines, strengthening the company’s Conflict of Interest Policy, creating a whistleblower hotline for anonymous reporting by employees and contractors, and forming additional committees like a Mission & Strategy group focused on implementing OpenAI’s core mission.
With a reset board, strengthened policies, and stated commitment to transparency, OpenAI aims to move forward from the saga under a new system of oversight and accountability.
“We recognise the magnitude of our role in stewarding transformative technologies for the global good,” Taylor concluded.
Google engineer charged with stealing AI tech for Chinese firms

A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms.
Linwei Ding, a 38-year-old Chinese national, was arrested on Wednesday in Newark, California, and faces four counts of federal trade secret theft, each punishable by up to 10 years in prison.
The indictment alleges that Ding, who was hired by Google in 2019 to develop software for the company’s supercomputing data centres, began transferring sensitive trade secrets and confidential information to his personal Google Cloud account in 2021.
“Ding continued periodic uploads until May 2, 2023, by which time Ding allegedly uploaded more than 500 unique files containing confidential information,” said the US Department of Justice in a statement.
Prosecutors claim that after stealing the trade secrets, Ding was offered a chief technology officer position at a startup AI company in China and participated in investor meetings for that firm. Additionally, Ding is alleged to have founded and served as CEO of a China-based startup focused on training AI models using supercomputing chips.
“Today’s charges are the latest illustration of the lengths affiliates of companies based in the People’s Republic of China are willing to go to steal American innovation,” said FBI Director Christopher Wray.
“The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences.”
If convicted on all counts, Ding faces a maximum penalty of 40 years in prison and a fine of up to $1 million.
The case underscores the ongoing tensions between the US and China over intellectual property theft and the race to dominate emerging technologies like AI.
Ex-OpenAI researchers build AI for robots that can help them understand the world, talk like ChatGPT

The use of artificial intelligence (AI) and robots has long been explored in popular fiction, and films and TV shows have experimented widely with the idea of robots handling basic human tasks. In 2024, with tools like ChatGPT, Gemini and Bing AI, this fictional concept seems to be getting close to reality. Now, reports have surfaced that former OpenAI researchers have teamed up to create software that helps robots become more aware of their physical world and develop a deeper understanding of language.
According to a report in The New York Times, Covariant, a robotics startup founded by former OpenAI researchers, is applying the technology development methods used in chatbots to build AI that helps robots navigate and interact with the physical world. Instead of building robots, Covariant focuses on creating software that powers robots, starting with those used in warehouses and distribution centres.
The AI technology developed by Covariant allows robots to pick up, move, and sort items in warehouses by giving them a broad understanding of the physical world. The NYT report adds that the tech also gives robots a broad understanding of the English language, letting people chat with them as if they were chatting with ChatGPT. In other words, the startup seems to be developing ChatGPT, but for robots. The viral AI tool was launched by OpenAI in 2022 and gained a lot of popularity for its human-like responses.
Similar to ChatGPT and other AI tools, Covariant's AI technology learns from analysing large amounts of digital data. The company says that it has gathered data from cameras and sensors in warehouses for years, allowing robots to understand their surroundings and handle unexpected situations.
The report also mentions that the company's technology is called R.F.M. (robotics foundation model). It combines data from images, sensory input, and text, providing robots with a more comprehensive understanding of their environment. For instance, the system can generate videos predicting the outcome of a robot's actions. However, the technology is not perfect yet and can make mistakes.
Covariant aims to deploy its technology with warehouse robots initially, and it has received substantial funding for its development. The company's approach involves teaching robots through extensive data analysis, allowing them to adapt to various situations.
As researchers continue to train these systems with larger and more diverse datasets, they anticipate rapid improvements in the technology, making robots more capable of handling unexpected scenarios in the physical world.
Meanwhile, AI is widely recognised as a double-edged sword, and experts have repeatedly warned about the harm it can cause if used in the wrong way.
The NYT report also quotes Gary Marcus, an AI expert, sounding alarm over the technology going wrong. He said that the technology shows promise in environments like warehouses, where mistakes are tolerable. However, deploying it in more hazardous settings, such as manufacturing plants, could pose greater challenges and risks. In situations involving a 150-pound robot that could cause harm, the costs associated with mistakes become a significant concern, he said.