Former Employees of Top AI Firms Sign an Open Letter Warning about the Risks of AI
- 13 people, including former employees of top AI firms such as OpenAI and DeepMind, have signed an open letter warning about the risks of AI.
- They also called out the companies for not being transparent enough and said that they should cultivate a culture that encourages employees to voice their concerns about AI without fearing repercussions.
- OpenAI responded to this letter by saying that it has already taken steps to mitigate the risks of AI and also has an anonymous hotline in place for workers to share their concerns.
Former employees of top AI firms such as OpenAI, Google's DeepMind, and Anthropic have signed an open letter, sending out a stark warning about the risks of AI and how it could even lead to human extinction.
The letter has been signed by 13 such employees. Neel Nanda of DeepMind is the only one among them who is still employed at one of the AI firms they wrote against. To clarify his stance on the issue, Neel also put up a post on X where he said that he only wants companies to guarantee that if there's a concern with a certain AI project, employees will be able to warn against it without repercussions, i.e., whistleblowing freedom.
He further added that there's no immediate threat that he wants to warn about, and that the letter is just a precautionary step for the future. Now, as much as I'd like to believe Neel on this, the content of the letter paints a different picture.
What Does the 'Warning Letter' Say?
The letter acknowledges the benefits AI advancement can bestow upon society, but it also recognizes the numerous downsides that come along with it.
The following risks have been highlighted:
- Spread of misinformation
- Manipulating the masses
- Increasing inequality in society
- Loss of control over AI, potentially leading to human extinction
In short, everything we see in apocalyptic sci-fi movies (such as Arrival and Blade Runner 2049) could come to life.
The letter also argued that AI firms are not doing enough to mitigate these risks. Apparently, they have enough "financial incentive" to focus on innovation and ignore the risks for now. Worrisome, to say the least.
It also added that AI companies need to foster a more transparent work environment, where employees are encouraged to voice their concerns instead of being punished for doing so. This is in reference to the recent controversy at OpenAI, where employees were forced to choose between losing their vested equity or signing a non-disparagement agreement that would be binding on them forever.
The company later retracted this move, saying that it goes against its culture and what the company stands for, but the damage had already been done.
Among all the companies mentioned in the letter, OpenAI certainly stands out, owing to the string of scandals it has landed in lately. For example, in May of this year, OpenAI disbanded the team responsible for researching the long-term risks of AI less than a year after it was formed.
However, it's well worth noting that the company did form a new Safety & Security Committee last week. It will be headed by CEO Sam Altman.
Several high-level executives have also left OpenAI recently, including co-founder Ilya Sutskever. While some departed gracefully with sealed lips, others, such as Jan Leike, revealed that OpenAI has strayed from its original objectives and is no longer prioritizing safety.
OpenAI's Response to This Letter
Addressing the above-mentioned letter, an OpenAI spokesperson said that the company understands the concerns surrounding AI and firmly believes that a healthy debate over this matter is crucial.
Furthermore, they said that OpenAI will continue to work with the government, industry experts, and communities around the world to develop AI safely and sustainably.
"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk." - OpenAI
The spokesperson also pointed out that OpenAI has consistently supported the new regulations imposed to control the AI industry.
Quite recently, OpenAI disrupted five covert operations backed by actors in China, Iran, Israel, and Russia that were misusing its models to generate content and debug code for websites and bots used to spread malicious propaganda.
Speaking of giving employees the freedom to voice their concerns, OpenAI highlighted that it already has an anonymous hotline in place for its workers for this exact reason, i.e., anyone can report concerns about the company's dealings without revealing their identity.
While this response from OpenAI might sound reassuring to some, Daniel Ziegler, a former OpenAI employee who organized the letter, said that it's still important to remain skeptical. Despite what the company says about taking safety-oriented steps, we never completely know what's actually going on. This is perhaps the most frightening part: we may never learn of a big misstep in AI development until it's too late.
For example, although all of the above-mentioned companies have policies against using AI to create election-related misinformation, there's evidence that OpenAI's image-generation tools have been used to create misleading content.