Tech Giants Promise to Ensure Trustworthy AI in a Bold Step Forward
In a significant development reflecting the growing concern about the ethical use of AI, eight tech giants, including Adobe, NVIDIA, IBM, and Palantir, have voluntarily committed to upholding safety and trust in their generative AI tools.
The White House announced this groundbreaking initiative, showcasing the Biden-Harris administration's commitment to curbing AI vulnerabilities and misinformation.
The move comes at a time when addressing the potential risks of AI-generated content feels more urgent than ever.
"The President has been clear: harness the benefits of AI, manage the risks, and move fast - very fast. And we are doing just that by partnering with the private sector and pulling every lever we have to get this done."
- Jeff Zients, Chief of Staff at the White House

A statement from the administration says that the tech giants have agreed to 'red team' their AI applications. The companies will also invest substantially in research to enhance the reliability of these systems.
These commitments apply to a wide range of tech firms, each with its own distinct AI offerings.
For instance, Stability AI and Adobe are known for their text-to-image products. IBM, Cohere, NVIDIA, and Salesforce develop customized language models for enterprise applications. Scale AI and Palantir, on the other hand, specialize in developing and integrating AI models for the U.S. government.
Corporates Willing To Get Their AI Tools Audited

Interestingly, the corporate giants expressed their willingness to have their AI tools audited both internally and externally to ensure their integrity.
The tech companies have also vowed to keep their intellectual property secure and to prevent unauthorized access to their systems.

Independent experts will carry out these audits to evaluate the scope for potential misuse. They will scrutinize the systems for information that could help create biochemical weapons or exploit cybersecurity flaws.
The auditors will also investigate whether these AI tools could be used to control physical systems or to self-replicate, capabilities that raise concerns about unauthorized use.
The companies will also develop mechanisms for users to report bugs or vulnerabilities, so that any issue can be flagged promptly. Transparency is key to this initiative: the companies will publicly disclose the capabilities and limitations of their technologies.
Tech Giants To Research Civil And Societal Risks Of AI

The tech giants have also committed to researching the societal and civil risks associated with AI, with a particular focus on data privacy concerns. Generative AI, moreover, often produces false information.
There is therefore always a chance that these tools could be used to spread misinformation, which makes research into validating AI-generated content all the more important.
The US government has also called on Big Tech to develop watermarking techniques that would help identify AI-generated content.
Recently, Google DeepMind announced the development of a tool called SynthID to distinguish AI-generated images from real ones. The government is further encouraging this type of innovation to ensure transparency in the use of AI-generated content.
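SynthID's internals have not been published, so it cannot be shown here, but the general embed-and-detect workflow behind watermarking can be sketched with a toy example. The Python snippet below (assuming NumPy) hides a short marker in an image's least-significant bits and later checks for it; the MARKER tag and function names are purely illustrative and are not part of SynthID or any real tool.

```python
# Toy illustration of invisible image watermarking via least-significant-bit
# (LSB) embedding. This is NOT how SynthID works (its method is not public);
# it only demonstrates the general idea: embed a hidden marker at generation
# time, then check for it later to flag the image as AI-generated.
import numpy as np

MARKER = np.frombuffer(b"AI-GENERATED", dtype=np.uint8)  # hypothetical tag
MARKER_BITS = np.unpackbits(MARKER)  # 96 bits for a 12-byte marker

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide the marker in the least significant bits of the first pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, original is untouched
    if flat.size < MARKER_BITS.size:
        raise ValueError("image too small to hold the marker")
    # Clear each target pixel's lowest bit, then write one marker bit into it.
    flat[: MARKER_BITS.size] = (flat[: MARKER_BITS.size] & 0xFE) | MARKER_BITS
    return flat.reshape(pixels.shape)

def has_watermark(pixels: np.ndarray) -> bool:
    """Check whether the marker is present in the LSBs."""
    flat = pixels.flatten()
    if flat.size < MARKER_BITS.size:
        return False
    return bool(np.array_equal(flat[: MARKER_BITS.size] & 1, MARKER_BITS))

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(has_watermark(image))   # almost certainly False
    print(has_watermark(marked))  # True
```

Unlike this fragile LSB toy, production schemes such as SynthID are designed to survive cropping, resizing, and compression; the sketch only conveys the embed-then-detect workflow the article describes.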
The government has also urged these companies to commit to applying AI to challenges such as healthcare and climate change. The commitments are voluntary and not legally binding, but they carry significant weight in shaping how AI tools are developed.