
The US Government Will Get Sentient AI First – OpenAI, Anthropic Sign Key Deal

by Aaron Walker from Techreport (#6QDSS)

  • President Biden established the US Artificial Intelligence Safety Institute as part of the National Institute of Standards and Technology (NIST) in October 2023.
  • OpenAI and Anthropic have agreed to give the NIST early access to cutting-edge AI models like the upcoming Strawberry model.
  • The effort aims to improve safety standards for AI development but imposes few restrictions on what the NIST might do with AI.


The US government has been known to drag its heels with new technologies; just look at the SEC vs crypto.

So why is it moving so fast on AI?

A recent agreement between the government and two leading AI developers gives the NIST nearly unfettered access to cutting-edge AI models before public release.

What's in it for Uncle Sam? Time for a closer look.

The Rise of a New Government Agency

The key player in the US government's involvement with AI is the US Artificial Intelligence Safety Institute. It's all in the name - the stated purpose of the agency, founded last year by an executive order from President Biden, is to ensure that key safety principles form the bedrock of AI development.

That emphasis on safety is shown through Biden's proposed AI Bill of Rights; the first point is the right of US citizens to 'safe and effective systems.'

To that end, the NIST negotiated agreements with Anthropic and OpenAI, two companies leading the drive toward artificial general intelligence (AGI).

The agreements cover:

  • Collaboration: Working together on AI safety research, testing, and evaluation.
  • Access to models: The institute will receive access to new AI models from these companies before and after public release.
  • Safety research: Focus on evaluating AI capabilities, safety risks, and methods to mitigate these risks.

What Will The NIST Do With AI?

The agreements between OpenAI, Anthropic, and the NIST are strictly voluntary. By entering into them, the AI companies receive a huge PR boost and the implicit blessing of the US government.

"we are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models.

for many reasons, we think it's important that this happens at the national level. US needs to continue to lead!"

- Sam Altman (@sama) August 29, 2024

But what does the NIST get? Virtually unrestricted access to the latest models.

In other words, if OpenAI or Anthropic develops an AGI, the US government will get it first.

An AGI - artificial general intelligence - is a type of AI that can match or outperform humans across a wide range of intellectual tasks, whereas narrow AI is designed for particular tasks.

And interestingly, there's no requirement for the NIST to disclose what it would do with an AGI.

  • Will the NIST say no to a specific release, like the upcoming Strawberry model from OpenAI?
  • Will the NIST deploy an AGI for government use before public release?
  • And if so, would they even say anything?

One thing is clear: the agreement gives the US government, through the NIST, a direct voice in private AI and AGI development. It also sets the stage for collaboration with other governments, such as the UK.

Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models in close collaboration with its partners at the UK AI Safety Institute.

A Soft Touch - Room for Innovation, or a Lack of Transparency?

The framework amounts to 'soft touch' government regulation, allowing the NIST oversight without imposing specific rules. However, the flexibility it affords AI companies comes at the cost of transparency.

In fact, the agreement raises the real possibility that the US government could obtain an AGI from Anthropic or OpenAI and deploy it with no one the wiser.
