AI Weapons Among Non-State Actors May Be Impossible to Stop
upstart writes:
Governments also have no theory on how nefarious groups might behave using the tech:
The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to a hearing before UK Parliament.
The House of Lords' AI in Weapon Systems Committee yesterday heard how the software-based nature of AI models that may be used in a military context makes them difficult to contain and keep out of nefarious hands.
"When we talk about non-state actors, that conjures images of violent extremist organizations, but it should include large multinational corporations, which are very much at the forefront of developing this technology."
Speaking to the committee, James Black, assistant director of defense and security research group RAND Europe, said: "A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it's missiles, it's engines, it's nuclear materials."
An added uncertainty was that there was no established "war game" theory of how hostile non-state actors might behave using AI-based weapons.
Read more of this story at SoylentNews.