
Information security and warfare metaphors: a toxic mix made in hell

by
Cory Doctorow

I once found myself staying in a small hotel with a "State Department" family whose members clearly all worked for some kind of three-letter agency (the family patriarch had been with USAID when the tanks rolled into Budapest), and I had some of the weirdest discussions of my life with them.

The big one was about "cyberweapons": whether the US should be developing them, and what could go wrong with such a program. It was clear to me that these folks knew a lot about classic Cold War deterrence theory and had deep experience with how the military-industrial complex functioned (and didn't function), but that they knew virtually nothing about computers, and this deficit meant that they were terribly, awfully misled in their thinking on the matter.

It was clear that for them, a "cyberweapon" was just another R&D project: just as with the Manhattan Project or the labs where they make better cruise-missile guidance systems, cyberweapons were an invention that turned on discovering some property of physics and then using engineering to weaponize that property in order to project force against your adversary.

But that's not what a cyberweapon is at all. While it's exciting to read 40-year-old cyberpunk novels where console cowboys wield "ice breakers" to pierce their enemies' electronic defenses, the reality is a lot weirder and more mundane at the same time.

A cyberweapon begins with the discovery of a defect in a piece of software, preferably a widely used piece of software, like the Windows operating system. All code has bugs, and theoretical results from computer science like the "halting problem" -- there is no general procedure that can examine an arbitrary program and predict everything it will do -- mean that it's effectively impossible to root out all the bugs in a complex piece of software.

Once you discover the bug, you keep it a secret and develop a piece of malicious code that exploits it to do something to the computer the program is running on -- crash it, take over its sensors, raid its hard drive, turn it into a covert participant in DDoS attacks, etc.
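To make that concrete, here's a minimal sketch in C of the kind of defect this whole process starts from -- a classic stack buffer overflow. Everything here (the function, the message format, the buffer size) is hypothetical, and real weaponized bugs are far subtler, but the shape is the same: attacker-controlled input that the program trusts more than it should.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical handler for a network message. The (imaginary)
 * protocol says names are at most 63 bytes, and the programmer
 * believed it. */
static void handle_message(const char *payload)
{
    char name[64];

    /* The defect: strcpy() trusts the sender to respect the
     * 63-byte limit. A longer payload overruns `name` and
     * overwrites adjacent stack memory -- including, eventually,
     * the function's saved return address. */
    strcpy(name, payload);
    printf("hello, %s\n", name);
}

int main(void)
{
    /* Well-behaved input: works fine, so the bug survives testing. */
    handle_message("alice");

    /* An attacker who finds this would craft a payload longer than
     * 63 bytes whose trailing bytes replace the return address,
     * redirecting execution to code of the attacker's choosing.
     * Finding the defect, keeping it secret, and packaging a
     * reliable payload for it: that's the "cyberweapon". */
    return 0;
}
```

Note that the fix is trivial once the bug is known -- a bounds-checked copy like strncpy() or snprintf() -- which is exactly why disclose-and-patch competes so directly with stockpiling.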

But there's a problem with this model: we don't have "good guy" software and "bad guy" software. If the NSA (or some other agency nominally charged with the "security" of the people who pay its bills) discovers a bug in a widely used system, it's a sure bet that the people the agency is supposed to be protecting are depending on that same software. If someone else discovers the defect and weaponizes it, your own people are now at risk -- a risk you could have prevented if only you'd gone to the manufacturer when you discovered the bug and had them issue a patch.

This is the key difference between "cyber" and other forms of warfare: every offensive measure weakens your own defense.

The NSA has an official doctrine that tries to answer the thorny questions raised by this unfortunate fact. It's called "NOBUS" and it stands for "No One But Us." As in "No one but us is smart enough to discover this defect we just found, so we can warehouse it indefinitely until we need it and there's no risk that our own people will be attacked by adversaries who've made the same discovery as us and can therefore exploit the bug that we've deliberately left unpatched."

NOBUS is obviously wrong. It's not just that these defects are independently discovered (they are -- and thanks to research done on Vault 7 and other leaks of US government cyberweapons, we know that any given defect has about a 1 in 5 chance of being independently discovered and weaponized in any given year). It's also that they leak, because the NSA is made up of unpredictable people who do unexpected things (see, for example, Edward Snowden).
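Those annual odds compound quickly. As a rough back-of-the-envelope sketch -- assuming, for simplicity, that each year's 1-in-5 rediscovery chance is independent -- the probability that a warehoused bug gets found by someone else within $n$ years is:

$$P(\text{rediscovered within } n \text{ years}) = 1 - (1 - 0.2)^n$$

After five years, that's $1 - 0.8^5 \approx 67\%$: better-than-even odds that the defect you deliberately left unpatched in your own people's software is now in someone else's arsenal.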

NOBUS is obviously wrong not just in theory, but in practice: across America, entire cities have been taken hostage by ransomware that exploits leaked US government cyberweapons -- which is to say, entire cities were vulnerable to takeover, and the US government knew it and did nothing to warn them, because doing so would have made it harder to play eighties retro-cyberpunk wargames with its "adversaries."

Writing in Wired, New America Cybersecurity Policy Fellow Justin Sherman describes how DC is gripped by Cold War metaphors that have totally distorted the debate about cybersecurity. He's very right. We've got this so very wrong, and it's costing us billions.

Cyberspace has been compared to the Cold War for well over a decade, especially comparisons between weapon stockpiling and information conflict. While she was Secretary of State, for instance, Hillary Clinton criticized Chinese internet censorship with strong references to an "information Iron Curtain." Noah Shachtman and Peter W. Singer thoroughly dismantled this misapplication of analogies back in 2011, writing for the Brookings Institution that with cyberspace, "the song is not the same and the historic fit to the Cold War is actually not so neat." As they explained, from the nature of global cyber competition, which centers on companies and individuals as well as governments, to the barrier to entry into that competition (much lower online than with building nuclear missiles), the analogy doesn't work. Nonetheless, Cold War comparisons to cyberspace persist, from CNN headlines to the mouth of chess champion Garry Kasparov. The allure of such analogies is apparently strong.

Artificial intelligence also regularly falls victim to Cold War analogies. Discussions of AI development, especially between the US and China, as an "arms race" or a new Cold War proliferate in op-eds, think tank reports, and the mouths of Trump administration officials. Yet AI tools (at least presently) can't kill like a nuclear weapon, and the development of AI tools isn't nationally isolated. With great interconnection between the US and Chinese technology sectors, science and technology research is anything but zero-sum. Moreover, AI capabilities are widespread in the commercial market and easily shared online -- not exactly the case with ICBMs.

Cold War Analogies are Warping Tech Policy [Justin Sherman/Wired]

(Image: Marco Verch, CC BY, modified)

(Image: Cryteria, CC-BY, modified)
