
Stopping The Rot When Good Software Goes Bad Means New Rules

by
janrinok
from SoylentNews on (#6YP5T)

Arthur T Knackerbracket has processed the following story:

Take the painter's palette. A simple, essential, and harmless tool [...] affording complete control of the visual spectrum while being an entirely benign piece of wood. Put the same idea into software, and it becomes a thief, a deceiver, and a spy. Not a paranoid fever dream of an absinthe-soaked dauber, but the observed behavior of a Chrome extension color picker. Not a skanky chunk of code picked up from the back streets of cyberland, but a "Verified by Google" extension from Google's own store.

This seems to have happened because when the extension was first uploaded to the store, it was as simple, useful, and harmless as its physical antecedents. Somewhere in its life since then, an update slipped through with malicious code that delivered activity data to the privacy pirates. It's not alone in taking this path to evil.

Short of running a full verification process on each update, this attack vector seems unstoppable. Verifying every update would be problematic in practice, because to be any good the process takes time and resources from both producers and store operators. Security fixes and bug patches need to ship swiftly, and many of the small utilities and specialized tools that make life better for so many groups of users may not have the means to cope with a more onerous update process.

You can't stop the problem at the source either. Good software goes bad for lots of reasons: a classic supply chain attack, developers who sell out to a dodgy outfit or become dodgy themselves, or even a long-term strategy, like deep-cover agents waiting years to be activated.

What's needed is more paranoia across the board. Some of it already exists as best practice and just needs to be adopted with care; some of it needs to be created and mixed well into the way we do things now. Known good paranoia includes the principle of parsimony, which says to keep the number of things that touch your data as small as possible to shrink the attack surface. The safest extension is the one that isn't there. Then there's partition, like not doing confidential client work on a browser that has extensions at all. And there's due diligence: checking out developer websites, hunting for user reports, and actually checking permissions. This is boring, disciplined stuff that humans aren't good at, especially when tempted by the new shiny, and it's only partially protective against software rot.
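That last check is more tractable than it sounds. As a minimal sketch of what permission due diligence can look like, the Python below reads an unpacked Chrome extension's manifest.json and flags the broad grants most often abused. The RISKY_PERMISSIONS watchlist and the flag_permissions helper are illustrative assumptions, not any official audit tool; only the manifest keys themselves (permissions and host_permissions) are real Chrome conventions.

```python
import json
from pathlib import Path

# Hypothetical watchlist: permissions that grant broad access to
# browsing data, chosen for illustration rather than completeness.
RISKY_PERMISSIONS = {
    "tabs",        # URLs and titles of every open tab
    "webRequest",  # observe network traffic
    "history",     # full browsing history
    "cookies",     # session cookies for every site
    "<all_urls>",  # host access to every page you visit
}

def flag_permissions(manifest_path: str) -> list[str]:
    """Return the risky permissions an unpacked extension requests."""
    manifest = json.loads(Path(manifest_path).read_text())
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # manifest v3
    return sorted(requested & RISKY_PERMISSIONS)

if __name__ == "__main__":
    flagged = flag_permissions("extension/manifest.json")
    if flagged:
        print("Worth a second look:", ", ".join(flagged))
    else:
        print("No broad permissions requested.")
```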

So there needs to be more paranoia baked into the systems themselves, both in the verification procedure and in the environment in which extensions run - paranoia that could be valuable elsewhere, too. Assume that anything could go bad at any point in its product lifetime, and you need to catch that moment, something many operators of large systems attempt with varying levels of success. It boils down to this: how can you tell when a system becomes possessed? How do you spot bad behavior after good?

In the case of demonic design tools, the sudden onset of encrypted data transfers to new destinations is a bit of a giveaway, as it would be in any extension that couldn't do that when initially verified. That sounds a lot like a permission-based ruleset, one that could be established during verification and handed, on installation, to the environment that will be running the extension. The environment itself, be it browser or operating system, can then watch for trigger activity and silently roll back to a previous version while kicking a "please verify again" message back to the store.
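To make that concrete, here is a minimal Python sketch of the baseline-and-rollback idea, under heavy assumptions: that the store records which hosts an extension contacted during verification, and that the runtime calls an observe_connection hook for every outbound connection. BehaviorBaseline, ExtensionWatchdog, and rollback_and_reverify are all hypothetical names for illustration; no real browser exposes anything this tidy.

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorBaseline:
    """Network behavior recorded when the extension was verified.
    Hypothetical format: a real store would sign and version this."""
    extension_id: str
    version: str
    allowed_hosts: set[str] = field(default_factory=set)

class ExtensionWatchdog:
    """Compares live connections against the verified baseline."""

    def __init__(self, baseline: BehaviorBaseline, on_violation):
        self.baseline = baseline
        self.on_violation = on_violation  # e.g. roll back and re-verify

    def observe_connection(self, host: str) -> None:
        # Assumed hook: the runtime calls this for every outbound
        # connection the extension opens.
        if host not in self.baseline.allowed_hosts:
            self.on_violation(self.baseline.extension_id, host)

def rollback_and_reverify(extension_id: str, host: str) -> None:
    # Placeholder for the article's suggested response: silently
    # restore the previous version and ask the store to re-verify.
    print(f"{extension_id}: unexpected traffic to {host}; "
          f"rolling back and requesting re-verification")

baseline = BehaviorBaseline(
    extension_id="color-picker",
    version="2.1.0",
    allowed_hosts={"update.example-store.test"},
)
watchdog = ExtensionWatchdog(baseline, rollback_and_reverify)
watchdog.observe_connection("exfil.attacker.test")  # triggers rollback
```

The design choice worth noting is that the rule is set once, at verification time, and enforced mechanically ever after, which is exactly where automation beats a bored human reviewer.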

The dividing line between useful and harmful behaviors is always contextual and no automation will catch everything. That doesn't mean a pinch more paranoia in the right places can't do good, especially where limits can be set early on and patrolled by automation.

If you're riding herd on corporate infrastructure, you'll know how far bad actors will go to disguise themselves, obfuscating egress traffic and making internal changes look routine when they're really rancid. The bad guys learn about the tools and skills that can defeat them as soon as you do, and there's no automation that can change that. Elsewhere in the stack, though, there's still room to provide more robust methods of setting and policing behavioral rules.

After all, a demon-possessed color picker dropping a rootkit that opens the door to ransomware injection will make your life just as unpleasant as anything grander. Paranoia wasn't invented in the 21st century, but it's never been more valid as the default way to think.
