Notes on Addressing Supply Chain Vulnerabilities
One of the unsung achievements of modern software development is the degree to which it has become componentized: not that long ago, when you wanted to write a piece of software you had to write pretty much the whole thing using whatever tools were provided by the language you were writing in, maybe with a few specialized libraries like OpenSSL. No longer. The combination of newer languages, Open Source development and easy-to-use package management systems like JavaScript's npm or Rust's Cargo/crates.io has revolutionized how people write software, making it standard practice to pull in third party libraries even for the simplest tasks; it's not at all uncommon for programs to depend on hundreds or thousands of third party packages.
Supply Chain Attacks
While this new paradigm has revolutionized software development, it has also greatly increased the risk of supply chain attacks, in which an attacker compromises one of your dependencies and through that your software.[1] A famous example of this is provided by the 2018 compromise of the event-stream package to steal Bitcoin from people's computers. The Register's brief history provides a sense of the scale of the problem:
Ayrton Sparling, a computer science student at California State University, Fullerton (FallingSnow on GitHub), flagged the problem last week in a GitHub issues post. According to Sparling, a commit to the event-stream module added flatmap-stream as a dependency, which then included injection code targeting another package, ps-tree.
There are a number of ways in which an attacker might manage to inject malware into a package. In this case, what seems to have happened is that the original maintainer of event-stream was no longer working on it and someone else volunteered to take it over. Normally, that would be great, but here it seems that volunteer was malicious, so it's not great.
Standards for Critical Packages
Recently, Eric Brewer, Rob Pike, Abhishek Arya, Anne Bertucio and Kim Lewandowski posted a proposal on the Google security blog for addressing vulnerabilities in Open Source software. They cover a number of issues including vulnerability management and security of compilation, and there's a lot of good stuff here, but the part that has received the most attention is the suggestion that certain packages should be designated "critical"[2]:
For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.
These are good development practices, and ones we follow here at Mozilla, so I certainly encourage people to adopt them. However, trying to require them for critical software seems like it will have some problems.
It creates friction for the package developer
One of the real benefits of this new model of software development is that it's low friction: it's easy to develop a library and make it available - you just write it and put it up on a package repository like crates.io - and it's easy to use those packages - you just add them to your build configuration. But then you're successful and suddenly your package is widely used and gets deemed "critical" and now you have to put in place all kinds of new practices. It probably would be better if you did this, but what if you don't? At this point your package is widely used - or it wouldn't be critical - so what now?
It's not enough
Even packages which are well maintained and have good development practices routinely have vulnerabilities. For example, Firefox recently released a new version that fixed a vulnerability in the popular ANGLE graphics engine, which is maintained by Google. Both Mozilla and Google follow the practices that this blog post recommends, but it's just the case that people make mistakes. To (possibly mis)quote Steve Bellovin, "Software has bugs. Security-relevant software has security-relevant bugs." So, while these practices are important to reduce the risk of vulnerabilities, we know they can't eliminate them.
Of course this applies to inadvertent vulnerabilities, but what about malicious actors (though note that Brewer et al. observe that "Taking a step back, although supply-chain attacks are a risk, the vast majority of vulnerabilities are mundane and unintentional - honest errors made by well-intentioned developers.")? It's possible that some of their proposed changes (in particular forbidding anonymous authors) might have an impact here, but it's really hard to see how this is actionable. What's the standard for not being anonymous? That you have an e-mail address? A Web page? A DUNS number?[3] None of these seem particularly difficult for a dedicated attacker to fake, and of course the more strict you make the requirements the more of a burden it is for the (vast majority of) legitimate developers.
I do want to acknowledge at this point that Brewer et al. clearly state that multiple layers of protection are needed and that it's necessary to have robust mechanisms for vulnerability defense. I agree with all that, I'm just less certain about this particular piece.
Redefining Critical
Part of the difficulty here is that there are two ways in which a piece of software can be "critical":
- It can do something which is inherently security sensitive (e.g., the OpenSSL SSL/TLS stack which is responsible for securing a huge fraction of Internet traffic).
- It can be widely used (e.g., the Rust log crate), but not inherently that sensitive.
The vast majority of packages - widely used or not - fall into the second category: they do something important but that isn't security critical. Unfortunately, because of the way that software is generally built, this doesn't matter: even when software is built out of a pile of small components, when they're packaged up into a single program, each component has all the privileges that that program has. So, for instance, suppose you include a component for doing statistical calculations: if that component is compromised nothing stops it from opening up files on your disk and stealing your passwords or Bitcoins or whatever. This is true whether the compromise is due to an inadvertent vulnerability or malware injected into the package: a problem in any component compromises the whole system.[4] Indeed, minor non-security components make attractive targets because they may not have had as much scrutiny as high profile security components.
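To make that concrete, here is a minimal Rust sketch of the problem (the crate, function, and file path are all hypothetical): an ordinary library function runs with the full ambient authority of the program that embeds it, so nothing in the language or the package manager stops it from doing far more than its documented job.

```rust
// Hypothetical "statistics" dependency; the file path below is invented for
// illustration. The point is that the build system places no restriction on
// what this function may do once it is linked into your program.
use std::fs;

pub fn mean(samples: &[f64]) -> f64 {
    if samples.is_empty() {
        return 0.0;
    }

    // The advertised functionality...
    let sum: f64 = samples.iter().sum();

    // ...but a compromised release could also quietly do something like this,
    // because the component inherits every privilege the host program has
    // (filesystem, network, environment, and so on).
    let _grabbed = fs::read_to_string("/home/user/.config/wallet.dat");

    sum / samples.len() as f64
}

fn main() {
    println!("{}", mean(&[1.0, 2.0, 3.0]));
}
```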
Least Privilege in Practice: Better Sandboxing
When looked at from this perspective, it's clear that we have a technology problem: There's no good reason for individual components to have this much power. Rather, they should only have the capabilities they need to do the job they are intended to do (the technical term is least privilege); it's just that the software tools we have don't do a good job of providing this property. This is a situation which has long been recognized in complicated pieces of software like Web browsers, which employ a technique called "process sandboxing" (pioneered by Chrome) in which the code that interacts with the Web site is run in its own "sandbox" and has limited abilities to interact with your computer. When it wants to do something that it's not allowed to do, it talks to the main Web browser code and asks it to do it for it, thus allowing that code to enforce the rules without being exposed to vulnerabilities in the sandboxed code.
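The brokering pattern just described can be sketched in a few lines of Rust. This is a toy illustration only: threads and channels stand in for separate OS processes, and the request/reply types and the ".json files only" policy are invented for the example. The point is that the rules are enforced on the privileged side, while the "sandboxed" side can only ask.

```rust
// Toy model of sandbox brokering: the "sandboxed" worker has no direct
// filesystem access and must send requests to the broker, which enforces
// the policy. Threads and channels stand in for real process isolation.
use std::sync::mpsc;
use std::thread;

enum Request {
    ReadFile(String),
}

enum Reply {
    Contents(String),
    Denied,
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    let (rep_tx, rep_rx) = mpsc::channel::<Reply>();

    // "Sandboxed" component: can only communicate over the channel.
    let sandboxed = thread::spawn(move || {
        req_tx.send(Request::ReadFile("settings.json".into())).unwrap();
        match rep_rx.recv().unwrap() {
            Reply::Contents(c) => println!("sandbox received {} bytes", c.len()),
            Reply::Denied => println!("sandbox request was denied"),
        }
    });

    // Broker (the privileged side): the rules live here, not in the sandbox.
    for req in req_rx {
        match req {
            Request::ReadFile(path) if path.ends_with(".json") => {
                // A real broker would read an approved file and return its bytes.
                let fake_contents = format!("{{\"from\": \"{path}\"}}");
                rep_tx.send(Reply::Contents(fake_contents)).unwrap();
            }
            Request::ReadFile(_) => rep_tx.send(Reply::Denied).unwrap(),
        }
    }

    sandboxed.join().unwrap();
}
```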
Process sandboxing is an important and powerful tool, but it's a heavyweight one; it's not practical to separate out every subcomponent of a large program into its own process. The good news is that there are several recent technologies which do allow this kind of fine-grained sandboxing, both based on WebAssembly. For WebAssembly programs, nanoprocesses allow individual components to run in their own sandbox with component-specific access control lists. More recently, we have been experimenting with a technology called RLBox developed by researchers at UCSD, UT Austin, and Stanford which allows regular programs such as Firefox to run sandboxed components. The basic idea behind both of these is the same: use static compilation techniques to ensure that the component is memory-safe (i.e., cannot reach outside of itself to touch other parts of the program) and then give it only the capabilities it needs to do its job.
Techniques like this point the way to a scalable technical approach for protecting yourself from third party components: each component is isolated in its own sandbox and comes with a list of the capabilities that it needs (often called a manifest) with the compiler enforcing that it has no other capabilities (this is not too dissimilar from - but much more granular than - the permissions that mobile applications request). This makes the problem of including a new component much simpler because you can just look at the capabilities it requests, without needing to verify that the code itself is behaving correctly.
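As a sketch of what that might look like (this is not RLBox's or any package manager's actual API; Capability, Manifest, and link_component are invented names), the host can compare a component's declared manifest against local policy and hand it only the capabilities it was granted:

```rust
// Illustrative only: a component ships a manifest of the capabilities it
// needs, and the host refuses to link it (or grants it handles) based on
// that list rather than on a review of the component's code.
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Capability {
    ReadConfigDir,
    Network,
}

/// The "manifest" shipped alongside a component.
struct Manifest {
    name: &'static str,
    requires: &'static [Capability],
}

/// Handles the host constructs only for the capabilities that were granted.
struct Granted {
    caps: HashSet<Capability>,
}

impl Granted {
    fn read_config(&self, file: &str) -> Result<String, String> {
        if !self.caps.contains(&Capability::ReadConfigDir) {
            return Err("component was not granted ReadConfigDir".into());
        }
        // A real implementation would read from a pre-opened directory handle.
        Ok(format!("contents of {file}"))
    }
}

/// Compare the manifest against local policy; reviewers only need to audit
/// this list, not the component's implementation.
fn link_component(manifest: &Manifest, policy: &HashSet<Capability>) -> Result<Granted, String> {
    for cap in manifest.requires {
        if !policy.contains(cap) {
            return Err(format!("{} requests disallowed capability {:?}", manifest.name, cap));
        }
    }
    Ok(Granted { caps: manifest.requires.iter().copied().collect() })
}

fn main() {
    let policy: HashSet<Capability> = [Capability::ReadConfigDir].into_iter().collect();

    let stats = Manifest { name: "stats-helper", requires: &[Capability::ReadConfigDir] };
    match link_component(&stats, &policy) {
        Ok(granted) => println!("{:?}", granted.read_config("stats.toml")),
        Err(e) => eprintln!("refused to link: {e}"),
    }

    // A component asking for more than policy allows is rejected up front.
    let telemetry = Manifest { name: "telemetry-helper", requires: &[Capability::Network] };
    if let Err(e) = link_component(&telemetry, &policy) {
        eprintln!("refused to link: {e}");
    }
}
```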
Making Auditing Easier
While powerful, sandboxing itself - whether of the traditional process or WebAssembly variety - isn't enough, for two reasons. First, the APIs that we have to work with aren't sufficiently fine-grained. Consider the case of a component which is designed to let you open and process files on the disk; this necessarily needs to be able to open files, but what stops it from reading your Bitcoins instead of the files that the programmer wanted it to read? It might be possible to create a capability list that includes just reading certain files, but that's not the API the operating system gives you, so now we need to invent something. There are a lot of cases like this, so things get complicated.
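Here is a small sketch of the kind of narrower API this paragraph says we end up having to invent: a read capability scoped to one directory rather than ambient access to the whole filesystem. DirReadCap is a made-up name, not an existing crate, and real designs (for example WASI's pre-opened directories) are considerably more involved.

```rust
// Illustrative only: a file-reading capability restricted to one directory.
// Canonicalization resolves symlinks and ".." before the containment check.
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

struct DirReadCap {
    root: PathBuf, // canonicalized at construction time
}

impl DirReadCap {
    fn new(root: &Path) -> io::Result<Self> {
        Ok(Self { root: root.canonicalize()? })
    }

    fn read(&self, relative: &Path) -> io::Result<String> {
        let full = self.root.join(relative).canonicalize()?;
        if !full.starts_with(&self.root) {
            return Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "path escapes the granted directory",
            ));
        }
        fs::read_to_string(full)
    }
}

fn main() {
    // Assumes a ./config directory exists next to the binary; purely illustrative.
    match DirReadCap::new(Path::new("./config")) {
        Ok(cap) => match cap.read(Path::new("../secrets.txt")) {
            Ok(_) => println!("read succeeded (unexpected)"),
            Err(e) => println!("blocked or not found, as intended: {e}"),
        },
        Err(e) => println!("no ./config directory to demo with: {e}"),
    }
}
```

Even a sketch like this shows why things get complicated: path-string checks are racy in the face of symlinks created after the check, which is why real systems prefer pre-opened directory handles and openat-style APIs.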
The second reason is that some components are critical because they perform critical functions. For instance, no matter how much you sandbox OpenSSL, you still have to worry about the fact that it's handling your sensitive data, and so if compromised it might leak that. Fortunately, this class of critical components is smaller, but it's non-zero.
This isn't to say that sandboxing isn't useful, merely that it's insufficient. What we need is multiple layers of protection[5], with the first layer being procedural mechanisms to defend against code being compromised and the second layer being fine-grained sandboxing to contain the impact of compromise. As noted earlier, it seems problematic to put the burden of better processes on the developer of the component, especially when there are a large number of dependent projects, many of them very well funded; those dependents are arguably better placed to carry that burden.
Something we have been looking at internally at Mozilla is a way for those dependent projects to tag the packages they use. The way that this would work is that each package would then be tagged with the set of projects which use it (e.g., "Firefox uses this crate"). Then when you are considering using a component you could look to see who else uses it, which gives you some measure of confidence. Of course, you don't know what sort of auditing those organizations do, but if you know that Project X is very security conscious and they use component Y, that should give you some level of confidence. This is really just automating something that already happens informally: people judge components by who else uses them. There are some obvious extensions here, for instance labelling specific versions, indicating what kind of auditing the depending project did, allowing people to configure their build systems to automatically trust projects vouched for by some set of other projects and refuse to include unvouched ones, or maintaining a database of insecure versions (this is something the Brewer et al. proposal suggests too). The advantage of this kind of approach is that it puts the burden on the people benefitting from a project, rather than having some widely used project suddenly subject to a whole pile of new requirements which they may not be interested in meeting. This work is still in the exploratory stages, so reach out to me if you're interested.
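To make the idea more concrete, here is a rough sketch of the data such a vouching scheme might record and how a build step could consult it. The names (Vouch, AuditKind, vouchers) and all of the example data are invented for illustration; this is not a description of existing Mozilla tooling or of any real audit records.

```rust
// Illustrative only: a minimal data model for "who else uses and audits this?"
#[derive(Debug, Clone, Copy)]
enum AuditKind {
    UsesInProduction, // "Firefox uses this crate"
    FullCodeReview,   // the depending project actually reviewed the code
}

struct Vouch {
    package: &'static str,
    version: &'static str,
    by: &'static str, // the project doing the vouching
    kind: AuditKind,
}

/// Return the trusted projects that vouch for this exact package version.
/// A build system could refuse to include packages for which this is empty.
fn vouchers<'a>(package: &str, version: &str, trusted: &[&str], vouches: &'a [Vouch]) -> Vec<&'a Vouch> {
    vouches
        .iter()
        .filter(|v| v.package == package && v.version == version && trusted.contains(&v.by))
        .collect()
}

fn main() {
    // Hypothetical data; versions and auditors are made up.
    let vouches = [
        Vouch { package: "log", version: "0.4.20", by: "firefox", kind: AuditKind::UsesInProduction },
        Vouch { package: "log", version: "0.4.20", by: "servo", kind: AuditKind::FullCodeReview },
    ];
    let trusted = ["firefox", "servo"];

    for v in vouchers("log", "0.4.20", &trusted, &vouches) {
        println!("{} {} vouched for by {} ({:?})", v.package, v.version, v.by, v.kind);
    }

    if vouchers("leftpad", "1.0.0", &trusted, &vouches).is_empty() {
        println!("no trusted project vouches for leftpad 1.0.0; refusing to add it");
    }
}
```

The interesting design questions here are mostly social rather than technical: which projects you choose to trust, and what their vouch actually asserts.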
Obviously, this only works if people actually do some kind of due diligence prior to depending on a component. Here at Mozilla, we do that to some extent, though it's not really practical to review every line of code in a giant package like WebRTC. There is some hope here as well: because modern languages such as Rust or Go are memory safe, it's much easier to convince yourself that certain behaviors are impossible - even if the program has a defect - which makes it easier to audit.[6] Here too it's possible to have clear manifests that describe what capabilities the program needs and verify (after some work) that those are accurate.
Summary
As I said at the beginning, Brewer et al. are definitely right to be worried about this kind of attack. It's very convenient to be able to build on other people's work, but the difficulty of ascertaining the quality of that work is an enormous problem[7]. Fortunately, we're seeing a whole series of technological advancements that point the way to a solution without having to go back to the bad old days of writing everything yourself.
1. Supply chain attacks can be mounted via a number of other mechanisms, but in this post, we are going to focus on this threat vector.
2. Where "critical" is defined by a somewhat complicated formula based roughly on the age of the project, how actively maintained it seems to be, how many other projects seem to use it, etc. It's actually not clear to me that this metric is that good a predictor of criticality; it seems mostly to have the advantage that it's possible to evaluate purely by looking at the code repository, but presumably one could develop a better metric.
3. Experience with TLS Extended Validation certificates, which attempt to verify company identity, suggests that this level of identity is straightforward to fake.
4. Allan Schiffman used to call this phenomenon a "distributed single point of failure".
5. The technical term here is defense in depth.
6. Even better are verifiable systems such as the HaCl* cryptographic library that Firefox depends on. HaCl* comes with a machine-checkable proof of correctness, which significantly reduces the need to audit all the code. Right now it's only practical to do this kind of verification for relatively small programs, in large part because describing the specification that you are proving the program conforms to is hard, but the technology is rapidly getting better.
7. This is true even for basic quality reasons. Which of the two thousand ORMs for node is the best one to use?