Unikernels: rise of the virtual library operating system

Messrs. Anil Madhavapeddy and David J. Scott over at the Association for Computing Machinery (ACM) ask: What if all layers in a virtual appliance were compiled within the same safe, high-level language framework? Good question, and I suspect we'll find out soon enough, because the trend in virtualization seems to be leading us in this direction.
While operating-system virtualization is undeniably useful, it adds yet another layer to an already highly layered software stack that now includes: support for old physical protocols (e.g., disk standards developed in the 1980s, such as IDE); irrelevant optimizations (e.g., disk elevator algorithms on SSDs); backward-compatible interfaces (e.g., POSIX); user-space processes and threads (in addition to VMs on a hypervisor); and managed-code runtimes (e.g., OCaml, .NET, or Java). All of these layers sit beneath the application code. Are we really doomed to add new layers of indirection and abstraction every few years, leaving future generations of programmers to become virtual archaeologists as they dig through hundreds of layers of software emulation to debug even the simplest applications?
The project aims to collapse these layers of operating system and application software into simple-API systems that can be installed and used like virtual appliances, perhaps [ed. note: this is my analogy, not the author's] the way BusyBox condenses the standard POSIX utilities into a single, smaller executable.
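To make the single-language idea concrete, here is a toy OCaml sketch of my own (not code from the paper, and not the real MirageOS API): when OS services are just library modules linked into the application, what would have been a system call across a protection boundary becomes an ordinary function call that the compiler can inline and specialize.

    (* A toy "library OS": the block-device service is an ordinary
       module rather than a kernel hiding behind a trap instruction. *)
    module Block : sig
      val write : off:int -> bytes -> unit
    end = struct
      let disk = Bytes.make 4096 '\000'  (* stand-in for a virtual disk *)
      let write ~off buf = Bytes.blit buf 0 disk off (Bytes.length buf)
    end

    (* "Application" code. This call crosses no protection boundary;
       with cross-module inlining it can compile down to a memory copy. *)
    let () = Block.write ~off:0 (Bytes.of_string "hello, unikernel")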

Basic idea is sound but proposed plan is shitty (Score: 2, Informative)

by Anonymous Coward on 2014-07-13 21:45 (#2G6)

I'm so tired of ML hippies saying "If you use ML, you'll never make a mistake and your program will run in the best way possible." It's wrong on so many levels, I don't even want to talk about it.

In the article they talk about how the compiler would be able to optimize everything all the way down to the device drivers; then they say they aren't going to HAVE any device drivers, since maintaining those would entail a lot of constant work. So how are the device drivers optimized in this case? You're still using the host OS's device drivers.
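
To illustrate: in a library OS the only "driver" the compiler ever sees is a guest-side stub for a paravirtual device. A simplified OCaml sketch of my own (the module names are made up, not MirageOS's real interfaces):

    (* The unikernel is compiled against an abstract device interface. *)
    module type NETWORK = sig
      type t
      val connect : string -> t         (* attach to a virtual NIC *)
      val write   : t -> bytes -> unit  (* enqueue one frame *)
    end

    (* Guest-side "driver" for a paravirtual NIC: all it does is push
       frames onto a ring shared with the host. The physical driver
       lives in the host kernel and is never compiled with, let alone
       optimized alongside, the unikernel. *)
    module Virtual_nic : NETWORK = struct
      type t = { ring : bytes Queue.t }
      let connect _name = { ring = Queue.create () }
      let write t frame = Queue.add frame t.ring
    end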

Same thing with the context switches. They optimize everything to run together at the same protection level with a single register set, but then they have to call the host OS to actually use the hardware. The host OS is meant to be shared by multiple VMs and is responsible for time-sharing the hardware, so this again requires context switches. In a traditional system you could get away with a single system call to write out a 1 MB chunk, but now that everything has been "optimized" you need as many calls as there are packets (or disk blocks), since you're now operating at the hardware level. Very nice.
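
Back-of-the-envelope version of that complaint, with my own assumed sizes (4 KB disk blocks, 1500-byte Ethernet frames):

    (* Boundary crossings needed to move 1 MB, one request per unit. *)
    let crossings ~total ~unit_size = (total + unit_size - 1) / unit_size

    let () =
      let mb = 1_048_576 in
      Printf.printf "one 1 MB write:   %d crossing\n"  (crossings ~total:mb ~unit_size:mb);    (* 1 *)
      Printf.printf "per 4 KB block:   %d crossings\n" (crossings ~total:mb ~unit_size:4096);  (* 256 *)
      Printf.printf "per 1500 B frame: %d crossings\n" (crossings ~total:mb ~unit_size:1500)   (* 700 *)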

Alternately, you group them together into one big request. But that adds another unnecessary layer of complexity, which is exactly what the article is trying to avoid.
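
The workaround looks something like this sketch (submit_batch stands in for a hypothetical hypercall wrapper): buffer small writes and flush them as one request, i.e., reintroduce a layer.

    (* Group small writes into one big request before crossing into
       the host. The flush threshold of 32 is an arbitrary choice. *)
    let pending : bytes list ref = ref []

    let submit_batch batch =
      Printf.printf "hypercall: %d buffers in one request\n" (List.length batch)

    let write buf =
      pending := buf :: !pending;
      if List.length !pending >= 32 then begin
        submit_batch (List.rev !pending);
        pending := []
      end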

How about reliability? A compiler just won't fix human stupidity. I'd very much like to see a compiler that detects my misunderstanding of a spec. People are already messing up and bringing down whole systems within their POSIX userspace confines (hence the need for virtual machines). How do they propose to find enough competent programmers to write kernel code as everyday work? It's just a dream.
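
To be fair, what an ML compiler can catch is interface misuse, not spec misunderstanding. A contrived OCaml sketch of my own (nothing like this appears in the article): the types reject sending on an unconnected socket, but say nothing about whether the bytes you send obey the protocol spec.

    (* Phantom types record protocol state at compile time. *)
    type connected
    type unconnected
    type 'state socket = { fd : int }

    let create () : unconnected socket = { fd = 3 }  (* pretend descriptor *)
    let connect (s : unconnected socket) : connected socket = { fd = s.fd }
    let send (_ : connected socket) (msg : string) = print_endline msg

    let () =
      let s = create () in
      (* send s "oops" -- rejected: s is not yet connected *)
      send (connect s) "GET / HTTP/1.1"  (* compiles, even if the
                                            request violates the spec *)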

Setting up a VM for just one application is idiotic anyway. It has no advantage other than the marginal security gained by adding yet another layer between the user and the hardware. The proposed plan is really funny when you consider the single-application case:

- We have a program running on some POSIX host
- Put it in a VM and run the VM on the POSIX host
- Make the VM smaller by compiling the guest OS and the program together
- Still run the resulting guest-program on the POSIX host

So, the program is still running as a simple process on the host OS, but with some bullshit OS code added in.