Trustworthy Systems


In the L4.verified project, we proved the functional correctness of the seL4 microkernel. The proof was completed in 2009, and this archival page describes the state of the formal verification at that time.

Since then, we have extended the proof in depth and detail to cover, among other things, binary code, high-level security properties, user-level components such as system initialisation, multiple hardware platforms including the ARM hypervisor extensions and Intel x64, and a multitude of new kernel features. See our page on the verification of seL4 for our current projects in this space.

  A Formally Correct Operating System Kernel

L4.verified The University of New South Wales
[Figure: Function calls in the seL4 kernel]

In current software practice it is widely accepted that software will always have problems, and that we will just have to live with the fact that it may crash at the worst possible moment. You might be on a deadline. Or, much scarier, you might be on a plane and there is a problem with the onboard computer.

Now think what we constantly want from software: more features, better performance, cheaper prices. And we want it everywhere: in mobile phones, cars, planes, critical infrastructure, defense systems.

What do we get? Mobile phones that can be hacked by SMS. Cars that have more software problems than mechanical ones. Planes where computer problems have led to serious incidents. Computer viruses spreading through critical infrastructure control systems and defense systems. And we think “See, it happens to everybody.”

It does not have to be that way. Imagine your company is commissioning new vending software. Imagine you write down in a contract precisely what the software is supposed to do. And then — it does. Always. And the developers can prove it to you — with an actual mathematical machine-checked proof.

Of course, the issue of software security and reliability is bigger than just the software itself and involves more than developers making implementation mistakes. In the contract, you might have said something you didn't mean. Or you might have meant something you didn't say and the proof is therefore based on assumptions that don't apply to your situation. Or you haven't thought of everything you need. In these cases, there will still be problems, but at least you know where the problem is not: with the developers. Eliminating the whole issue of implementation mistakes would be a huge step towards more reliable and more secure systems.

Sounds like science fiction?

The L4.verified project demonstrates that such contracts and proofs can be done for real-world software. Software of limited size, but real and critical.

We chose an operating system kernel to demonstrate this: seL4. It is a small, third-generation, high-performance microkernel with about 8,700 lines of C code. Such microkernels are the critical core component of modern embedded systems architectures. They are the piece of software that has the most privileged access to hardware and regulates access to that hardware for the rest of the system. If you have a modern smartphone, it might be running a microkernel quite similar to seL4: OKL4 from Open Kernel Labs.

We prove that seL4 implements its contract: an abstract, mathematical specification of what it is supposed to do.
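The shape of such a proof can be illustrated with a toy example, shown here in Lean. Everything below is purely illustrative: the actual seL4 proof is carried out in Isabelle/HOL and relates an abstract specification to the real C implementation through several refinement layers, not to a three-line function.

```lean
-- Illustrative sketch only: a tiny "contract", a concrete
-- implementation, and a machine-checked proof that the
-- implementation satisfies the contract.

-- Abstract specification: the operation must return the
-- maximum of its two arguments.
def spec (a b : Nat) : Nat := max a b

-- Concrete implementation, written the way a programmer might.
def impl (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- The correctness theorem: for all inputs, the implementation
-- computes exactly what the specification demands.
theorem impl_correct (a b : Nat) : impl a b = spec a b := by
  simp [impl, spec, Nat.max_def]
```

In the real project the statement plays the same role, but the "implementation" is the kernel's C code and the "specification" is a large abstract model of kernel behaviour, so the proof is correspondingly harder: it is a refinement proof covering every system call, not a single equation.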

Current status: completed successfully.


seL4, its proofs, as well as userspace libraries and tools are open-source software and can be accessed from

More information: