# After RustLab

Last week I attended the RustLab conference for the first time. This report was going to be rather boring, but one of the presentations triggered me, so I'm going to put the (hopefully interesting) hot take right up front.

## Last-minute talk

The talk about "confidential computing" was not on the schedule. It took the slot of a cancelled one, after some confusion regarding lightning talks in the same spot. Please excuse me for not knowing the exact title or the author's name (I will update the post once I find the recording).

## Same, but different

The speaker asked the audience for a show of hands: who had heard the term "confidential computing"? Very few hands went up.

Then he explained that "confidential computing" prevents outsiders from accessing confidential data: even the staff at the data center cannot peek into your computations. This is achieved with hardware features like ARM TrustZone, Intel SGX, hypervisors, and similar.

Sound familiar?

How many people would have put their hands up if he had asked about "trusted computing"? Or "DRM"? Was the choice of term intentional? I can only wonder.

Trusted computing can be used to empower. At Purism, my co-workers used trusted computing to give the user control over who can tamper with their OS. My own team added a smart card reader to the Librem 5 for similar purposes. But this is the exception rather than the rule. Hardware manufacturers use trusted computing technologies to prevent people from loading their software of choice onto the devices they own. Some owners don't give up, and eventually break into their own devices to gain full(-er) access.

The speaker completely ignored that topic, instead describing the rather tame use case where we pay someone else to do our computations – the classic cloud computing arrangement. Trusted computing then ensures that our contractor can neither see nor mess with our inputs or outputs. That's quite a reasonable take on things.

Except it's like presenting the scientific benefits of the ballistic rocket while staying silent on its applications in war.

## Root of trust

For a talk about controlling access to data, there was disappointingly little said about who's wielding that control. Even when someone from the audience (*moi*) asked about it, the answer didn't really include the core concept of processing secrets: the *root of trust*.

In very simple terms, the root of trust is the part of the system that the data cannot be hidden from.

As the presenter said, the CPU doing the actual processing is one such root. Translating to more practical terms: you must trust the CPU maker to create a CPU free of bugs and not to intentionally exfiltrate your data. This is because the CPU has direct access to your data, and verifying that a CPU is trustworthy after buying it is damn near impossible.

But there's an even more important party here. It's the one who actually wants to process the data: the one who loaded the application. If the application could not access the data it wants to confidentially process, it wouldn't be useful for much. So the application loader also holds the keys to the kingdom. Quite literally, because the CPU interface is designed to grant access to the confidential computing facilities to anyone who holds one of the cryptographic keys burned (in)directly into the CPU.
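To make that concrete, here's a toy sketch of the gatekeeping logic described above. This is not any real TEE or vendor API – the `Cpu` type, `authorize_loader`, and the stand-in hash are all invented for illustration, and a real chip would use a proper cryptographic hash and signature verification in silicon. The point it demonstrates is structural: whoever holds a key matching a fused slot gets in; everyone else, including the device's owner, does not.

```rust
// Toy model, NOT a real TEE interface: the CPU stores hashes of keys
// fused at the factory, and only a loader presenting a matching key
// may use the "confidential computing" facilities.
use std::collections::HashSet;

/// Stand-in for a hash function (FNV-1a); real hardware would use
/// something like SHA-256 implemented in silicon.
fn toy_hash(key: &[u8]) -> u64 {
    key.iter().fold(0xcbf29ce484222325u64, |h, b| {
        (h ^ *b as u64).wrapping_mul(0x100000001b3)
    })
}

struct Cpu {
    /// Hashes burned into eFuses at manufacturing time; immutable afterwards.
    fused_key_hashes: HashSet<u64>,
}

impl Cpu {
    /// The loader "holds the keys to the kingdom": only a key whose hash
    /// matches one of the fused slots unlocks the enclave facilities.
    fn authorize_loader(&self, loader_key: &[u8]) -> bool {
        self.fused_key_hashes.contains(&toy_hash(loader_key))
    }
}

fn main() {
    let vendor_key = b"vendor-signing-key";
    let cpu = Cpu {
        fused_key_hashes: [toy_hash(vendor_key)].into_iter().collect(),
    };
    // The party whose key was fused in is trusted unconditionally...
    assert!(cpu.authorize_loader(vendor_key));
    // ...while the owner, holding a different key, is locked out.
    assert!(!cpu.authorize_loader(b"owner-key-not-fused"));
    println!("only the fused key holder can load confidential workloads");
}
```

Note that in this model the fuse contents, not the physical possession of the device, decide who the root of trust is – which is exactly the asymmetry the rest of this post is about.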

If you load the application, but haven't thoroughly verified its source code, then the authors of the source also become roots of trust – but that's a topic for another day.

## It's mine! No, it's mine!

So, who's the key holder in practice?

For Secure Boot on x86, Microsoft holds the keys by default. Linux distributions like Fedora need to beg Microsoft to sign their bootloaders; otherwise installation would, by default, stop with scary warnings.

For Android boot, the phone manufacturer typically holds the keys. The firmware running in TrustZone typically cannot be replaced by anyone else.

But if you're the user, the situation is not hopeless. Many CPU manufacturers on the ARM side keep TrustZone accessible. On the Librem 5, we deliberately left all 4 key slots open, for situations where the owner wants to create their own trusted computing environment.

Some people say that not allowing the owner to access the keys is a good thing. After all, the keys protect the owner's confidential data from attackers, and if the users got the freedom to protect themselves, they would certainly hurt themselves, therefore they should not have the option. I will mercifully not comment on this kind of argument.

Instead, I will encourage you, dear hardware and software maker, to stop for a moment and think. Are you using confidential computing to empower people who own their hardware, or to cut them off from the computing power they paid for and own? Are you letting them make decisions, or making those decisions for them?

This talk forgot to place humans in the context of confidential computing, even though all computing is ultimately done for humans. Please don't make that mistake, and don't forget who your work serves.

Written on .


dcz's projects

Thoughts on software and society.
