Wednesday, December 6, 2017

A Better Norm for Enforced Backdoors

This is the kind of joke you only can see in a Wonder Woman comic for what should be obvious reasons.

So various people in the government think they can force a private American company to implement a backdoor in their product without a warrant. But they also say they haven't done this yet.

Part of the reason why is that doing classified work in non-classified environments comes with risk: part of what makes classification systems effective is that the people inside them have signed off on the idea. Threats of prosecution only go so far as a preventative measure against leaks (as we are now hyper-aware).

That said, the other major reason is that as a matter of policy, forced backdoors are terrible in a way that is visibly obvious to anyone and everyone who has looked at them. The reason is that we want to claim a "Public Private Partnership", and that's a community-wide thing, and this is a tiny community.

What everyone is going to expect with a public-private partnership is simple: Shared Risk. If you ask the Government if they're going to insure a company for the potential financial harm of any kind of operation, including a backdoor, they'll say "hell no!". But then why would they expect a corporation to go along with it? These sorts of covert operations are essentially financial hacks that tax corporations for governments not wanting to pay the up-front costs of doing R&D on offensive methods, and the companies know it.

The backdoors problem is the kind of equities issue that makes the VEP look like the tiny peanuts it is and it's one with an established norm that the US Government enforces, unlike almost every other area of cyber. Huawei, Kaspersky, and ZTE have all paid the price for being used by their host governments (allegedly). Look at what Kaspersky and Microsoft are saying when faced with this issue: "If asked, we will MOVE OUR ENTIRE COMPANY to another Nation State".

In other words, whoever is telling newspapers that enforced backdoors are even on the table is being highly irresponsible or doesn't understand the equities at stake.

Tuesday, December 5, 2017

The proxy problem to VEP

Ok, so my opinion is the VEP should set very wide and broad guidelines and never try to deal with the specifics of any vulnerability. To be fair, my opinion is that it can ONLY do this, or else it is fooling itself because the workload involved in the current description of any VEP is really really high.

One point of data we have is the Chinese Vulnerability reporting team apparently takes a long time on certain bugs. My previous analysis was that they used to take bugs they knew were blown and then give them to various Chinese low-end actors to blast all over the Internet as their way of informally muddying the waters (and protecting their own ecosystem). But a more modern analysis indicates a formal and centralized process perhaps.

So here's what I want to say, as a thought experiment: Many parts of the VEP problem completely map homomorphically to finding a vulnerability and then asking yourself if it is exploitable.

For example, take the DirtyCow vulnerability. Is it at all exploitable? Does it affect Android? How far back does the vulnerability go? Does it affect GRSec-hardened systems? What about RHEL? What about stock systems with uncommon configurations? What about systems with low memory, or systems with TONS OF MEMORY? What about systems under heavy load? What about future kernels: is this a bug likely to still exist in a year?

Trees have roots and exploits get burned, and there's a strained analogy in here somewhere. :)

The list of questions is endless, and each question requires an experienced Linux kernel exploitation team at least a day to answer. And that's just one bug. Imagine you had a hundred bugs, or a thousand bugs, every year, and you had to answer these questions. Where is this giant team of engineers that instead of writing more kernel exploits is answering all these questions for the VEP?

Every team that has ever had an 0day has seen an advisory come out, said "Oh, that's our bug", and then, when the patch came out, realized that it was NOT their bug at all, just another bug that looked very similar and was maybe even in the same function. Or you've seen patches come out and your exploit stopped working, and you thought "I'm patched out", but the underlying root cause was never handled, or was handled improperly.

We used to make a game out of second guessing Microsoft's "Exploitability" indexes. "Oh, that's not exploitable? KOSTYA GO PROVE THEM WRONG!"

In other words: I worry about workload a lot with any of these processes that require high levels of technical precision at the upper reaches of government.