Debian Working on Reproducible Builds To Make Binaries Trustable

An anonymous reader writes: Debian's Jérémy Bobbio, also known as Lunar, spoke at the Chaos Communication Camp about the distribution's efforts to reassert trustworthiness for open source binaries after it was brought into question by various intelligence agencies. Debian is "working to bring reproducible builds to all of its more than 22,000 software packages," and is pushing the rest of the community to do the same. Lunar said, "The idea is to get reasonable confidence that a given binary was indeed produced by the source. We want anyone to be able to produce identical binaries from a given source (PDF)."

Here is Lunar's overview of how this works: "First you need to get the build to output the same bytes for a given version. But others must also be able to set up a close enough build environment with similar enough software to perform the build. And for them to set it up, this environment needs to be specified somehow. Finally, you need to think about how rebuilds are performed and how the results are checked."

  • by Revek ( 133289 ) on Monday September 07, 2015 @10:25AM (#50471783)

    It would make it harder for them to exploit.

    • by Desler ( 1608317 )

      But that isn't the point of this. It's about verifying that your binary wasn't built from tampered-with source code.

      • That's a tricky problem.

        Countering "Trusting Trust" [schneier.com]

      • Re: (Score:3, Interesting)

        But that isn't the point of this. It's about verifying that your binary wasn't built from tampered-with source code.

        I care about this, too. That's one reason I run a source-based distribution. It's not the only reason. It's not even the main reason. But it's one reason.

        Anyone who really needs this kind of assurance was probably also building from source. You can do it once on-site, then make your own binary packages and push those to all of your other machines, so it's really not bad. I think a much more insidious threat comes from malicious yet innocent-looking source, like what you find in the Underhanded C Contest.

        • I wonder, though, if an always-connected build machine could have compromised object files pushed onto it mid-build. It's a theoretical risk, but one a determined foe could pull off. The build process is well characterized for many projects, so knowing when to push a rogue object file into the build directory wouldn't be that difficult.

          It means the penetrating entity would need to already have access to your system, but 'object pushing' would be a useful technique for escalating security privileges.

    • Wouldn't VirtualBox be able to do this? Then one only has the trouble of validating VirtualBox itself. While that is actually probably harder to do, you don't have to do it as often, and one could use some open-source VirtualBox equivalent and compile that from source.

      On the other hand, I don't quite understand why, if one can compile the source, one needs to worry about untrusted binaries. Perhaps the intent here is for some master agency to watch for tinkered binaries or to post its own checksums apart from Debian. Then everyone has two sources for validated checksums.

      • On the other hand, I don't quite understand why, if one can compile the source, one needs to worry about untrusted binaries. Perhaps the intent here is for some master agency to watch for tinkered binaries or to post its own checksums apart from Debian. Then everyone has two sources for validated checksums.

        Almost right, except without the master agency. This isn't for the incredibly paranoid types who would already be compiling from source. This is for the rest of us, the lazy people who would rather "apt-get install foo" and just assume the distro's doing things right. If the builds are reproducible then eventually someone's going to verify them. If no variations are discovered, the rest of us lazy masses can be a lot more confident that we're not running anything unexpected.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          From the article, the issue was that the CIA had found a way to own the *compiler* binaries, so each program the compiler built would have a vulnerability added at build time.

  • Unless you freeze system time for the full duration of the build, every piece of code that builds in the __TIME__ or __DATE__ macros will screw with this. The same goes for other environment-derived values injected into the build (like the git revision, etc.).
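
    For illustration, here's a minimal, hypothetical example of the problem; any C compiler will show it:

      /* repro_demo.c - toy illustration of why __DATE__ and __TIME__
       * break reproducibility: the compiler expands them to the
       * wall-clock time of the build, so two builds of identical
       * source produce different bytes. */
      #include <stdio.h>

      int main(void) {
          /* Both strings are baked into the binary at compile time. */
          printf("built on %s at %s\n", __DATE__, __TIME__);
          return 0;
      }

    Compile it twice a minute apart and the two binaries already have different checksums, even though the source is identical.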

    • by wolrahnaes ( 632574 ) <sean AT seanharlow DOT info> on Monday September 07, 2015 @11:11AM (#50472017) Homepage Journal

      Pages 6 and 7 of the linked PDF cover time-related issues and basically agree: anything that builds the time/date into the binary is a problem that needs to be fixed.

      The git revision, on the other hand, is a recommended replacement, since it points at a specific state of the code and will always be the same if the code is unchanged.
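
      A minimal sketch of that replacement (the VERSION_ID macro name here is hypothetical, not from the PDF):

        /* version.c - embed a deterministic identifier instead of a
         * timestamp. The commit ID is passed in at build time, e.g.:
         *   cc -DVERSION_ID='"3f9c2ab"' version.c
         * Same source plus same commit ID always yields the same
         * bytes, unlike __DATE__/__TIME__. */
        #include <stdio.h>

        #ifndef VERSION_ID
        #define VERSION_ID "unknown"
        #endif

        int main(void) {
            printf("source revision: %s\n", VERSION_ID);
            return 0;
        }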

      • __FILE__ has a similar effect, as it can expand to an absolute path. Build paths would also have to be created deterministically: no random temp directories or anything like that. And then you have to make sure everything that gets linked in statically, including the toolchain bits, follows the same rules.
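
        A toy demonstration of the path problem (the prefix-map flag is GCC's; availability depends on the toolchain version):

          /* path_demo.c - __FILE__ expands to the path the compiler
           * was invoked with, so building from /tmp/build.1234 vs.
           * /home/me/src produces different bytes unless the build
           * path is pinned or remapped (e.g. with GCC's
           * -ffile-prefix-map=OLD=NEW). */
          #include <stdio.h>

          int main(void) {
              printf("compiled from: %s\n", __FILE__);
              return 0;
          }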

        • I know this is Slashdot and all, but you really should RTFA. Again, that's covered and a variety of solutions are offered; but you are basically right that doing this properly requires that those things all be the same wherever they're used.

          The tricky part here is determining in which cases those sorts of macros are actually required, and thus must be worked around, versus where they can be replaced with something else that achieves the same goal (replacing time/date-stamped builds with git commit IDs, for example).

    • Unless you freeze system time for the full duration of the build, every piece of code that builds in the __TIME__ or __DATE__ macros will screw with this.

      And I have run into this at my current employment, when checking that I had successfully selected the correct version of an archived source and reproduced the binary build. The source apparently had an instance of __DATE__ in it, and would compile differently on different days.

      But the datestamps - at least in the tools I was using - were always the same length

      • The other thing is to hack your build of the toolchain so that __TIME__, __DATE__, and __FILE__ can be stubbed and/or overridden from the command line. I haven't looked at the GCC or Clang codebases, but I would think it wouldn't be too hard.
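
        As it happens, GCC-compatible compilers already allow this from the command line without patching the toolchain; a hedged sketch:

          /* build_info.c - unmodified source using the builtin macros. */
          #include <stdio.h>

          int main(void) {
              printf("%s %s\n", __DATE__, __TIME__);
              return 0;
          }

          /* Overriding at build time, no toolchain hack needed:
           *
           *   cc -Wno-builtin-macro-redefined \
           *      -D__DATE__='"Jan  1 1970"' -D__TIME__='"00:00:00"' \
           *      build_info.c
           *
           * Newer GCC releases also honor the SOURCE_DATE_EPOCH
           * environment variable for these macros, which became the
           * reproducible-builds project's standard knob. */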

      • When I needed to compare binaries, I wrote a script that would clean the source of __DATE__/__TIME__ and RCS/CVS style $Id stuff.

        While the $Id was okay for source comments, it was common practice to add a line like static char *rcs_id = "$Id"; to each .c file, in such a way that you'd get a bunch of these strings in the final binary.

        This script can be run recursively on a copy of the source tree. Or, it can be done "on the fly" by having the script masquerade as gcc. Then, do the build.

        This works a bit better

  • Awesome (Score:5, Informative)

    by trawg ( 308495 ) on Monday September 07, 2015 @11:01AM (#50471979) Homepage

    I was thinking about this being a problem a while back - how to deal with building something from source and knowing I was getting the same output that the developers wanted me to have. Coincidentally about the same time, this article [slashdot.org] popped up on Slashdot and introduced me to Ken Thompson's article Reflections on Trusting Trust [cmu.edu] - a great read and something that really opened my eyes (in that wide-open-because-of-terror kind of way).

    Also from that thread came this email [stanford.edu] from one of the Tor developers talking about their deterministic build process to do the same thing.

    I think this is a problem that would be really great to solve as soon as possible. I very much hope that once we start seeing more reproducible builds we don't suddenly find out that certain compilers have been compromised long ago.

    • So long as two or more independently developed, self-hosting compilers for a language exist, with at least one as publicly available source code, a Ken Thompson attack on the public-source one is infeasible. David A. Wheeler proved it [dwheeler.com]; here's the gist:

      1. Use Visual C++, Intel C++, and Clang++ to compile g++. The binaries you get in this stage will differ byte for byte, but if VC++, Intel C++, and Clang++ are uncompromised, the resulting g++ binaries will all have exactly the same behavior.
      2. Use each of the three copies of g++ you compiled earlier to compile g++, disabling timestamps in the output. Because they all have the same behavior (the behavior of g++), they should all produce the same output; thus the binaries you get in this second stage will be identical unless one of the first compilers is compromised. Compare the three stage-two binaries byte for byte: any mismatch exposes a compromise.
      • No, he didn't prove it is infeasible. For one, that would require a method to prove that the compilers are indeed wholly independent, which hasn't been provided. Also, note that people in a given sub-field of technology tend to move around: an engineer who has worked on one compiler is *more* likely to also work on another compiler at some stage than any random engineer. The DDC technique *assumes* that diverse compilers are independent - it takes it on trust. Wheeler's work, if anything, reinforces the essence

        • And the end of that comment still sounds more dismissive than I wanted... Take 2:

          I'm not being dismissive of DDC. Distros regularly attempting to get reproducible builds with diverse compilers will raise the bar and make attacks harder if it can be done, and additionally it will help catch bugs. However, DDC does not fully counter Thompson's attack, and it is good to remain aware of the assumptions it operates under.

          I.e., it could be a very nice step forward, though it is important to note the "fully countering

    • The solution can get messy.

      Most packaging systems have a control file for each package that specifies dependencies on other packages, with versions, conflicts, etc. They specify deps for the stuff they care about (e.g. gtk3 version X needs cairo version Y), but they don't always specify the version of gcc et al. that they need, because that's not important from their perspective. That is, they're happy to build with gcc 4.0.0 or 4.1.0 or whatever. Sometimes the deps are specified as "I need package X

    • Since you mentioned Reflections on Trusting Trust, that issue is easy enough to avoid. There are some simpler and more clever methods, but consider this:

      Use Borland 1.0 to compile llvm.
      Use this new llvm binary to compile gcc.
      Chain a few more if you want to.

      You don't need to trust the first compiler. It could be trojaned so as to trojan new copies of itself. You'd only be concerned if you thought that Borland 1.0 was trojaned in such a way as to add a trojan to the code of a compiler that didn't yet exist.

      • Why do you think a new trojan can not infect old binaries?

        The Thompson attack is what we would recognise today as a class of virus. Indeed, Thompson's point was a general one about the unavoidable need to trust others if one did not build every component capable of basic logical manipulation oneself. To fully counter his attack you would have to be able to counter every possible kind of virus and rootkit - and not just in the software, but also in any other firmware and microcode that might handle

        • > Why do you think a new trojan can not infect old binaries?

          CD, and floppies with the tab set, are read-only. Unless this virus changes the physical properties of aluminum, your old Borland CD isn't going to get infected.

          • You can't run a compiler from read-only media though.

            • Of course you can, it just needs a writable working directory.

              • Perhaps I wasn't being explicit enough.

                The CDROM might be read-only, but the software has to be copied into memory by something in order to run. As per Thompson's original point, it isn't sufficient to protect one piece of the system. As he stated, his attack implies that *every* programme that is involved in the handling of software must either be validated to the same level as having written it yourself OR you must invest trust:

                In demonstrating the possibility of this kind of attack, I picked on the C compiler.

      • Good thing there are no well-known, stable hooks in programmes to allow code to be run in a generic fashion, as part of, say, binary file formats. Oh wait...

        • Are you under the impression that the DOS .exe files produced by Borland 1.0 are approximately compatible with Linux ELF files? Maybe you're thinking that because neither are Windows, Linux must be DOS? No, there's nothing "stable" between the two completely different formats. So the Borland compiler couldn't possibly include a trojan for an operating system that didn't yet exist, using an executable format that didn't yet exist.

          • I'm not familiar with DOS exe format. However, there must be some well-defined entry point.

            Thompson's attack doesn't mean that any subversion of the Borland 1.0 compiler is limited to when the Borland 1.0 compiler was created. Thompson was making an extremely general point about security in programmable systems: You either build pretty much all of it yourself, or else you must invest trust in others.

            • Well, since you're not familiar with either format, let me give you an analogy. Go build a mold for making intake manifolds to fit all 2040 model year cars. That's essentially equivalent to what Borland would have had to do in order to include a Linux ELF trojan in the 1980s.

              The Thompson paper reminds us that the normal workflow involves trusting the toolchain. It in no way indicates that we can't choose a paranoid workflow instead. One type of paranoid workflow involves validating our modern tools by using

              • Not sure what car manifolds have to do with it - argumentum ad vehiculum.

                Again, you're assuming that an old toolchain can only have old attacks. That's a flawed assumption. A modern attacker can subvert your system so that old toolchains are subverted to apply further subversions.

                Are there practical steps we can take to raise the bar and make such attacks much harder to execute? Sure. Can we guarantee our system is free of such subversions, without either trusting others to some degree or building the system

                • > A modern attacker can subvert your system so that old toolchains are subverted to apply further subversions.

                  Explain, please, how you imagine the silver in a pressed Borland Turbo CD, or the DOS CD it runs on, is going to get new malware added to it 20 years after it was pressed.

                  The stock Borland Turbo and DOS disks are read-only. That means they can't be changed. I'm not sure what part of read-only you don't understand.

                  • The system is subverted, e.g. command.com has been modified, so that when Borland Turbo is loaded into memory it too is subverted. Alternatively, DOS 22h is replaced with a version that checks every disk write to see if it is the beginning of a DOS executable, and if so, subverts it. Alternatively, ... etc.

                    There are surely many ways. Otherwise, you are arguing that DOS is not vulnerable to a broad range of all-powerful subversions, which is patently untrue.

                    • > command.com has been modified

                      I'm not sure if you're just trolling or if you really, truly don't know what a CD-ROM is or what read-only means.

                      Before iPhones - I mean before the very first iPhone, and before Windows 7 or 8 - you couldn't download apps. Instead, apps were made out of aluminum - metal. The metal was inside of some plastic. You had to physically walk into a store to buy your apps, and you'd walk out with these metal and plastic circles. Those circles had the apps. You couldn't change them.

  • What does this solution provide that checksums do not? If you trust Debian's repositories, and they publish checksums and build sizes, you can independently verify that the package you downloaded is the one they published. As an additional level of security, they can use signed binaries, where they encrypt the binaries with a private key, and then, if you trust Debian's repositories, you decrypt their binaries with their public key before you install them. IIRC, Debian already uses both of these.

    I mean, come on.
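
    For concreteness, a minimal sketch of that download-side check (assumes OpenSSL's libcrypto; the package name is hypothetical):

      /* sha256_file.c - hash a fetched package and print a line to
       * compare against the repository's published SHA256SUMS.
       * Build with: cc sha256_file.c -lcrypto */
      #include <stdio.h>
      #include <openssl/evp.h>

      int main(void) {
          const char *path = "foo_1.0_amd64.deb";  /* hypothetical */
          FILE *f = fopen(path, "rb");
          if (!f) { perror(path); return 1; }

          EVP_MD_CTX *ctx = EVP_MD_CTX_new();
          EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

          unsigned char buf[4096];
          size_t n;
          while ((n = fread(buf, 1, sizeof buf, f)) > 0)
              EVP_DigestUpdate(ctx, buf, n);
          fclose(f);

          unsigned char md[EVP_MAX_MD_SIZE];
          unsigned int len;
          EVP_DigestFinal_ex(ctx, md, &len);
          EVP_MD_CTX_free(ctx);

          for (unsigned int i = 0; i < len; i++)
              printf("%02x", md[i]);
          printf("  %s\n", path);
          return 0;
      }

    Checksums and signatures like this prove the bytes you downloaded are the bytes Debian published; reproducible builds additionally prove those bytes came from the published source.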

    • by jopsen ( 885607 )
      Validation of the source-to-binary step... This is so you don't have to trust their binary, but can read their source and trust that it is in fact the source for the binaries.
  • This only works if Debian can guarantee the integrity of the development tool chain. See this [c2.com] >30 year old talk/paper [cmu.edu] by Ken Thompson describing the problem. Once inserted, the malware is persistent and invisible. Re-compiling your compiler and applications from known-good versions doesn't help.
    • This only works if Debian can guarantee the integrity of the development tool chain. See this [c2.com] >30 year old talk/paper [cmu.edu] by Ken Thompson describing the problem. Once inserted, the malware is persistent and invisible. Re-compiling your compiler and applications from known-good versions doesn't help.

      The problem has gotten a lot more complicated for the attacker today... Thompson's attack works well if there are only a few architectures and only a single compiler. But the attack's complexity grows exponentially in the presence of multiple architectures (which can be used to cross-compile for each other) and multiple compilers (which can compile each other). Now you need a compiler virus that not only compiles on all architectures, but also detects every kind of compiler out there and works on all versions

  • Compromised hardware (Score:3, Interesting)

    by fabrica64 ( 791212 ) <fabrica64.yahoo@com> on Monday September 07, 2015 @12:21PM (#50472299)
    What about compromised CPUs? If you are the NSA, I think it's easier to build a backdoor into the CPU than to try to keep up with ever-changing software builds. Isn't it? CPUs are totally controlled by three or four U.S. companies, and they are closed designs that nobody outside has ever seen into...
    • by caseih ( 160668 ) on Monday September 07, 2015 @12:40PM (#50472391)

      A partial answer to this is to build your own CPU and system in software. Like Bochs. But you could build this virtual system on any number of other completely incompatible platforms for verification. Would be slow. But at least it would be consistent and verifiable. You couldn't use hardware virtualization for this. Would have to be completely implemented in software. And if different people implemented the same reference platform independently (using their own preferred language and programming techniques) that would add an additional layer of verification. Even the deepest NSA compromise would have a hard time completely influencing this.

    • If you're worried about compromised CPUs being used to compile executables that are used by others, then reproducible builds are a great countermeasure. Just use reproducible builds on many different CPUs, and compare the results to ensure they are the same (for a given version of the source and tools). The more variations, the less likely that there is a subversion. If what you're compiling is itself a compiler, then use diverse double-compiling (DDC) on many CPUs.

      If you're worried that an INDIVIDUAL may en

  • I was doing this 15 years ago. Your build document specifies the build computer (Brand, model, OS version, etc.), the tools needed to do the build (Compiler(s), Script to code translators, etc., with exact versions), and the source version, with instructions on how to pull that version from the repository. And all the steps in the build.

    Our build package included copies of the OS install CD, install media for any tools needed, and the complete code set as text files.

    You wipe the disk on the build machine

    • by allo ( 1728082 )

      But I guess your binaries still had a different checksum, for example because of timestamps. So you needed to analyze, byte by byte, what the differences were and whether they were unimportant. With reproducible builds you get the same binaries and don't need to check anything further.
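
      A toy version of that byte-by-byte comparison (file names hypothetical):

        /* bindiff.c - report the offsets where two build outputs
         * differ, so a human can judge whether the differences are
         * benign (e.g. embedded timestamps) or cause for worry. */
        #include <stdio.h>

        int main(void) {
            FILE *a = fopen("build1.bin", "rb");
            FILE *b = fopen("build2.bin", "rb");
            if (!a || !b) { perror("open"); return 1; }

            long off = 0;
            int ca, cb;
            for (;;) {
                ca = fgetc(a);
                cb = fgetc(b);
                if (ca == EOF || cb == EOF) break;
                if (ca != cb)
                    printf("offset %ld: %02x != %02x\n", off, ca, cb);
                off++;
            }
            if (ca != cb)  /* one file ended before the other */
                printf("length mismatch after %ld bytes\n", off);
            fclose(a); fclose(b);
            return 0;
        }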

  • I used to work for a manufacturer of poker (slot) machines and hybrid/table casino games, and this was a non-negotiable requirement. A given set of source code had to produce exactly the same binary output, to the point where you'd get identical checksums when verifying it. Furthermore, external test labs responsible for certifying the software also needed to be able to build the software from source and verify the binary in the same way.

    The biggest headache in the process was anything that included a timestamp.
  • Part 1 [torproject.org]
    Part 2 [torproject.org]

  • I applaud this initiative, and it may make me switch back to Debian as my OS of choice.

    The trusting trust problem is a serious one, and if you can't rely on being able to build a byte-for-byte identical unit from source, you can't really have any confidence that you're running code that represents what the authors intended.

      - I used to be a perfectionist - now I am much better; I know how to compromise.
