Linux Software

Kurt Seifried On The Danger Of Binary RPMs

Curious__George writes "Kurt Seifried on SecurityPortal (4-11-01) writes on a topic that deserves wider notice in the Linux world: The potential dangers of installing BINARY RPMs (as opposed to SOURCE RPMs). Here is the article in standard and printer-friendly versions." (Read more below.)

"He begins: 'I'm always amazed at the lack of articles on topics like RPM and PAM. These are basic system components and tools that people use every day but which, generally speaking, are poorly understood (if at all). Prepare to be educated. ... Generally speaking, RPMs must be installed as root, which means that RPMs can do anything on your system: install new files, overwrite files, reconfigure system settings, add new users, etc.

Why is this important? Because many people download RPMs from semi-trusted or untrusted sources and blindly install them.'

Curious George points out: "The potential for danger (with many newbies and not-so-newbies making use of binary RPMs from untrusted sources) is that Linux could develop an unwarranted reputation for problems by someone (or perhaps some Corporation?) purposely disseminating RPMs with built-in vulnerabilities or exploits. We should do our part to educate and spread the word on this issue."

  • by Anonymous Coward
    One annoying thing with prepackaged software is that it's hard to mix it with stuff you compile. You'll either have to compile everything from scratch or use nothing but binary packages.

    Why? Dependencies. For instance, suppose you're trying to install GTK as a RedHat package, but you've already compiled GLIB from source. Now the GTK package won't install because it can't find GLIB in its "registry" (shudder)...

  • An issue I have with binary distributions that I have encountered in the wild is that in most cases there is little indication of which headers and libraries the binaries were built against.

    An RPM may have been built against the original Red Hat 6.2 release, Red Hat 6.2 with a couple of the (rather extensive) updates applied, Red Hat 6.2 with glibc 2.2.x installed from source, Red Hat 7.0, Mandrake, etc.

    At least a source rpm can be rebuilt against the libraries and headers on *your* machine.
  • The source code could have just as easily been modified. A couple of lines in some obscure part of it is all that would be required to do the same thing any RPM install script could do. Quit being so damned paranoid already or just get your RPMs from a trusted source like the author or a major distribution's site where they check them.
  • Yeah, and guess what.

    Those Makefiles run as root on your system, and can therefore do anything, bla bla bla.

    Besides, all an rpm file is, is a fancy cpio archive. You can rip it apart and not let rpm do anything extra.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
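The "fancy cpio archive" point can be made concrete. rpm2cpio (shipped with rpm itself) turns a package into a plain cpio stream, which you can then unpack by hand without running any of the package's install scripts. A sketch, using a locally built archive in place of a real .rpm so it runs anywhere cpio is available:

```shell
# With a real package you would run:
#   rpm2cpio somepackage.rpm | cpio -idmv
# Below, a hand-made cpio archive stands in for rpm2cpio's output.
rm -rf /tmp/cpiodemo
mkdir -p /tmp/cpiodemo/src /tmp/cpiodemo/out
echo "payload" > /tmp/cpiodemo/src/file.txt
( cd /tmp/cpiodemo/src && echo file.txt | cpio -o ) > /tmp/cpiodemo/a.cpio
( cd /tmp/cpiodemo/out && cpio -idm < /tmp/cpiodemo/a.cpio )
cat /tmp/cpiodemo/out/file.txt   # the extracted file, no rpm involved
```

Once the files are extracted you can inspect them (and the scripts, via rpm -qp --scripts) before deciding whether to do a real install.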
  • Anybody can sign a package. That doesn't mean shit. As another posting mentioned, any fool can set up a GeoCities page, or similar, and provide their keys for verification, and still provide trojaned packages.

    Why do some people think package signing is an end-all, be-all solution to trojan binaries? It's just one more way to lull yourself into a false sense of security. And of course, I think it's funny that people bitch that the Debian package format doesn't support it, considering that for all practical purposes, it gains you absolutely ZIP.

    Maybe the Debian way has some advantages - keep all your packages in trusted archives, where they've been verified and tested, instead of relying on digital signatures to keep bad things from happening. You'll note having a real pen-and-paper signature from a person doesn't stop them from doing bad things either.
    _____
  • Why? Dependencies. For instance, suppose you're trying to install GTK as a RedHat package, but you've already compiled GLIB from source. Now the GTK package won't install because it can't find GLIB in its "registry" (shudder)...

    That's why I prefer dpkg. For those few times where I do have to compile a newer version of a standard package, it's a simple matter to unpack the standard binary version, copy in the updated results of compiling, bump the version number, and have a nice shiny new .deb that will correctly update the database.

    This mostly comes in handy when I need something available in unstable, but prefer to use stable for everything else (production box). It's much easier to certify one package for production than an entire (unstable!) distro.

    I know it can be done with RPM as well, but it seems to be a bigger pain to get it right. I also like having packages that can be manipulated using utilities that are a standard part of Unix (ar, tar, gzip).

  • Dpkg won't solve this. It will only resolve dependencies when you use dpkg to install software, same as RPM.

    No, it won't, but building the compiled source into a binary package with an appropriate version number will. It's easier to do that with a .deb than with a .rpm (just like I said). That WILL make the package manager recognize it.

  • Good idea. But who are you?

    (I don't mean that snidely -- I'm serious. What are you going to do to assure that trustedrpm.net means anything?)
  • by mattdm ( 1931 ) on Sunday April 15, 2001 @02:43AM (#290705) Homepage
    1. Get the GPG key of the maintainer from a trusted source.

    2. Add it to your GPG keyring using "gpg --import keyfile.txt"

    3. Do "rpm -K somepackage.rpm".

    4. If it's signed properly by someone on your keyring, it'll say "md5 gpg OK". If not signed at all, it'll just say "md5 OK". If signed, but not by someone you know, or if it's been tampered with, it'll say "md5 GPG NOT OK".
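The status strings in step 4 can also be checked mechanically. A tiny helper, assuming the `rpm -K` output formats quoted above (these are from the rpm 3.x/4.0 era and may differ in other versions):

```shell
# classify_sig: interpret a status line printed by `rpm -K package.rpm`.
# The three formats handled are the ones quoted in step 4; this is a
# sketch, not a substitute for checking rpm -K's exit status yourself.
classify_sig() {
    case "$1" in
        *"NOT OK"*) echo "reject: bad signature or tampered package" ;;
        *"gpg OK"*) echo "accept: signed by a key on your keyring" ;;
        *"OK"*)     echo "warn: unsigned, checksum only" ;;
        *)          echo "unknown output" ;;
    esac
}

classify_sig "somepackage.rpm: md5 gpg OK"      # accept
classify_sig "somepackage.rpm: md5 OK"          # warn
classify_sig "somepackage.rpm: md5 GPG NOT OK"  # reject
```

A wrapper like this could refuse to pass unsigned packages on to `rpm -U` in an automated update script.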

  • I use several packages that are a real pain to install -- ALSA and LiViD are two examples. Lately, I've been trying out quasimodo and ardour (two sound recording applications). Most of these packages come from CVS, primarily because I'm interested in the source. ALSA is very unstable, and because it is kernel-dependent, the updates are as frequent as kernel releases.

    But each of these packages consists of several subpackages, each of which must be installed before compilation of the next subpackage can proceed.

    ALSA consists of alsa-driver, alsa-lib, and alsa-utils. LiViD consists of libcss, (optionally) libmpeg2, oms, and omi. Quasimodo has about six different sub-libraries (at least one of which depends on ALSA), plus associated applications. Ardour depends on Quasimodo...
    The only sane solution I've found is to totally automate the process with a loop along the lines of:
    #!/bin/bash
    for d in libcss libmpeg2 oms omi
    do
        echo "entering package $d"
        cd "$d" || exit 1
        ./autogen.sh
        make clean all
        su -c "make install"
        cd ..
    done

    While I do have to type the root password several times, it is a bit more convenient than compiling source as root. But is it really more dangerous than typing "rpm -Uvh *.rpm"?

    Probably the best solution would be to set up a user-accessible hierarchy (/usr/cvs perhaps) that doesn't require root access to write to, e.g. /usr/cvs/bin; but one still needs to register libraries with the (root-only) ldconfig tool. On the other hand, "configure" defaults to installing in /usr/local...
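The no-root variant the parent is reaching for can be sketched with an autoconf-style prefix; /tmp/userprefix here is an arbitrary stand-in for /usr/cvs or ~/local:

```shell
# Per-user install prefix: no step ever needs root.
PREFIX=/tmp/userprefix
mkdir -p "$PREFIX/bin" "$PREFIX/lib"
# A package built with:
#   ./configure --prefix="$PREFIX" && make && make install
# would land entirely under $PREFIX. Then make it visible:
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib:${LD_LIBRARY_PATH:-}"
echo "$PATH" | cut -d: -f1   # first PATH entry is now the private prefix
```

The trade-off is exactly the one the parent notes: system-wide library registration (ldconfig) still needs root, which is why LD_LIBRARY_PATH is exported instead.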

  • The vast majority of Linux users could no more spot a trojan in an SRPM than in an RPM. Trying to convince them to use SRPMs is only going to make things worse. Also, what about systems that don't have compilers?

    The right thing to do is to fix any problems with package signing, and to make the system transparent. (So if I install a Redhat system out of the box, it should come with a Redhat public key, and rpm should warn me if I install anything not signed with that key.)

    Danny.

  • I'm always resorting to a hunt for packages on rpmfind, where their origin is less clear.

    Huh? rpmfind tells you *exactly* where it got the RPM from. What's the problem?

  • what is RPM's answer to dselect

    autorpm [kaybee.org]

  • What is wrong with requiring a compiler in the OS?

    Imagine user-friendly applications that can rely on the existence of a compiler to produce their own plugins or compile user instructions into very fast machine code.

    And even dumb users may appreciate the ability to get an application in a small package that they know will work on their system.

    It is true that the compilation should be hidden from the user. The download should be a single file that you double-click, it then compiles with a nice "N% done" meter, it then pops up a window with two buttons, one says "try the program" (which runs it, and it can be pushed repeatedly). The other says "install the program", this then asks for the root password and does the installation.

  • Back to the time when men were Stallman!

    Sorry. Couldn't resist the pun.

    --
  • by FFFish ( 7567 ) on Saturday April 14, 2001 @11:25PM (#290712) Homepage
    And these self-same newbies are going to be able to download source, *inspect the code for trojans*, and compile?

    Security is as security does. Downloading a binary from an untrusted source isn't a whole lot more risky than downloading an indecipherable source code to compile.

    Unless you're some sort of Code Hero, you just gotta trust that people aren't out to screw ya. I'll take the convenience of binaries over the P.I.T.A. of source, any day...

    --
  • You're talking about using a disassembler on the binary, right? Those produce rather hard to read but decipherable assembler code. Just getting the source in C (or even the original source in assembly with comments if the author cares not for cross-platform portability) would be at least 10 times easier.

    But there is a huge difference between machine code (1's and 0's) and assembly language. There are only a very few highly strange people that can glean any meaning at all from machine code.

  • Source RPMs solve nothing. If you are retrieving any software from an untrusted source, then running that either as root or as any user that ever switches to root from their account, you can be owned quite easily. The only solution is to verify that the software does what you want by reading the source, or having someone you trust (like a distribution or local admin) do it for you.
  • I'll have to get back to you guys on this one. I've been too busy reviewing the SRPM for Emacs. Another two years and maybe I'll be ready to compile it.
  • GNU stow [gnu.org] - manages your /usr/local hierarchy with ease.

    You install your programs into /usr/local/stow/foo/*, and then from /usr/local/stow type stow foo, which creates symlinks into the /usr/local hierarchy. To remove the symlinks, simply cd /usr/local/stow and stow --delete foo, and then you can safely remove /usr/local/stow/foo.

    Very simple, very effective.

    Go you big red fire engine!
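What `stow foo` boils down to can be sketched with plain symlinks in a scratch tree (real stow operates on /usr/local and mirrors whole directory hierarchies, not single files):

```shell
# Scratch-tree sketch of the stow idea; paths are arbitrary placeholders.
rm -rf /tmp/stowdemo
mkdir -p /tmp/stowdemo/stow/foo/bin /tmp/stowdemo/bin
echo '#!/bin/sh' > /tmp/stowdemo/stow/foo/bin/foo
ln -sf ../stow/foo/bin/foo /tmp/stowdemo/bin/foo   # roughly: stow foo
readlink /tmp/stowdemo/bin/foo                     # link points into the stow tree
rm /tmp/stowdemo/bin/foo                           # roughly: stow --delete foo
# /tmp/stowdemo/stow/foo can now be removed safely
```

Because the installed files all live under one stow subdirectory, uninstalling a source-built package is just deleting links plus one directory, which is the per-package bookkeeping RPM's database otherwise provides.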

  • There have been several people pointing out that they never inspect the source anyway, so binaries are just as safe. I don't think that's quite true. At least the source normally comes from a relatively 'trusted' source, whereas if you're downloading a binary rpm from a 'contrib' directory, you may not be able to find out who wrote it.

    However, I'd be more interested to hear if anyone has been stung by malicious rpms - is the problem widespread at all?
  • by landley ( 9786 ) on Sunday April 15, 2001 @02:36AM (#290718) Homepage
    Remember Ken Thompson's trojaned compiler that realised when it was recompiling itself and patched the trojan in so it wasn't anywhere in the source code?

    It's possible to be paranoid enough to qualify for medication and STILL not be 100% sure they can't "get you". NSA spooks sniffing your monitor's EM radiation to see your screen. A virus in your flashable bios. Those fun kernel module rootkits you can't detect without a clean boot disk and a hex editor (which is a fun way to inspect a 40 gigabyte drive...)

    Personal inspection simply doesn't scale. It hasn't been a realistic option for anybody who couldn't afford to do it as a full-time job since the days of the Commodore 64.

    What you do is you find somebody to trust who will do this for you. Then you have to trust them. If Red Hat doesn't do it, then somebody who makes a distro based on paranoia (Red Helmet?) will make money by being, basically, paranoid and proud of it.

    Red Hat obviously isn't it. It installs nfs by default. It installs a network writeable LPR even if you haven't installed printer support. That's just wrong...

    But RPMs are just Linux's equivalent of running normal programs under an MS OS. You can get viruses/trojans/whatever we're calling the variant this week. That's what being root means, and some things HAVE to have that access. (By definition, configuring the system requires the ability to make changes to it. Why does this seem to surprise people?)

    Rob

  • While a binary is easy to install, your download should also include the source.

    If for some reason you don't want to make the code, don't. (Don't know how, don't care to, don't think there's a hole: that's cool.)

    But if you're the least bit concerned about security, or just run on a different platform, you should be able to examine (and possibly tweak) and then compile the code to check that it matches the binary, so you're sure of what you've got.
  • Why? Dependencies. For instance, suppose you're trying to install GTK as a RedHat package, but you've already compiled GLIB from source. Now the GTK package won't install because it can't find GLIB in its "registry" (shudder)...


    Nonsense. Have you ever used RPM?

    What will happen in this circumstance is that RPM will tell you EXACTLY what dependencies are not met, which gives you the option to use a single command line option to tell it "ok, I know this dependency isn't met, go ahead and install".

    BTW, dependencies can be created for things like "glib >= 1.2" or the like, which would match files installed from any other RPM, and even "requires /usr/lib/libglib-1.2.so.0", which would find that file no matter how it got there.

    So what you're decrying is:

    1) Your own lack of knowledge regarding how to use the program.

    2) The lack of knowledge or bad decisions of people who created certain packages.

    In other words, FUD. RTFM and you won't have these problems.

    -
  • The problem is, with a vast majority of linux software, there is no FM to read.

    [smcmahon@qward smcmahon]$ rpm -ql rpm

    (much deleted, for brevity)

    /usr/man/man1/gendiff.1.gz
    /usr/man/man8/rpm.8.gz
    /usr/man/man8/rpm2cpio.8.gz
    /usr/man/pl/man8/rpm.8.gz
    /usr/man/ru/man8/rpm.8.gz
    /usr/man/ru/man8/rpm2cpio.8.gz

    (the rest deleted)

    We're not talking about the vast majority of Linux software, we're talking about RPM.

    However, if we were talking about the vast majority of Linux software, you'd still be wrong.

    "man man" and learn something.

    -
  • Half of the installs I try I have to force because they require "/usr/bin/perl". I installed perl from source, and there is, in fact, a /usr/bin/perl which is the full binary.

    Give me one example. I'll bet you when I check it, it doesn't require /usr/bin/perl, it requires some perl RPM. In any event, so what? It tells you what it requires, you assess it visually, and then you install. No problem.

    What do you want, quiet mangling of software in the background without letting you control what's going on, or complete control of the process? If you install anything outside the control of the package manager, you may have to take an extra step once in a while. The package manager can't prevent that, it can only make it possible to proceed despite it, and that's what it does.

    RPM has easy-to-understand tools that you and the original poster don't seem to understand how to use. Whose fault is that?

    -
  • Ok, I'll concede to the fact that RPM has a pretty good man page written for it.

    Thank you. See, now that wasn't so hard, was it?

    Go check out the stuff on freshmeat. A good chunk of that crap doesn't have a man page written - and if you're lucky, the developer happened to include a README.

    What does that have to do with Linux? Other than the fact that those are written FOR Linux, not as a part of it.

    Should we judge Windows by the quality of the documentation that comes with freeware for it, too?

    Linux is incredibly well documented. Far more so than any version of Windows, ever. Some people have written programs that they have not documented well. That is their problem, not Linux's.

    Most Linux distributions include mostly software that is well documented. Windows has commands that have existed for a decade and aren't documented, such as "subst", but I don't hear you decrying it as poorly-documented.

    A very good case could be made that Linux is the most well-documented operating system ever written.

    -
  • 'course if you're using NT etc. it can be configured to only let admins run installshield.
  • In Win2k we're warned when we try to install an unsigned driver. I've heard that in WinXP we won't have the option to install one.

    What does that give us? It makes sure that the binaries we're installing are safe and virus free. (unless you consider any M$ software a virus, as the popular saying goes.) :-)

    Granted M$ could abuse this process by making it hard for certain companies to get their binaries certified, but that may be a price to pay for safe software. (And this is another tangential discussion.)

    Now this takes care of drivers, but doesn't seem to account for applications. Win2k doesn't care what application you install. It will happily let you install viruses that make modifications to /winnt/system32. I don't know about XP. (Installing the beta is on my todo list, as XP is in my company's migration path.) I wouldn't mind a bit if XP /warned/ users before installing an unsigned binary application. Then again I'd be pretty pissed if it /prevented/ me from doing so.

    So maybe Microsoft has a point. I know RPMs freak me out. I'd feel safer installing only signed, certified "virus-free" software on any OS.
  • I used to use Red Hat exclusively. It's on my servers, my workstations, and was on my laptop. But lately, I have been running into the "Linux speed of change factor". Running Red Hat and trying to do cutting-edge development doesn't mix. Downloading the newest libs means fighting with trying to install into the non-standard Red Hat file layout. (Example: download and compile the latest dev Apache. It doesn't want to install to /home/httpd and /etc/httpd; it wants to be where the developers told it to be, and that is not where Red Hat puts it.) Digging deep into the embedded Linux world, binary RPMs are 100% useless; embedded is a completely different beast...

    binaries, oh binaries... good for an appliance user. OK/not OK for a real user. (DEF: real user, person who compiles their own kernel, configures things by hand to make them work perfectly, and digs around in the bowels on occasion.) And bad for a developer. I don't know where it will all go. Companies want to hold tight to that "Golden IP" thing they think is valuable but in reality is worthless (sorry ATI, your drivers are not making you billions...), and companies are never run by visionaries... they are run by self-centered businesspeople (of course they are, they wouldn't be a business if they weren't) who will not share with others unless it makes them more money or is worthless anyway (and even then they are usually greedy... anyone seen dBase release the code to version 1.0? Nope, and they never will... greed...)

    The world will be forever split. Open source over here and corporate over there... It's more important that the open-source people keep a lot of pressure on the corporate people than try to "play nice"...

    Because... corporate will NEVER play nice. (It's not in their best interests.)

  • Oh, please. Debian won't solve what he is trying to accomplish (and I'm using Debian, so...). He wants to compile from source (as in a tar.gz package) and have the package manager recognize it. Dpkg won't solve this. It will only resolve dependencies when you use dpkg to install software, same as RPM.

    When I used RedHat, I used to compile from source RPM all the time. No problems.
  • dselect has the worst user interface ever. You're the first person who has said anything remotely nice about it.

    Which is why Debian is moving away from dselect to aptitude.

  • If there is a problem that reduces the trustability of PGP signatures in RPM files, or that reduces the likelihood that a signature can be verified, then we should develop an RPM web of trust for RPM signers. I'd be willing to front the webspace and database if such a thing doesn't already exist.

  • I'm just an RPM packager and user. No one can say "you can trust my RPMs," but it would be great if there were a system available by which a packager could build a trust relationship with users, and by which users could verify the claims of RPM packagers.

    If it's needed, TRUSTEDRPM.NET can be used to start a mailing list and website so other people interested in building a network of trusted RPM packagers can discuss what could or should be done. I can provide any other resources that are needed (application server, database server, bandwidth, etc.) and do the development work too. I'd really like to see this happen now, not later.

    What's needed isn't resources of course. It's qualified and enthusiastic participants.

  • It is easy to verify the signature of an RPM produced by a major vendor. However we need a system by which individual packagers can build trust with the users of their RPMs. Perhaps a "web of trust" type system would work, or perhaps someone can suggest a more appropriate solution.

  • by winterstorm ( 13189 ) on Sunday April 15, 2001 @06:20AM (#290732)

    The problem still remains. It lies not with the technology but with the lack of a trusted source. Are there any existing systems for building RPM signature trust?

    I have just registered TRUSTEDRPM.NET for the purpose of hosting an application to help build trust among RPM signers and users, assuming something doesn't already exist.

  • by HeghmoH ( 13204 ) on Sunday April 15, 2001 @06:48AM (#290733) Homepage Journal
    The point is, options 1 and 2 are equally secure. There's nothing about source RPMs that make them better unless you're going to go through and examine every line of code. If you're just going to take standard precautions, then there is no difference, security-wise, between binary RPMs and source RPMs.
  • This approach of trusting only certain authorities is very limiting; it means that only well-known groups could provide me with software. Signatures protect against tampering between the source and destination, but what about attacks from the source? In this modern age, I would like to be able to try software from some random dude in Tuva or Timbuktu. How can I do that without exposing myself to attacks?

    One approach that I've wanted to try should be helpful, but I haven't had time to build it. Others have talked about this elsewhere, I'm pretty sure, but I still don't have the app in my hands. Do any of you have time to write it?

    Most software I install is installed by root, but will then be run by a random user. Let's suppose that I'm willing to trust code as an unprivileged user but not as root.

    The fundamental approach is sandboxing. Don't trust software based on the name of the author (and the signatures that authenticate it), but on what the software does. We're going to run the installer (e.g. rpm or configure/make/make install) from within a specially-configured subterfugue [subterfugue.org] process. Subterfugue intercepts system calls and can replace the requested actions with custom ones. open, write, mkdir, mknod, rmdir, unlink, etc. will all be intercepted.

    I would configure subterfugue such that:

    1. read/write access to files and requests for network connections are deferred until permission is granted at a prompt
    2. when write access is granted, the file is copied to a sandbox and the installer is allowed to open the copy.

    At this point, no files outside the sandbox have been modified. Post-modification checks are done here, optionally examining the "diff" relative to the real file. If any of these tests fail, the software should not be installed. If it is to be installed, then the modified files are moved from the sandbox to the real locations.

    Whether installation is aborted or completed, clean-up is merely erasing the entire sandbox directory.

    This sort of thing is complicated to use, but that's because it's asking for complex judgement from the user. It's begging for a clean front-end with permission rules ("let it create new executables in /usr/local/bin", "allow all writes to this directory subtree", etc.). And if anyone has a simpler way to achieve this goal (other than the current "hope you're safe"), I'd love to hear it.

    Furthermore, this approach only protects from installer attacks that are trying to gain root access. It doesn't protect against attacks run from the user account (which could even be root!). If one were really committed, one could place a subterfugue wrapper around all programs, tuning each for permission to only the files and network access you decide it needs. But there you may have an unacceptable slowdown from the interception of all system calls.
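A crude shell analogue of the sandbox idea above (not subterfugue itself, which intercepts syscalls): stage the install into a scratch root, inspect it, and only then copy anything out. Paths and the fake staged file are placeholders:

```shell
# Stage-then-inspect install sketch; nothing outside $SANDBOX is touched.
SANDBOX=/tmp/installsandbox
rm -rf "$SANDBOX" && mkdir -p "$SANDBOX"
# For an autoconf-style package the staging step would be:
#   make install DESTDIR="$SANDBOX"
# Simulate what such a staged install leaves behind:
mkdir -p "$SANDBOX/usr/local/bin"
printf '#!/bin/sh\necho hi\n' > "$SANDBOX/usr/local/bin/newtool"
# Inspection step: list every file the "install" wanted to create.
find "$SANDBOX" -type f
# Abort = rm -rf "$SANDBOX"; commit = copy the tree into / as root.
```

This catches installers that scribble outside their advertised paths, though unlike syscall interception it only works for installers that honor DESTDIR.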

  • Not really, the fact of the matter is that you can inspect the SRPM but you really can't inspect the binary RPM. You don't need to inspect every line of the source tarball, you only need to verify that it matches the original. Of course if you don't trust the original writer of the software, like some random app you saw on freshmeat.net once, then of course the app could be a trojan but that is beyond the scope of this discussion.

    Anyway, if you can verify the main source tarball, then the only things you have to manually check are any patches to the source and the spec file. That is much more doable than auditing the entire source. Compile the package as a non-root user and your entire exposure is in the pre/post install scripts and any source patches that trojanize the app. That's all the article is saying.
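The tarball-verification step can be sketched with md5sum; the filenames are placeholders, and a real check would also verify the author's GPG signature rather than trusting a checksum fetched from the same place as the tarball:

```shell
# Sketch of "verify the main source tarball matches the original".
cd /tmp
echo "pretend source tarball" > foo-1.0.tar.gz
md5sum foo-1.0.tar.gz > foo-1.0.tar.gz.md5   # upstream publishes this part

# Before building, anyone can re-check the download:
if md5sum -c foo-1.0.tar.gz.md5 >/dev/null 2>&1; then
    echo "tarball matches published checksum"
else
    echo "MISMATCH: do not build" >&2
fi
```

With the pristine tarball confirmed, the audit shrinks to the distribution's patches and the .spec file, exactly as the parent says.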

  • by Raven667 ( 14867 ) on Sunday April 15, 2001 @12:44AM (#290736) Homepage

    It didn't get much mention, but if you get your packages from a trusted source and check the GPG signature you should be OK. Most of the article detailed all the problems you have getting packages from untrusted sources.

    And these self-same newbies are going to be able to download source, *inspect the code for trojans*, and compile?

    OK, here's a challenge, think of a better way. I'm not trying to be a PITA but there isn't any easier way to verify that you aren't installing a trojan. You have three options:

    1. Install binaries from trusted sources, check GPG signatures.
    2. Compile source packages, inspecting the .spec file and patches, checking the MD5 sum or signature of the original source tarball
    3. Install binary packages from untrusted sources and eventually get trojaned.
    If you can think of a fourth option, I would love to hear it.
    Unless you're some sort of Code Hero, you just gotta trust that people aren't out to screw ya. I'll take the convenience of binaries over the P.I.T.A. of source, any day...

    Hey, I'm sorry man, but look around you; this isn't the Internet of 10 years ago. I build firewalls for a living, a job that wouldn't exist if some people out there weren't out to screw other people, most of the time just for kicks. Otherwise the phrase "Script Kiddie" would never have entered our lexicon.

    This is the one reason that I recommend Linux distributions with large trusted package repositories (Debian) to new users, esp. if they aren't going to be doing much of their own system administration. There's nothing better than being able to just apt-get install any arbitrary piece of software that they hear about and have it just work.

  • Only one problem: having the package be signed doesn't tell you anything about how safe it is to install it. All it tells you is that the package was built by the person who claims to have built it. It doesn't tell you whether you can trust the person who built it not to have trojaned it, or even whether the person actually is the person they claim to be (see the recent obtaining of official Microsoft certificates from Verisign by parties not affiliated with Microsoft for an example). Given that, signatures on the binary RPMs are completely inadequate for security (just as signatures on ActiveX components are completely inadequate for security), and if you do the things needed to make them secure then you wind up not needing the original binary RPM at all anyway.

  • That's one of the points. You know it hasn't been tampered with since it was signed, but you've no assurance that someone didn't unpack it, trojan it, repack it and sign it with their certificate (which happens to look very similar to the one that should be on it, perhaps even being in the right company name). Nor does it give you any assurance that the server you got it from wasn't compromised and not only the packages altered but the certificates and signatures/fingerprints on the server as well. You end up having to go obtain the packages and the certificates to verify them directly from the original author to get those assurances, and do it over an untamperable connection with verification of the sender's identity (not a certificate; actual out-of-band verifiable proof of identity), at which point you don't need a digital signature on the package for either authentication of source or detection of tampering.

  • Compiling from source is a much better alternative in any case. The only time I RPM anything is when it takes 15 packages to let me use a certain piece of software (KDE for example), but even now I'm thinking that compiling would save me headaches.

  • "RTFM and you won't have these problems"

    The problem is, with a vast majority of linux software, there is no FM to read.

    Read the source? Give me a break.

    ~dlb
  • Ok, I'll concede to the fact that RPM has a pretty good man page written for it.

    But,

    Man pages are the worst injustice to Linux when it comes to someone trying to figure something out. That's assuming whoever wrote the damn thing even bothered writing a man page. I'm surprised that a "Certified Perl Developer" hasn't come to that realization.

    They're written by someone who generally hasn't a clue how to put themselves in the shoes of the poor sod who doesn't know the developer's code inside and out.
    I'm not saying we should aim documentation at the lowest common denominator, but please, "read the source" is not an acceptable answer to "Do you have some documentation for this piece of software?" I don't feel like reading your uncommented spaghetti code just to figure out how the damn thing works.

    Don't kid yourself about how crappy the documentation is in general for Linux software. Go check out the stuff on freshmeat. A good chunk of that crap doesn't have a man page written - and if you're lucky, the developer happened to include a README.

    I love Linux, but apart from using "man" as a reference, Linux documentation in general sucks.

    ~dlb
  • At least with RPM you can see what it's going to do and what files the package holds, e.g.:
    rpm -qpl package.rpm
    rpm -Uvvv --test package.rpm

    I wish redcarpet had this
  • Duh,

    Never install an RPM unless:

    1) You wrote it (my favorite)
    2) The vendor (Red Hat, Mandrake et al) released it (and the sig verifies)
    3) The developer released it with their software (think fetchmail)

    Even then #3 can be dangerous. Since RPMs install as root you can really kill yourself if you install a fscked up RPM.

    Take APC for example. They have no concept of how (Red Hat) Linux does things, and the early versions of their RPMs for their PowerChute software showed it. It always made me nervous when I installed their stuff.
  • Sounds like it would be a porn centered
    distribution... Divx and newsreading etc
    in the default install!
  • by listen ( 20464 ) on Sunday April 15, 2001 @02:34PM (#290745)
    Unfortunately, a strap-on security model
    is always very sucky. Look at ACLs on any
    unix or even NT. Your model sounds like a
    typical hacked up pile of shite.

    What you want is capabilities.
    See www.eros-os.org

    Basically - a capability is an
    unforgeable object reference.
    In a true capability system,
    that is the only way to do
    anything outside your own address space.

    This allows anyone to implement a very
    simple security model which is much
    easier to verify. It also gets
    rid of all the dumb ass home grown
    authentication systems on unix. (ie Each
    app authenticating people in a funny way)
  • As far as I know, even Red Hat distributes many "not so easily trustable" RPMs with its distribution all the time. Thank god I am using Slackware and compiling from source!

    Do you have specific examples? I know that Red Hat takes great care with all packages on the main Red Hat CDs, and makes all of them themselves. I believe the Powertools CDs are the exception, but I think it's explicitly stated that those are from third parties.

  • Yeah, we know. Red Hat can't even keep their dirty hands out of the linux kernel source tarball. They're not as bad as SuSE in that fashion (screwing with the 'make config' script to completely foul up certain options- the 'we know better than you' bullshit), but they're pretty close.

    So what is your suggestion? Should they ship a kernel in their distro with known bugs, when they instead could easily fix it by applying a patch?

    If you want a pristine kernel, download one from kernel.org [kernel.org] yourself or rebuild the kernel source rpm with whatever options you want. If you want a stable and well-tested kernel that works well with your distro and is built with the necessary options, use the kernel that comes with the distro. If you want to know what changes were made, just read the documentation for the kernel source rpm.

    Don't confuse your distro with kernel.org; they fill different purposes. kernel.org has to distribute pristine kernels, but they can have serious bugs, that's not their problem. Your distro on the other hand does not have to ship pristine kernels, but they should not contain any serious bugs and it's their problem to make sure it doesn't.

  • One annoying thing with prepackaged software is that it's hard to mix it with stuff you compile. You'll either have to compile everything from scratch or use nothing but binary packages.

    Why? Dependencies. For instance, if you're trying to install GTK as a RedHat package, but you've already compiled GLIB using the source. Now, the GTK package won't install because it can't find GLIB in its "registery" (shudder)...

    It's not a "registry", it's the rpm database. Yes, rpm tries to solve dependencies, and no, it doesn't include some obscure logic to figure out what you have compiled and installed on your own on your system and what belongs to what of that. So it has to trust its own database over installed packages.

    If you find it annoying that rpm tries to maintain dependencies, there's a very elegant solution: create your own packages. This will require some basic knowledge about rpm, but the advantage is clear. You just have to make an additional step when compiling your software, but it will keep rpm happy, and you can actually make use of the rpm system: Removing software will be a lot easier, and you won't accidentally break something because you removed software that other software depended upon, just as an example.

  • Umm, rpm has supported signed packages for quite some time now. Red Hat does sign all their packages using GPG. This is the signature information included with every official Red Hat update:

    These packages are GPG signed by Red Hat, Inc. for security. Our key is available at:

    http://www.redhat.com/corp/contact.html [redhat.com]

    You can verify each package with the following command:

    rpm --checksig

    If you only wish to verify that each package has not been corrupted or tampered with, examine only the md5sum with the following command:

    rpm --checksig --nogpg

    So it's not exactly a new idea :-)
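    For readers without a GPG keyring set up, the same "check before you trust" pattern can be sketched with plain md5sum; the filenames below are hypothetical stand-ins for a real download and a digest that would normally come from the vendor's site, not be generated locally:

```shell
# Minimal sketch of checksum verification before installing a package.
# In real use, the .md5 file comes from the vendor over a separate,
# trusted channel -- generating it yourself (as here) proves nothing.
echo "pretend package payload" > pkg-1.0.i386.rpm
md5sum pkg-1.0.i386.rpm > pkg-1.0.i386.rpm.md5   # stand-in for the published digest
md5sum -c pkg-1.0.i386.rpm.md5                   # prints "pkg-1.0.i386.rpm: OK" on a match
```

    The weak point, as the article notes, is that an attacker who can replace the package on a mirror can usually replace the checksum file too, which is why the signature has to chain back to a key you obtained independently.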

  • It seems to me that I would be hard-pressed to go out and find a piece of infected software, for pretty much any operating system. If I want some program that does something, I stick it into google, get the homepage. The bins/source on that homepage isn't gonna be infected, cause they want people to use their program, not NOT use it. Even tucows/sunsite/sourceforge or whatever isn't gonna have viruses. I think that we're talking about a completely nonexistent problem here.
    I would naturally think twice about installing something I got from somebody on IRC, but come on, I think that when you get something off a large software repository, it's not gonna be infected.

    What does matter is better linux games...
  • Binaries can be audited too.

    Not legally in the US anymore, thanks to the DMCA. At least, not if the license says you can't.

    Not to imply that I agree with Kurt's article in any way...
  • The fact is, you HAVE to trust, no matter what operating system you use. There are lots of people out there who, believe it or not, use linux, and don't know a thing about reading source code in C or any other language. (I am not one of those... I know C quite well.)

    There are other people who just don't want to look at the source... or fill their disk with it, or whatever. Installing binaries comes with the same risk as installing any old piece of software for any other popular OS. (Windows, MacOS). Millions of people take that risk every day, without problem.

    Are there dangers? Hell yes there are. Do most people know this? Yeah, they probably do. Do most of them care? Probably not. Personally, for my desktop machine, I really don't care. For the critical systems I maintain... yeah, I do care. But Joe Average nonprogrammer doesn't, and even if he did care... he can't understand the source anyway.

    Yes, using binaries is dangerous, but comparatively not that dangerous at all.

    On the issue of package signing. Well, I can set up a geocities website... make a trojan RPM, give it away... and even sign it with GPG and give away the public key all on the same site... and some of the comments here seem to imply that there are users out there who think merely having a signature on the package makes it all good.

    Think again. ANYONE can sign a package... and anyone can set up a website to give away binaries and GPG keys that sign them. Even better, one can provide source that doesn't have the trojan... to make people feel more comfortable. Maybe even make it broken a little so that people try out the binary to see if it works any better.

    I can think of a dozen ways to get people to use binaries over source. Some are even familiar... remember the first versions of BladeEnc?

    So, in the end... you take a risk anyway.. You can lower it, even to zero... but most people will take the risk.

  • Isn't he the one that cried wolf over ssh not too long ago? If we can't trust the binaries, how are we going to trust the source?

    Quick poll, raise your hand if you inspect EVERY piece of software you install for security problems. I thought so.

    If we are going to be that paranoid, we should do one of two things: 1) Throw our computers away and never use them again because we don't know what we might be installing. 2) Go back to the time when men were men and everyone wrote their own programs, compilers, shells, etc.

    Give me a break...

  • by coyote-san ( 38515 ) on Sunday April 15, 2001 @07:35AM (#290758)
    In this case, the "problem" can also be the "solution."

    I've written some (Debian) packages which do nothing but set/clear the immutable flags of selected files when they're installed/removed.

    Some examples:

    lockdown-kernel will "chattr +i" the kernel images and modules, plus the configuration files used by the module loader. This helps stop installation of malicious kernel modules.

    lockdown-users will "chattr +i" much of the user database. (Not /etc/shadow, for obvious reasons.) This helps stop malicious installation of users, or compromising existing ones.

    lockdown-apt will "chattr +i" the configuration information used by "apt". This helps stop malicious modification of my system upgrade process.

    lockdown-lib will "chattr +i" the core libraries, and the ldconfig information. This helps stop malicious modification of the core libraries - or the library search path.

    I'm sure you get the idea. If I really do want to modify something, e.g., adding a user or installing a new kernel, I just remove the "lockdown" package, do the work, then reinstall it. This approach isn't perfect - unless I also change the capability bounding set a malicious package can simply call "chattr" itself - but it's a good first step.
  • Yes, but how many windows lusers who are not even capable of changing their default homepage are going to do that?

    It has been said on slashdot a billion times. People can not be expected to actually mess with their software and that's why people use windows. You wouldn't let your mother use linux and your mother would not be able to mess with NT or windows 9x. Windows is an OS for people who don't care and who can't deal.
  • I got my mother a Mac. No more tech support calls to me! Really, a Mac is a great choice for mom or dad. I still don't think I would go with Linux because dad likes AOL (go figure).
  • by Pingo ( 41908 ) on Saturday April 14, 2001 @11:43PM (#290761)
    This RPM issue is not just about whether RPMs are binary or source.

    The issue is, can I trust this software to be free from trojans or other evil stuff?

    It's like using credit cards: you will happily use your credit card at some reputable store but will be reluctant to use it in some shady dark-alley business.

    It's just the same with software. A well-known software house such as Red Hat, SuSE, Mandrake ... has everything to lose if it intentionally delivers shady software. Just look how anxious they are at providing security updates to their products for faults that no one really can blame them for.

    Also single persons with a good reputation such as Wietse can easily be trusted, since this guy has so much reputation to lose if he intentionally does an evil act with his software. The same goes for other people who seem to be proud of what they are doing.

    It's all about trust. You can trust the person or organisation that has much to lose if they provide software with trojans etc.

    These software houses sign all their RPMs cryptographically with the company key, and you can easily check that it's a genuine package made in good faith.

    //Pingo

  • I don't understand; how is this any different from InstallShield overwriting and destroying your Windows settings?
  • dude, that's the point, as much as there is one.
  • It sticks out like a sore thumb!

    <sarcasm>
    Don't you mean it stick's out like a sore thumb?
    </sarcasm>
    ------

  • there is no way to ever discover it.

    Machine code is no less logical than source code (and it can be more logical in some cases). A lot fewer people can understand machine (or disassembly) code, but there's nothing inherently hidden about it.

    Assembly language is easier than you think. Really. Try it.
    ------

  • Binaries can be audited too.
    ------
  • You wouldn't let your mother use linux and your mother would not be able to mess with NT or windows 9x.

    Whoa. Time out. I let my mother use Linux all the time (administer? No. Use? Yes.).

    Which is more reassuring? "Go ahead, try it. You can't break anything important" or "Eek! Don't touch that! You'll break something!"

    Windows is an OS for people who don't know any better, and don't have access to anyone who does.
    ------

  • Why does everyone use make install? How do you people keep the cruft from building up on your system when you upgrade and/or remove programs?
    ------
  • Yeah, I know. But obviously some people don't use it, because a few times I've done a "make install prefix=/usr/local/stow/progname", and the thing just installs itself in /usr anyway. Very annoying.
    ------
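    The stow layout the parent describes can be sketched with nothing but symlinks; the scratch directory and program name here are made up for illustration, standing in for /usr/local:

```shell
# Sketch of the stow idea: each program installs under its own prefix,
# then gets symlinked into the shared tree, so uninstalling is just
# deleting one directory plus its links -- no cruft left behind.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/stow/prog-1.0/bin" "$ROOT/bin"
printf '#!/bin/sh\necho hello from prog\n' > "$ROOT/stow/prog-1.0/bin/prog"
chmod +x "$ROOT/stow/prog-1.0/bin/prog"
ln -s ../stow/prog-1.0/bin/prog "$ROOT/bin/prog"
"$ROOT/bin/prog"                                # runs via the symlink
rm -rf "$ROOT/stow/prog-1.0" "$ROOT/bin/prog"   # clean uninstall
```

    The catch is exactly the one the parent hits: every package's Makefile has to honor the prefix, and some hardcode /usr anyway.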
  • You're talking about using a disassembler on the binary, right?

    Well, yes, but I don't have to be talking about it for my point to stand.
    ------

  • Yeah, but (if you guys in the US do your jobs and get on your politicians backs,) that won't be true for very long.
    ------
  • README and INSTALL are written by the maintainer of the source code and explicitly refer to the source code, something an rpm may have no intention of doing.

    And they are not untouchable. Malicious code can be obscured in the Makefile just as easily as in an RPM. The README and INSTALL files are not always in-depth guides to what is changed on your system. The only accurate check is to look over anything that will be run during the install process, and even that isn't completely safe.

    You sound like you've never even played around with a source distribution... Maybe you should get some more experience under your belt before you pretend to have any expertise.

    I don't claim to be an expert at packaging, but I have prepared both debian and redhat packages for applications, and am well aware of how they work. Personal insults aren't particularly useful.
    treke
    Fame is a vapor; popularity an accident; the only earthly certainty is oblivion.

  • by treke ( 62626 ) on Sunday April 15, 2001 @12:20AM (#290775)

    Yeah..I agree..but with source distribution you usually get README and INSTALL files that tell you exactly what is going to happen during the installation process.

    If the RPM can't be trusted, then neither can the README or the INSTALL file. Even the source can't be trusted following this line of thought. Might I recommend you read what Ken Thompson has written on the topic of trust? http://www.acm.org/classics/sep95/ [acm.org]. Eventually you have to decide who you are going to trust; if the packages are from the software's author, then I see no reason to doubt the RPMs above the source code. Same with the authors of your distribution. I'd be more wary about packages without any form of acknowledgment from someone related to the program itself.
    treke
    Fame is a vapor; popularity an accident; the only earthly certainty is oblivion.

  • Consumers think they want binaries.

    In reality, consumers don't know the difference (or care) as long as it works and tells them what they need to know. Give them a graphical program that does a ...

    ./configure; make; make install

    ... for most packages and they'll be happy. I've taught many people to install source packages without any knowledge of C, Makefiles, etc.
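    A rough sketch of what such a graphical front end might run behind the scenes; the stub configure script and Makefile here are fabricated stand-ins for a real source tree, so the whole flow is visible end to end:

```shell
# The canonical three-step source install, driven non-interactively.
# A GUI wrapper would just surface progress and errors to the user.
mkdir -p demo-1.0 && cd demo-1.0
printf '#!/bin/sh\necho configured\n' > configure && chmod +x configure
printf 'all:\n\t@echo built\ninstall:\n\t@echo installed\n' > Makefile
./configure && make && make install
```

    The point is that nothing in those three commands requires the user to know C or Makefiles; the wrapper only needs to report whether each step succeeded.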
  • May I say that Linux needs an OpenBSD-style Linux distribution?
  • Your other option is to use an "install" binary that supports removal of software by package name (this would be a nice standard enhancement, btw).

    For example (wishful thinking):

    install --package=gimp --version=4.2.6b7 ...
  • The point I made was the distribution of source. I made no claim to this being more secure or less, simply a retort to the person who claimed that users don't want source because binaries are easier.

    If you want the point, however, it means that everyone has the sources. It's a bit like everyone having guns but not knowing how to use them ... when the police state happens ... ;-)
  • Dependencies are easier to track at compilation time as well.
  • This is quite valid.

    I'll admit that for some projects, source distribution is unwieldy (although I compiled my own versions of glib, gtk and gnome-lib for most versions on a K6-200), but Make's ability to track dependencies removes the need for recompiling the whole project for a patch.
  • you cannot possibly inspect the source of every program you install.

    However, they overlook one thing: If the source is out, it CAN BE audited


    Hrm. Perhaps others capable of source audits can download the source while I download the binary? Is it just me, or is this completely obvious?

    That sounds more like an argument for using Open Source software rather than an argument for downloading SRPMs.
  • Unfortunately, a strap-on security model is always very sucky. Look at ACLs on any unix or even NT.

    And? What makes them strap-on? So Unix wasn't designed to be a secure OS. Neither was it designed to be anything more than a time-sharing system. It's evolved beyond those functions and works great. The ACLs on Unix, NT, 2K, and VMS work very well.

    What you want is capabilities.

    I want them too, but they don't replace ACLs. Capabilities are already in kernel 2.4.

    I have some Word templates on a SMB server. I want:
    * The software to be owned by the admin account
    * The software group to have read and write permission, and SetGID properties
    * The staff group to have
    * Other users to have no access.

    RWX doesn't allow me to do this in any way which isn't vastly hackish.

    This allows anyone to implement a very simple security model

    ACLs are as simple or fine grained as you need them to be, and a single line ACL is quite obviously simpler than an RWX permission.

    which is much easier to verify.

    Bah. Same thing - again, ACLs can be less complex than RWX if that's all you need.

    the dumb ass home grown authentication systems on unix. (ie Each app authenticating people in a funny way)

    Damn straight. Things like Squid, and BIND, and ...wait, they all use ACLs already.

    Capabilities are nice. They don't replace the need for ACLs, and while they work well and should be used, the RWX / Capability combination is more complex than a single tight ACL, combined perhaps with capabilities should you need them.

  • A lot of posters claim that having the source does no good, since you cannot possibly inspect the source of every program you install.

    However, they overlook one thing: If the source is out, it CAN BE audited. Which means that if the rumour shows up that vendor XY somehow tries to sneak in some spy-ware / trojan into your system, anyone can verify it (and it's easy to find those worrisome socket/bind calls in a media player's source)

    That is, it won't protect you from trojans, but once you suspect a program, you can easily verify *exactly* how your system is compromised, and the vendor who shipped it gets a BAD reputation from it.

  • Look, the point of the article is pretty simple. Only install binaries from trusted sources. Otherwise, get the source from a trusted source, compile and install it.

    And if you can't get the source from a trusted source, DON'T INSTALL IT.

    I would take this a step further, as a measure for improving the solidity of an installation. Only get binary RPMs from your distribution. NEVER get them from anyplace else. Your distribution is the best place to ensure that everything is solid and working together well (especially if you run Debian:stable). Anything else, let the configure script figure out the details during a compile.

    And if you hose your system by using third party binary packages or untrusted binaries, you will need to re-install. And in that time, you can think about what steps you can take to avoid re-installing again in the future.
  • You're ignoring or forgetting one crucial difference.

    If I install an untrusted binary rpm, there are scripts that get run as root. Not a lot of people understand them, look at them, even know how to look at them.

    If you install from source, what gets run as root? ./configure, make, and a couple of shell commands to copy the binaries and documentation into their proper place. If the source is trojaned, it's still going to be inside a user sandbox. If ./configure is trojaned, at least it's a lot more likely someone will notice that. I've never once looked at the install scripts in an rpm, but I've read configure plenty of times... *shrug* so I do think installing from source is more secure.


    "That old saw about the early bird just goes to show that the worm should have stayed in bed."
  • Consumers don't want source.....they want binaries.

    Asking your average consumer to compile their software is very, very unfriendly....prone to error and generally a bad idea if you really want Linux to be a consumer desktop.

    If linux is ever going to make it as a consumer OS then there is going to have to be some sort of compromise between ultimate security and ultimate ease of use.
  • I agree with the posters above that grabbing a binary RPM from an untrusted source isn't any better than grabbing source from an untrusted...uh...source.

    Anyway, what hasn't been mentioned is something I've been thinking about regarding virii. I think a big part of the reason a Linux virus has been unable to get off the ground is because we basically don't pass around binaries. When was the last time you gave anyone a binary? When was the last time you received one? We've all gotten so used to "./configure; make install". Writing a virus that attacked autoconf source code would in theory be possible, but, again, the distribution vector isn't very good, because we don't even pass around the source code.

    The way software gets distributed on Open-Source systems is, I tell a friend of mine, "Dude, check out this software...it's called Blady-blah. Just find it on freshmeat." Even more than the permissions system of Unixes, I think this reliance on compiling from the original source (because it's the easiest thing to do) is what keeps virii out of the Linux swimming pool.

  • by psocccer ( 105399 ) on Saturday April 14, 2001 @11:35PM (#290807) Homepage
    I personally use source tarballs all the time, since slackware's "package" management is just tarballs with an installer bootstrap. Do I feel any less susceptible?

    NO

    Why? I'm using the source, right? Well, it's not like every time I download a new program I inspect every line of code. That would be ridiculous. I don't know how everyone else is, but if there is a configure script, hell, I won't even read the INSTALL file. If I can get it working the first time, I won't even read the README. I've been through enough 'configure ; make ; make install' cycles that I try what looks right first and ask questions later. And I'd bet most others do too. Which makes us all just as vulnerable to attack as the 'binary downloading newbies'.
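    Since a Slackware-style package is just a gzipped tarball, even someone who won't read the source can at least list its payload and read its install script before trusting it; the package below is a throwaway demo built on the spot to show the idea:

```shell
# Build a tiny fake Slackware-style package, then inspect it the way
# you could inspect a real one before installing.
mkdir -p pkg/usr/bin pkg/install
printf 'echo running doinst.sh\n' > pkg/install/doinst.sh
touch pkg/usr/bin/demo-prog
tar czf demo.tgz -C pkg .
tar tzf demo.tgz                        # every path the package would drop
tar xzOf demo.tgz ./install/doinst.sh   # the script that would run on install
```

    This is the tarball analogue of "rpm -qpl" plus reading the package scripts, and it takes seconds.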
  • Not all software needs to be installed as root; you could do something like add certain users to a group which has write access to /usr/local. That still wouldn't stop install scripts being able to run a bindshell-type of program, though.
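    The group-write idea can be sketched like this; a scratch directory stands in for /usr/local, since chgrp'ing to a shared group and exercising a real multi-user install would need root:

```shell
# Sketch of a group-installable prefix: the setgid bit on the directory
# makes new files inherit its group, so any member of that group can
# install there without becoming root.
PREFIX=$(mktemp -d)
chmod 2775 "$PREFIX"     # the leading 2 is the setgid bit: rwxrwsr-x
stat -c %A "$PREFIX"     # shows the 's' in the group execute slot
touch "$PREFIX/newfile"  # inherits the directory's group
```

    As the comment notes, this limits where an install can write, but does nothing about what an install script can execute.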
  • Actually, what I described is a standard mandatory security/integrity model. [cmu.edu] (Ken Biba, circa 1976)

    The trouble with fine-grained security is that it requires fine-grained administration. This is the big problem with access control lists. Capabilities are a mechanism, not a policy. Once you have capabilities, you have to figure out how and when to issue them. That's hard. You tend to end up with a big, complex ticket-issuer.

    A valid security policy needs transitivity. That is, if A can't do X, then A must not be able to induce B into doing X. Only mandatory security models enforce such properties. This is why they work, but they're a pain. Without this property, untrusted software can be exploited by attacks.

    EROS is a reasonably clever idea, but it's not going anywhere. (KeyCos, its predecessor, was a good idea, too, but nobody could understand Norm Hardy.) EROS has persistent objects instead of files, which was probably a mistake. (In terms of persistence, what we seem to need today is very efficient support of non-persistent objects, like CGI programs. that do quick transactions and exit. Most of those programs should be executing with very limited privileges.)

    The problem with a mandatory security model is that everything else, like package managers, web servers, browsers, etc. has to be reworked to live within it. (It's mandatory, so you can't go around it.) We need to find out if the current generation of widely-used open-source tools can be made to live within a strong mandatory security model. If they can, the security problem becomes far easier; we only have to fix little trusted apps, not big untrusted ones.

  • by Animats ( 122034 ) on Sunday April 15, 2001 @07:51AM (#290814) Homepage
    The real problem is installing as root. The security model for most existing OSs, Linux included, is so weak that there's no real way to handle installs securely. But it could be done:

    • NSA Linux mandatory security controls should be used.
    • Files, processes, etc have an associated "integrity level" and "integrity compartment". Data cannot flow into an integrity compartment or upwards in integrity level except through the assistance of a trusted "upgrader/downgrader" program, which is seldom used and requires very explicit user invocation.
    • Integrity levels might be KERNEL_INSTALL, KERNEL_CONFIG, TRUSTED, REGISTERED, UNTRUSTED, HOSTILE. Software running at a given level cannot affect data at a higher level.
    • Each software vendor with a known digital signature gets an "integrity compartment". Installs go into that vendor's integrity compartment. The install program is thus locked out of affecting software from any other vendor. Such installs go in at REGISTERED level.
    • Untrusted software gets a new integrity compartment for each install, and goes in at UNTRUSTED level. Games and web content work this way.

    Basically, this doesn't let application A mess with application B's stuff. It enforces a rigid model on package structure, which may not be a bad thing. It forces vendors to clearly identify apps which need to mess with other apps, because they'll need extra privileges, which users might not want to give them.

  • This is one of the key advantages of FreeBSD's ports system. It installs everything from source, but first performs a checksum validation on the tarball. Expert users can inspect the code manually if they want, but novice users can rest assured they're using untainted code if the checksum test is passed.
  • Mr Seifried may well be right when he says that checking the GPG signature is not totally secure, but I am sure that it is better than nothing. However, I never check the signature on any of the RPMs I download because I can't figure out how. I have read the "PGP SIGNATURES" section in the RPM man page but it doesn't make any sense to me. I would love to implement the fix in the security portal advisory referred to, but I can't understand it.

    What I (and many other idiots) need is a step by step idiot's guide to getting RPM signature verification working. Someone must have written such a thing, does anyone know where to find it? It won't allow me to achieve the supreme security level advocated by Mr Seifried, but I will be much better off than I am now.

  • by e_n_d_o ( 150968 ) on Sunday April 15, 2001 @08:33AM (#290839)
    I can't believe that there isn't a single mention of Red Carpet in this entire thread except for some poor bastard who got moderated down to "0".

    The obligatory link:

    http://www.ximian.com/apps/redcarpet.php3

    Anyway, this is THE TYPE of product that is necessary for newbies to use Linux. It lets you install packages you have downloaded yourself in addition to providing "channels" which contain many of the packages you might want. It also alerts you to updated software (possible security fixes) in those channels. Before installing a package, it automatically checks cryptographic signatures. I think that a product like this with access to a broader software library than Ximian provides would take a significant chunk out of this problem.


    ---
  • Kinda funny that you mention Wietse here, as his sources were once trojanned (by someone who hacked the main distribution ftp site). So besides trusting the author, trusting the system the file came from is a requirement as well.
  • If there's a mechanism for signing rpm's (like there is for deb's) then that helps a little, if you have the public keys already to make sure nobody has messed with the rpm's you've downloaded. The big distros (Red Hat, Debian, etc.) should make some effort to get their key fingerprints widely publicized so people can check signatures easily.
  • OK, so what if you do always compile your code first. So what? Unless you're an expert, you still won't find security problems. There's no benefit to compiling your own source code first if you can't read it.

  • Don't trust them.

    Believe me... I understand that main sites can be compromised. Never download from a site or mirror you don't trust. A few sites on my "trust" list:
    ftp.ibiblio.org
    ftp.redhat.com
    ftp.slackware.com (ftp.freesoftware.com)
    ftp.cdrom.com

    One bad thing is that these are all *very* popular sites! Therefore, they are targets for someone who would want to do this. The upside is, they are highly maintained & secured.

    Some say "source RPMs fix the problem!!"
    I say "you're full of crap!!"

    A source RPM can be backdoored in either the source code or the rpm spec file. It could contain the exact same source code that the main distribution contains, but have a modified spec file that emails /etc/shadow to some hotmail account.

    Source RPMs are not any more safe than RPM files. Sure, if the person was stupid, they may have included the source for the modified version of the trojaned code, but more than likely, they are going to compile that object statically, and put the original source files in the RPM *with* the .o file. Then you would compile their binary into the other binaries built from source *WITHOUT KNOWING*! Most spec files do not do a make clean before they build, and this (of course) could be taken out *because* it is open source. Open source solves a whole lot of problems... problems that are becoming more and more important every day. But it also opens up new problems that don't exist in commercial products. Don't be an OSS zealot... Most of the time you are wrong.

    The only way you can know for certain that the file you have is unmodified is to use crypto. If you don't trust that Red Hat/whoever will stop at nothing to be sure that you don't get backdoored software, you shouldn't be using their products. That is one of the reasons I hate MS software. It is known for its easter eggs/backdoors. It makes me uncomfortable.

    A point to note... This is exactly the reason the US is going to abolish crypto laws. It will help protect both consumers, and corporations.


    -EvilMonkeyNinja
    a.k.a. Joseph Nicholas Yarbrough
    Security Grunt by Day
    Programmer by Night
  • It makes no sense to mention Debian DEB (not dpkg, which is just a tool) packages in this context and not say more about them!

    Both DEB and RPM support signed packages, but at least most software installed on a Debian system over the Net is by apt-get, which uses specific sites to fetch most of the packages you'd ever want. On RPM based systems, I'm always resorting to a hunt for packages on rpmfind [rpmfind.net], where their origin is less clear.

  • It only helps you for other people to audit the source in a package if you carefully download only signed packages. In this, there's STILL no benefit to source code tarball (tgz) so-called packages over signed binary packages from a known provider. Carefully choosing the contents of your apt sources.list on a Debian [debian.org] based system will save you a lot of manual package signature inspection you'd have to go through with RPM or tarballs.
