Linux Software

Binary Package Formats Compared

jjaimon writes "There is a document on different package formats used in Linux/Unix systems. Worth reading." Another reader sends in this guide to creating Debian packages which seems apropos here.
  • Why don't you just ask what everyone's favourite distro is?

    Apt-get, no emerge, no make world, ARRRGH!
  • Slackware packages (Score:2, Insightful)

    by argan0n ( 684665 )
    No matter how many "packages" I see, I still love slack-packs the best.

    They come with everything you need, especially created to make you learn what the hell you're doing before you do it.
  • Hmmm... (Score:3, Funny)

    by wbav ( 223901 ) <Guardian.Bob+Slashdot@gmail.com> on Friday July 11, 2003 @03:48PM (#6418485) Homepage Journal
    It seems to me, if you want to cover everything, you would put a Debian package inside an RPM.

    Why? I have no clue, but you could if you wanted all the features. Kind of like putting a roll bar into an SUV so that you can start amateur racing.
  • by slackr ( 228760 ) on Friday July 11, 2003 @03:50PM (#6418502)
    First time I've ever seen guys compare packages and size doesn't seem to matter.

    Not compared: my favorite package [atomandhispackage.com]
    • First time I've ever seen guys compare packages and size doesn't seem to matter.

      From what I hear, it only matters to women, and we all know how many of them hang around here...

  • What's the point? (Score:3, Interesting)

    by ePhil_One ( 634771 ) on Friday July 11, 2003 @03:51PM (#6418511) Journal
    I'm just not sure what the point of this is.

    While at first glance it seems heavily slanted towards Debian's .deb packages (it's first and contains more "YES"es than the competition), as a developer I'd be far more concerned with basics like "market penetration" than with whether it allows me to assign my package a "priority" over other packages.

    I suppose it might be of use to folks building their own distribution, but I expect that's a pretty short list.

    But personally, I tend to grab the source when I'm adding something that Red Hat didn't include (or that seems woefully out of date).

  • Deb vs RPM (Score:5, Interesting)

    by GoRK ( 10018 ) * on Friday July 11, 2003 @03:52PM (#6418520) Homepage Journal
    I'm sure this will erupt into a huge flamewar like the last time it was posted, when all it boils down to is that the package format doesn't really matter much to end users as long as installation and upgrades are made easy. For me, aptitude/apt with .deb packages has proven easiest, but a lot of people like apt with RPM, or Red Carpet, or rpmfind, or whatever else is out there. Lindows users use the 'one-click' install thing, not even caring that there are .debs behind the scenes.

    Part of the reason, I think, that the deb format has always seemed to hold together really well is that almost all of the deb-using distributions are so tightly integrated with the main Debian distribution that packages are always totally interchangeable (and are very good about notifying you when they will not work).

    RPM, on the other hand, is adopted by so many different and slightly incompatible distributions that finding the libraries and applications you want to install is often difficult, not because RPMs are hard to find but because RPMs that work in your current setup are hard to find.

    This is simply why the management tool(s) on both ends (creating packages and maintaining installed packages) matter way more than the package formats themselves. Debs are very complicated but sometimes easier to deal with because of all the good debhelper tools. RPMs are most often more 'hand-crafted', but they are a lot easier to create from scratch for many people.

    The thing I really hate about debs is the lack of signature verification. Signing is absolutely central to the development/upload/build process, but until very recent efforts it has been a total pain to use on the installation front. There is no good reason for this either.

    ~GoRK
    • You cannot get your binary into the main site without going through a social screening process.

      This allows more standardization than amongst the various .rpm based installations.

      Standardization means fewer problems for the rest of the users.

      But it means more work for the people developing the packages.
  • by Speare ( 84249 ) on Friday July 11, 2003 @03:52PM (#6418529) Homepage Journal

    There are different levels of package management, which often confuse the newcomer into believing (dogmatically) that one is better than the other.

    The installable packages themselves have to have flexible dependency markings and coherent version markings. The low-level package tool has to be able to install and uninstall packages cleanly and repeatably. Seems like the dpkg/deb suite and the rpm suite are quite comparable here.

    The package manager has to be able to build a requirements tree for a desired package, and then fetch all of the required packages to fulfill those dependencies on the local system. It should offer trust or signature verification to ensure only trusted repositories and trusted packages are used. The apt tool seems to be cross-platform, while non-Debian distros often spin their own service model here: up2date, Red Carpet, and whatever Mandrake and Lindows offer are each commercialized with some amount of sample access.

    Lastly, the most important criterion is the repository itself: it should contain packages which are clean and trustworthy. There have been cracking incidents, and there will be more. The quality of code between distro-produced packages and externally-produced packages can be as different as night and day. The package's metadata and manifest information can be crap, or it can be carefully constructed. The embedded installation scripts can be trivially exploitable, or they can be carefully scrutinized against unexpected results.

    Even if your package format is cool, and your package manager is cool, consider the repository. If the repository is not secure and offers poorly tested packages, many folks are going to unfairly blame it on the tools.
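
    To make that concrete, here is roughly what those two levels look like on each side (untested, off the top of my head, and the package names are just placeholders):

    $ dpkg -I foo_1.0-1_i386.deb    # low level: show a .deb's control data and dependencies
    $ apt-cache depends foo         # manager level: let apt walk the dependency tree
    $ rpm -qpR foo-1.0-1.i386.rpm   # low level: list what an RPM requires
    $ rpm -K foo-1.0-1.i386.rpm     # check its GPG signature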

    • by arose ( 644256 ) on Friday July 11, 2003 @05:09PM (#6419442)
      Since when is urpmi (Mandrake's RPM tool) commercialized?
    • This is where Gentoo seems to really win out - IME the Gentoo source (soon to be source+package) repository seems to be second to none. It's well organised, easy to search, centralised, always up to date and very very fast.
    • While I agree with your statements, I just thought I'd butt in and fill in the "???" blank: up2date. Yes, it requires an RHN subscription (but if you're seriously using a redhat distro, $5/mo (1yr@$60) is not the end of the world), but in its basic function, it performs the way apt-get or emerge does. "up2date evolution" will grab evolution plus all of its dependencies, and will block you if you have any conflicts.

      My biggest gripe is not the commercial aspect of it, but it would be nice if you could add
      • If you'd bothered to read the text, there's a reason I put ??? there. up2date is only for Red Hat Linux users, and only covers Red Hat Linux official packages. I use it. I subscribe. But it's not the only tool out there and it doesn't cover all it should.

  • Click here [216.239.37.104]

    >> or for those with text browsers or aol

    http://216.239.37.104/search?q=cache:x0Hrwxt5378J:www.kitenet.net/~joey/pkg-comp/+&hl=en&start=1&ie=UTF-8

    Have a nice day
  • 5 years old! (Score:5, Informative)

    by spotter ( 5662 ) on Friday July 11, 2003 @04:00PM (#6418600)
    Joey Hess created that document (at least the first revision) around 1998 IIRC, so it's not so much new news (guessing it's been posted here before, but probably around then as well)
  • by Arslan ibn Da'ud ( 636514 ) on Friday July 11, 2003 @04:00PM (#6418603) Homepage
    What Linux really needs is a dir-independent application running system. Imagine a package of...oh, say, g++, where g++ runs properly even if you move the whole g++ package to a different dir (say from /usr/bin to /usr/local/bin). Most packages, including g++, configure themselves to run in one location, and they'll get confused if you move 'em.

    Some packages (eg Tomcat) let you move them and they'll still work...but only if you set an environment variable (eg TOMCAT_HOME) so that Tomcat now knows where it lives. In a proper environment, an application could easily & consistently know where it currently resides on the filesystem *cough* OSX *cough*.

    What Linux needs is some standard 'run-app' script that would inform a package of its location. For instance:

    % run-app tomcat

    run-app could be as simple as the following:

    #!/bin/sh
    # run-app: tell a package where it lives, then launch it
    app="$1"; shift
    location=$(dirname "$(dirname "$(which "$app")")")    # e.g. /opt/tomcat from /opt/tomcat/bin/tomcat
    var=$(echo "$app" | tr '[:lower:]' '[:upper:]')_HOME  # e.g. TOMCAT_HOME
    exec env "$var=$location" "$location/bin/$app" "$@"

    That would enable Linux to devise a package format (or better yet, improve rpm, deb, etc.) for more flexible package management.

    A package would no longer need to place its binary, libraries, manpages, etc. all in hardwired locations in the OS...it could just leave them in its original dir (or maybe create an 'obj' dir that you can remove if you wish to clean up the package).
    • by leviramsey ( 248057 ) on Friday July 11, 2003 @04:16PM (#6418771) Journal

      RPM's are relocatable (at least most, if not all of the packages Mandrake distributes are; hardcoded directories are against Mdk policy and caught by rpmlint). Just edit your .rpmmacros and set macros like %{bindir}, %{libdir}, etc.
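
      Roughly, off the top of my head (this only works for packages actually built relocatable, and the package name and prefix are made up):

      $ rpm -qpi foo-1.0-1.i386.rpm | grep Relocations   # see whether and where it can be relocated
      $ rpm -ivh --prefix=/opt/foo foo-1.0-1.i386.rpm    # install it somewhere else entirely

      Or set your own defaults in ~/.rpmmacros, e.g. a line like "%_prefix /opt".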

      • Just edit your .rpmmacros and set macros like %{bindir}, %{libdir}, etc.

        Yes, that's so much easier than just dragging an icon a la OS X.

        The point is not that it's theoretically possible to move apps or RPMs under Linux, or that it can be automated if you do some fiddling under the hood (and anything that involves touching a file that starts with a '.' is almost by definition under the hood), but that Linux should offer this functionality automagically. Installing or moving apps in Linux can be a nightmare. In
    • Check out GNU Stow [gnu.org] for one simple implementation of this; also, see the appdir functionality in OS X, where all the resources an application needs (binaries, shared libs, pixmaps, etc.) are bundled into a single directory structure, which is made opaque at the Finder level. For Linux, the ROX Filer [sf.net] project is trying to do something similar, but also has the advantage of backwards-compatibility with traditional (i.e., '/usr/bin', '/usr/local/bin') installation paths.

      Personally, I find the directory layout of most Linux systems to be painfully baroque, with the BSDs just a step behind. Both kick the crap out of Windows system layouts, especially when it comes to quick configuration tweaks and the like, but the simple fact that you have to know how to do shell scripting to install applications just for yourself is ridiculous, IMHO. I can do it, but it'd be a lot easier for people I'm trying to get started on Linux to never have to worry about entering a password every time they want to install a new version of the Same Game.
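
      GNU Stow is about as close as the traditional layout gets to drag-and-drop. A rough sketch, assuming the conventional /usr/local/stow layout and an autoconf-style package (names made up):

      $ ./configure --prefix=/usr/local && make
      $ make install prefix=/usr/local/stow/foo-1.0   # everything lands in one directory
      $ cd /usr/local/stow && stow foo-1.0            # symlink it into /usr/local
      $ stow -D foo-1.0                               # 'uninstall' just deletes the symlinks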
    • Why? What benefit is there to moving binaries around like that?

      -molo
      • Running out of space on a given partition. Deciding you want to put gcc, ld, and friends inside a "development" directory and "xpdf" inside a "viewers" directory. Keeping configuration files with the program that uses them so when you start using a new machine you can copy over one bundle of files and everything works the same...

        • Why would you want to put different programs all over the place? Your PATH would be huge.. and there's no point. There is a standard that describes how the filesystem should be laid out. Linux FHS [pathname.com]

          Running out of space? Increase your logical volume size and grow your filesystem.

          Want to copy your configuration over? Copy all of /etc. Don't want to backup data that you can get from your distro? Just backup /home /var /etc and /usr/local. Very simple, well-defined ways of using the filesystem.

          Guess w
    • My project is working on a library that will allow programs to find out where they are installed to. At the moment it's fairly crude, and needs to be improved, but volunteers are required :)

      I'd note that libprefix(db) is not so users can pointlessly drag icons around all day and mess about with filing system structures in the process. It's so you can install to peoples home directories, or /opt, or /usr, or /usr/local, or perhaps a path on another mounted drive. Having relocatable programs is just conveni

    • Most packages, including g++, configure themselves to run in one location, and they'll get confused if you move 'em.

      Huh? No they don't. Most packages don't give a rat's ass where the binary is located. Historically, GCC was one of the few that did, and recent versions have changed that so that you can move the install tree around.

      The remaining few packages that care are mostly just suffering from bad design. Fortunately, as you say, they usually pay attention to environment variables telling them

      • I wish.

        On Linux at any rate, virtually any program that uses glade, or loads data files/artwork from an external file, will have paths hard-coded into it by the C preprocessor at build time. Removing these hardcodings is a royal pain in the ass.

        I'm hoping that once we get a nice API and a strong implementation, projects will begin to deprefix their software with our library.

          On Linux at any rate, virtually any program that uses glade, or loads data files/artwork from an external file, will have paths hard-coded into it by the C preprocessor at build time. Removing these hardcodings is a royal pain in the ass.

          I've seen this, too, and along with hard-coded absolute paths throughout GNOME files and library .la files, it makes the Windows registry look pleasant by comparison.

          I swear that somewhere along the line, open source took a really big step backwards with respect to libr
  • by stomv ( 80392 ) on Friday July 11, 2003 @04:00PM (#6418608) Homepage
    Because there are folks that I do trust. I don't trust the latest nightly build of Mozilla, but I do trust the most recent stable release after it's been out a week or so.

    You see, I know there are folks out there like you... so I don't have to be like you too. Enough hobbyists and security folks will bang on popular newly released code to pacify my concerns. For specialty apps coming from unknown sources, care is taken and source code reviewed. But for code from the likes of a major OS distributor (Red Hat, Debian, etc.) or a major code project (Moz, Apache, etc.), I don't have to bother.

    I want it fast and pain-free. Binaries, please.
  • RPMs, an' all. (Score:5, Informative)

    by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Friday July 11, 2003 @04:03PM (#6418641) Homepage
    How to roll your own RPMs [kernelnotes.de]. Very useful. You can open up a package, say postfix, or mozilla, customise the config files for your organisation, and re-make it. Then you can just install at your leisure.

    Best thing about RPMs? GPG signatures built in. Try rpm -K whatever-x.x.x.rpm next time. Second best thing? rpm -Va.
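
    The customise-and-rebuild trick goes roughly like this, from memory (the paths assume a stock Red Hat build tree, and the exact source file names differ per package):

    $ rpm -ivh postfix-x.x.x.src.rpm                    # unpacks the spec and sources under /usr/src/redhat
    $ vi /usr/src/redhat/SOURCES/main.cf                # customise the config that gets packaged
    $ rpmbuild -bb /usr/src/redhat/SPECS/postfix.spec   # rebuild; the binary RPM lands under RPMS/<arch>/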

  • Ah yes, packaging (Score:5, Interesting)

    by Dark Lord Seth ( 584963 ) on Friday July 11, 2003 @04:04PM (#6418649) Journal

    Definitely one of the features which makes Linux a powerful OS. A good and well-configured packaging system can be a blessing, automagically resolving dependencies or at least telling you where and how things will fuck up. The problem with package managers is that there are quite a few of them around. Normally, diversity would be a good thing, but those package managers don't seem too willing to process each other's packages...

    For example, RPM packages are common these days; most open source software has a few packages ready to be installed. Pretty much the same thing with Debian packages, because of the large userbase. Chances are that a few hours after the release of a major product someone has made a .deb somewhere, ready for you to install. However, if you look beyond these two packaging systems, you get a few nasty surprises...

    The TGZ packaging scheme (also mentioned in the article, along with RPM and DEB) just... well... sucks. Or at least in Slackware; I don't know if any other distributions use it differently, but let's use Slackware's TGZ system as an example for now. What's wrong with it, you ask? First of all (and possibly the foremost reason), it's almost unused. Apart from the Slackware packages themselves, I've never seen anyone distribute something in the TGZ format which worked. That excludes the few things which I found and which simply refused to install. It doesn't do dependency checking, conflict checking, heck, it doesn't do anything, or so it seems. I'd continue, but ranting about the bad parts of Slackware isn't the issue at hand.

    The issue at hand is the two remaining package systems, which might be technically sound and quite usable, but they still won't see a lot of use. Who here has ever heard of SLP and PKG packages? And even then, who here knows of any major applications which distribute their software using those package systems? Sure, SLP and PKG might be a dream to use, but without any actual packages to install, they're (sadly?) not really of any value.

    Which brings us back to RPM and DEB, apparently two of the most common systems, courtesy of Red Hat and Debian. Looking at the list of summed-up data, it's really no miracle those two are more common: both support most of the options listed, both are backed by a large number of users/developers, and both are relatively easy to use, yet still distinct. Perhaps a system which allows multiple systems to cooperate (regarding dependencies and conflicts and the like) would be a nice complement to both RPM and DEB?

    • Re:Ah yes, packaging (Score:5, Informative)

      by dissy ( 172727 ) on Friday July 11, 2003 @04:16PM (#6418770)
      > The TGZ packaging scheme (also mentioned in the article, along with RPM and DEB)
      > just... Well... Sucks.

      Not intended as an attack against your comment (you are fully correct, it sucks) but to clarify a point: .tgz is not really a package format. .tgz (.tar.gz, or a compressed tape archive) is no more a binary package format than .zip is for Windows.

      Slackware created a rather elegant hack at the time, of having an install script (install/doinst.sh) in the tar file which is run with a shell after the files are extracted to do any command-based setup, but that is it.

      Imagine if you will, a .zip file that states 'uncompress this file exactly here C:\windows\system\whatever\foo.bin '
      That is all a .tgz can do.

      This is why it supports no dependencies or checking: because it's just an archive file.

      Technically speaking, this isn't a package format so much as a creative way to run a shell script after extracting some files.

      * I realize you were just replying to the article's claim that it is a package format, and from your own experiences. I just wanted to explain why your experiences sucked... It was more a design flaw to use an archive as a package format than the package format itself sucking.
      From an archive standpoint, .zip and .tgz do great.
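
      To see just how thin the "format" is, this is more or less the whole thing (file names illustrative):

      $ tar tzf foo-1.0-i386-1.tgz        # just a file list rooted at /, plus an install script
      usr/bin/foo
      usr/doc/foo-1.0/README
      install/doinst.sh
      $ tar xzf foo-1.0-i386-1.tgz -C /   # 'installing' is basically extraction into /
      $ ( cd / ; sh install/doinst.sh )   # ...followed by whatever the script feels like doing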
    • SLP is the Stampede Linux Package format. Just slightly more advanced than a Slackware .tgz. And I mean just slightly. It was a .tar.bz2 with a struct fwrite()'ed to the end of it. It worked off of a "feature" in tar/bzip2 where they ignore any data beyond where the archive is supposed to end, so you could extract data from them with just standard tar.
      The other cool part about them, was that the final byte in the file told you what version of SLP the package was, and a single fread() ca
  • by softweyr ( 2380 ) on Friday July 11, 2003 @04:05PM (#6418665) Homepage
    Gee, I would've loved to have seen this include the BSD package format, and perhaps the Mac OS X one too. Sigh. For what it's worth, they score fairly well on the feature comparison chart, similar to RPMs.

    All of these formats could be done better. The OpenPackages project had a design project underway to consider the features of an ideal, multi-platform package format early last year but it seems to have died from lack of input. It'd be great to see it get a breath of new life. If nothing else, this article could serve as a starting point for what we do and don't like about current formats.

    • OSX Packages (Score:3, Insightful)

      by MBCook ( 132727 )
      I LOVE the way software is done on OS X (and in fact the whole Mac platform for a VERY long time). To install a program, you drag its executable somewhere. To delete a program, you drag the executable to the trash. Sometimes it's in a folder with some other stuff, but whatever. No apt-get blah. No emerge blah. No d:\setup.exe crud. Just drag and drop. Same to erase. Same to move the program around on the disk. No hidden config files. No registry. No DLLs hidden in c:\something\somewhere. Just simplicity. An
      • Too bad it doesn't work on an open system.
      • I don't agree. To install software on a Mac, you have to:

        1) Open up your web browser
        2) Find it on the web.
        3) Download the DMG
        4) Open the DMG (I know Safari does that for you; good usability, that, magic self-destructing folder-files)
        5) Decide - is the icon an AppFolder, or an installer? Installers are rather common on MacOS these days, primarily because a pure appfolders system is too limited.

        5a) If it's an AppFolder, open up the finder
        5b) Navigate to Applications
        5c) Drag and drop the appfolder in

    • Ah, another linux-only discussion

      One problem with humans is that they easily forget alternatives in light of the current "one true way". Linux will become the next Windows. Just wait.
  • by Ann Coulter ( 614889 ) on Friday July 11, 2003 @04:07PM (#6418681)
    Ports are better than packages because you get more control over what is installed onto your file system. If you really want the slight advantage that package formats provide over ports, use SRPMs. You will have some of the customizability of using ports and all the features that RPMs provide.
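
    In other words, the lazy version of a port looks something like this (the package name is only an example):

    $ rpmbuild --rebuild foo-1.0-1.src.rpm          # compile the binary package from source yourself
    $ rpm -ivh foo-1.0-1.src.rpm                    # or unpack the spec and patches first...
    $ rpmbuild -bb /usr/src/redhat/SPECS/foo.spec   # ...tweak them, then build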
  • I'm primarily a Mac user here, but can someone please explain to me why the Linux equivalent of InstallShield (or VISE, or anything else like that) hasn't been created yet? While it isn't the best solution when you want to deal with versioning, dependencies, and top-notch security, it still seems silly that such a program isn't around, when it's obvious that most everybody who isn't a tech-head wants such a program. Is there really such an evil reason that people can't have simple, easy to use installers on Li
    • http://freshmeat.net/search/?q=installshield
      http://freshmeat.net/search/?q=installer

      Everyone and their brother is probably writing an installer (although more people are apparently writing MP3 jukeboxes, Web image galleries, and CMSs. Trust me.) Can't say I'm seeing a "clear winner" though, which is also the case with apt front-ends.

    • Re:InstallShield (Score:2, Insightful)

      by Anonymous Coward

      [...]

      hasn't been created yet?

      Each of these does what InstallShield et al. do: install a program, keep track of how and where it was installed, and give the user the ability to uninstall it.

      I've had problems with un-installing things on my Windows XP box at work: too many software makers try to get around it, and there's no way to figure out where things were put. At least with Unix, I can do a find(1) before installing, then after, do a diff(1) and figure things out from there if I don't trust t

  • Technically... (Score:5, Informative)

    by serial frame ( 236591 ) on Friday July 11, 2003 @04:13PM (#6418741)
    From a technical standpoint, I find that creating a Debian package is far easier than creating a Red Hat package. Essentially, a packager does not require any special tools to create a package; all one needs is ar, tar, and gzip, plus a text editor to write the package control data. This feature-by-design allows poor ol' me, without any Debian machines, to create packages, assuming a development environment and libraries similar to those of the Debian environment I'm targeting (which really should not be hard, provided the libraries I use are of the same version).

    Similarly, I could install a Debian binary package if that were all that existed for my particular environment, with a simple

    $ ar p package-arch.deb data.tar.gz | (cd /; tar -p -zxf -)

    (I digress, simple may be relative)

    On the other hand, since RPMs have a special binary header, the lazy would be forced to install RPM and Berkeley DB on non-Red Hat machines in order to build an RPM package. Though it is possible to extract the gzip'ed+cpio'ed data in an RPM without using rpm.

    So, in my view, Debian has a bit of an upper hand in simplicity, from a technical standpoint, but not by much.
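
    For the RPM case, the usual shortcut looks like this (rpm2cpio ships with rpm, though tiny standalone reimplementations of it exist; the file names are examples):

    $ ar p package-arch.deb control.tar.gz | tar tzf -   # the .deb's control data comes out the same way as its payload
    $ rpm2cpio foo-1.0-1.i386.rpm | cpio -idmv           # dump the RPM payload into the current dir, no rpm database involved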
    • Re:Technically... (Score:5, Insightful)

      by debrain ( 29228 ) on Friday July 11, 2003 @04:59PM (#6419297) Journal
      Actually, the point goes much further than this. If you are on an RPM system and you lose control of RPM, either through library problems or dependencies, or what have you, you are suddenly robbed entirely of your ability to control the packages on your system, usually including the ability to fix the system itself.

      This is not something anyone but highly technical users (and certainly not the faint of heart) will encounter. However, it is something that has undermined my ability to recover from catastrophic failures on machines with RPM that do not have CD or network access. I have even been reduced to binary manipulation of RPM files to extract the cpio-compatible archive (not a task I would undertake lightly).

      In contrast, with Debian packages, I have been able to rebuild a machine from scratch with ar, tar, and gzip, which are extraordinarily unlikely to break. Even in the event that they are unavailable, one can copy them, statically compiled, to lightweight media, and then they have no real dependencies. Even if dpkg or apt fails (the latter more likely than the former, in my experience), it is almost always possible to recover from catastrophic mistakes.

      In summary, .deb seems to be a far superior package format to recover from catastrophic failures in system utilities such as the package manager. Of course, as you may have ascertained from my comment on cpio, I have experiences precluding bias. ;)
  • by eyegone ( 644831 ) on Friday July 11, 2003 @04:16PM (#6418766)
    Is that the rpmlib API is almost completely undocumented. As GoRK pointed out, management tools such as apt, rpmfind, up2date, etc. are far more important than the underlying package format.

    But it's very difficult to create those management tools for RPM when the API is a "black art" known only to a few. Questions on the RPM mailing list/newsgroup will generally be met with the advice to "use the source, Luke"--all several hundred thousand lines of it!
  • Slashdotted..... (Score:3, Informative)

    by cansecofan22 ( 62618 ) on Friday July 11, 2003 @04:32PM (#6418952) Homepage

    Here is the link to Google's cache of the site:
    CLICK HERE [216.239.57.104]

  • I can't read the article since it seems to be slashdotted, but in my opinion RPMs and DEBs etc. all seem equally good; the difference comes from how easily you can get them and what the package depends on. I wouldn't care if my Debian box used RPMs, as long as I could still use apt-get to grab them and the dependencies.
    • apt-get and rpm (Score:3, Insightful)

      by TeknoHog ( 164938 )
      apt is not tied to the deb package format. There is apt for rpm, but the lack of apt-gettable rpm sites is a problem. On the other hand, Mandrake has urpmi which is similar in functionality.
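
      Day to day they look much the same anyway. Assuming an apt-rpm source or urpmi media is already configured, it's just:

      $ apt-get install foo    # the same apt front end, with .deb or .rpm behind it
      $ urpmi foo              # Mandrake's urpmi, resolving rpm dependencies for you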
  • Cause: The word "apropos" is used in The Matrix: Reloaded
    Effect: The word apropos is used on Slashdot.
    • Concordantly, the irrevocable flaws of the open source package systems are inevitably expressed as both beginning and end. The chemical precursors undoubtedly signal the quintessential human delusion that is simultaneously the source of our greatest strength and greatest weakness, our contingent affirmation from profound attachment to our software, vis-à-vis "love of open source".
      Ergo, the otherwise contradictory systemic anomaly displays the emergent grotesqueries of our race, thus leading to the far slowe
  • The article must be slashdotted because I can't get it.

    The RPM/DEB ideas are really good. The main problem I have, however (and I don't really know how it can be solved), is combining binary packages with source code packages (e.g. when you compile your own X). When the time comes and you want to update, let's say, libc, then you will be unable to do so because the dependencies include almost every package.
    Or when you compile a program that is listed as a dependency of another program, you cannot install that other

  • Where Is that Lib. (Score:2, Informative)

    by ratfynk ( 456467 )
    The problem with package formats is the endless failed-dependency problems between different distros. Take a simple little app like Kmol, a good simple little mol weight calculator: it will run fine if you use the usual sane KDE lib paths and do not suffer from rpm obfuscation disease. So I just make darn sure I read the deps to ./configure --with before I compile from source. What I find rather disturbing is that good freeware is being made difficult to use by the constant path alterations by the majo
  • While technically a "binary package format", Gentoo's Portage has a way to create binary tarballs of merged programs. However, this is not meant to be a way to distribute software to random users; instead it's for setups where you have a bunch of similar machines and want to compile once, install many.

    When you merge a package, do "emerge -b package". It builds and installs the program like normal, then creates a signed tarball file that you can use to install on other machines. emerge can then take that
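
    Roughly, assuming Portage's default package directory (/usr/portage/packages) and a made-up package name:

    buildhost$ emerge --buildpkg foo       # same as -b: install locally and keep a binary tarball
    buildhost$ scp /usr/portage/packages/All/foo-1.0.tbz2 target:/usr/portage/packages/All/
    target$ emerge --usepkg foo            # same as -k: install from the prebuilt tarball instead of compiling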
  • What we need (Score:3, Interesting)

    by anshil ( 302405 ) on Friday July 11, 2003 @05:32PM (#6419708) Homepage
    is to distance ourselves from the System V filesystem and have each package installed in its own dir; this would make things so much easier.

    And instead of the old PATH environment variable idea, think of something new: how about a central file (with user-modifiable sub-files) that contains a list of all binaries to be called by default?

    Or how about a package tree in bash's memory that holds the information about which binaries are callable... etc.

    There are so many ways to get rid of PATH, and with PATH gone, nothing speaks against installing every package in its very own directory anymore, making administration and package management so much easier...
  • Zero Install (Score:5, Interesting)

    by tal197 ( 144614 ) on Friday July 11, 2003 @05:38PM (#6419759) Homepage Journal

    Zero Install [sf.net]

    "The Zero Install system makes software installation not merely easy, but unnecessary. Users run their applications directly from the Internet from the software author's pages. Caching makes this as fast as running a normal application after the first time, and allows off-line use."

    • No new commands to learn. All software in the world appears as part of your filesystem (/uri/...) and is cached on demand.
    • Secure: nothing is run as root (only as the user who runs it), and GPG signatures can be checked automatically when upgrading
    • Network efficient: only what is needed is fetched (no documentation or headers until you need them, then they get fetched automatically)
    • Faster: no searching for resources -- everything is referenced by URI; only download package indexes per-site, not for the whole distribution.
    • Easy to package for: just make your tree available via an HTTP server.
    • No need for depends/recommends/suggests: whatever is needed is fetched when you access it.
    • Easy to uninstall: remove anything you don't want from the cache at any time -- it will be fetched again later if needed.

    Experimental, but give it a try. See especially the comparison with apt-get [sourceforge.net] and the security model [sourceforge.net] documents.

    • Re:Zero Install (Score:2, Interesting)

      by MntlChaos ( 602380 )
      and you thought cross-site scripting was bad? now we have executables getting copied to your computer from god knows where (read 10 layers down in the dependency tree). now you have to worry about not only what program you are running, but every program it claims to depend on (etc, etc.). one word: ouch.
  • by joey ( 315 ) <joey@kitenet.net> on Friday July 11, 2003 @06:06PM (#6420049) Homepage
    I (the author) am currently en route to Norway; I only found out I was slashdotted in the airport. I don't really understand why they posted it today, and not some time in the past 5+ years... Anyway, I will respond to anything worth responding to sometime later.
  • I'm surprised nobody has mentioned uPM [u-os.org] (u as 'micro') so far. I don't know this packaging system very well, but it was discussed on /. a few times IIRC. It seems to be part of uOS, but maybe it can be used on any distro (at least, I see a uPM ebuild on my Gentoo, so I assume this is true)?

    Does anyone have more information to add about uPM?
  • The header said:"... Another reader sends in this guide to creating Debian packages which seems apropos here. " Apropos is not appropriate in this context. Surely what was meant was: Another reader sends in this guide to creating Debian packages which seems apt here.
  • by sfgoth ( 102423 ) on Friday July 11, 2003 @07:05PM (#6420473) Homepage Journal
    Todo.
    * relocatable packages
    * support for arch name in metadata, arch indep packages
    * multiple version of the same package can be installed simultaneously (is this really a package format issue?)


    Sigh. The guy has an entire section on how well "standard" tools can manipulate these file formats, as if the typical user has any desire to do home surgery on their software.

    (Well, why shouldn't he? The typical linux user does want this level of control...)

    But there, at the end, in the neglected "ToDo" section, are the real issues. Features that put the user in control of their software instead of the other way around. Is anyone ever going to write a package management system that addresses the needs of the user, instead of the sysadmin?

    -pmb
  • by mindstrm ( 20013 ) on Saturday July 12, 2003 @01:31AM (#6422312)
    I've used many systems, and many package systems... from old machines where there really was no concept of a package, to Debian, with its superb package management, and everything in between.

    The only conclusion I've come to is this: the package format itself isn't so important... what matters is the whole-system approach to packaging and distribution.

    Take Debian. Everyone agrees, I think, that the Debian package format and apt together make for a great system... but that's because of the method of package distribution and tracking, not the packaging system itself... that, and the fact that it's fairly universal in the Debian world. Several apt repositories make up basically all software available for Debian... and it's a lot. So the overall experience is "great package management". It's not just about the format, but the people... people know what's in the standard packages, and can refer back and forth to them, checking for compatibility and whatnot. The overall approach to package management is what rocks... not the binary format.

    Look at OS X... they have Fink. Fink, if you don't know, is basically apt-get for OS X. Works fine, no problem... except it puts stuff in its own folder (/sw) and it doesn't necessarily know about Apple stuff already installed... it only tracks stuff that is in the Fink repositories.
    In other words... it's useful, but it doesn't have the feel of a really great package system... because the system itself isn't based on it.

    People say "ports rocks" in bsd land... but why? Becasue it's superior? No.. just because it's a big collection of useful stuff that handles dependencies well. The actual package management system is extremely basic. But the system is more or less based on it, so it works very, very well.

    Red Hat... is kind of a mess. Is it because rpm sucks? Heck no... it's just because, well, the overall approach wasn't right.

    OS X... (yeah, okay, I'm a Mac fiend now... I admit it). What package management? Apps tend to be one single file, which is a package containing all the bits and pieces. No real package management system to see what's installed or not... and who needs it... you can just go to the Apps folder and toss stuff in the trash to get rid of it. The system was designed to work that way, so it works really well. You don't say "Gee, I wish the system tracked apps" because it's so very simple to get rid of them, and to ferret out any pieces they may have left behind, which is rare...

    So overall... the complexity of the package management isn't as important as everyone sticking together on how things are going to be installed and removed. If everything works the same way, it doesn't really matter how sophisticated it is.

  • OOOOPS (Score:3, Interesting)

    by Allnighterking ( 74212 ) on Saturday July 12, 2003 @01:55AM (#6422400) Homepage
    Couple of points.

    He states that rpm is not unpackable by standard tools.
    Can an experienced user, when presented with a package in this format, extract its payload using only tools that will be on any linux system? They can remember a few facts to help them deal with the format, but remembering file offsets and stuff like that is too hard.

    The first problem I have is with the "any Linux system" part. Ummmm, I've got a Linksys router running Linux that can't do jack with any of these. Next, an RPM is actually a cpio archive; rpm2cpio is just a tool to shortcut what is doable with cpio. This applies as well to all of the "standard tools" statements. I would also like to point out that "standard" depends on which standard you use: POSIX, LSB, etc. rpm is a standard of the LSB but not of POSIX.

    His statement that binary programs are not allowed.
    Must these programs be scripts, or can compiled binaries be used as well?

    This is very unclear. Can I execute a binary from within an RPM? The answer is yes; I do it all the time. Can an RPM be made directly from binaries (skipping the build etc.)? Yes, it can. Can I embed a binary in the RPM and not have it ever get installed... no. But I can run it and then remove it before RPM finishes.

    Suggestions ... he states that RPM doesn't have them.
    A suggestion says a package may sometimes work better if another package is installed. The user can just be informed of this as a FYI

    This is really the fault of the packager, not of the product. There are two areas for comments which can give you this kind of data... but it's up to the packager to use the tool. Second, the author needed to get a little deeper into rpm's query format (info here) [rpm.org]; he would have found much of what he needed.
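
    For example, pulling that kind of metadata out of a package looks like this (the package name is just an example):

    $ rpm -qp --queryformat '%{NAME}-%{VERSION}-%{RELEASE}\n' foo-1.0-1.i386.rpm
    $ rpm -qp --requires foo-1.0-1.i386.rpm    # what it needs
    $ rpm -qp --conflicts foo-1.0-1.i386.rpm   # what it refuses to live with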

    Statement that RPM can't do Boolean Relationships.
    This means that a package can depend, conflict, etc on a package AND (another package OR a third package). Any boolean expression must be representable, no matter how complex.

    RPM does have the conflicts and depends parameters that can be set. Once set, you can't install A without B and C, plus removing Y and X.
    HOWEVER, he is quite right about the boolean "or" being missing... I've been championing this one for a while (I've talked with some of the developers), but it seems it hasn't been high enough a priority for anyone to take a shot at it. (Sorry, but it's beyond my ken to work on this personally.) So I will keep politely advocating this until it does break the plane of need.

    New Section

    Sorry, but this statement is just too nebulous. RPM has been coping with the unforeseen for years, just as Debian has. That's why it gets upgraded. That's in fact a lot of the reason for the new version coming out now: to make the format more modular and easier to mutate as times change.

    All in all, the article seems well done. However, I'd say the chances are pretty strong that the author is a Debian fan. My personal recommendations would be: one, lose the subjective nature of a number of statements. Next, when doing research, be careful how you ask a question. Oftentimes asking "Can this product do X?" will yield a no, but asking "On a system that uses this product, how do you do X?" will yield a completely different answer.

    I'd give this article, all in all, a 6 on a 1-to-10 scale for research, a 3 for new info, and a 7 for layout and style.
