Is RPM Doomed?

Ladislav Bodnar writes "This is an opinion piece offering solutions for all the ills of the RPM Package Manager. It has been written with Slashdot in mind - it is a fairly controversial topic and I would like to hear the experiences and views of other users who have tried different package formats and different Linux distributions. The conclusions are pretty straightforward - either the big RPM-based distributions get together and develop a common standard or we will migrate to distributions offering more sophisticated and trouble-free package management. Note: the main server allows a maximum of 100 simultaneous connections. To limit the /. effect, here are two other mirrors: mirror-us and mirror-hu (the second one has larger fonts). Thanks in advance for publishing the story."
  • by peterdaly ( 123554 ) <{petedaly} {at} {ix.netcom.com}> on Sunday June 16, 2002 @09:55AM (#3710648)
    I administer a few RedHat servers, mostly 6.2 and 7.2, each of which performs a different function. If an RPM is offered for a piece of software I need to install, I usually download that first.

    If the rpm install fails, I will spend about 3 minutes troubleshooting the issue. If I can't get it to go, I download the source and compile from scratch. 9 times out of 10 this works without my having to figure out dependencies.

    RPM works great when the environment is exactly the same as the build environment. When it's not...well, it just plain sucks. Source almost always works without incident.

    Really, there is nothing too difficult about:
    ./configure
    make
    su
    make install

    Although it only works for products where the source is openly available.

    RedHat needs a compile-from-source package format that most people can figure out. SRPMs may do it, but I have no clue how to use them.
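    For what it's worth, rebuilding a source RPM is only a couple of commands on a stock RedHat box of this era (a minimal sketch, with a hypothetical package name):

        rpm --rebuild foo-1.0-1.src.rpm
        # the freshly built binary package lands under /usr/src/redhat/RPMS/
        rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm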

    -Pete
  • Why RPM's? (Score:1, Interesting)

    by Wouter Van Hemel ( 411877 ) on Sunday June 16, 2002 @10:10AM (#3710690) Homepage

    I really hope that plain old source tarballs will stay. I've noticed that with recent releases of several software packages, the rpm was released _before_ the plain source, or was even the only release. It scares me; I really don't want to be forced to use rpms for my system (Slackware/Linux From Scratch). You lose a lot of freedom in deciding what/where/...

    I don't understand why people have to offer tarballs, rpm's, deb's, slp's, ... these days. It's a mess; no standards, as usual.

    Take a look at Ximian's ftp server. For every new version, they have to build specific packages for every distribution, and even every different version of that distribution.

    People like me who either like to build from source, or don't have one of those 'supported distributions' - like my Slackware and LFS - can't install Ximian. Or have to go through hell and back to trick the crap out of the installation scripts.

    I don't want to start a flamewar, but this is not 'free software' if you need specific systems and/or packagers to install it. If it only supports commercial systems like RedHat/SuSE/..., and not me with my little self-made Linux system that I made just for the GPL-given liberty and the fun of doing it, then it doesn't follow the philosophy I see behind the GPL - or only partially.
  • by handsomepete ( 561396 ) on Sunday June 16, 2002 @10:13AM (#3710695) Journal
    There's been quite a discussion on the installer issue in the Gentoo forums (the thread can be found here [gentoo.org]). The general consensus from the users seems to be that they like Gentoo being kind of a "niche" distro. If the idea of a source-based distro really appeals to you, I would suggest giving it another go and leaning very heavily on the forums (if you need to). Gentoo's Forums [gentoo.org] have the most helpful and friendly user base I have ever seen on the internet. I have yet to see a single person give a n00b a hard time (outside of the occasional rtfm...). I realize that it's not for everyone and that it takes a little bit of work, but I think Gentoo is definitely worth it after the dust settles. It's nice to install an OS and feel like you actually accomplished something.

    Oh yeah, and I don't like RPMs.
  • Automaticness (Score:3, Interesting)

    by Apreche ( 239272 ) on Sunday June 16, 2002 @10:23AM (#3710715) Homepage Journal
    What we need is to get rid of the entire packaging system altogether. I know I'll probably get toasted for this, but software should install in Linux the same way it installs in Windows. There should be one file, like setup.exe. I should be able to take that file, execute it, and it will ask me what parts of the software I want, where I want to put it, etc. From my experience there are two pieces of software for Linux that do this: the Tribes 2 server, and Mozilla.
    The entire packaging system is just a pain in the butt. This depends on that depends on this. urpmi, rpm -i, rpm -U, things not working with no explanation. In Windows I never have to worry about one thing relying on another thing, because just about everything uses DirectX. And DirectX COMES WITH anything that uses it. And it has a simple graphical installation.
    There should be one downloadable file for each piece of software I want. It should install on its own, on any Linux machine, easily and graphically. And all of my library packages, like glibc, etc., should transparently update themselves to the newest versions all the time. I don't want to have to worry about that stuff. Drivers in Linux are incredibly difficult to install. They should become a simple right click, install driver. Done. I want all that other crap taken care of for me. I don't have time to change paths in config files, tinker with code, look up crazy commands and recompile crap.
    I feel the package system is the real place where Linux fails. Most distros - let's use Mandrake as an example - have graphical, easy installations. But when you get to the package selection phase you're stuck forever weeding through thousands and thousands of checkboxes. Not cool.
    One piece of software should be one checkbox. KDE alone has like 20+ rpm files. There should be one file: KDE3setup.exe.
    You know that InstallShield that almost every piece of Windows software has? Maybe someone could code that for Linux. I would, but I have no idea how to do something like that. But I know someone reading this does. And if you want to save your open source OS, I suggest you do.
  • by Arethan ( 223197 ) on Sunday June 16, 2002 @10:24AM (#3710720) Journal
    RPM by itself isn't the real problem here. The author is complaining that installing applications in Linux is a pain in the ass, because the system often doesn't have all of the required libs installed.

    I admit, RPM doesn't make this an easy problem to solve. Any normal Windows app would simply package the required libraries with it; thus if a lib doesn't exist, it can install it. But RPM doesn't work that way. RPMs can only hold one logical unit: one app, or one library, or one set of platform-independent support files. RPM builders could include more, but doing so would likely break the RPM dependency tree.

    The real problem in all of this is the distinction between applications and the system itself. Is grep part of the OS, or is it an add-on app? How do you tell? Most would argue that grep is a part of the OS, but you can easily install Linux without grep, so it must not be essential. But if packages expect it to be there, then it must be essential. But if it's not part of the OS, then they shouldn't have expected it to be there in the first place, so now it is their fault for not thinking ahead... This problem just goes in circles all day. The worst part is that my use of grep is just an example; this problem applies to literally every package outside of the kernel itself. Don't believe me? How about init? Do you think that init is essential? I agree, but which version? Do you want a SysV init, or a BSD-style init? Technically you can have either.

    To solve this whole problem, we really need to take two steps. First we need to define a base Linux system. And I don't mean a completely solid, unwavering definition either; standards that never evolve are quickly dubbed 'legacy'. The trick is to define a complete base install: everything from the kernel, to the version of GCC (and no, RedHat, gcc 2.96 isn't going to cut it), to what version of X is installed, to what "expected unix utilities" are available, and what libraries are available. Feel free to change the standard, but each time you do so you must raise the bar somehow, either by making it more reliable, or faster, or adding features, or some combination of the above. There is only one last key item to making this system work: you must retain backwards binary compatibility for long periods of time. Feel free to completely break legacy systems, but make sure that you only do so after you've had at least 5 to 6 years of stability.

    Then there is the second step. RPM is a nice system management system, but it is a shitty application packager, mostly because of the dependency issues and the fact that each RPM package can only hold one logical unit. We really need an InstallShield-like system for applications (both gui and console installs in the same package). Feel free to keep track of what is installed and which files belong to which package, but you really need to separate the system from the applications. Once you have a base defined, keeping the system and apps under the same packaging system no longer makes sense. The absolute need for it is removed.
  • by grazzy ( 56382 ) <(ten.ews.ekauq) (ta) (yzzarg)> on Sunday June 16, 2002 @10:24AM (#3710723) Homepage Journal
    True. I've been working with this exact method for years as well.

    I see nothing wrong with RPM compared to other systems. I can see why people running, say, Debian whine about RPM - because .rpms aren't supported on THEIR system - and I don't see as many .debs out there as there are .rpms.

    So whether rpm is worse or better than .deb, it's a standard. And standards are standards because people use them - and like them. It's not .rpm that should change; maybe it's .deb.

  • by 0x0d0a ( 568518 ) on Sunday June 16, 2002 @10:26AM (#3710730) Journal
    I think the biggest thing we need with rpm (and other distro systems) is standardized package locations.

    That's already done in the LSB.

    The problem is that each rpm is required to contain a static list of files it installs *with pathnames*. The nice thing about this is that it lets you run rpm -qip foo.i386.rpm without executing any code (sandboxed or otherwise) to see the list of files. The stupid thing is that there then has to be a totally different rpm for every distro and every maintainer.

    In addition, it means that the maintainers need to keep *two* lists of what files are in the package -- one list for "make install" and the other for rpm. This is probably the most annoying design decision of RPM I've seen. There needs to be a FILES file with a list of installed files with a gen-files script (that runs sandboxed to build FILES for not-yet-installed packages and is run at package installation time to generate FILES). Have the Makefiles read this for make install. This would make life easier for maintainers (one list of files to install), would make RPMs more reliable (no accidental adding of a file to the Makefile but not to the spec file), and would let an RPM work on any distro (if we ever get the gcc-2.7, gcc-2.96, gcc-3 stuff worked out).
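    A minimal sketch of that idea (FILES and gen-files are hypothetical names from the proposal above, not anything RPM actually ships):

        # gen-files: runs sandboxed against the build tree, emits one installed path per line
        ./gen-files > FILES
        # "make install" then just replays the same list
        while read f; do
            install -D "build/$f" "$DESTDIR/$f"
        done < FILES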

    even though the newer libraries could do the job of the older ones

    This is true for minor version number increases, but after a major version number change, a program cannot simply link against the newer library.

    Also, the registry is a fucking stupid idea (despite the fact that GNOME and KDE are mindlessly cloning it). The registry causes more problems than anything else I've seen on a Windows system. The MacOS did things right: let all your centralized databases just be caches for data that can be rebuilt from files around your system. If something gets borked or corrupted...that's okay. Absolutely do *not* make your single copy of data a registry; put the masters around the system, and let the centralized db be rebuilt if necessary.

    Also, registries require "installations" and "uninstallations" instead of just copying files. You can just copy appropriate files from one system to another and run code on a Linux or MacOS box. On a Windows box, you're in for running installers to poke at the registry. And finally, I've seen tons of broken Windows installers that poke at registry entries and end up completely screwing up data that some other app uses. For example, a friend once had Sonique and WinAmp installed, but couldn't associate mp3s with either. I took a look at the registry -- Microsoft's two-entry file association scheme let the extension entry point to a nonexistent application entry, IIRC. As a result, the mp3 entry didn't show up in the Folder Options dialog in Explorer, and couldn't be reassigned, and WinAmp and Sonique kept giving errors when trying to grab associations.

    The day any distro starts requiring a registry is the day I never touch that distro again. Right now, I can just uninstall GNOME if I want to do so.

    Oh, and another thing. The Windows registry is a *massive* shared database. As a result, tons of stuff modifies it and causes internal fragmentation and loss of physical continuity between related keys. Then all apps use the registry heavily (God, I hate apps that poll it), so you get slow app launch times, that annoying disk churning that you hear on Windows boxes...rrrgh.

    Take a look at .dll registration. On Windows, the only way the OS knows about a systemwide .dll is when you've added an entry to the registry for it. On Linux...run ldconfig, and it rebuilds the systemwide cache (ld.so.cache), which is significantly faster to read (contiguous, not incrementally modified, not modified by all sorts of other apps storing filename associations and the like).

    The registry is basically a hack, because Windows *used* to have what MS considered a worse scheme (.ini files). It isn't a very well thought out system.
  • by BrokenHalo ( 565198 ) on Sunday June 16, 2002 @10:28AM (#3710736)
    I was just about to post something to the same effect. Having spent many hours trying to fix or work around broken package dependencies with RPM on RedHat and more recently Mandrake, I recently ditched both in favour of Slackware which I have found _much_ easier to maintain. Good to see I'm not alone in this...
  • by Vlad_the_Inhaler ( 32958 ) on Sunday June 16, 2002 @10:31AM (#3710744)
    Package A will have been written for and tested on the API as it was defined at the time.
    The author(s) could not know what part of the API was going to become obsolete.
    I work on mainframes where things are a lot more stable, but occasionally some interface is changed and something has to be done. If we are lucky (most cases), that 'something' is just recompiling. If it goes beyond that, then most times the software vendor warns us of impending problems.
  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Sunday June 16, 2002 @10:35AM (#3710753) Homepage
    (Or how I learned to stop worrying and trust apt-get)

    When I first tried Linux oh so many years ago, I tried THE distribution (from what I knew), called RedHat. I quickly learned to despise RPMs, although for a n00b like me they were better than source.

    I soon tried Caldera and a few others. I liked RH better; I don't remember why.

    Then one day I tried Mandrake. In my opinion this is a GREAT distro. In fact, it's the one I would use today if it weren't for RPM. No matter how nice a distro it is, I couldn't stand fiddling with RPMs, downloading everything I could find and still having it not work.

    So I went to another famous distro: Slack. I liked Slack a lot, especially not having to fiddle with RPMs. Their packages had no dependencies, but they weren't RPMs. I used this distro for a few months, and it was nice, but I wasn't satisfied.

    So out of sheer desperation, I tried the ultimate distro: Debian. I had heard it was tough to install. While it was no RedHat or Mandrake, by now I knew quite a bit about Linux and it didn't give me any problems. But what won me over was (as I assume it was for so many others) apt-get. You just can't beat typing "apt-get install gimp" to get the gimp. All dependencies resolved, all problems taken care of.

    Now it's true that Potato (aka Stable) is out of date. It's stable as hell, but I don't think any desktop user should use it (servers are another story, as is often the case). "Testing" has been just as stable for me as full releases of Mandrake and others, that is, no crashes. I've never used unstable, but hey, I've got an extra box lying around here somewhere.

    So in short all my distro hunting can be put in a few simple steps:

    • apt-get remove RPM
    • apt-get install debian
    • which means...
    • apt-get remove all_problems

    This is purely my opinion, blah blah blah....

  • Hated RPM issues (Score:2, Interesting)

    by JamesGreenhalgh ( 181365 ) on Sunday June 16, 2002 @10:37AM (#3710765)
    rpm -i

    Sorry, you need libpng x.y.z_e, but you have libpng x.y.z_c.

    The above is of course not technically accurate, but many MANY times I end up annoyed with RPMs, since they put in a requirement for a SPECIFIC named package and version of something (whatever was on the builder's system). You can end up needlessly having to upgrade libraries when you already had an entirely adequate version for the package in question.

    Solaris package management works. It can't really help us here though, since Solaris installations are generally very generic things - linux machines can be any one of thousands of combinations of package versions. Back to linux-land, and apt-get with debian mostly works, but a few times I've seen a debian machine decide to upgrade more or less the entire base dist for a trivial tool due to versions, and break in the process while replacing libc. Not fun.

    The only workable solution I've seen thus far is the FreeBSD ports system. It grabs the generic source and builds it in such a way that it only upgrades tools and libraries when it really needs to. I've NEVER had a serious issue in years of using this system. That's not to say it's perfect, of course; it still suffers the issue of not being able to easily revert to your old setup if an installation breaks something, and of course it can be pretty slow.
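    For those who haven't seen ports, the whole dance looks like this (assuming the usual tree layout, where x-chat lives under irc/xchat):

        cd /usr/ports/irc/xchat
        make install clean    # fetches the source, builds it and its dependencies, installs, tidies up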

    Something does need to be done though. A Windows-using friend of mine tried to install Mandrake recently, which he did all on his own without issues. He wanted an IRC client, and I recommended x-chat. We tried using RPM and it failed, so we grabbed the source and then had to go about installing a set of development tools on his machine. It took a *long* time before the gcc package would install, due to some idiot deciding headers should be split from the main packages for the sake of a few kb of disk space. Even then x-chat wouldn't build, due to things like the gettext rpm not having msgfmt (part of gettext), someone having decided it lived in an openwin tools rpm, which would no doubt have wanted lots of openwin rubbish installed. Eventually we ended up splatting source versions of common tools on top of the rpm-installed ones to resolve several instances of missing header files and scripts. Finally, x-chat built...

    It made *my* head hurt, let alone his - and I've been working with *nix machines for years. It almost put him off trying to use Linux any further straight away. Linux is never going to start making any non-techie inroads unless someone sorts out a decent packaging system, and fast.
  • .deb (Score:4, Interesting)

    by Simon Brooke ( 45012 ) <stillyet@googlemail.com> on Sunday June 16, 2002 @10:38AM (#3710767) Homepage Journal

    I started my Linux experience with SLS and a 0.99 kernel. Then I switched to Slackware, then flirted with Caldera. Then for a while I ran RedHat on my servers, before switching in about 1999 to Mandrake on all machines.

    And then I decided to experiment with Debian on a test box, and fell in love. I now have it on my desktop, my laptop, and three out of my five servers.

    Why?

    The package manager. It just works. It just works reliably, installing all the right stuff, resolving all the dependencies. When there are conflicts (not often) it reports them and suggests remedies. In short, the Debian package manager is to all other UN*X package systems I've ever seen as a computer is to a tally-stick. No-one who has used dselect will ever go back to RPM.

  • by reflective recursion ( 462464 ) on Sunday June 16, 2002 @10:42AM (#3710784)
    I don't know if you've noticed lately, but libraries _are_ packages today. GTK+ for example. Qt, ncurses, etc. And if a package creates a _new_ library, then not many people are going to depend on it. And if they _do_ depend on it, they might as well depend on the entire package being there--since the library is a _part_ of the package.

    The idea of sharing arbitrary library code is a failed experiment. If I create MyProgram and then I create MyProgramLib, not many people will ever use the library. The only case where they _will_ use that library is if I _package_ it separately, and make it a coherent entity itself - with documentation. This is why, IMO, going package-only and dropping the various */lib directories can only be a Good Thing. And this is how Red Hat, etc. do it today. They create dependencies between _packages_. If I create an app in RPM format that needs, say, libgimp, then my package will depend on the _entire_ gimp package being installed. Not just libgimp. Why not just handle packages naturally?

    I'd also like to point out the benefits of doing this:

    - Package corruption will be detected immediately. When something depends on a package and a file is missing or corrupt then the package can be determined corrupt.

    - Dependencies handled naturally. When a program complains that a file doesn't exist, I can pinpoint _exactly_ which package the file is in and can simply reinstall the package. No need to hunt down which file belongs to which package.
  • by zpengo ( 99887 ) on Sunday June 16, 2002 @10:51AM (#3710813) Homepage
    It's just a shame that Linux doesn't have a clean install/uninstall system like Microsoft Windows, which gets it right every time.

    Err...nevermind.

    Seriously, though, the magic of open source is that if something doesn't work well, people can develop an alternative to it. As Ashcroft would say, "If you don't develop innovative new technologies, then Microsoft has already won...."

  • by HiThere ( 15173 ) <charleshixsn@@@earthlink...net> on Sunday June 16, 2002 @11:13AM (#3710872)
    Say your package directory was /usr/app (or whatever, there are standards for these things, y'know) libpng would live in /usr/app/libpng, qt would live in /usr/app/qt. Things could still dynamically link them, and it would still Just Work. The only difference is that you don't have four hundred files all crammed in /usr/lib.

    Almost. I think that he was really proposing that libpng version n.m would live in /usr/app/libpng/n.m, which is only a refinement, but which is much safer. In the case of large packages this would cost a lot of disk space (how many versions of KDE or Gnome do you want to keep on your computer?), but OTOH it would be a lot safer. You could keep multiple versions of even so pervasive a package as KDE or Gnome during development, and if one didn't work, you could revert to an earlier version. (Yes, something like this is done during development anyway, but that requires special fiddling, changing the directories around when it finalizes, etc. This approach wouldn't.) And deleting an obsolete version would be nearly as easy as removing the directory (well, you *would* need to check for dependencies).

    I guess that a side effect would be for the /usr/bin directory to become composed entirely(?) of links. Still, I've done that already when trying out a new version of Python, and it didn't seem to cause any problems. (I suppose that the other bin directories probably wouldn't be affected that way, especially /bin and /sbin, since they might be needed when other partitions weren't mounted.)
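    A sketch of what such a layout might look like (all paths hypothetical):

        /usr/app/libpng/1.2.5/lib/libpng.so.3    # the real file, in a versioned directory
        ln -s /usr/app/libpng/1.2.5/lib/libpng.so.3 /usr/lib/libpng.so.3
        ln -s /usr/app/python/2.2/bin/python /usr/bin/python

    Removing an obsolete version would then be a matter of deleting /usr/app/libpng/1.0.12, after checking nothing still links to it.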

  • by uhoreg ( 583723 ) on Sunday June 16, 2002 @11:25AM (#3710906) Homepage
    The biggest problem I found with RedHat's packaging is that they don't do versioning properly. For example, I was trying to install GNOME a while back on an RH6.2 machine, and it needed [library] (I forget exactly which library it was) version x, but some other package on the system needed [library] version y. This meant that I couldn't install the prepackaged version of GNOME, and had to build it myself. This is just like DLL hell in Windows.

    On the other hand, in Debian, if you have two versions of a library, and their API's are incompatible (which is the only reason packages would need to depend on a specific version of a library), you will have one package called, say, [library]1 and one package called [library]2, and packages can just depend on [library]1 instead of [library] version x. This way you can have both versions installed at the same time, and everyone's happy.

    My second pet peeve with RPM is that you need to be root to build an RPM from source. In Debian, you just use "fakeroot", which means the build process thinks you're root, but you can't accidentally do anything too nasty to your setup.
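    The standard incantation, for the curious (package name hypothetical):

        apt-get source foo                       # fetch and unpack the source package
        cd foo-1.2.3
        dpkg-buildpackage -rfakeroot -us -uc     # build the .deb as an ordinary user, unsigned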
  • Re:apt-get is nice (Score:3, Interesting)

    by coats ( 1068 ) on Sunday June 16, 2002 @11:26AM (#3710911) Homepage
    ...However, Abiword seems broken -- can't get the latest installed...
    Can't even get it to build from source, on either Mandrake 8.2 or RedHat 7.1 developer systems... their autoconf stuff is broken in the .src.tar.gz. (This is a reported bug on their bugzilla...)

    I've seen a lot of this "dependency hell" and it makes me really hate dependency on .so's: with a statically-linked build, it either works -- reliably -- or it doesn't work at all. I've heard all the .so justifications before, and from my point of view as a practicing fifty-year-old mathematician, computer scientist, and environmental modeler, it is all a lot of bunk when it comes up against the real practice of computing.

  • by FreeUser ( 11483 ) on Sunday June 16, 2002 @11:55AM (#3710999)
    Well, not quite, but now that I've got your attention... :-)

    It isn't the packaging format, really... most of the issues raised are inherent to binary-based distros, which with today's processors really should become a thing of the past.

    Source Mage [sourcemage.org] and Gentoo [gentoo.org][1] are two excellent source based distros that avoid these classes of problems altogether, and unlike RPM (or debs[2]) add no burden to the upstream software developer.

    Shawn Gordon of The Kompany touches on this when he says (from the article, you did read the article, right?)


    So rather than providing a myriad of different binary RPMs for the dozens of different Linux distributions, The Kompany, which is a commercial entity developing Linux applications, reluctantly decides to give away the source code to paying customers. [Emphasis added]


    Source based distros like Gentoo and Source Mage have packaging systems that automate the process of downloading, configuring, compiling, and installing all of the software on their systems from source (pedants will note there is the occasional binary package, e.g. NVidia drivers, but for the vast, vast majority of software my point holds). Indeed, this approach makes the packaging system itself less important (so long as it works properly) than the overall engineering and organization of the distro itself, and completely irrelevant to the software developer (as it should be).
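    The user-visible side of that automation is a single command; a sketch, as I understand the two tools (the package name is just an example):

        emerge xchat    # Gentoo: portage fetches, compiles, and installs xchat plus its dependencies
        cast xchat      # Source Mage: sorcery's equivalent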

    This has a couple of disadvantages, and a whole bunch of real advantages. So much so that almost no one who has used a source based distro will go back to a binary based distro once they've tried it, despite the cons (in fact, of the numerous people I know who've tried Source Mage and Gentoo, both very different from one another BTW, I know of not a single person who has gone back to their old binary favorite, be it Suse, Mandrake, Red Hat, or Debian).

      • CONS of source based distros

      • Initial install typically requires source to all of the system, which is generally downloaded from the net. I.e. in most cases requires a fat pipe for installation.
      • The installation is time consuming, due to the fact that each package must be compiled. For modern CPUs this isn't such a big deal (a day will suffice, most of which you can spend away from the computer while it chugs away), but for older CPUs like an AMD K6 233 I have, the initial install can literally take days.
      • PROS of source based distros

      • Updates and upgrades typically require much less bandwidth than their binary equivalents, as only the new package's source needs to be downloaded.
      • The software is compiled optimized for your hardware. Typically such systems run 20-30% faster than their binary equivalents, based on some casual benchmarking I and a few others have done.
      • The software is compiled against the exact library versions installed on your system, so no subtle incompatibilities arise due to slightly mismatched binaries. This eliminates a whole class of bugs, and a whole host of problems that can affect stability and reliability.
      • In the case of Gentoo, you have very precise control over the configuration of your system, and what is installed vs. what is not, as well as where it is installed to.
      • In the case of Source Mage, the system is auto-healing, meaning that if and when a new library is installed and the older one removed, all packages that rely on that library are recompiled against the new library. This makes upgrades (on Source Mage) very easy.
      • Upgrades are very easy. In the case of Source Mage they are virtually automatic (you select the package to update and everything is taken care of for you), in the case of Gentoo they are less automatic and require some care, but are nevertheless easier than with any binary distribution I've ever tried (and I've used all the major ones at one time or another), and with Gentoo the flexibility of having multiple versions of libraries and even runtime apps is very useful.
      • Security is improved in one way: the ease and ability to keep up with security updates. Binary distros are still trying to get this to work smoothly (and mostly not succeeding, or requiring a tradeoff like Debian Stable, in which one must run 2-year-old software to enjoy that level of security). This is really a side effect of the previous point, but is significant enough to deserve separate mention.
      • The ability to run current hardware. Again, this goes back to the ease and stability of upgrades inherent in source-based distros like Source Mage and Gentoo. Source Mage had X 4.2 out a day after its release, giving its users the advantages of all the new features and bug fixes it had to offer. Ditto for KDE 3. Gentoo had these packages out almost as quickly. This means users get the latest features, and the latest bug fixes, almost immediately, in contrast to binary distros that typically require 3-6 months (worse for some distros; I still recall the Debian developers' irate answer to a user's question about when they could expect X 4.2 support in the experimental version of Debian ("unstable"), to the effect of "leave me alone, it will be months!")

    There are numerous other advantages I could add here, but you get the idea.

    The entire article on the flaws of RPM might better be entitled "The Flaws of Binary-Based Distributions" - distributions which, in the age of Free Software and source code availability, coupled with today's fast processors, really ought to become a thing of the past. In fact, it wouldn't surprise me at all to see Debian, Suse, Mandrake, and Red Hat all embracing the notion of source-based distros sometime in the future... as processors get even faster, the day-long install (on my dual 1 GHz P3), which has already shrunk to less than half a day on the dual 2GHz Athlon I have at work, will shrink even more, to a couple of hours or less.

    And the advantages in speed, stability, and ability to keep current with new software releases in a timely manner will only become more acute as time goes on.

    So while binary-based distros are by no means dead (despite my rather provocative headline), it is my opinion that the writing is certainly on the wall, and the observant person can already mark the shifting change in the wind.

    [1]There are other source based distros as well, including Linux from Scratch and Lunar Penguin, and likely others as well.
    [2]Though in fairness the Debian developers take up most if not all of that burden
  • by HalfFlat ( 121672 ) on Sunday June 16, 2002 @12:14PM (#3711049)

    I guess that a side effect would be for the /usr/bin directory to become composed entirely(?) of links. Still, I've done that already when trying out a new version of Python, and it didn't seem to cause any problems. (I suppose that the other bin directories probably wouldn't be affected that way, especially /bin and /sbin, since they might be needed when other partitions weren't mounted.)

    I run a system based loosely on Linux from scratch [linuxfromscratch.org], which adopts a link farm approach like you describe. My /usr/bin (and /usr/blah directories generally) indeed do have hundreds and hundreds of symbolic links. This probably impacts performance, but I've not noticed it on my K6-3/400 PC with old slow IDE disks. Using some simple perl scripts to create, retarget and clean up symbolic link farms, package management is simple. The key benefit is that the metadata associating a file with its package is the symbolic link itself - it is logically incapable of becoming out of sync.

    My work-around for the root file system is as follows. Each package I keep in /usr/pkg/packagename-version. Things destined for /usr/bin live in /usr/pkg/packagename-version/bin and so on. Things which need to end up in (say) /sbin live in /usr/pkg/packagename-version/root/sbin. I cp -a the contents of these root subdirectories into /.

    This mechanism is a compromise, but works quite well. I can compare files in root fs directories against those in /usr/pkg/*/root to find which file came from which package. Updating is a simple cp -a.

    Why not do the same for /usr, and avoid the symbolic link farms? The primary reason is that while copying into the root fs those files that need to be there might take up 30MB or so, doing the same for /usr would mean an extra 500MB or more of duplicated data. The other reason is that for those packages which aren't too tied to their location in the filesystem, differing versions can be present on the system simultaneously.
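    In concrete terms (package name hypothetical), an install on such a system boils down to:

        ./configure --prefix=/usr/pkg/foo-1.0 && make && make install
        # farm links into /usr/bin, /usr/lib, ... (this is what my perl scripts automate)
        ln -s /usr/pkg/foo-1.0/bin/foo /usr/bin/foo
        # and replay anything destined for the root fs
        cp -a /usr/pkg/foo-1.0/root/. /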

  • My one gripe (Score:3, Interesting)

    by cjpez ( 148000 ) on Sunday June 16, 2002 @12:25PM (#3711097) Homepage Journal
    The one thing that I really don't like about any package manager is rigid dependency checking. It only really bites when you try to act outside of the "accepted" package system. For instance, back in my Redhat and then Debian days, I was content to let the base system get installed by RPM or apt. I also loved, especially in Debian, the ability to use apt to just install an app I wanted to use. However, for a long time I used the DRI XFree86 that came from CVS and got compiled by hand. So I was stuck with two options: either don't install the X packages, or install them anyway and install X by hand on top of them. In the first case, it was really difficult to install any package that relied on X. On RPM, I had to turn off dependency checking to do it (which meant that the primary purpose of the package management system was bypassed, IMO), and with apt it was nigh-impossible (I never did figure out how to get apt to install something despite dependency issues). In the second case, whenever the package management system decided to upgrade my X, my hand-installed stuff would get overwritten.

    What I'd love to have in a package manager is a more intelligent dependency check. Like, instead of just saying "I need this version of X," it would also just check for the existence of /usr/X11R6. Or if a package requires BerkeleyDB, after checking "inside" the package manager, it would just try to see if there's a libdb.so somewhere in the LD search path. And then mark down "inside" the package management system that the "BerkeleyDB" or "XFree86" dependency seemed to be fulfilled by a manual installation.

    That would be the ideal system for me.
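    Until something like that exists, the blunt workarounds are the usual ones (the package names here are just examples):

        rpm -i --nodeps my-x-app.i386.rpm                     # RPM: skip dependency checks wholesale
        dpkg -i --ignore-depends=xserver-common my-x-app.deb  # dpkg: ignore one named dependency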

  • SRPMs (Score:2, Interesting)

    by Al Al Cool J ( 234559 ) on Sunday June 16, 2002 @12:45PM (#3711201)
    I'm amazed the article doesn't mention SRPMs, which I've found to be a very reliable way of getting the newest and latest software to work on my Mandrake 7.2 boxes. It can be a pain getting all the *-devel packages you need, but once you do, you get your own nice shiny RPM that you can drop onto any identical system you're running.

    What scares me off using something like apt-get is that my home computer is on a dial-up. I don't want to unleash some automated system that's going to go and stupidly try to jam 50MB worth of packages down my pipe. With RPMs I can control how much gets downloaded and when. And I have the nice SRPM fallback when things don't work.

    How easy is Debian to maintain on a dial-up?

  • by Webmonger ( 24302 ) on Sunday June 16, 2002 @12:48PM (#3711207) Homepage

    Actually, there is a limitation of .rpm that hinders the APT4RPM functionality -- file dependencies. .rpm archives depend on specific files, while .debs depend on specific packages. This can be worked around, essentially by creating a list that maps files-that-are-depended-upon to packages-containing-them.

    But yes, there is at least one technical superiority of the .deb file format. I have never heard any argument that .rpms are technically superior to .debs, so I have to wonder: why don't RPM-based distros switch to deb? They could just adopt the .deb file format as RPM 5, make the tools speak deb, and stop worrying about it. They'd serve their users better and reduce duplication of effort.

    Or perhaps users should take it into their own hands. Using tools like 'alien', it might be possible to take the apt4rpm approach one step further-- create an unofficial 'Redhat .deb' distribution-- the same packages as Red Hat, but in a different package format.
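    alien can already do the per-package conversion; a sketch, with a hypothetical file name:

        alien --to-deb foo-1.0-1.i386.rpm    # emits something like foo_1.0-2_i386.deb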

  • RPM is ok (Score:4, Interesting)

    by lameruga ( 528291 ) on Sunday June 16, 2002 @01:17PM (#3711325)
    Yeah, there are a couple of problems with RPM, but:

    - it's easy to do upgrades (on RedHat; I don't know about others). I've done it for several years from a remote location, and it only failed once, because of a bad LILO configuration...
    - you always know which file belongs to which package (see the commands below)
    - you can verify checksums of all installed files
    - dependencies are not a problem - they're a solution to the problem
    - it's simple to locate a needed package in the distro
    - if you're trying to install someone else's package, you're better off getting the sources and building the rpm package yourself
    - I agree that it is a bad idea to distribute rpm binaries only, so it's best to post tar.gz source, with rpm packages optional (it's good if the source includes a .spec file)
    - and if you don't like dependencies, you can always use --nodeps :)
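    In command form, a few of those (stock rpm invocations):

        rpm -qf /usr/bin/grep      # which package owns this file
        rpm -Va                    # verify checksums and permissions of every installed file
        rpm -qpl foo.i386.rpm      # list what a package would install, before installing it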

    P.S. When I started using Linux in 1995, the first distribution I installed was Slackware; after one year I switched to RedHat.
    Slackware is good, but you have the same dependency problems (and you don't even know which package to install when such a problem comes up, say when installing some binary package). It's also much harder to upgrade....
  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Sunday June 16, 2002 @02:07PM (#3711477) Journal
    What if, when you wanted to perform a binary installation, the installer checked dependencies the same way that autoconf-like programs do: it tries to find them in particular locations, and creates a configuration file for the program based on what it found? It could do version checking as well, and report any mismatches to the user. In situations where there isn't a clear-cut place to put such a file, the installer could create a Bourne shell startup script instead. It would work everywhere, and wouldn't be dependent on _any_ rpm or deb databases.

    I realize that this would require one new file (either a config file stored in the program's library directory, or a shell script used for startup) for each package that gets installed, but we're already looking at wasting space with the rpm or deb databases anyway... this solution wouldn't take up any more space and has the added bonus of being completely cross-distribution!

    For library packages, it shouldn't even need to store a config file... it can just check the versions of the software or libraries that it does require and report back to you. The job of actually finding the libraries as they are needed can be performed by the linker, which is presumably set up to search applicable directories. Heck, if it's not, even this information could be reported at installation time too!
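    A toy version of that install-time check in shell (the library name is just an example):

        # does the runtime linker already know about some BerkeleyDB?
        if ldconfig -p | grep -q 'libdb.*\.so'; then
            echo "BerkeleyDB dependency satisfied"
        else
            echo "BerkeleyDB missing; aborting install" >&2
            exit 1
        fi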
  • by Anomie-ous Cow-ard ( 18944 ) on Sunday June 16, 2002 @03:28PM (#3711738)
    most of the issues raised are inherent to binary based distros

    • Poor dependencies. Nope, not inherent, and can hit source-based distros too.
    • Poor upgradability. Nope, not inherent, and can hit source-based distros too.
    • Incompatible changes to the package format. Nope, not inherent.
    • Incompatible file locations. Nope, not inherent, and can hit source-based distros too.
    • Upgrades can break things. Inherent, but hits source-based distros just as easily.
    OK, perhaps I missed something in the article. Where are these issues inherent to binary distributions that source distros don't suffer from?

    The software is compiled against the exact library versions installed on your system, so no subtle incompatibilities arise due to slightly mismatched binaries. This eliminates a whole class of bugs, and a whole host of problems that can affect stability and reliability.

    At install, yes. However, either you lose this when you upgrade the library, or you have to recompile every single app that uses the library whenever the library changes. Damn, I would hate to upgrade anything X depends on in a system like that - 2 weeks until the upgrade is complete! (Assuming my old box doesn't run out of disk space first.)

    A non-stupidly-written library will be very easy to handle in dependencies (remember, minor version changes indicate backwards-compatible API changes, so app recompile shouldn't be necessary), a stupidly-written one somewhat harder. Either way, a good packaging system can easily handle this modulo human error. In fact, there's less chance for error when a few knowledgeable people handle all the compiling, as opposed to having everyone try it on their own. And when something does go wrong, everyone is likely to have the same problem, so more eyes can look for it.

    Upgrades are very easy. In the case of Source Mage they are virtually automatic (you select the package to update and everything is taken care of for you), in the case of Gentoo they are less automatic and require some care, but are nevertheless easier than with any binary distribution I've ever tried (and I've used all the major ones at one time or another),

    Somehow, I doubt recompiling everything from source (automated or not) is easier than apt-get dist-upgrade. Definitely not easier than tracking unstable (which, based on your comments here, is effectively what you're doing with either of your favorite source-based distros).

    Security is improved in one way: the ease and ability to keep up with security updates. Binary distros are still trying to get this to work smoothly (and mostly not succeeding, or requiring a tradeoff like Debian Stable, in which one must run 2-year-old software to enjoy that level of security). This is really a side effect of the previous point, but is significant enough to deserve separate mention.

    Well, it would be deserving if it made any sense. You're comparing bleeding-edge source distros with tested-almost-to-the-point-of-absurdity binary distros. Try comparing to Debian unstable instead; it would be somewhat more accurate. You also neglect the fact that the updated code needs to be compiled eventually (since source-based pushes this to the user, they get to 'release' a little ahead here), tested for breakage (from your description they just throw the source up without any testing; I hope this is wrong), packaged (a little more time here), and uploaded to the repository (which can take some time, depending on procedures in place).

    and with Gentoo the flexibility of having multiple versions of libraries and even runtime apps is very useful.

    I have multiple versions of many libraries on my Debian system here. A few apps too (e.g. gcc, autotools). I don't really need multiple versions of most apps. Does Gentoo make everything multi-versionable, even when config file formats (.foorc in $HOME, for example) change incompatibly, with other sharing problems like that?

    in contrast to binary distros that typically require 3-6 months (worse for some distros; I still recall the Debian developers' irate answer to a user's question about when they could expect X 4.2 support in the experimental version of Debian ("unstable"), to the effect of "leave me alone, it will be months!")

    On the other hand, you're leaving out the part about X being a very large, very complex set of programs that needs quite a bit of time to package properly and ensure upgrades will actually function. You're also leaving out the part about the X packagers having to maintain the 4.1 version at the same time, especially since 4.2 cannot be tested well enough in time to make it into frozen. Further, you're leaving out the part where all this is done in the packagers' free time. And the part where the packagers made 'unofficial' packages available much sooner, so anyone who really wanted to could use them and help get things to the point of "doesn't explode on install". And the part where various patches from 4.2 have been backported to 4.1 (and the part where 4.2 contains patches originated with Debian's 4.1).

    All in all, you've severely damaged your argument with your poorly reasoned claims.

  • by Zenki ( 31868 ) on Sunday June 16, 2002 @04:25PM (#3711884)
    What RPM could do is offer an interactive mode of execution.

    When installing a package, if RPM cannot find the RPM dependency, it should tell the user:

    "Unable to find libfoo.x.y.z.so"

    Then ask:

    "If you do have libfoo.x.y.z.so, enter the path so an appropriate entry can be made in the rpm database"

    The user can then type in something like /usr/local/lib/libfoo.x.y.z.so, and the RPM program will add that one file into its package database, so that later on it won't have to ask that dumb question again.

    If the user doesn't type in anything, then RPM should then quit and refuse to install the package.
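    Sketched as a shell prompt, this proposed behaviour (not anything rpm does today) would be roughly:

        echo "Unable to find libfoo.x.y.z.so"
        read -p "If you do have it, enter its path (blank to abort): " path
        if [ -z "$path" ]; then
            exit 1    # refuse to install the package
        fi
        # record $path in the rpm database so the question is never asked again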
  • by PugMajere ( 32183 ) on Sunday June 16, 2002 @11:39PM (#3713331) Homepage Journal
    The system he was referring to is for package configuration.

    I.e., I don't want to pay attention to the intricacies of how my keyboard mappings are configured. So DebConf asks me a question or three the first time I install the console-tools package (maybe; I forget).

    It asks the question using my choice of interfaces, i.e., a readline4-based one, a nice ncurses-based one, an X-based one, or a web-based one. (I have no idea how the web one works, just for reference; I just know it's an option.)

    From that point on, I should never be asked that question again, unless the meaning of it has changed.

    The benefit here is that some simple things (like say, your workgroup that your Samba server hangs out in?) can be configured once, and your config file can be regenerated on each package upgrade. You can make changes on top of that configuration file, if you need, but many updates can be handled for you without you needing to know the details of the upgrade.

    DebConf is a *really* nice tool for configuration management. In theory (it hasn't happened in practice), common configuration options for, say, MTAs could be shared, so that if you want to switch from sendmail to exim you can have your old configuration information copied right over.
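    Re-running the questions later is a one-liner, and the preferred question UI is itself a debconf setting:

        dpkg-reconfigure console-tools    # re-ask that package's debconf questions
        dpkg-reconfigure debconf          # choose the frontend (readline, dialog, etc.)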
