Is RPM Doomed?

Ladislav Bodnar writes "This is an opinion piece offering solutions for all the ills of the RPM Package Manager. It has been written with Slashdot in mind - it is a fairly controversial topic and I would like to hear the experiences and views of other users who have tried different package formats and different Linux distributions. The conclusions are pretty straightforward - either the big RPM-based distributions get together and develop a common standard or we will migrate to distributions offering more sophisticated and trouble-free package management. Note: the main server allows a maximum of 100 simultaneous connections. To limit the /. effect, here are two other mirrors: mirror-us and mirror-hu (the second one has larger fonts). Thanks in advance for publishing the story."
  • apt-get is nice (Score:2, Informative)

    I don't have problems with RPMs at all. I've used apt-get since it was first introduced in Conectiva Linux, and I'm now using it on a Red Hat box. I upgraded that box from 7.2 to 7.3, and the only problem I had was lack of space in /var to download the files (not my fault; inherited from the former sysadmin).
    • Re:apt-get is nice (Score:5, Informative)

      by nehril ( 115874 ) on Sunday June 16, 2002 @10:31AM (#3710743)
      Someone please correct me if I'm wrong, but doesn't this article suffer from a fundamental misunderstanding? You cannot compare apt-get to rpm files. apt-get is a system for installing .debs and their dependencies; there are similar systems for rpms (apt-rpm or Red Carpet).

      .debs suffer from all the same problems he complains about rpms having, because a .deb is just a single package file. So do source packages (a la Gentoo etc.), since a lot of source code out there won't even ./configure without the right stuff in place. Where Debian has apt-get to manage the dependency nightmare, Gentoo has emerge.

      What he is really bellyaching about is the fact that some big rpm-based distros (Mandrake and Red Hat) don't ship free dependency-management software. 99% of his anti-rpm comments are not even wrong; they are wholly irrelevant.

      The last 1% that might have value is the fact that developers can't make a "universal" rpm due to all the differences in filesystem layouts among rpm-based distros (note that this can be a problem with .debs too). From an end-user perspective even this is not a problem with a dependency manager in place, since it will find the "right stuff" for you.
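
      For what it's worth, the apt-rpm workflow looks just like Debian's. A sketch, assuming an already-configured repository and a made-up package name:

      apt-get update
      apt-get install somepackage   # resolves and fetches RPM dependencies automatically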
  • by nilstar ( 412094 ) on Sunday June 16, 2002 @09:50AM (#3710634) Homepage
    I think the biggest thing we need with rpm (and other distro systems) is standardized package locations. That would help *enormously*, and version control needs to be better too. For example, I hate having to keep 2-10 different versions of libraries because programs request their own version, even though the newer libraries could do the job of the old ones. Likewise, it is frustrating when an rpm asks for another rpm which is not installed, even though the libraries are already on your machine (in the right location).

    I hate to say it, but maybe we need a standardized "registry" idea like in MS Windows? They do have a good idea with that.
    • by TrentC ( 11023 ) on Sunday June 16, 2002 @10:02AM (#3710667) Homepage
      I think the biggest thing we need with rpm (and other distro systems) is standardized package locations.

      You mean something like a Filesystem Hierarchy Standard [pathname.com]? Or maybe even a Linux Standard Base [linuxbase.org]?

      Jay (=
      (Is there a website that rates distributions according to their adherence to these standards?)
    • One of the problems you bump into with standard, one-size-fits-all packages is that APIs change, and bugs get fixed/introduced. Assume package A v1.2 was compiled against package B v3.6. Now package C comes along. Package C needs package B v4.0, so it upgrades package B. In the switch from v3.7 to v4.0, however, the API changed and a misfeature was corrected. Now assume package A relies on the old API and the presence/functionality of the misfeature. Now assume package A is no longer actively developed.

      Situations like this happen all the time. That's one reason why programs often get their own copy of a library, even though it sucks from an end-user standpoint.

      Nevertheless, you're right....versioning needs to be better.

    • The registry in Windows creates a single point of failure. The point of the registry seems to be copy protection. The registry contains incomprehensible data. It is an area meant to be outside the user's control.
      • Well I'm sure glad Linux uses /etc to store configuration data. Having 50 different styles of configuration files sure does make one's life easier.

        • Having 50 (only that many) different styles of configuration files definitely makes things tough. However, the information is not hidden from the user, as it is in Windows.

          It would be great if someone standardized Linux configuration files. I suggest a browseable, book-like or PDF-like interface like that in Ganymede. Each package would be expected to write their own interface to the configurator. That way, authors could have any configuration file format they wanted, but there would also be a standard GUI interface.
        • by 0x0d0a ( 568518 ) on Sunday June 16, 2002 @10:36AM (#3710758) Journal
          The syntax of UNIX config files is pretty standard (barring the occasional ugly misfit like sendmail -- use postfix instead).

          And all the Windows registry does is give a standard format for storing individual values (how should I store a string, how should I store a DWORD) and provide a hierarchy. It says nothing about format or structure within a single app.

          If you want to turn on, say, mail relaying in postfix on Linux, then you look for the entry called mail_relay (or whatever) that's commented out and contains a helpful set of comments right above the config entry. On Windows, the equivalent is to go into the registry to some unspecified key, create an unspecified value there and then set it to some unspecified value.
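
          A generic illustration of the Unix style (the parameter name is invented, as in the example above):

          # uncomment the line below to enable mail relaying for your domain
          #mail_relay = example.com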

          Of course, most people just use a front end on Windows -- like a preferences or options dialog -- because the registry is next to useless to actually interact with. You can do the same thing on Linux, which is done by GNOME and friends to make things more convenient for computer novices.

          Also, if the Windows registry gets corrupted, you have a big problem. If a single text format config file gets corrupted, you can probably fix it yourself. If you can't...well, it's a single file down the drain. Reconfigure that single app. Your entire system doesn't become unbootable, a la Windows.
    • by 0x0d0a ( 568518 ) on Sunday June 16, 2002 @10:26AM (#3710730) Journal
      I think the biggest thing we need with rpm (and other distro systems) is standardized package locations.

      That's already done in the LSB.

      The problem is that each rpm is required to contain a static list of files it installs *with pathnames*. The nice thing about this is that it lets you run rpm -qip foo.i386.rpm without executing any code (sandboxed or otherwise) to see the list of files. The stupid thing is that there then has to be a totally different rpm for every distro and every maintainer.

      In addition, it means that the maintainers need to keep *two* lists of what files are in the package -- one list for "make install" and the other for rpm. This is probably the most annoying design decision of RPM I've seen. There needs to be a FILES file containing the list of installed files, plus a gen-files script (run sandboxed to build FILES for not-yet-installed packages, and run at package installation time to regenerate FILES). Have the Makefiles read this for make install. This would make life easier for maintainers (one list of files to install), would make RPMs more reliable (no accidental adding of a file to the Makefile but not to the spec file), and would let an RPM work on any distro (if we ever get the gcc-2.7, gcc-2.96, gcc-3 stuff worked out).
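
      A purely hypothetical sketch of that scheme (neither gen-files nor this install loop exists in RPM today):

      ./gen-files > FILES                    # build the canonical file list, sandboxed

      # "make install" consumes the same list:
      while read f; do
          install -D "build/$f" "$DESTDIR/$f"
      done < FILES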

      even though the newer libraries could do the job of the older ones

      This is true for minor version number increases, but after a major version number change, a program cannot simply link against the newer library.

      Also, the registry is a fucking stupid idea (despite the fact that GNOME and KDE are mindlessly cloning it). The registry causes more problems than anything else I've seen on a Windows system. The MacOS did things right -- let all your centralized databases just be caches for data that can be rebuilt from files around your system. If something gets borked or corrupted...that's okay. Absolutely do *not* make your single copy of data a registry -- put the masters around the system, and let the centralized db be rebuilt if necessary.

      Also, registries require "installations" and "uninstallations" instead of just copying files. You can just copy appropriate files from one system to another and run code on a Linux or MacOS box. On a Windows box, you're in for running installers to poke at the registry. And finally, I've seen tons of broken Windows installers that poke at registry entries and end up completely screwing up data that some other app uses. For example, a friend once had Sonique and WinAmp installed, but couldn't associate mp3s with either. I took a look at the registry -- Microsoft's two-entry file association scheme let the extension entry point to a nonexistent application entry, IIRC. As a result, the mp3 entry didn't show up in the Folder Options dialog in Explorer, and couldn't be reassigned, and WinAmp and Sonique kept giving errors when trying to grab associations.

      The day any distro starts requiring a registry is the day I never touch that distro again. Right now, I can just uninstall GNOME if I want to do so.

      Oh, and another thing. The Windows registry is a *massive* shared database. As a result, tons of stuff modifies it and causes internal fragmentation and loss of physical continuity between related keys. Then all apps use the registry heavily (God, I hate apps that poll it), so you get slow app launch times, that annoying disk churning that you hear on Windows boxes...rrrgh.

      Take a look at .dll registration. On Windows, the only way the OS knows about a systemwide .dll is when you've added an entry to the registry for it. On Linux...run ldconfig, and it rebuilds the systemwide cache (ld.so.cache), which is significantly faster to read (contiguous, not incrementally modified, not modified by all sorts of other apps storing filename associations and the like).
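
      On the Linux side the whole flow is (the library name is a placeholder):

      ldconfig                    # rebuild /etc/ld.so.cache after installing a library
      ldconfig -p | grep libfoo   # inspect the cache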

      The registry is basically a hack, because Windows *used* to have what MS considered a worse scheme (.ini files). It isn't a very well thought out system.
      • >Also, the registry is a fucking stupid idea (despite the fact
        >that GNOME and KDE are mindlessly cloning it). The registry
        >causes more problems than anything else I've seen on a Windows
        >system. The MacOS did things right -- let all your centralized
        >databases just be caches for data that can be rebuilt from
        >files around your system.

        Umm. KDE works exactly the way you describe MacOS as working. All of the KDE config files are just that: files. However, to aid performance, etc., they are built into a sort of binary "registry/database" by a program called kbuildsycoca. If it becomes fubar'd then you just rebuild it.
      • gnome, kde (Score:3, Informative)

        by Ender Ryan ( 79406 )
        While I am a bit miffed at GNOME and KDE for copying the registry idea (so friggin stupid, imo), at least with GNOME (not sure about KDE, probably true there too) the "registry" is not one single massive database; it's composed of separate hand-editable text files, XML if I remember correctly.

        • KDE is not gnome (Score:3, Insightful)

          by rseuhs ( 322520 )
          Can the RedHat users please stop claiming that KDE is cloning the registry?

          KDE uses configuration files like most other Unix software.

          There are some things debatable about the location of these files (in $KDEDIR/share/config and ~/.kde/share/config) but thankfully it's not even close to being a registry.

    • I hate having to have 2-10 different versions of libraries due to programs requesting their own version, even though the newer libraries could do the job of the old ones

      Any program linked against a library with the same major version number and the same or a lower minor version number than the installed library should work. If the program depends on a specific minor version (as opposed to "a specific minor version or greater"), it's broken. If you have libfoo.so.1.0.1, libfoo.so.1.0.5 and libfoo.so.1.0.7, you should be able to delete all of them except the last and expect things to keep working. Applications are linked against the major version number, so they'll be using the more recent version already (run ldd on a binary and look at the first column - that's what the program is looking for, and the last column is what it's using).
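
      A quick illustration (the binary and library names are made up):

      $ ldd /usr/bin/someapp
              libfoo.so.1 => /usr/lib/libfoo.so.1.0.7 (0x40020000)

      The program asks only for libfoo.so.1; the dynamic linker resolves that to whichever 1.x.y is currently installed.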

      Now, the problem arises when you have applications linked against different major versions. The major version is supposed to be incremented when an incompatible change in the ABI is made - that is, without recompilation, a program linked against libfoo.so.1 will not work with libfoo.so.2 (well, it might - the change may be limited to a specific part of the library that the program doesn't use. But you can't guarantee that).

      So, if you have dependencies on lots of libraries with the same major version but different minor versions, your packages are broken. If you have dependencies on lots of libraries with different major numbers, then that's unavoidable and nothing to do with the packaging system.
  • What about the various alternatives to RPM packaging? I don't know much about this, but I've found QNX's [qnx.com] Package Installer to be quite efficient and trouble-free (at least in 6.1; I can't say much about the new one in 6.2NC) compared to what many experience with RPMs..
    Then again, RPMs would work better if more distributions were a little more uniform in their cores (UnitedLinux might solve this?)
  • by rizzo ( 21697 ) <donNO@SPAMseiler.us> on Sunday June 16, 2002 @09:52AM (#3710641) Homepage Journal
    Gentoo Linux [gentoo.org] uses a system called "portage" which will download, compile, and install programs from source (binary for some packages). It is fantastic. Similar to apt, it will check for dependencies and fetch those too. But the use of source is what turns me on. I'm converting all my linux boxen to it. It even inspired me to slice up the disk on my Win2000 box and go dual-boot.
    • And there are plenty of front ends to rpm that can also get dependencies. If rpm is too low-level for you, that's why it was designed to be *really easy* to write front ends for, what with rpmlib and --queryformat.
  • Mirrors (Score:5, Informative)

    by cheezycrust ( 138235 ) on Sunday June 16, 2002 @09:52AM (#3710643)
    If the other links are overloaded, you can read the story on my site [vsknet.be]. Maybe other mirrors should be posted in this thread.
  • by Xpilot ( 117961 )

    No dependency checking, but that also means you don't have the problem of circular dependencies and the like. Plus you can open it with tar and gzip. Linux Packages [linuxpackages.net] is a great place to look for pre-built Slack packages.

    I used to use RPM, but now that I've converted to Slack, I don't miss it one bit.

    • I was just about to post something to the same effect. Having spent many hours trying to fix or work around broken package dependencies with RPM on RedHat and more recently Mandrake, I recently ditched both in favour of Slackware which I have found _much_ easier to maintain. Good to see I'm not alone in this...
  • by peterdaly ( 123554 ) <{petedaly} {at} {ix.netcom.com}> on Sunday June 16, 2002 @09:55AM (#3710648)
    I administer a few RedHat servers, mostly 6.2 and 7.2, each of which performs a different function. If an RPM is offered for a piece of software I need to install, I usually download that first.

    If the rpm install fails, I will spend about 3 minutes troubleshooting the issue. If I can't get it to go, I download the source and compile from scratch. 9 times out of 10 this works without my having to figure out dependencies.

    RPM works great when the environment is exactly the same as the build environment. When it's not...well, it just plain sucks. Source almost always works without incident.

    Really, there is nothing too difficult about:
    ./configure
    make
    su
    make install

    Although it only works for products where the source is openly available.

    RedHat needs a compile from source package format that most people can figure out. srpms may do it, but I have no clue how to use them.

    -Pete
    • > I administer a few RedHat servers...

      Once you administer 20+ of 'em with other admins, you are going to need a package management system.

      Unless you think keeping notes really works :)

      ---
      blaze-x
      • It's just a shame that Linux doesn't have a clean install/uninstall system like Microsoft Windows, which gets it right every time.

        Err...nevermind.

        Seriously, though, the magic of open source is that if something doesn't work well, people can develop an alternative to it. As Ashcroft would say, "If you don't develop innovative new technologies, then Microsoft has already won...."

    • I'm an administrator too but I try my hardest *never* to install from source. This is because of security and ease of maintenance.

      The main concern is that if I install OpenSSH from source on all 50 of my servers, then when it comes time to patch it I've got myself a little inconvenience. I would most likely compile it on one machine, tar up the source directory (complete with new binaries) and do a 'make install' on all 50 machines so I don't have to recompile for each box. But this is still going to take me a lot of time.
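
      A sketch of that push (the host list and paths are invented):

      make                                            # build once; binaries stay in the tree
      cd .. && tar czf openssh-built.tar.gz openssh
      for h in $(cat hosts.txt); do
          scp openssh-built.tar.gz "$h":/tmp/
          ssh "$h" 'tar xzf /tmp/openssh-built.tar.gz -C /tmp && make -C /tmp/openssh install'
      done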

      So therefore we've set some policies in place to make keeping systems up to date and secure easy:

      For starters, we've standardized on Mandrake for Linux. This helps a lot because if we have a single rpm to install it will install on all of our servers.

      We also mirror the updates locally so that we don't have to worry about slow downloads, and we make heavy use of urpmi to automatically grab all the updates, check dependencies and gpg signatures, and install them for us.
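
      Roughly like this (the mirror URL is hypothetical):

      urpmi.addmedia --update updates http://mirror.internal/mandrake/updates
      urpmi --update --auto-select --auto    # fetch and apply everything pending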

      We don't install from source unless we absolutely have to. Actually, we try not to install any software that doesn't come with Mandrake but obviously this isn't always possible. In those cases we follow the convention where everything goes in /opt/pkg_name so we can easily get rid of them if we have to.

      I really like RPM for this reason. However, as you stated, as long as the system an RPM was compiled for is roughly the same as yours, it will work well. Which is why we standardized on one distro.

      --
      Garett

    • RedHat needs a compile from source package format that most people can figure out. srpms may do it, but I have no clue how to use them.

      rpm --rebuild package-version.srpm
      rpm -Uvh /usr/src/redhat/RPMS/i386/package-version.arch.rpm

      Or if the .tar.gz you extracted has a spec file:

      rpm -bb package.spec
      rpm -Uvh /usr/src/redhat/RPMS/i386/package-version.arch.rpm
    • True. I've been working with this exact method for years as well.

      I see nothing wrong with RPM compared to other systems. I can see why people running, say, Debian whine about RPM, because .rpms aren't supported on THEIR system... and I don't see as many .debs out there as there are .rpms..

      So either way, whether rpm is worse or better than .deb, it's a standard. And standards are standards because people use them - and like them. It's not .rpm that should change; maybe it's .deb.

    • by 0x0d0a ( 568518 ) on Sunday June 16, 2002 @10:40AM (#3710777) Journal
      Really, there is nothing too difficult about:
      ./configure
      make
      su
      make install


      Yeah. But there's the ever so much more superior checkinstall:
      ./configure
      make
      su
      checkinstall

      This creates and installs an RPM of all the stuff you were installing. Voila...you can uninstall, you can query rpm to find out what package a file is part of, find out if uninstalling something will break dependencies, etc, etc...all the stuff that you can't do with just make install.
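
      For example (file and package names are placeholders):

      rpm -qf /usr/bin/someapp   # which package owns this file?
      rpm -e someapp             # uninstall; rpm refuses if other packages still depend on it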
    • I've found it easy enough to compile from source when there's no RPM or I need to set lots of parameters (i.e., PHP).

      But what about uninstalling? Is there a command I'm unaware of to remove software compiled and installed from source? It's easy with RPM.
    • by Nailer ( 69468 ) on Sunday June 16, 2002 @10:45AM (#3710792)
      Really, there is nothing too difficult about:
      ./configure
      make
      make install


      And all RPM does is automate and standardize this process. The strength of any management system is based around its ubiquity. Installing software outside the packaging system is a bad idea, as suddenly all those standard installation, uninstallation, querying, and verifying systems no longer work - for your unpackaged apps, and for all the broken packages or other unpackaged apps that rely upon them. Stop thinking of RPM as being separate from source. It isn't. A source RPM is a cpio archive with a source tarball and a spec file like the one below, which automates the build process.

      Summary: An addictive and frantically paced puzzle game with cute 3D graphics
      Name: crack-attack
      Version: 1.1.7
      Release: 2mm
      Source0: http://aluminumangel.org/cgi-bin/download_counter.cgi?attack_linux+attack/%{name}-%{version}.tar.gz
      License: GPL
      Group: Amusements/Games
      URL: http://qcd2.mps.ohio-state.edu/attack/
      Packager: Mike MacCana
      BuildRoot: %{_builddir}/%{name}-%{version}
      BuildRequires: glut-devel
      Requires: glut
      %description
      Crack-attack is an addictive and frantically paced puzzle game with cute 3D graphics, playable either against the computer in single player or across a network in multiplayer, where one player's success clearing blocks dumps large immutable tiles upon the other's block pit. Muahahahaha!
      %prep
      %setup -q

      %build
      %configure
      make

      %install
      %makeinstall

      %post -p /sbin/ldconfig

      %postun -p /sbin/ldconfig

      %clean
      rm -rf %{buildroot}

      %files
      %defattr(-,root,root)
      /usr

      This will catch all the files installed in /usr, but after you do this, note the names of these files in the package and specify them individually

      %doc AUTHORS COPYING INSTALL NEWS README

      %changelog
      * Thu Apr 11 2002 Mike MacCana 1mm
      - Created packages

      Now I'm going to sit back down on my Red Hat 7.3 box and apt-get dist-upgrade all my RPMs from Freshrpms.net
    • RedHat needs a compile from source package format that most people can figure out. srpms may do it, but I have no clue how to use them.
      rpm --rebuild name.src.rpm
      That will build the binary RPM for you (the result lands in the RPMS directory; install it from there). If you want to customize the compile options, do
      rpm -i name.src.rpm
      and manually edit /usr/src/{distro,rpm}/SPECS/name.spec to add the options you want. Then run
      rpm -ba name.spec
      and install the resulting RPM from the RPMS directory.

      RedHat has a HOWTO at RPM.org and I've written documentation for bluelinux.org which should be helpful.
    • It's really not that hard to figure out:

      rpm --rebuild whatever.srpm

      Your shiny new binary RPM(s) will be in whatever the system path is for RPM (on Red Hat it's /usr/src/redhat/RPMS).

      Or alternately, you can

      rpm -i whatever.srpm
      cd /usr/src/redhat
      rpm -ba SPECS/whatever.spec

      And the binary RPMs will end up in the same place. Plus, with that method you can edit the spec file for customization.

      Most people who gripe about RPMs being too hard probably have never looked at the spec files before - all they are is some meta information and a shell script that builds the program. Building from scratch may be hairy for a new user, but customizing an existing package is REALLY easy; usually it's just a matter of looking for a line starting with "./configure" and adding the options you want after that.

      Why go through the trouble rather than just installing straight from source? It is much, much easier for replicated installs, you don't have to worry about stray files lying all over the place, it's easy to tag files as configuration so they don't get overwritten, etc., etc.

      Matt
  • ... was to migrate to FreeBSD, where using cvsup to get updated sources makes package management easy.

    There's even a great utility called portupgrade which will do all this for you for installed ports (stuff not in the base system).
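
    For instance (the supfile path is the stock FreeBSD example file):

    cvsup -g -L 2 /usr/share/examples/cvsup/ports-supfile   # sync the ports tree
    portupgrade -a                                          # upgrade every installed port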
  • Ever since Debian started using apt-get and dpkg with deb packages, RPM should have been retired entirely. Programs like apt-get that calculate dependencies for you and install those dependencies BEFORE the actual program make life much, much easier.

    So why use RPM? BSD ports are good. Debian packages are good. Debian can even use alien and rpm to MAKE the RPM into a deb package.

    Go Debian!
    • Programs like apt-get that calculate dependencies for you and install those dependencies BEFORE the actual program make life much much easier

      So if you want that, why not just use apt-get...but with RPM? Or use any of the other front ends that people have designed to meet specific needs -- there are far more fancy and specialized front ends for RPM than there are for deb. Want an absolutely idiot-proof GUI RPM front end with dependency fetching? Use Red Carpet. Prefer to search for your RPMs from the command line with dependency checking and auto-installation? Use rpmfind. An ex-Debian sort? Use apt-get. Like running a daemon to automatically update RPMs? Try autoupdate.

      So why use RPM. BSD Ports are good. Debian packages are good. Debian can even use alien and rpm to MAKE the RPM into a deb package

      Why use ports or debs? You can use alien to make a deb into an RPM too (though you probably wouldn't want to, given that there are RPMs for just about everything out there, but not necessarily debs).
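
      For reference, both directions (package names invented):

      alien --to-rpm foo_1.0-1_all.deb      # .deb -> .rpm
      alien --to-deb foo-1.0-1.noarch.rpm   # .rpm -> .deb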
  • Nice article, and a good discussion to have. But why was no mention made of up2date - a program designed to alleviate many of the problems with the RPM package format the author mentions?

    The primary difficulty I've had with up2date is that you cannot upgrade between distribution versions - e.g. from RH 7.1 to 7.2. Do a dist-upgrade on Debian sometime - amazing!

    Debian woody (soon stable) finally provides a respectably up to date desktop environment. Take a look.
  • by isdnip ( 49656 ) on Sunday June 16, 2002 @10:00AM (#3710663)
    I fully appreciate the author's sympathies. I'm used to replacing RPM-based distros; just last night I burned a new Mandrake Cooker so I could try it. KDE 3.0.1 et al are just too hard to get right using RPM upgrades. But then he mentions gentoo...

    ...which I have also tried to install. Trouble is, gentoo has *no* installer past the kernel stage. I can't even get sound to work, because my mobo sound chip isn't in their ALSA tree. I'm sure there's a way to do it but they don't tell you. Gentoo users are typically, I suppose, the type of Unix experts who have no trouble figuring out which driver goes where. But gentoo lays things out differently from RedHat (etc.) so I can't just copy their /etc (etc.!) files.

    If gentoo had a decent installer, not necessarily as "friendly" as Mandrake (more flexibility is a plus) but which could guide all the files into the right places, then it might be a killer. For now, it's a cult for experts. But I don't see why a binary-based (or at least partially binary-based) distro couldn't use an apt-get or portage-like system when needed, without requiring gentoo's exceptional knowledge (well, that's what it feels like to the "n00b" whose recent Linux experience is mostly RH and Mdk) of the distro's layout.
      There's been quite a discussion on the installer issue in the Gentoo forums (the thread can be found here [gentoo.org]). The general consensus from the users seems to be that they like Gentoo being kind of a "niche" distro. If the idea of the source based distro really appeals to you, I would suggest giving it another go and leaning very heavily on the forums (if you need to). Gentoo's Forums [gentoo.org] have the most helpful and friendly user base I have ever seen on the internet. I have yet to see a single person give a n00b a hard time (outside of the occasional rtfm...). I realize that it's not for everyone and that it takes a little bit of work, but I think Gentoo is definitely worth it after the dust settles. It's nice to install an OS and feel like you actually accomplished something.

      Oh yeah, and I don't like RPMs.
    • Debian.

      Look: a basic install, though perhaps not quite as bare as gentoo's, is really bare. Really light.

      You end up, with a minimal configuration, with the bare minimum you need to boot, get a console, and install more packages over the network.

      Then it's a matter of adding packages as you see fit... which is entirely too easy.

      To get here, just skip the package selection and/or task selection (where you choose either individual packages or, in beginner mode, what kind of machine it's going to be: development, server, etc.)

      I do every debian machine for every reason this way. I love it especially because it leaves me with a light, clean system every time.

      One of the reasons behind Gentoo is probably one of the reasons why people used to love Slackware (heh, I guess some still do): you had to do things the old-fashioned way. Get source, compile, install where you saw fit. You had to actually learn how things work.

      I can say that, having had to mess with early, early versions of Linux, I learned more about how Unix works than from any other Unix I've used. Having to actually figure out, either by reading or by trial and error, where each file goes takes you a LONG way towards being able to work multi-platform.
  • Hints... (Score:2, Informative)

    by n1k0 ( 553546 )
    From the article...

    > On the other hand, have you noticed how hard it is to find Debian ISO images?

    http://www.linuxiso.org - I can't believe you've never heard of this place. They've had Debian ISOs since I first learned of them.

    I admit, debian.org's ISO download wizard is garbage, but I think they're trying to save bandwidth by having you download what you need instead of the entire ISO (there's no reason you need to install every package in the ISO).

    niko
  • depends (Score:2, Insightful)

    by diamondc ( 241058 )
    I use Debian Unstable at home because I always want the latest and greatest software and I already know how to fix 95% of the apt/deb problems that occasionally happen. At work, I use Debian Stable because I never want to touch the server after it's been configured and tweaked.

    The good thing about RedHat and Mandrake to some extent is that they do good testing on the RPMS on the cds. I figure they don't expect people to install some 3rd party RPM off the net.
  • The author mentions, "On the other hand, have you noticed how hard it is to find Debian ISO images?" Yes, Debian is very upgradable, but that has nothing to do with the perceived shortcomings of the RPM package format.

    The RPM format is nearly identical feature-for-feature with Debian's dpkg. RPM's upgradability has nothing to do with technical issues. There are three things that make Debian's package management so much better than RPM-based distributions.

    The first is, there are way more distributions based on RPM packages than debs. It's not surprising that some of them are more incompatible with each other than any Debian release has ever been. Sure, there are many more people with hairy backs in the US than there are in Liechtenstein; that doesn't mean that living in the US causes hair to grow on your back. He is inferring causality where it doesn't exist.

    Second, APT. APT is what makes debian's package management so smart, not dpkg. And, in fact, this isn't a reason at all. APT now works with RPM packages [tuxfamily.org], and when dependencies are properly configured, it is every bit as good as it is on debian. You can make an APT repository with RedHat's "rawhide" distribution and upgrade daily if you want. You won't have any more upgrade issues than you would running debian unstable. It may break occasionally, but it's when large changes happen. The exact same thing happens on the debian side.
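
    An apt-rpm repository entry looks like this (the server and path are hypothetical):

    # /etc/apt/sources.list on an apt-rpm system
    rpm http://mirror.example.com redhat/rawhide/i386 os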

    Third, Debian is fanatical about consistency. Most debian packagers manage maybe three or four packages (there are exceptions, of course). When you devote all of your free time to just a few things like that, a lot of attention is paid to details. This is what truly makes Debian's package management so freakin' clean. It has nothing to do with technology, it has everything to do with each maintainer hand-crafting dependencies and build options very carefully.

    The thing that pretty much every one of the RPM-based distributions is truly missing is the equivalent of the Debian package maintainer guidelines, and a culture that enforces them. If that existed, RedHat would be just as consistent and upgradable as debian.

    I use RedHat and I'm careful about what I put on my system, and I never run into upgrade issues. If I'm going to install something that is for a distribution other than mine, I build from .src.rpm's instead of binaries and I *know* it's compatible with my install. Someday, if packagers stop being idiots and using shortcuts, I won't have to. Everything will resolve properly in the huge worldwide-apt-rpm-uber-archive.

    • I agree.
      I installed Mandrake 8.2 a while ago; since then there have been a lot of 1.0 releases:
      OpenOffice,
      Mozilla,
      KDE (3.0.1),
      etc....
      But Mandrake's packages have some ridiculous deps. To install KDE 3.0.1 from their cooker (dev), it wanted me to update things like unixODBC and MySQL. I don't want MySQL, and call me stupid, but ODBC is a protocol!!! I don't think the latest unixODBC changes that, so why the hell have they put such non-granular packages together? If I had a release plan like that at work I'd probably be out of a job.
      The RPM tree locations in Mandrake used to be different from the package defaults, which meant I couldn't install Wine's RPM and know I wasn't going to screw up package management some time in the future.

      Dependencies of RPMs really need sorting out, and there should be no reason why I can't install a SuSE package on Red Hat (so long as they both follow the LSB!!)

      grrrrrrrrrrrrrrrr
    • by FreeUser ( 11483 ) on Sunday June 16, 2002 @11:55AM (#3710999)
      Well, not quite, but now that I've got your attention... :-)

      It isn't the packaging format, really ... most of the issues raised are inherent to binary-based distros, which with today's processors really should become a thing of the past.

      Source Mage [sourcemage.org] and Gentoo [gentoo.org][1] are two excellent source based distros that avoid these classes of problems altogether, and unlike RPM (or debs[2]) add no burden to the upstream software developer.

      Shawn Gordon of The Kompany touches on this when he says (from the article, you did read the article, right?)


      So rather than providing a myriad of different binary RPMs for the dozens of different Linux distributions, The Kompany, which is a commercial entity developing Linux applications, reluctantly decides to give away the source code to paying customers. [Emphasis added]


      Source based distros like Gentoo and Source Mage have packaging systems that automate the process of downloading, configuring, compiling, and installing all of the software on their systems from source (pedants will note there is the occasional binary package, e.g. NVidia drivers, but for the vast, vast majority of software my point holds). Indeed, this approach makes the packaging system itself less important (so long as it works properly) than the overall engineering and organization of the distro itself, and completely irrelevant to the software developer (as it should be).

      This has a couple of disadvantages, and a whole bunch of real advantages. So much so that almost no one who has used a source based distro will go back to a binary based distro once they've tried it, despite the cons (in fact, of the numerous people I know who've tried Source Mage and Gentoo, both very different from one another BTW, I know of not a single person who has gone back to their old binary favorite, be it Suse, Mandrake, Red Hat, or Debian).

        • CONS of source based distros

        • Initial install typically requires source to all of the system, which is generally downloaded from the net. I.e. in most cases requires a fat pipe for installation.
        • The installation is time consuming, due to the fact that each package must be compiled. For modern CPUs this isn't such a big deal (a day will suffice, most of which you can spend away from the computer while it chugs away), but for older CPUs like an AMD K6 233 I have, the initial install can literally take days.
        • PROS of source based distros

        • Updates and upgrades typically require much less bandwidth than their binary equivalents, as only the new package's source needs to be downloaded.
        • The software is compiled optimized for your hardware. Typically such systems run 20-30% faster than their binary equivalents, based on some casual benchmarking I and a few others have done.
        • The software is compiled against the exact library versions installed on your system, so no subtle incompatibilities arise due to slightly mis-matched binaries. This eliminates a whole class of bugs, and a whole host of problems that can affect stability and reliability.
        • In the case of Gentoo, you have very precise control over the configuration of your system, and what is installed vs. what is not, as well as where it is installed to.
        • In the case of Source Mage, the system is auto-healing, meaning that if and when a new library is installed and the older one removed, all packages that rely on that library are recompiled against the new library. This makes upgrades (on Source Mage) very easy.
        • Upgrades are very easy. In the case of Source Mage they are virtually automatic (you select the package to update and everything is taken care of for you); in the case of Gentoo they are less automatic and require some care, but are nevertheless easier than with any binary distribution I've ever tried (and I've used all the major ones at one time or another), and with Gentoo the flexibility of having multiple versions of libraries and even runtime apps is very useful (see the command sketch after this list).
        • Security is improved in one way: the ease and ability to keep up with security updates. Binary distros are still trying to get this to work smoothly (and mostly not succeeding, or requiring a tradeoff like Debian Stable, on which one must run two-year-old software to enjoy that level of security). This is really a side effect of the previous point, but is significant enough to deserve separate mention.
        • The ability to run current hardware. Again, this goes back to the ease and stability of upgrades inherent in source-based distros like Source Mage and Gentoo. Source Mage had X 4.2 out a day after its release, giving its users the advantages of all the new features and bug fixes it had to offer. Ditto for KDE 3. Gentoo had these packages out almost as quickly. This means users get the latest features, and the latest bug fixes, almost immediately, in contrast to binary distros that typically require 3-6 months (worse for some distros. I still recall a Debian developer's irate answer to a user's question on when they could expect X 4.2 support in the experimental version of Debian ("unstable"), to the effect of "leave me alone, it will be months!")
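
      The Gentoo upgrade flow referenced above is roughly (a sketch; exact flags vary by portage version):

      emerge --sync            # refresh the portage tree
      emerge --update world    # rebuild whatever has newer versions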

      There are numerous other advantages I could add here, but you get the idea.

      The entire article on the flaws of RPM might better be entitled "The Flaws of Binary-Based Distributions" which, in the age of Free Software and source code availability, coupled with today's fast processors, really ought to become a thing of the past. In fact, it wouldn't surprise me at all to see Debian, Suse, Mandrake, and Red Hat all embracing the notion of source-based distros sometime in the future ... as processors get even faster, the day-long install (on my dual 1 GHz P3), which has already shrunk to less than half a day on the dual 2GHz Athlon I have at work, will shrink even more, to a couple of hours or less.

      And the advantages in speed, stability, and ability to keep current with new software releases in a timely manner will only become more acute as time goes on.

      So while binary-based distros are by no means dead (despite my rather provocative headline), it is my opinion that the writing is certainly on the wall, and the observant person can already mark the shifting change in the wind.

      [1]There are other source-based distros as well, including Linux From Scratch and Lunar Penguin, and likely more.
      [2]Though in fairness the Debian developers take up most if not all of that burden
      • by MSG ( 12810 )
        Updates and upgrades typically require much less bandwidth than their binary equivelents, as only the new package's source needs to be downloaded.

        Source is almost never significantly smaller than binary, and often is bigger. Consider bash:
        -rw-r--r-- 26 root root 701486 Apr 16 21:14 /var/ftp/mnt/valhalla-i386-disc1/RedHat/RPMS/bash-2.05a-13.i386.rpm
        -rw-r--r-- 24 root root 2144412 Apr 16 21:14 /var/ftp/mnt/valhalla-SRPMS-disc1/SRPMS/bash-2.05a-13.src.rpm

        the system is auto-healing, meaning that if and when a new library is installed and the older one removed, all packages that rely on that library are recompiled against the new library.

        In the case of rpm or dpkg, the system protects itself from damage caused by replacing a package that others depend on. Attempting to do so will result in a list of all of the packages which additionally need to be updated to work with the new library. If you're using apt (for dpkg or rpm), attempting to update a library will fetch binary upgrades for the packages which need it. Source Mage doesn't have the advantage here.

        ...the day long install (on my dual 1 GHz P3), which has already shrunk to less than half a day...

        Day long? I don't know many with anywhere near the time for that. I can install a Red Hat Linux server on a clean box in all of two minutes from initial power up to fully functional reboot.
        • The FIRST machine takes about a day. After that you can build a stage3 ISO for your platform (the only one provided is i686, which will suit most people's needs anyway), which will give you a fully functional gentoo system. Then you can install all the packages you need as binaries built on other systems, as long as they have the same architecture or a subset (i.e., you can run i386 binaries on your i686 or whatever, but K6 binaries won't run on your i686, or vice versa).
      • by mattdm ( 1931 ) on Sunday June 16, 2002 @01:43PM (#3711393) Homepage
        Distros like Debian GNU/Linux and Red Hat Linux don't take a while to release (to take your example) the very latest XFree86 4.x because of some inherent slowness in putting together binary packages. It takes time because they test new releases before dumping them out there.

        I'm also skeptical about your casual benchmarks. On Red Hat Linux, for example, key system elements like the kernel and glibc *are* selected based on your particular CPU. Almost everything else is compiled with -march=i386 -mcpu=i686 -- that is, optimized for i686 but still able to run on older systems.
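
        Concretely (era-appropriate gcc flags; the file name is invented):

        gcc -march=i386 -mcpu=i686 -O2 -c foo.c   # i386 instruction set, scheduled for i686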
      • I tried gentoo also, wanted the speed, but 1.3a has gnome2 and gcc3.1, which I want (AA fonts!). Problem: compile errors. Gentoo has been pretty quick on fixing the errors, but you're stuck till they fix it. Another down point: it takes some pretty heavy bandwidth to do an initial install. Same problems with ports on FreeBSD: some of the ports are broken, so you're stuck, and need a little more bandwidth.

        I've been using Mandrake, and then I grab the cooker rpms and try to figure out which ones to install for gnome2/gcc3.1. So far it's been OK, but not an easy task. Mandrake's urpm tools help on which rpms have which files, but rpm should include these. I'll nail down my list of concerns; some are common.

        1. rpms should list dep rpms (this is my major bitch).
        2. rpm installs should use the opt directory for major programs. /usr/bin and /usr/X11R6/bin is not a dumping ground.
        3. take bandwidth into consideration; don't make users download every locale, doc and extra theme. Allow a "lite" version to work. 85% of the public still use modems.
        4. allow updates for cooker/beta rpms. You should be able to install gnome2 on a gnome1 system and have both work.
        5. update rpm databases with cooker rpms.
        6. ease of use: how many commands do you really use in rpm? Add/remove with verbose, list files in rpm, list deps. Why would you use grep to search an rpm? rpm -qa | grep blah; rpm should include pattern matching.
        7. the rpm database should include locations on the net for cooker, beta, etc. rpms. An updated database. Microsoft updates work flawlessly with mirrors; we need this on Linux distros. Some kind of rpm -install-net bob.i386.rpm.
        8. graphical/ncurses gui. Ya ya, command line is king, but can't we pretty up the install process a little? Even if it's a wrapper to rpm, it would go a long way to make things easier.
        9. -ivvh options: when installing it's nice to see which files are installed, and a progress meter, but can we meet somewhere in the middle? Just list the files installed, and give a percentage number in front of the line. I hate scrolling back 40 pages on an install just to see the 10 lines of data I needed.
        10. group rpms in directories, even if it's only 10 directories like Slack, or use gentoo sources; anything that has 20-plus rpms should have its own directory. I like how SuSE uses symlinks on its filenames; you could do the exact same thing with 1 directory of symlinks pointing to the subdirectories with the groups. Easy for a release.

        Anyone else notice the rpms that are current on rpmfind.net are from PLD (the Polish Linux Distribution)? Why are the Poles doing a better job than the other distros? Mandrake comes 2nd on updated packages.

        rpm: it's not a program, it's an adventure.

        -
        CS under linux, does your cheat scanner block linux? YES.

        • by AME ( 49105 )
          rpm -qa | grep blah, rpm should include pattern matching.

          Why, exactly? I think you've profoundly missed the point of the command line if you think that every utility that might benefit from pattern matching should reimplement it.

          If you really do that a lot and despise the extra typing, try something like this:

          alias findrpm="rpm -qa | grep -i"

        • On use of /opt (Score:3, Insightful)

          by Nailer ( 69468 )
          rpm installs should use the opt directory for major programs. /usr/bin and /usr/X11R6/bin is not a dumping ground.

          Er, yes they are. Unix has sorted files by their type, rather than what application they belong to, for a very long time. This allows, for example:
          • All your applications to be in path
          • Short ldconfig paths
          • Someone to back up, say, only /var and /etc and get everything they need to restore your system (because the binaries are reloaded from media)
          • And a great deal more.


          If you want to address the files by what application they belong to, that's what a package manager is for. No distribution's packages can use /opt, doing so is forbidden by the FHS.
          • Re:On use of /opt (Score:3, Insightful)

            by BrookHarty ( 9119 )
            The FHS says to use /opt for add-on software, so use it for that purpose. Most install packages can alter your ldconfig and add their path; check KDE and GNOME, they do. I don't know what the "great deal more" is, but KDE, GNOME, OpenOffice, and KOffice are too big (imho) to be put in the same damn dir of /usr/X11R6/bin.

            Mozilla is in /usr/lib/mozilla; /opt/mozilla would make more sense. Luckily it's an easy fix, and it's no harder to use.

            It's pretty easy to modify your profile path and ldconfig, and to make a simple standard, like: anything that isn't part of the Unix OS goes into /opt.

            Using 1 directory for all your bins is a cluster fuck.
            • Re:On use of /opt (Score:3, Insightful)

              by Nailer ( 69468 )
              FHS says to use /opt for add-in software

              Cool. All you have to do is define what "add-on application software" means... good luck.

              The FHS also says: "The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for local system administrator use... distributions may install software in /opt, but must not modify or delete software installed by the local system administrator without the assent of the local system administrator." I.e., distributions should avoid /opt and leave it to the local system administrator.

              Mozilla should put its bins in /usr/bin, its libs in /usr/lib, and everything else in /usr/share.

              Using 1 directory for all your bins is Unix
              • Re:On use of /opt (Score:3, Insightful)

                by BrookHarty ( 9119 )
                Guess you haven't checked out Mozilla lately: its entire directory is /usr/lib/mozilla, and the program mozilla in /usr/bin/mozilla is a shell script that loads /usr/lib/mozilla-bin. You can download the Mozilla tar, extract it in /usr/lib/, and you don't have to change anything.
                Why couldn't we do the same thing in /opt, the way some packages use /usr/local (like KDE, and many other apps)?

                BTW, when you're using a Linux box as a workstation, i.e., you are the local administrator, you should be able to do what you want. Using 1 directory for all your bins is not Unix; this is why you have /opt, /usr/local/bin, /usr/local/sbin, /var/adm... the list goes on.

                The point is, if you use single directories for applications, you can back the directory up and install new software without touching system files. Why would anyone sane put system files in with application files? Microsoft has "Program Files"; Unix should use /opt or /usr/local.

                Using the phrase "it's the normal way" is just rehashing the same mistakes. Improve...
    • I have to say I have never had any problem with Mandrake 8.2 and urpmi as long as I use package repositories made for 8.2 (= the 8.2 RPMS + 8.2 RPMS contribs + third-party packages made for 8.2). So in this way it's really the same as Debian. Of course, sometimes when I try to install a development package from Cooker it will lead to issues, but as a normal user I'm not supposed to do that. And it's really very clean, see:
      1- Install a single package:

      # urpmi koffice
      % Total % Received % Xferd Average Speed Time Curr.
      Dload Upload Total Current Left Speed
      100 8461k 100 8461k 0 0 61958 0 0:02:19 0:02:19 0:00:00 63912
      installing /var/cache/urpmi/rpms/koffice-1.1.1-14mdk.i586.rpm
      Preparing...
      koffice
      /* note: status bar removed because of the Slashdot junk chars filter */
      [root@europe ]#

      2- Install a package with dependencies :

      # urpmi php
      To satisfy the dependencies, the following packages will be installed (1 MB):
      php-common-4.1.2-1mdk.i586 php-4.1.2-1mdk.i586
      Is this correct? (Y/n) y
      % Total % Received % Xferd Average Speed Time Curr.
      Dload Upload Total Current Left Speed
      100 481k 100 481k 0 0 56191 0 0:00:08 0:00:08 0:00:00 63824
      100 23587 100 23587 0 0 23469 0 0:00:01 0:00:01 0:00:00 59532
      installing /var/cache/urpmi/rpms/php-common-4.1.2-1mdk.i586.rpm
      /var/cache/urpmi/rpms/php-4.1.2-1mdk.i586.rpm
      Preparing...
      php-common
      php
      [root@europe ]#

      3- Uninstall a package with dependencies:

      # urpme php-common
      To satisfy the dependencies, the following packages will be removed (1 MB):
      php-common-4.1.2-1mdk php-4.1.2-1mdk
      Is this correct? (Y/n) y
      [root@europe ]#
    • Actually, there is a limitation of .rpm that hinders the APT4RPM functionality: file dependencies. .rpm archives can depend on specific files, while .debs depend on specific packages. This can be worked around, essentially by creating a list that maps files-that-are-depended-upon to the packages containing them.
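
      You can see the difference directly (the package name is a placeholder):

      rpm -qR somepackage    # the requires list mixes package names with file paths like /bin/sh
      dpkg -s somepackage    # the Depends: field names only packages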

      But yes, there is at least one technical superiority of the .deb file format. I have never heard any argument that .rpms have a technical superiority to .debs, so I have to wonder: why don't RPM-based distros switch to deb? They could just adopt the .deb file format as RPM 5, make the tools speak deb, and stop worrying about it. They'd serve their users better and reduce duplication of effort.

      Or perhaps users should take it into their own hands. Using tools like 'alien', it might be possible to take the apt4rpm approach one step further-- create an unofficial 'Redhat .deb' distribution-- the same packages as Red Hat, but in a different package format.

  • Okay seriously. (Score:2, Flamebait)

    by mindstrm ( 20013 )
    It's about a distribution in general, not the tool.

    Roothat is a bloated pig.

    Debian is a lean mean fighting machine.

  • by Anonymous Coward
    The first time I installed an RPM which was not included on the CDs, I was wondering badly what happened.

    Where was the program?
    It was not in the K menu.
    It was not on the desktop.
    It was lost.

    So I thought something went wrong and installed the RPM once more but it claimed the rpm was already installed.

    I eventually realized it was indeed installed, but the installer:
    - did not ask me whether to put an entry in the menu
    - decided for itself where the files were supposed to be

    Never mind that I wanted the files to be in my home directory.
    Never mind that I had no clue what the primary program name was.

    There were dozens and dozens of other files in the RPM, mind you. It is not easy to determine what is a binary and what is not when you have just installed Linux for the first time.

    And this does not even touch the subject of dependency hell. I have wanted to install several programs only to give up because of a huge number of dependencies.
  • Many of you will have remembered that the RPM Package Manager went from 3.x to 4.x without backward compatibility and upgrading it was an arduous task, to put it mildly.
    Aw, it ain't that bad. I found myself changing version numbers in RPMs with a hex editor.

    Strangely, it actually worked just fine.

    Sometimes I think they break backward compatibility just for the heck of it.
  • Well, DUH! :{D (Score:2, Informative)

    by Gryffin ( 86893 )

    While I agree with the thrust of the article, it would be much more persuasive with a little more meat behind it.

    "There is at least one distribution (ESware) that has moved from RPMs do DEBs, but I don't know of any movement in the opposite direction."

    A little research into just how many distros have migrated one way or the other in, say, the last five years would be instructive.

    "Similarly, there are many users who have moved from RPMs to DEBs, but very few who have chosen the opposite path."

    This statement is pivotal to the article, but is completely unsupported by any hard numbers, and comes off overly broad. (Surely there must have been SOME attempts to determine market share of the major distros?) Maybe you don't know anyone who's gone the other way, but I'm sure it happens.

    That said, there's a lotta truth in this article. After a couple years of struggling with RPM Hell on Red Hat and later Mandrake & Yellow Dog, I've recently decided to switch over to FreeBSD (ports, yum!) on my server and Debian on the workstation.

    Oh, as an aside, there's an implementation of deb/apt for Mac OS X and Darwin, called fink [sourceforge.net]. Fink supports both binaries with apt-get/dselect, and source installs with their own ports-like tool. I know a number of people who run traditional Linux/Unix progs, including X Windows, The Gimp, KDE, etc., side-by-side with their regular Mac apps. Oh so very cool.

  • I've long preferred Slackware for its approach, which basically looks like BSD in both package management and operations interface.

    For Linuxes, the various source-based approaches are becoming more popular and solid. In addition to Gentoo, there are Sorcerer Linux and two working forks (Lunar and Source Mage); see summary [sorcererlinux.org].

    As pointed out in a comment above this one, when RPM snafus (often) you can usually build from source with minimal effort. Unfortunately that's not true for RPM itself, which I have found to be a major PITA to build from sources, as are things like Gnome / E17.

    Vendor unixes (Solaris, AIX, HPUX) put a lot of effort into correctly managing dependency checking. Part of their solution, however, is building their own versions of sources and staying as much as 2-3 years behind the current releases of any given package.

    RPM is a far cry from the vendor unix approaches, part of which I'm sure is that it's trying to do a much harder task on a less well defined base platform (random hardware).

    Try building RPM from Red Hat's sources sometime -- use the force, you're gonna need it! That alone suggests to me that this is not in 'reality' an open-source project. A GPL license on software that doesn't build with './configure; make' doesn't seem like an effective OSS project to me.

  • by reflective recursion ( 462464 ) on Sunday June 16, 2002 @10:22AM (#3710712)
    in itself. The problem is not using the hierarchical file system in a coherent way. In addition to that problem, we have way too many files nowadays. When package contents mix with one another... well, I'm sure you've had Chem. 101.

    This article wants solutions, so here is mine:
    Make packages a separate directory. Just like the good old DOS days -- where every program lived by itself in a directory. _All_ package contents go in this special directory. Then you have the problem of per-user configuration. This is incredibly simple. Have a directory in each user's directory which _mirrors_ the package directory. Each package directory should be unique (i.e. MyProgram v1.0 lives in a different directory than MyProgram v1.1). Dependencies would be much easier this way since you would only depend on a _directory_ existing. Moving packages would simply be a matter of packing up the directory and taking it wherever.

    In any case, software is _package_ based. Why do we still throw library files from different packages together in the same directory?! When you want to remove a package you have to rely on broken package managers, or hunt down every file which goes with a package. We should be able to completely remove software by simply removing a directory. I've heard MacOS does this, why can't Linux?
    • by Fluffy the Cat ( 29157 ) on Sunday June 16, 2002 @10:28AM (#3710737) Homepage
      Why do we still throw library files from different packages together in the same directory?!

      Mostly because that's the point of libraries. Libraries allow code to be reused between applications - sticking them in application specific locations makes it somewhat harder for application A to use library B.
      • by reflective recursion ( 462464 ) on Sunday June 16, 2002 @10:42AM (#3710784)
        I don't know if you've noticed lately, but libraries _are_ packages today. GTK+ for example. Qt, ncurses, etc. And if a package creates a _new_ library, then not many people are going to depend on it. And if they _do_ depend on it, they might as well depend on the entire package being there--since the library is a _part_ of the package.

        The idea of sharing arbitrary library code is a failed experiment. If I create MyProgram and then I create MyProgramLib.. not many people will ever use the library. The only case they _will_ use that library is if I _package_ it separately, and make it a coherent entity itself -- with documentation. This is why, IMO, going package-only and dropping the various */lib directories can only be a Good Thing. And this is how Red Hat, etc. do it today. They create dependencies between _packages_. If I create an app in RPM format that needs, say libgimp, then my package will depend on the _entire_ gimp package being installed. Not just libgimp. Why not just handle packages naturally?

        I'd also like to point out the benefits of doing this:

        - Package corruption will be detected immediately. When something depends on a package and a file is missing or corrupt then the package can be determined corrupt.

        - Dependencies handled naturally. When a program complains that a file doesn't exist, I can pinpoint _exactly_ which package the file is in and can simply reinstall the package. No need to hunt down which file belongs to which package.
    • by cgleba ( 521624 ) on Sunday June 16, 2002 @12:32PM (#3711131)
      "The problem is not using the hierarchal file system in a coherent way."

      I hear this argument every time package management is discussed on Slashdot, and every time I bite my tongue.

      The current system of /bin,/lib,/etc, etc. has many many advantages over the "good old DOS days" -- ESPECIALLY when you start mixing in NFS and automount. Some examples:

      * all SHARED libraries are in the same place. That way the dynamic linker does not need to do a ridiculous path search to find a library

      * all binaries are in 3-4 places -- that way you don't need a massive PATH variable like in 'the good old DOS days'

      * because the files are sorted by type, you can do all types of neat things. Let's say for instance that you have Solaris SPARC, Tru64 Alpha and x86 Linux boxes all sharing a single NFS server. Now the /etc directory is architecture- and OS-independent, so you can share the same directory across all three. The /var directory is architecture-independent, but depending on your set-up it will probably be OS-dependent. Thus you can discern the differences between the OSs yourself and set up an automount variable to mount the proper version per OS. The /lib and /bin directories are both OS- and architecture-dependent. In that case you must set automount variables for OS and arch and mount different dirs for each.

      Let's say that you install emacs network-wide. You share the same config across all your NFS clients and just make different /bin and /lib for each. You need to change some default configs for all the clients? Voila, just edit one config file! Could you share one program across multiple machines, architectures and OSes in the 'good old DOS days'? Could you immediately upgrade 65 workstations to the newest version of a program without reboots and only use 1/65th the space (aka one copy) in the 'good old DOS days'? (See the automounter sketch after this list.)

      * Because the files are sorted BY TYPE you can do all types of neat optimization and security things. You can mount /usr ro. You can optimize your RAID array for fast reads and writes in the /var mount while optimizing /usr, *lib and *bin for fast reads, etc.
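
      The automounter sketch promised above -- a Sun-style map for the network-wide emacs example (server name and paths hypothetical; both Solaris automountd and Linux autofs expand variables like $OSNAME and $ARCH):

        # auto_master excerpt: mount point -> map name
        /usr/local/emacs   auto_emacs

        # auto_emacs map: arch/OS-dependent pieces fan out, config is shared
        bin   server:/export/$OSNAME-$ARCH/emacs/bin
        lib   server:/export/$OSNAME-$ARCH/emacs/lib
        etc   server:/export/common/emacs/etc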

      'the good old DOS' system was good for what it was used for -- a small system for one user with a few programs that didn't need any optimization. The hierarchical system is a lot better for what it is used for here: a multi-user, multi-tasking, shared-library, networked OS with hundreds of programs.

      Now if you hate the hierarchical system that much, you can do what SCO OpenServer does -- install all the files into each 'program directory' and then make symbolic links into the hierarchical system. It would be VERY easy to do -- just write a script to query your RPM database for what files are in each package, move all the files for that package into its own directory, and then make a symbolic link for each file moved back to the hierarchical system.
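
      A back-of-the-envelope version of that script (package name and target directory hypothetical; assumes rpm -ql prints plain file paths):

        PKG=gimp
        DEST=/opt/pkgroot/$PKG
        rpm -ql "$PKG" | while read f; do
            [ -f "$f" ] || continue               # files only; leave directories alone
            mkdir -p "$DEST$(dirname "$f")"       # mirror the original path under $DEST
            mv "$f" "$DEST$f"                     # move the real file...
            ln -s "$DEST$f" "$f"                  # ...and leave a symlink behind
        done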

      SCO liked the 'good old DOS days' also. The problem with OpenServer and all those symbolic links, though, is that resolving the symbolic links -- by the dynamic linker, the shell, the programs, etc. -- was actually pretty expensive and gave a decent hit to filesystem performance. Furthermore it made NFS-mounted trees hell, and you could not do all the neat optimization and security stuff that I mentioned above.

      In summary, the hierarchical system is by far easier to manage for performance, security and centralization. It is tougher to manage for adding / removing programs. The former highly outweighs the latter, especially since you have package databases to help tell you where all the files are. Learn to use your package management system.

      The bulk of this article and thread seem to be once again people bitching about RPM dependency hell. The solution to that is to download the source rpm, do an rpm --rebuild [source RPM], then an rpm -i /usr/src/RPM/RPMS/i686/[name of RPM built]. That solves 96% of all your problems and still maintains your RPM database. ./configure; make works too, but it throws you back into the chaotic world of no package management and thus completely defeats the purpose of RPMs.

      Have a nice day!
  • Automaticness (Score:3, Interesting)

    by Apreche ( 239272 ) on Sunday June 16, 2002 @10:23AM (#3710715) Homepage Journal
    What we need is to get rid of the entire packaging system altogether. I know I'll probably get toasted for this. But software should install in Linux the same way it installs in Windows. There should be one file, like setup.exe. I should take that file, execute it, it will ask me what parts of the software I want, and where I want to put it, etc. From my experience there are two pieces of software for Linux that do this: the Tribes 2 server and Mozilla.
    The entire packaging system is just a pain in the butt. This depends on that depends on this. urpmi, rpm -i, rpm -U, things not working with no explanation. In Windows I never have to worry about one thing relying on another thing. Because just about everything uses DirectX. And DirectX COMES WITH anything that uses it. And it has a simple graphical installation.
    There should be one downloadable file for each piece of software I want. It should install on its own, on any Linux machine, easily and graphically. And all of my library packages, like glibc, should transparently update themselves to the newest versions all the time. I don't want to have to worry about that stuff. Drivers in Linux are incredibly difficult to install. They should become a simple right click, install driver. Done. I want all that other crap taken care of for me. I don't have time to change paths in config files, tinker with code, look up crazy commands and recompile crap.
    I feel the package system is the real place in which Linux fails. Most distros, let's use Mandrake as an example, have easy graphical installations. But when you get to the package selection phase you're stuck forever weeding through thousands and thousands of checkboxes. Not cool.
    One piece of software should be one checkbox. KDE alone has like 20+ rpm files. There should be one file. KDE3setup.exe.
    You know that InstallShield that almost every piece of Windows software has? Maybe someone could code that for Linux. I would, but I have no idea how to do something like that. But I know someone reading this does. And if you want to save your open source OS, I suggest you do.
    • Re:Automaticness (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Sunday June 16, 2002 @11:14AM (#3710875) Journal
      But software should install in linux the same way that it installs in windows. There should be one file, like setup.exe

      No. I'm going to have to strongly disagree here.

      Your complaint is valid and reasonable, and unfortunately ignored by the Linux community, but your solution is not.

      Your complaint is that you want a simple *user interface* to install large pieces of software. This is reasonable. There is no front end that I know of in wide use that chooses a reasonable subset of packages in a large software package (like GNOME) to install. You have to select all the packages in the system, one by one. That's fair, and should be addressed.

      Your solution, however, is not a good idea. The Windows method of having a single installer puts you at the mercy of sitting down at the machine and actually clicking and installing stuff. It puts you at the mercy of what features are supported in the installer, how old the installer is, etc. The Linux (well, RPM/deb) approach is to separate the program from the packages. Upgrade the program, you get more features/bugfixes. You can install every piece of software remotely (ever watch Windows administrators run around installing a new piece of software on a network? What could be done securely on Linux with a bit of setup with sshd and public keys and then a single command to install software on every machine involves the IT people running around to each machine and clicking "OK" and watching progress bars). If you bundled all these packages into one large package, and someone doesn't *need* all of them, they need to download extra data. If you need to install, say, GNOME on every machine on the network, you only have to transfer the few RPMs *your* users actually require.
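
      (For instance, something along these lines -- host list, server and package name all hypothetical; rpm happily installs straight from an http URL:

        for h in $(cat hosts.txt); do
            ssh root@$h "rpm -Uvh http://install-server/RPMS/foo-1.0-1.i386.rpm"
        done

      One loop from your desk instead of a tour of the building.)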

      The solution is a fix to the front end, not to the architecture of the system. We need a single checkbox that installs a good default subset of packages for a large software package. The "GNOME" checkbox would install gnome-core, gnome-libs, etc, but probably not glade. The Linux rpm installation architecture is superior to the Windows installshield architecture -- now let's make the user experience as nice for novices.
    • No No No No No (Score:3, Insightful)

      by unformed ( 225214 )
      Packages are way better than Windows setup.exe.

      1) Consistency: everything is installed the same way -- select what you want and hit install. (I use Mandrake, and rpmdrake makes it extremely easy to install packages.)

      2) Non-bloatedness: I'd much rather have 20+ packages for KDE than 1 package. Yes, it'll take me a long time to go through them, but I select what I want, not what the developer thinks I want.

      One really cool part about Linux is that I can change --anything--. I don't have to have a graphical interface if I don't want, in which case I don't need to install it. If I plan on using Gnome as my window manager, but want to run koffice, I only need to install the kde-libs package, and don't need all of the kde binaries..

      When a small part of a large project changes, I only need to update that small part, instead of redownloading the whole package. Imagine having to download all of KDE to update a tiny KDE app.

      Uninstallation is also simple, select the box, hit remove, and there's -no other prompts-.

      BTW, there is an InstallShield for Linux: it's any kind of RPM/DEB installer (RPMDrake, apt-get, alien, etc.), and it's a hell of a lot nicer and more consistent than any similar idea on Windows
    • Re:Automaticness (Score:5, Informative)

      by Mongoose ( 8480 ) on Sunday June 16, 2002 @12:24PM (#3711096) Homepage
      You must be new to UNIX-like systems. You see, there is a reason we don't have 50MB executables from all the static linking and DLL hell [desaware.com]. We use shared objects between all apps to save disk space, development time, and main memory. I see you complaining about rpms, so maybe you should try a distro like Debian GNU/Linux [debian.org], and expand your horizons.

      For example, if we did shar archives (what you want with your 'setup.exe'), then you'd have to install all of KDE just to get the Qt libs. You'd have to install all of GNOME to get gtk+. You see why that's a piss-poor way to do things just from a packaging standpoint, even if you don't understand the technical aspects? Also, versioning would be impossible to support. Versioning is allowing multiple libs to stay on the system without conflicting, so apps can use various versions as they choose. To support versioning you'd have to have N number of KDE installs.

      I don't see how that post got modded up, when it's so misinformed... oh, this is Slashdot.
  • by Arethan ( 223197 ) on Sunday June 16, 2002 @10:24AM (#3710720) Journal
    RPM by itself isn't the real problem here. The author is complaining that installing applications in Linux is a pain in the ass, because the system often doesn't have all of the required libs installed.

    I admit, RPM doesn't make this an easy problem to solve. Any normal Windows app would simply package the required libraries with it. Thus if the lib doesn't exist, it can install it. But RPM doesn't work that way. RPMs can only hold one logical unit. So one app, or one library, or one set of platform-independent support files. RPM builders could include more, but doing so will likely break the RPM dependency tree.

    The real problem in all of this is the distinction between applications and the system itself. Is grep part of the OS, or is it an add-on app? How do you tell? Most would argue that grep is a part of the OS, but you can easily install Linux without grep, so it must not be essential. But if packages expect it to be there, then it must be essential. But if it's not part of the OS, then they shouldn't have expected it to be there in the first place, so now it is their fault for not thinking ahead... This problem just goes in circles all day. The worst part about this is that my use of grep is just an example. This problem applies to literally all packages outside of the kernel itself. Don't believe me? How about init? Do you think that init is essential? I agree, but what version? Do you want a SysV init, or a BSD-style init? Technically you can have either.

    To solve this whole problem, we really need to take two steps. First we need to define a base Linux system. And I don't mean a completely solid, unwavering definition either. Standards that never evolve are quickly dubbed 'legacy'. The trick is to define a complete base install. Everything from the kernel, to the version of GCC (and no, Red Hat, gcc 2.96 isn't going to cut it), to what version of X is installed, to what "expected unix utilities" are available, and what libraries are available. Feel free to change the standard, but each time you do so you must raise the bar somehow, either by making it more reliable, or faster, or adding features, or some combination of the above. There is only one last key item to making this system work. You must retain backwards binary compatibility for long periods of time. Feel free to completely break legacy systems, but make sure that you only do so after you've had at least 5 to 6 years of stability.

    Then there is the second step. RPM is a nice system management system, but it is a shitty application packager, mostly because of the dependency issues and the fact that each RPM package can only hold one logical unit. We really need an InstallShield-like system for applications (both GUI and console installs in the same package). Feel free to keep track of what is installed, and what files belong to whom, but you really need to separate the system from the applications. Once you have a base defined, keeping the system and apps under the same packaging system no longer makes sense. The absolute need for it is removed.
  • by wowbagger ( 69688 ) on Sunday June 16, 2002 @10:25AM (#3710727) Homepage Journal
    The problem with ANY packaging system is overzealous dependency definitions.

    When Maynard builds his SuperFlyFloobyDust.rpm file, rather than specifying the dependencies as "I need libPease.so", he accepts the default "I need libPease.1.4.2.thursday.5-31-41.1-pl3-build6.so". So, even though any libPease.so would work, you get a dependency failure.

    This is a failing not of any specific package manager - ALL package managers have this problem. You don't see it with .debs not because of any inherent superiority of .deb, but rather because of the hard work of the Debian maintainers to make sure the packages are all set up correctly!

    Additionally, there is the problem of library makers not following the fscking standards - libNarf.so.1.1 is SUPPOSED to be fully compatible with libNarf.so.1.0 - if it isn't, then it should be libNarf.so.2.0! However, you get people making libraries that don't follow this rule, so as a result you have to have libNarf.so.1.[0-99] in your system because of programs that depend upon their version of that library.
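
    (For reference, the soname is something the library maker sets explicitly at link time; a sketch with the hypothetical libNarf:

      gcc -shared -Wl,-soname,libNarf.so.1 -o libNarf.so.1.1 narf.o
      ln -s libNarf.so.1.1 libNarf.so.1    # the runtime linker follows this
      ln -s libNarf.so.1 libNarf.so        # compile-time -lNarf follows this

    Nothing in the toolchain forces a bump to libNarf.so.2 when compatibility breaks -- that discipline is entirely on the library maker, which is exactly the problem.)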

    The solution to this CANNOT reside within the package manager - it resides in the distribution maintainer to refuse to deal with packages that break the rules.

    However, all it takes is one person installing one program that breaks the rules, and that installation is screwed.

    That is where distros like Debian and the *BSD's have the advantage - they are controlled by folks who won't let that happen. However, how many people install from the unstable branches, and why? Because that's where the latest, greatest, shiniest stuff is!
    • When Maynard builds his SuperFlyFloobyDust.rpm file, rather than specifying the dependancies as "I need libPease.so", he accepts the default "I need libPease.1.4.2.thursday.5-31-41.1-pl3-build6.so". So, even though any libPease.so would work, you get a dependancy failure.

      This is a failing not of any specific package manager - ALL package managers have this problem. You don't see it with .debs not because of any inherent superiority of .deb, but rather because of the hard work of the Debian maintainers to make sure the packages are all set up correctly!

      Actually .deb does not allow file dependencies -- only package dependencies are allowed. So if a package needs "libPease.so.1", it will Depends: libpease1, not on the actual library file.

      File dependencies make RPM-based systems so much harder to maintain that, in fact, the LSB forbids them.

      • You misunderstood my statement - the same problem can happen with a package dependency.

        Suppose that Maynard has package libPease.1.4.2.thursday.5-31-41.1-pl3-build6 installed, which is supposed to be back-compatible to package libPease1. When he builds his .debs, he mistakenly builds it with a dependence upon libPease.1.4.2.thursday.5-31-41.1-pl3-build6, rather than libPease1.

        Same problem. Only if your packaging system does not allow subversions of a package can you avoid this problem. And if your packaging system does not allow subversions, then if I really do need a feature of libPease.1.4 or later I am screwed - I cannot spell that out in the packaging system, so somebody will install my package when they only have libPease.1.0. Then I have to tell them at runtime that they don't have the correct package.

        A partial solution would be for every package to supply a list of packages it is backward compatible with, and for the installer to check that list when installing a package. Then, when you install SuperFlyFloobyDust, the install can say "OK, libPease.1.5, can you take the place of libPease.1.4.2.thursday.5-31-41.1-pl3-build6?", and libPease.1.5 would have to be able to answer that question.

        This is not a full solution, however - libPease.1.4.2.thursday.5-31-41.1-pl3-build6 could be some bastard version that has a functionality that was not incorporated into libPease.1.5, and libPease.1.5 might not ever have HEARD of it.

        Additionally, SuperFlyFloobyDust might NOT really NEED the functions of the bastard version, and so even if libPease.1.5 could correctly state "No, I am not a total replacement for that bastard version", SuperFlyFloobyDust could actually run on libPease.1.5, but due to being packaged by an incompetent boob, the program won't install.

        File or package dependencies are the same. Unless you forbid sub-versions, you have this problem. And if you forbid sub-versions, you introduce other problems.
        • (Perhaps I should point this out earlier: I'm a Debian Developer, so consider myself biased.)

          Yes, .deb alone can't solve this problem; but in cases like these the Debian Policy has some guidelines.

          Suppose that Maynard has package libPease.1.4.2.thursday.5-31-41.1-pl3-build6 installed, which is supposed to be back-compatible to package libPease1. When he builds his .debs, he mistakenly builds it with a dependence upon libPease.1.4.2.thursday.5-31-41.1-pl3-build6, rather than libPease1.

          In this case, "libPease.so.1.4.2.thursday.5-31-41.1-pl3-build6" should be in the package "libpease1", version "1.4.2.thursday.5-31-41.1-pl3-build6". Other packages always do a Depends: libpease1.

          The reason that the major soname is in the package name itself is because, binary API changes are supposed to happen when the major soname changes. This way, there might be a "libPease.so.1.xxx" and a "libPease.so.2.xxx" that are binary incompatible but can coexist together on a system; and so there will be "libpease1" and "libpease2" packages that can be installed together; but "libpease1" version 1.5 will replace "libpease1" version 1.4.2 during upgrade, because upstream says they're binary compatible.

          Same problem. Only if your packaging system does not allow subversions of a package can you avoid this problem. And if your packaging system does not allow subversions, then if I really do need a feature of libPease.1.4 or later I am screwed - I cannot spell that out in the packaging system, so somebody will install my package when they only have libPease.1.0. Then I have to tell them at runtime they don't have the correct package.

          As long as the binary API remains backwards compatible, then the "libpease1" package can be upgraded to 1.4, and packages that require 1.4 features can Depends: libpease1 (>= 1.4). If libPease.so.1.4 is not binary compatible with libPease.so.1.0, then it really should be called libPease.so.2.0. If it isn't, then upstream has stuffed up, so nag upstream about it (I've done it before).
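
          In control-file terms, the policy looks something like this (stanzas abbreviated, names carried over from the hypothetical example):

            Package: libpease1
            Version: 1.4.2-1

            Package: superflyfloobydust
            Depends: libpease1 (>= 1.4)

          The soname lives in the library's package name, and the version floor says "I need a 1.4 feature" without pinning one exact build.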

          Additionally, SuperFlyFloobyDust might NOT really NEED the functions of the bastard version, and so even if libPease.1.5 could correctly state "No, I am not a total replacement for that bastard version", SuperFlyFloobyDust could actually run on libPease.1.5, but due to being packaged by an incompetent boob, the program won't install.

          This is the problem of the person who did the package, yes? She should test the package before releasing it to the world, just like any other software, whether in source code or binary form (especially in binary form).

          No system can guard against incompetent packagers. But with RPM's file dependencies, it's much, much easier to make a mess.

          • by wowbagger ( 69688 ) on Sunday June 16, 2002 @12:22PM (#3711083) Homepage Journal
            And this is my point exactly - the problem is people screwing up and making binary-incompatible package versions - asserting that libPease1 version 1.5 is a full and complete replacement for libPease1 version 1.4 when in fact it isn't. And no package manager software can fix that - it is incompetence on the part of the package creator.

            That is the point I keep trying to make - .debs are superior to .rpms because of the work of Debian maintainers who bash morons who cannot understand that the same major version MUST MEAN binary compatible.

            Unless we start vetting packages for compliance with that simple idea, no package manager created can solve the problem.

            Now, I will agree RPM has its warts - I agree that being able to depend upon a FILE, rather than a PACKAGE, is a weakness. However, until we get an agreed-upon standard that a given file will ALWAYS be supplied by a given PACKAGE, this is an unavoidable problem - I've seen the same file supplied by completely different packages (e.g. the RPMs supplied by AbiWord and the abiword package supplied by Ximian).

            How would you create a .deb if you needed /usr/bin/foo and it could be supplied by Foo.deb or FooAndNarf.deb? Which package would you tie to?

            Again, the solution here lies not within the package system, but rather the package creators - until you get a means to guarantee that package maintainers are "following the rules" you will have these problems, be you Debian, RedHat, or Microsoft.

            Perhaps what we need is for a consortium of distro vendors to create a mark of trust. A would-be package creator can sign his packages with a key, and ask to register that key with the Package Police. If the package creator proves he can package something CORRECTLY, his key is marked as trusted. If he starts screwing up, it gets marked as untrusted.

            Perhaps we could even create a system by which end users could vote on a given package - positive trust points for good packages, negative trust points for badly packaged libraries. Of course, since there are always people who seek to screw such a system up, we would need people to review the votes those other people cast, and remove the people who abuse the system. Then we would be able to see whose packages were good, and whose were crap. Of course, there would always be the dicks who rushed to be the first to vote on a package....
  • rpm -i

    Sorry, you need libpng x.y.z_e, but you have libpng x.y.z_c.

    The above is of course not technically accurate, but many MANY times I end up annoyed with RPMs since they put in a requirement for a SPECIFIC named package and version of something (whatever was on the builder's system). You can end up needlessly having to upgrade libraries when you already had an entirely adequate version for the package in question.
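
    (At least you can see what a binary rpm is going to demand before touching the system -- package name hypothetical:

      rpm -qp --requires foo-1.0-1.i386.rpm   # print the package's requirements

    so the surprises come before the install, not halfway through it.)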

    Solaris package management works. It can't really help us here though, since Solaris installations are generally very generic things - linux machines can be any one of thousands of combinations of package versions. Back to linux-land, and apt-get with debian mostly works, but a few times I've seen a debian machine decide to upgrade more or less the entire base dist for a trivial tool due to versions, and break in the process while replacing libc. Not fun.

    The only workable solution I've seen thus far is the FreeBSD ports system. It grabs the generic source and builds it in such a way that it only upgrades supporting tools and libraries when it really needs to. I've NEVER had a serious issue in years of using this system. That's not to say it's perfect, of course; it still suffers the issue of not being able to easily revert to your old setup if an installation breaks something, and of course it can be pretty slow.

    Something does need to be done though. A Windows-using friend of mine tried to install Mandrake recently, which he did all on his own without issues. He wanted an IRC client; I recommended x-chat. We tried using RPM and it failed, so we grabbed the source and then had to go about installing a set of development tools on his machine. It took a *long* time before the gcc package would install, due to some idiot deciding headers should be split from the main packages for the sake of a few kb of disk space. Even then x-chat wouldn't build, due to things like the gettext rpm not having msgfmt (part of gettext), someone having decided it lived in an openwin tools rpm, which would no doubt have wanted lots of openwin rubbish installed. Eventually we ended up splatting source versions of common tools on top of the rpm-installed ones to resolve several instances of missing header files and scripts. Finally, x-chat built...

    It made *my* head hurt let alone his - and I've been working with *nix machines for years. It almost put him off trying to use linux any further straight away. Linux is never going to start making any non-techie inroads unless someone sorts out a decent packaging system, and fast.
  • .deb (Score:4, Interesting)

    by Simon Brooke ( 45012 ) <stillyet@googlemail.com> on Sunday June 16, 2002 @10:38AM (#3710767) Homepage Journal

    I started my Linux experience with SLS and a 0.99 kernel. Then I switched to Slackware, then flirted with Caldera. Then for a while I ran RedHat on my servers, before switching in about 1999 to Mandrake on all machines.

    And then I decided to experiment with Debian on a test box, and fell in love. I now have it on my desktop, my laptop, and three out of my five servers.

    Why?

    The package manager. It just works. It just works reliably, installing all the right stuff, resolving all the dependencies. When there are conflicts (not often) it reports them and suggests remedies. In short, the Debian package manager is to all other UN*X package systems I've ever seen as a computer is to a tally-stick. No-one who has used dselect will ever go back to RPM.

    • Re:.deb (Score:4, Funny)

      by PSC ( 107496 ) on Sunday June 16, 2002 @01:09PM (#3711293)
      No-one who has used dselect will ever go back to RPM.

      That would be because they can't figure out how to quit the damn thing!

    • Re:.deb (Score:3, Insightful)

      by MSG ( 12810 )
      No-one who has used dselect will ever go back to RPM.

      Actually, dselect has been known to drive users the hell away from Debian. Its interface is rotten, and the authors agree. It's unfortunate, since dselect and apt really do good things. It just takes a bit of adjustment to get used to dselect.

      I was considering moving over to Debian myself before apt became available for rpm. I'm much less motivated now ; )
  • by phaze3000 ( 204500 ) on Sunday June 16, 2002 @10:39AM (#3710772) Homepage
    This article really shows more about the author's experience than it does about the merits of any particular package management system.

    Let us for a moment pretend that Debian used RPM for its package management instead of .debs (but still had APT, à la Conectiva). Would Debian be as good as it is now? Of course. Why is this? Well, because the Debian people spend a hell of a lot of time making sure the package management is done properly. This has drawbacks, of course, like the lack of the latest-and-greatest software (notably XFree86 4.2 and KDE 3), but in terms of stability you really can't argue that Debian is the best around.

    The author then goes on to suggest that a Gentoo-like system is what's best. Quite frankly this just shows us how little the author understands what is necessary in a package management system. Don't get me wrong, I like Gentoo a lot (in fact I type this message on a machine running Gentoo :)) but package management really isn't its strong point, as things like the recent libpng problems show. Doing things this way makes dependencies extremely difficult to deal with. Let's pretend you have libxyz installed, and then install program abc. abc can use libxyz, but doesn't require it. As you have libxyz installed, Gentoo compiles abc with libxyz support enabled (one of Gentoo's best features). However, the day after, you decide to 'emerge unmerge libxyz' (remove libxyz, for Gentoo virgins). abc no longer works properly. Gentoo didn't tell you that abc needed libxyz, because it's not a dependency.

    In my opinion, the package format is irrelevant; RPM, DEB, TGZ, all are fine as long as they are centrally controlled and well put together. A system like APT makes things many, many times better, because it eases dependency problems, but it isn't a prerequisite.

  • Ladies and gentlemen, I hate to be the one to inform you, but this isn't RPM's fault; it's the individual distro vendors' fault for not standardizing on a filesystem hierarchy.

    I also blame the users who don't have enough sense to build their own RPMs. It's not *that* hard, if you have an SRPM, to build an RPM that works on your system where the binary may have failed. In fact, I recommend that if you use RPMs that don't come from your distribution vendor, you get the source RPM, edit the spec file as appropriate, and build your own... this way you're linking against *your* version of whatever libraries you have installed. It's not *that* hard, trust me.
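
    A sketch of that workflow on a Red Hat-style system (package name hypothetical; on rpm 3.x/4.0 the build commands still live in rpm itself):

      rpm -ivh foo-1.0-1.src.rpm                # unpack source + spec under /usr/src/redhat
      vi /usr/src/redhat/SPECS/foo.spec         # adjust paths, dependencies, %files as needed
      rpm -bb /usr/src/redhat/SPECS/foo.spec    # build a binary rpm against YOUR libraries
      rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm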

    -Jeff
  • The BSD ports and packages work pretty well.

    cd /usr/ports/comms/kermit
    make install
    It downloads, compiles, and installs.

    Got a package file? pkg_add package. The article didn't make any mention of these possibilities.
  • Not really... (Score:5, Insightful)

    by bero-rh ( 98815 ) <bero AT redhat DOT com> on Sunday June 16, 2002 @10:53AM (#3710821) Homepage
    While the article raises a couple of valid points (such as the crazy incompatibilities between some versions of rpm, a lack of standard package file names and standard locations), its conclusions are wrong.

    Let's see:

    1. An RPM-based distribution is risky to upgrade

    Not quite. Red Hat, for example, still supports upgrading from Red Hat Linux 4.x to current versions, if you use the official updating process.
    You can run into problems if you upgraded some stuff by yourself, which is true for any package manager. A good package manager doesn't downgrade packages during an upgrade process. How is it supposed to handle an "upgrade" from a custom kdebase 3.0.1 installation (compiled with libc 5.x) to the kdebase 3.0.0 package found in the distribution you're trying to update to?
    Downgrade things in the process? I think that would make people complain, as well.

    Similarly, apt-get works quite nicely for Conectiva users.

    2. A more complex binary RPM package is often hard, if not impossible, to install

    Again, this is not exactly specific to RPM. The problem here is that RPM is used much more widely than any other package manager; therefore RPM packages are typically built on a wider range of potentially VERY different systems than other packages.
    If, say, 200 distributions used .deb, you'd run into just the same problem here - your system uses, say, glibc 2.2 and libstdc++ from gcc 2.95.4 while the package you've downloaded was built with glibc 2.3 and libstdc++ from gcc 3.1. No difference at all.

    3. The incompatibilities between different versions of the RPM Package Manager added another layer of complexity.

    This is true, and the only real rpm specific problem.
    There's always a tradeoff between new features and backwards compatibility, and rpm does seem to lean a bit too much towards new features.

    4. The developers are forced to consider differences between distributions and create multiple binary packages.

    This is just restating point 2, and is just as invalid.

    Same for the suggested "solutions":

    1. Learn to build your own RPMs

    This actually does fix some problems... But of course you can't expect everyone to do it.
    (See also #5)

    2. Petition the RPM distributions to adhere to common standards.

    Nice in theory, but because there's no real standard ATM, this would mean breaking compatibility with older versions of the distributions (by e.g. adopting a common scheme for naming packages so you won't need to make a difference between Red Hat'ish "Requires: kdelibs >= 3.0.0" and Mandrake'ish "Requires: kdelibs3"), possibly breaking the update path.

    3. Use more advanced package management tools, such as urpmi or apt-rpm

    I agree with this one (add up2date to the list, btw). The availability of those tools shows that rpm is actually a good and flexible package manager - it just needs some extra tools to simplify some common tasks. It's really the Unix way of doing things - have a tool do one job (handling individual packages without resolving dependencies by itself, in the case of rpm) and do it well, then write other tools on top of it to get more advanced functionality (a quick illustration below).
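
    For the unfamiliar, both resolve dependencies on top of plain rpm (package name hypothetical):

      # apt-rpm (Conectiva's APT port):
      apt-get update
      apt-get install gnumeric    # fetches gnumeric plus whatever it requires

      # Mandrake's urpmi does the same from its own media lists:
      urpmi gnumeric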

    4. Switch to Debian or Slackware

    As shown above, their package managers do not solve the problems mentioned in the article. The problems just happen not to show up so frequently because there aren't many distributions using these package management systems, and the ones that do are usually pretty close to the distribution they're based on. Much closer than completely different distributions like e.g. Red Hat and SuSE, which really don't have much in common except for the package manager.

    If, say, Red Hat switched to using .debs, you'd immediately run into the problem that, due to totally different base systems, Red Hat .debs wouldn't run on Debian without changes and vice versa.

    So this switch wouldn't gain anything.

    5. Switch to source-based Linux distributions, such as Gentoo or Sorcerer

    This does solve the problem, but introduces others. It's a good thing for some people, but certainly isn't a universal solution to all problems.

    Source based distributions are really nice for people who want to tweak things a lot, but they aren't very useful for a traditional desktop user (who typically doesn't have all that much of a clue and doesn't want to spend a lot of time learning), and they introduce problems even for users who can handle them.

    Let's assume you have a source based package manager that is dumbed down enough to allow a user to install a package by clicking on a package file in Konqueror or Nautilus.

    Here's some of the problems you'd still need to solve (and some of them really aren't fixable):

    • The user needs to have all development tools installed - including, depending on what packages they want to install, compilers for very obscure languages (how many of us have installed compilers for, say, Modula 3 and Objective-CAML?)
    • Installation takes forever. This is not a problem for small packages, but how do you explain to Joe User that, after clicking "Install KDE 3.0", he can't use his new KDE 3.0 for a couple of hours?
      This is a real problem on slower machines - compiling OpenOffice, for example, takes approximately 13 hours on an Athlon 1800 with 1.5 GB RAM. Imagine installing it from source on a Pentium with 128 MB RAM...
    • How do you handle binary-only stuff? In an ideal world, of course, you don't use any. But try explaining to Joe User why he can't see websites using Flash... I'm all for banning binary-only software, but while it's there it needs to be handled.
    • No beginner-friendly error messages. How do you explain to a newbie what
      foo.cc:123: invalid conversion from `const void*' to `void*' is supposed to mean? (It's typically an indication of broken code that happened to work with gcc 2.x, but doesn't work with gcc 3.x anymore - but how does a newbie know that, or fix it?)


    Besides, rpm is powerful enough to provide this functionality for people who want it, combining the best of both worlds - it's typically as easy as


    rpm --rebuild foo-1.0-1.src.rpm
    rpm -ivh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm


    This still has the same problems as a pure source based distribution, but with rpm, you get the choice between building from source and installing the binary.

    It's the primary reason why I prefer rpms over debs, by the way - they're much easier to build.
  • by Captain Zion ( 33522 ) on Sunday June 16, 2002 @11:40AM (#3710962)
    I worked on the initial effort to port APT to RPM, and I can say that 80% of a smooth upgrade process is not in the package manager you use, but in the way you package your software. To allow smooth upgrades, you must package software with upgrades in mind, otherwise it simply won't work. Dpkg's design offers some advantages over RPM, but even so it's possible to have smooth upgrades with RPM.

    The author summarizes his article in the following points:

    • An RPM-based distribution is risky to upgrade.
      That is usually true, but it's not the usage of RPM that makes it so; it's the lack of a strict packaging policy. Applying the Debian policy to an RPM-based distro can make it much easier to upgrade. On the other hand, using .deb without following Debian's policy would make a mess out of it.
    • A more complex binary RPM package is often hard, if not impossible, to install.
      This claim makes no sense at all. If it was correctly packaged for your distribution, it will be as easy to install as any other package. If it was designed for a different distribution, the same can happen with dpkg packages. Please note that the package manager offers a mechanism to deploy binaries; all the rest is policy.
    • The incompatibilities between different versions of the RPM Package Manager added another layer of complexity.
      True. RPM is a mess in that it is not an implementation of a design; it is being continually modified in both design and implementation. RPM needs to be stabilized; continuing development should go to a different product.
    • The developers are forced to consider differences between distributions and create multiple binary packages.
      Not RPM's fault. It would happen with .deb packages if multiple major distributions used it with conflicting policies.
    From my experience in the past few years, here are the real issues with RPM:
    1. Binary packages are not compatible between distributions, unless they're statically linked and conform to some kind of packaging standard. Dependency on libraries doesn't mean much: that particular library can be compiled with different options in different distributions. It's not RPM's fault. Assuming that distributions are 100% compatible just because they share a package format is a mistake. Third-party, distribution-agnostic packages should obey a policy shared by all distributions, and that's one of the major points behind UnitedLinux [unitedlinux.com].
    2. Allowing multiple versions of the same package to be installed isn't a good idea at all. Packages are different in nature; some will allow multiple versions, others won't (e.g. binaries vs. runtime libraries). Doing so only makes the upgrade process harder. Debian simplified it using a good packaging policy. Note also that, even for runtime libraries, you should only replace versions that are binary compatible. If you don't explicitly set a soname in the package name, this information is not available at upgrade time.
    3. Very confusing, non-intuitive pre- and post-install execution order.
    4. Transaction processing and dependency resolution are too slow, due to file dependencies. As stated above, file dependencies should not be abused, and that can only be enforced by a policy.
    5. Too many unnecessary or confusing packaging features, such as triggers. If you have a good packaging policy, you will never need triggers. Read the librpm sources and you'll find hard-coded dependencies for a number of packages. That's stupid, and a symptom that you've done something very, very wrong and didn't notice it until it was too late, because you didn't have a packaging policy.
    6. Moving target. Please stop adding features to RPM and modifying existing behaviour, otherwise we'll be always fighting against the package manager while trying to make smooth upgrades happen.
    7. Immediate configuration of packages after installation in a multiple-package transaction. Dpkg's deferred configuration is a better strategy.
    Most of the other RPM problems everyone cites when touting Dpkg's superiority are myths and can be emulated with RPM (even using Debian's alternatives or debconf with RPM -- diversions are more complicated to emulate). Dpkg is indeed a superior package manager today, but what people usually see is the result of Debian's policy and not a package manager feature per se.
  • by mindstrm ( 20013 ) on Sunday June 16, 2002 @12:09PM (#3711041)
    First.. you mentioned it, but I'm not sure everyone got it....

    The 'Unstable' in Debian terms does not mean the system is unstable; it means the package dependencies are unstable. It has nothing to do with running unstable code. It means that there is no guarantee a change will not break a lot of stuff and not be fixed for a while. It's not uncommon to try to install a package and find the dependencies don't exist yet... or they exist, but are an older version. That's what unstable is all about.

    Secondly.. regarding server stability.

    IF you build your kernels yourself (you should), and if you are aware of what services are running, system stability is not really an issue.

    I know that Debian is pretty much the only system where I *don't* run hand-compiled apache, ftpd, etc. You should know what's up in your system. In this respect, no system is more stable than any other.

  • My one gripe (Score:3, Interesting)

    by cjpez ( 148000 ) on Sunday June 16, 2002 @12:25PM (#3711097) Homepage Journal
    The one thing that I really don't like about any package manager is rigid dependency checking. It only really occurs when you try to act outside of the "accepted" package system. For instance, back in my Redhat and then Debian days, I was content to let the base system get installed by RPM or apt. I also loved, especially in Debian, the ability to use apt to just install an app I wanted to use. However, for a long time, I used the DRI XFree86 that came from CVS and got compiled by hand. So I was stuck with two options - either don't install the X packages, or install them anyway but install X by hand on top of it. In the first case, it was really difficult to install any package that relied on X. On RPM, I had to turn off dependency checking to do it (which meant that the primary purpose of the package management system was bypassed, IMO), and with apt, it was nigh-impossible (I never did figure out how to get apt to install something despite dependency issues). On the second case, whenever the package management system decided to upgrade my X, then my hand-installed stuff would get overwritten.

    What I'd love to have in a package manager is a more intelligent dependency check. Like, instead of just saying "I need this version of X," it would also just check for the existence of /usr/X11R6. Or if a package requires BerkeleyDB, after checking "inside" the package manager, just try and see if there's a libdb.so somewhere in the LD search path. And then mark down "inside" the package management system that the "BerkeleyDB" or "XFree86" dependency seemed to be fulfilled by a manual installation.
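
    That kind of probe is easy to sketch (names hypothetical; ldconfig -p dumps the runtime linker's cache):

      # is any Berkeley DB visible to the runtime linker?
      if ldconfig -p | grep -q 'libdb.*\.so'; then
          echo "BerkeleyDB dependency satisfied by a manual install"
      fi
      [ -d /usr/X11R6 ] && echo "XFree86 looks present"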

    That would be the ideal system for me.

  • by jilles ( 20976 ) on Sunday June 16, 2002 @12:28PM (#3711112) Homepage
    The key to figuring out why a particular solution is not working is figuring out what problem it is trying to solve. Why do we need a package format like rpm? Because Linux applications tend to consist of a lot of files which need to be put in the right places. Doing this manually takes time and is error-prone. Types of files may be icons, images, executables, man pages, fonts, ... In addition to these files, scripts are bundled that may do configuration, clean up after removal, move files to the right directory, etc. Making this work requires that the creator of the package make a lot of assumptions: where do icons go on this system? What is the right place for an executable? Where do the man pages go? How do I add a menu item to whatever window manager is installed? ...

    Efforts to improve package systems have focused on providing answers to such questions: standardization. Standardization is good, but if you take a step back you realize that it is not the package's job to answer these questions. Specifying that this or that icon should go to some KDE-specific directory is totally wrong. It is the task of the package manager to provide such information, not to require it. All the package should provide is an icon.

    A package is a set of files with some meta-information, not a set of files that scatter themselves all over the place based on some assumptions the package creator made. Given the meta-information and the files, the package manager should do the rest: copy files, insert menu items in relevant menus, etc. This is how Apple bundles work. Another example of this approach is the WAR package format for servlet applications.

    There's a lot of debate on whether .deb or .rpm is better. IMHO they are equally flawed. The only reason .deb works better is because there are fewer .deb-based distributions (i.e. Debian and a handful of very small Debian derivatives). The .deb format is not plagued by differences between distributions because there's effectively only one distribution: it avoids the issues rather than solving them. Try unleashing potato-based KDE .debs on the latest unstable Debian and you will find yourself in .deb hell (ironically most Debian potato users end up trying to do the reverse: install the latest KDE .debs on a potato system).

  • by Priyadi ( 5841 ) <priyadi@NoSPAm.priyadi.net> on Sunday June 16, 2002 @12:46PM (#3711203) Homepage
    If you take a look at a comparison of various package managers (http://www.kitenet.net/~joey/pkg-comp/), it is clearly shown that RPM and DEB have almost the same set of features.

    So, why is installing an RPM more of a hassle than installing a DEB?

    Because there are more distributions using RPM, while DEB is used almost exclusively on Debian. Yeah, I know there are Progeny and Storm, but they are not very popular and reuse a sizable part of Debian itself anyway. When somebody decides to make a DEB package, he will make sure his package works with Debian (and Debian only), and he can be sure that everyone who downloads his deb will be installing it on a Debian system. But when another person decides to make an RPM package, in the current situation it is a very hard job to make sure his package is compatible with various versions of various distributions.

    This problem would be gone if every RPM-based distro followed the LSB. Even if they all followed the LSB very religiously, it would still be possible to encounter this sort of problem: say a person is using an LSB 1.0-compliant distro but downloads an RPM package compiled for LSB 2.0; it still won't install on his system. But the LSB is still a lot better than forcing a distribution monoculture on all Linux users.
  • RPM is ok (Score:4, Interesting)

    by lameruga ( 528291 ) on Sunday June 16, 2002 @01:17PM (#3711325)
    Yeah, there are a couple of problems with RPM, but:

    - it's easy to do upgrades (on Red Hat; don't know about others). I've been doing it for several years from a remote location, and it only failed once, because of a bad LILO configuration...
    - you always know which file belongs to which package
    - you can verify checksums of all installed files
    - dependencies are not a problem - they're a solution to the problem
    - it's simple to locate the needed package in the distro
    - if you're trying to install someone else's package, you're better off getting the sources and building the rpm package yourself
    - I agree that it is a bad idea to distribute rpm binaries only, so it's best to post the tar.gz source, with rpm packages optional (it is good if the source includes a .spec file)
    - and if you don't like dependencies, you can always use --nodeps :)

    P.S. When I started using Linux in 1995, the first distribution I installed was Slackware, and after one year I switched to RedHat.
    Slackware is good, but you have the same dependency problems (and you don't even know which package to install in case of such a problem, say when installing some binary package). It's also much harder to upgrade...
  • by mark-t ( 151149 ) <markt.nerdflat@com> on Sunday June 16, 2002 @02:07PM (#3711477) Journal
    What if, when you wanted to perform a binary installation, it checked dependencies the same way that autoconf-like programs do... tries to find them in particular locations, and creates a configuration file for that program based on what it found? It can do version checking as well, and report any mismatches to the user. In situations where there isn't a clear-cut place to put such a file, the installer could create a Bourne shell startup script instead. It would work everywhere, and wouldn't be dependent on _any_ rpm or deb databases.

    I realize that this would require one new file (either a config file stored in the program's library directory, or a shell script used for startup) for each package that gets installed, but we're already looking at wasting space on the rpm or deb databases anyway... this solution wouldn't take up any more space and has the added bonus of being completely cross-distribution!

    For library packages, it shouldn't even need to store a config file... it can just check the versions of the software or libraries that it does require and report back to you. The job of actually finding the libraries as they are needed can be performed by the linker, which is presumably set up to search applicable directories. Heck, if it's not, even this information could be reported at installation time too!
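
    A rough sketch of what such an install-time probe could write out (all names hypothetical):

      #!/bin/sh
      # probe for dependencies the way a configure script would
      CONF=/usr/lib/myapp/myapp.conf
      if [ -d /usr/X11R6 ]; then
          echo "X11_PREFIX=/usr/X11R6" >> $CONF
      else
          echo "error: XFree86 not found" >&2; exit 1
      fi
      # record which libpng the runtime linker will actually find
      echo "LIBPNG=$(ldconfig -p | awk '/libpng\.so/ {print $NF; exit}')" >> $CONF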
