Linux Software

The State of Linux Package Managers

I was pointed to an editorial currently running on freshmeat. The author takes issue with the current state of package managers for Linux and proposes a way to fix their inadequacies. Here's a sample of the solution: "The solution to the problem seems to be to extend the autoconf approach to package systems. This requires providing the necessary tools, standards, and guidelines to software and/or package authors to enable a single source package to produce a wide variety of binary packages for the various platforms and packaging systems."
  • Look at debian/rules.
    It's like a makefile, but it builds .debs instead.

    Combined with an RPM spec file, you can happily integrate most source trees with either major package system; see the sketch below.
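
    A minimal sketch of the dual-format idea (assuming the source tree carries both a debian/ directory and a foo.spec; the file names are made up):

    # build the .deb from the source tree's debian/ directory
    dpkg-buildpackage -rfakeroot -b
    # build the binary RPM from the spec file (rpm -ba on RPM 3.x;
    # later RPM releases move this into a separate rpmbuild tool)
    rpm -ba foo.spec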
  • //Red Hat Linux
    //1. Download archive.
    //2. rpm -ivh foo.rpm

    3. su to root because RPM's db is locked
    4. Read all your failed dependencies
    5. Back on the net, download dependencies, repeat
    6. Relocate the RPM because of the distributor's brain-dead
    defaults (KDE in /usr? Really?)
    7. Force install / no-deps install
    8. Pray it starts
    9. Use alien, treat it like a tarball
    10. The only complete and easy packaging system
    is the absence of a packaging system: autoconfed
    source tarballs with an install replacement that
    logs where everything gets put (see the sketch at
    the end of this comment).

    RPM is so much fun when you are not using the
    exact same Linux version as the packager.
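
    A rough sketch of such a logging install replacement (a hypothetical wrapper, not a real tool):

    #!/bin/sh
    # loginstall: run the real install(1), then record the destination
    # in a manifest so the install can be undone later.
    MANIFEST=/var/log/install-manifest
    /usr/bin/install "$@" || exit 1
    for dest; do :; done      # leaves $dest holding the last argument
    echo "$dest" >> "$MANIFEST"

    Automake-generated makefiles honor the INSTALL variable, so something like "make install INSTALL=/usr/local/bin/loginstall" would route everything through it.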
  • /* Better yet, use Debian's apt-get tool, which automatically solves dependency problems for you. */

    Except for Debian's extreme obsolescence (sp?) and bias towards free software. It takes too long for a .deb to be made. And it doesn't help on most Linux systems, since alien isn't too hot at converting from deb.
    /*
    What about, "Beat your self with a hammer, and
    wonder why it hurts?" RPM is telling you that you
    don't meet dependencies for a reason.
    Don't be surprised if you ignore what it says and
    then things don't work.
    */

    How about the fact that RPM names its dependencies differently across Linux distros? I have libx installed, but the package names differ, so on one distro it fails with a dependency warning. Force it with --nodeps and it should work. It may not; the packages may be incompatible in deeper ways.

    Some RPMs cannot be relocated.

    Some RPMs from SuSE fail on Red Hat; likewise Caldera, likewise TurboLinux, etc.

    RPM sucks. All it does is allow uninstall. Its
    dependency checking is broken.

    George Russell
  • My definitive comparison is at http://kitenet.net/~joey/package-comp/ [kitenet.net]

    It's basically where I wrote down everything I learned about the package formats while writing alien.

  • Debian (.deb) packages contain more than just files and a list of dependencies. For one thing, the menus in your window managers are updated automatically.
    This is where I think the difference between packages and policy gets fuzzy. The Debian menu system is as much a part of Debian Policy as it is part of the Debian Packaging System.

    And you can't follow Debian Policy strictly without ending up with the Debian system. So these portable packages don't follow policy -- which is bad, and is why alien isn't The Answer. Or, they do follow policy by having all the information necessary for every OS/distribution on which they are installed. But then each author needs to know about the requirements of all the operating systems, and if the OS is changed ("innovated" :) then the packages won't really be correct any longer.

    I don't think there will be a magic bullet for packages until operating systems are so commoditized that there is effectively only one. And that's a long way off.

  • Writing a tool to create one of many binary distribution formats is a waste of time. It makes far more sense for the free software community to use a single binary package utility.
    If I want to install an .rpm on my Debian system, I can do so now. But it installs things in places they don't really belong and there are some other problems.

    A single binary package doesn't really relate to the problem at hand. The problem at hand is: how can you install a program on different systems, so it is installed appropriately to each operating system? A single binary package only makes alien obsolete, but solves none of the difficult problems involved.

    Autoconf simply edits makefiles. There is no common init file for package systems that could be used in the same way.
    Editing makefiles is far more difficult than editing init files. Compare the length of a package's init file with the length of its makefiles: the makefile is always more complicated and more fragile.

    Makefiles were solved first because that's the order in which things work -- programs are programmed, and only then are they worth installing. But it doesn't mean makefiles were easier.

  • What I've been confused about is the things ports advocates never mention.

    From what advocates say, it seems like the ports system makes building a system fairly straightforward, but I don't know how it deals with changing a system.

    Can you uninstall programs with make uninstall?

    Does it recognize the problem that occurs when app A requires libfoo v5, and app B requires libfoo v4? What happens when you install app A? Does it upgrade libfoo and break app B?

    I really don't know the answers to these questions. Ports seems like a very ad hoc system, which isn't a great way to ensure system integrity. But as I said, I don't really know.

  • Debian is a bunch of .debs. If you install all of them, you get a Debian system. Using the rpm format wouldn't suddenly make that Debian system a Redhat system. And it wouldn't make a Redhat-oriented rpm compatible with the Debian system.

    I get the impression this is already a problem with the various rpm-based systems. There are certain RPMs that will break a SuSE system, for instance, no?

    Using RPM won't solve much. Alien already solves that problem. And, having used alien, I can see why that solution doesn't solve the real problems involved. It doesn't suddenly make RPMs compatible with a Debian system.

  • Thus, a new package can find out where it can install itself and how to link to everything it needs, without messing with system-level software. Not only that, but since the meta-information for everything is gonna be sitting right there, the software can not only resolve dependencies but also suggest configuration changes in its dependencies!
    This has the same problem that all distributions have to deal with -- the combinatorial aspect of interdependencies and conflicts.

    Which program/package controls which package? Which one Knows more about the others, and so has the wisdom to deal with installation? The problem I see in the Windows installation process is that each application thinks it's Right and does whatever it takes to make it work, even though that application is ignorant of what is necessary to make the entire system work.

    I wouldn't trust any package to be too smart -- a centralized system (like an RPM database and all that infrastructure) is restrictive but can keep the system sane and make it possible to look in from the outside and figure things out. I don't see an ad hoc system (which is what you propose) capable of doing this.

  • For those who don't want to download and install it to figure out what it does...

    Also, you could check out the GNU Stow webpage at http://www.gnu.org/software/stow/stow.html [gnu.org].
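
    The basic usage looks roughly like this (a sketch assuming the conventional /usr/local/stow layout):

    # install the package into its own private tree...
    ./configure --prefix=/usr/local/stow/foo-1.0
    make && make install
    # ...then let stow symlink it into /usr/local
    cd /usr/local/stow && stow foo-1.0
    # removal just deletes the symlinks:
    stow -D foo-1.0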

  • Well, the upgrade step doesn't seem so easy. You have to use one utility to update the ports tree, a second to check whether an update for a package exists, a third to remove the old package, and only then install the new one (and God help you if the new one doesn't install cleanly; you're now out in the cold, because the old one is already erased). BTW, what about config files? RPM has a pretty sophisticated algorithm for managing them that works in 90% of cases and doesn't make a mess (i.e., doesn't prevent the package from working) in 95% of cases. How does the ports system handle this?
  • "Ow, doctor, it hurts when I do that."

    Obviously you can "overcommit."

    What could we propose as an alternative?

    If you decide to install Balsa, which pulls in big chunks of GNOME, that may be a bit distressing. It's hardly hidden. And if you actually want to install Balsa, you've little choice in the matter.

    You can either say,

    • Oops. I guess I didn't want to install all that much stuff

      and back out, or

    • Pretty big, but I guess I'll have to live with that.
    There's no "Oh well, I'll install bits of it that won't work" (at least not without some explicit effort!)

    Remember that in the "Windows World" it also wouldn't be a 300k email package. It would be a 20MB email package that includes every library that could conceivably be necessary.

    And you'd have to worry that the email client might come with some older DLLs that will overwrite ones you're using for something else, thereby toasting other applications on your system.

  • The C comes in when make all doesn't work out perfectly. Which happens all too often.

    Then you start having to search for the #include files that the program expected to find. Which establishes that you have to install (say) a new version of ncurses or some such thing.

    It may not be a full-scale porting effort, but it does require, if you want any hope of troubleshooting, being reasonably comfortable with the tool set used for C development.

  • The right thing to do is to create front ends to dpkg and rpm.

    It would be a downright awful idea to create an InstallShield Package Installer tool that forcibly requires user intervention. The folks at TiVo [tivo.com] have taken an interesting approach; they do a system upgrade every day, and it requires no user intervention.

    After all, the only thing easier than moving from CLI-based utilities to X-based utilities is to move to cron-based utilities that don't require that the user do anything at all.

    The Debian folk have been working on improved front ends for quite some time, and prototypes for the dselect replacement pop up occasionally.

    Similar is true for RPM; if you actually look, you'll find tools that are actively being worked on.

    But I'd still argue that if, as you say,

    The average computer user simply can't handle the command line, let alone compiling things or even extracting files from a tarball.
    then the right answer is not to throw a GUI in front of it, it is rather to schedule a process that automatically grabs packages and installs them without there even being a GUI involved.
  • Debian doesn't impose a requirement that you watch out (much) for dependencies; it does all the things that you mention, verifying dependencies, and requesting any extra packages that are required to satisfy them.
    • No "go off and find dependency foo, and ask to download it"
    • No "go off and run ./configure; make install "
    • The "glibc-foo" issue is a given so long as there is a mixture of packages compiled with varying versions of GLIBC.

      Of course, with Debian, it amounts to "Oops - you don't have the GLIBC that I need. I'll add the right library to the list of packages that I'll be downloading for you."

    By the way, dselect will, after it finishes downloading all the packages you needed into /var/cache/apt/archives , and installing them, ask you nicely, Do you want to erase the installed .deb files? to which the answer, for the average user, is probably always going to be Yes.

  • It works well simply because CPAN:

    A) Holds the source packages.
    B) Perl is mostly platform independent.

    This is really no more than we currently have in most standard *nix install packages, i.e., ./configure; make; make install. But instead it's perl Makefile.PL; make; make install.
  • I'm not trying to pitch it, but the package system that Debian uses does nearly all of this already. This is the selling point that is starting to move me from Red Hat to Debian, particularly the package conflict management. It'll tell me what an install will break, and how, and offer ways to fix it.

    It also has command-line and GUI modes, using packages such as dselect, etc. I admit I haven't tried the X interfaces quite yet, but I'd imagine they are extremely similar to the command-line ones.
  • dpkg should be fine with this situation if the dependencies are right -- if you remove or purge libesd0 with a --force option the worst that'll happen is that libesd-alsa0 will not install, but you'll be able to reinstall libesd0 without any problems.

    I also don't have any idea how to report bugs

    Have you looked at the reportbug package? I also remember reading a reference to a Web-based bug reporter, although I have no clue where it is..

    Daniel
  • I have 0.3.18, but it's been in there for several versions. Look:

    bluegreen:~> sudo apt-get --reinstall install hello
    Reading Package Lists... Done
    Building Dependency Tree... Done
    0 packages upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 4 not upgraded.
    Need to get 19.5kB of archives. After unpacking 0B will be used.
    Do you want to continue? [Y/n]

    (and it would let me reinstall hello if I hit "yes")

    Daniel
  • As I said (or maybe didn't), I checked the console-apt bugs page and it looked
    to me like it wasn't really a release-critical bug. So I agree that removing
    console-apt was probably overboard.

    Second, they LEFT IN aptitude, which DOESN'T EVEN WORK!!

    As the author of Aptitude, I have to take some offense to this, particularly since I use it daily for my own package management and just finished using it to purge a lot of cruft packages from my system and perform my daily upgrade. Not too much offense, though, especially since it has been known to break from time to time :-)
    Would you mind telling me what doesn't work for you -- filing a bug report with debbugs or Sourceforge would be best -- and seeing if you can reproduce it with the latest version? (0.0.6, available from Sourceforge or with the apt line: deb http://aptitude.sourceforge.net/debian ./ )

    Thanks,
    Daniel
  • I do know the difference--I currently am working on two (vaguely related) projects that link against libapt, so I'd better know the difference ;-) -- but this isn't about apt or dpkg, it's about .tar.gz, .{dsc,tar.gz,diff.gz} and .srpm (?). As you pointed out.

    In any event, I agree that abstracting the packaging process out is nontrivial and maybe even impossible, but I think taking a crack at it would be nice, just
    because it would be so incredibly useful if it worked. (not that I have any
    time to work on it :) )

    Daniel
  • I know about Debian source packages. I use Debian source packages. I know how to make Debian source packages. I make Debian source packages.

    This is not about Debian source packages. This is not about RedHat source packages. This is about abstracting source packages so that one set of build files will do source installs, straight binary installs, binary .tar.gzs, debs, rpms, slps, and FooPkg packages. And personally, I think that it's a good idea *EVEN IF* it's technically impossible due to the complexity of integrating a package into an existing system. It's still something to think about as a possibility if the opportunity to do it ever comes up.

    Daniel
  • Just to reply to you again..

    I'd like to add a brief note to my earlier reply -- abstracting the packaging system is difficult for large and complex packages, but those aren't the ones that it's really hard to keep up with, as there are relatively few of them. What this would really benefit would be packages that install a few files in /usr/bin (on Debian systems at least) and maybe some data files under /usr/share -- for example, your average weekend-hack game.

    Autoconf already provides a mechanism for flexibly selecting where to install a package based (I think?) on system defaults -- for example, $datadir and $bindir -- and if used properly it *could* (er, theoretically) be extended to automatically generate a minimal proper package for various target systems (see the example below). You might even get it to generate some simple package information -- e.g., menu files. It wouldn't cover every eventuality, but for small programs, or programs with simple needs, it'd be a start.
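
    For instance, the standard directory variables can already be overridden per target system at configure time:

    # the GNU-standard directory variables let each system relocate
    # the package without touching the source
    ./configure --prefix=/usr --bindir=/usr/bin --datadir=/usr/share
    # a BSD-ish target might instead want:
    ./configure --prefix=/usr/local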

    Now, the real question is, how many of these games would you trust the binaries of? :-)

    Daniel
  • You can reinstall packages with "apt-get --reinstall install foopkg".

    Daniel
  • This gets brought up at regular intervals on debian-devel, and I believe is a longstanding wishlist item for Apt. (in fact, the apt internal structures have fields which I *think* are placeholders for tracking whether a package was installed to fulfill dependencies (the necessary information to implement this feature) )

    One thing that concerns me about it is that while it makes things easy for the hypothetical New User[tm], you can get into serious trouble here with installing a library originally to fulfill a dependency, linking against it (or worse, since -dev packages depend on the library, downloading a binary linked against it) yourself, and then later removing the package depending on the library.

    On the other hand, that's a rather esoteric failure case and might not be relevant. And I'm not in charge of libapt development anyway, I just use it :-)

    Daniel

    PS - it's possible but not really necessary (I think, 5-second conclusion, may be wrong!) to refcount, since apt tracks reverse dependencies as well as dependencies -- just iterate over the newly removed package's reverse depends and see if it still fulfills any other dependencies.
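
    By hand, the check might look roughly like this (a sketch using tools I know exist, not a real implementation):

    # list the packages that declare a dependency on libfoo
    apt-cache showpkg libfoo      # see the "Reverse Depends:" section
    # then see whether any candidate is actually installed
    dpkg -s some-candidate | grep '^Status:'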
  • Of course you're right. I don't think too clearly after about 10:00PM, I'm afraid.. :-)

    Daniel
  • Yes, apt rocks, etc, but the article wasn't about package formats, it was about packaging: that is, how can authors create their build scripts in such a way that they can target any package system they feel like?

    Daniel
  • I like Debian, but I'm not sure the package manager is that much better; I think there's just a lot more work put into making sure packages play nicely with one another and with the system as a whole. In particular, there are a couple of nits to pick:
    -> RedHat can use debs just as well as Debian can use RPMs (that is, not too well :) ) -- joeyh's alien package will happily convert both ways.
    -> RedHat users claim they have an apt-like program. Not having used it, I can't comment on its utility, but you should be aware it exists. (rpmfind)
    -> Config file handling and rebootless upgrades are cool. Oh wait, you said that already :-)

    One other note -- if you go to console-apt's bug page [debian.org] you can see why it was removed from potato -- there were evidently some release-critical bugs filed against it (segfaulting) that the maintainer didn't get to in time. Whether they really should be release-critical, and whether they were fixed..I'm not sure; I don't use console-apt, so I can't comment.

    Daniel
  • Bah, libesd-alsa0 does Provide: libesd0. I think maybe it's missing a Replaces: line, but I'll shut up now :-)

    Daniel
  • I think not removing data files when you uninstall is a feature, not a problem. If I uninstall a text editor, should all the text files I've created with it go away?


    --

  • Writing a tool to create one of many binary distribution formats is a waste of time. It makes far more sense for the free software community to use a single binary package utility. The analogy with autoconf doesn't hold up. Autoconf simply edits makefiles. There is no common init file for package systems that could be used in the same way.
  • I am perfectly happy with the way the packaging system currently works. Granted, I am not an average newbie -- I've been running Linux for almost 2 years, so I may overlook something that a newbie would have trouble with.

    rpms and debs make install/uninstall simple. I mean how hard can rpm -i be? Even way back when I first installed Linux (RH 5.0), I had no problem with that. Uninstall? No problem either: rpm -e. This works just as well as InstallShield, and doesn't waste download time by putting self-extracting code in every package.

    Debian does an even better job. "apt-get install foo" will auto-magically *download* and *install* foo for you, as well as any other packages that foo needs in order to work. Give me an equivalent Windows command for that. Similarly, "apt-get remove foo" will uninstall it.

    So, I just don't see what the problem is.

    What I would like to see though, is some kind of consolidation of debs and rpms into a single universal format.

    Also, a GUI config tool for packages would be very nice. Newbies can get scared away by Debian's text-mode config scripts. But progress is already being made in this area. The frozen potato (Debian 2.2) already includes a front-end for package configuration.

    To sum it up, the package system can certainly use some improvement, but things are nowhere near as bad as the article would seem to imply. I would like to hear other opinions (esp. from newbies) on the subject.

  • ---
    Personally, I think that encouraging binary packages is a Bad Idea for the Free Software community.
    ---

    ...while a great idea for anyone who wants to use Linux/Unix to actually get work done without screwing around with compilers.

    Who should be focused on? It's hard to say, although my intuition seems to go with the latter.

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • ---
    Looks like we got a clueless moderator here.
    ---

    Same here. Somebody re-moderated it as 'informative'.

    I venture that more people use Linux as a tool, rather than as an end in itself. I believe more people use it primarily as a server (with desktop use rising fast) than as a platform to code for. Of those that do code, do they make code-level alterations to the majority of items they download? Not likely. Most people have a few set things they like to hack on, and would rather just install everything else so that It Works.

    Having the ability to compile stuff - which I agree is a 'big plus' - doesn't mean you should be forced to do so. Other than helping people gain ground on the 31337-ness scale, I just don't see why it's necessary for everything to be obscure and non-intuitive.

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • As a FreeBSD user, source regen is the only way I upgrade. :)
  • Why not use something like the FreeBSD Ports [freebsd.org] package management toolkit? It maintains all the dependencies. It also provides simple install and uninstall mechanisms. I have never had a package installation go sour with it. In fact, the FreeBSD package manager was one of the two reasons I dropped Linux completely. It is just that nice.
  • Answer: because each little program spews a morass of files all over the filesystem and is near-impossible to delete by hand.

    How about instead of fussing over package manager formats, we do instead what has been a tried and tested approach to the whole business: bundle directories. A directory with a .app extension containing a binary and icons in predefined places, plus libs, config, documentation, whatever the program needs (see the sketch below). And, for when libraries should be shared (e.g. GTK), a system called "frameworks" -- basically bundled shared libs, plus include files, plus docs, plus whatever.
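
    A hypothetical bundle might look like this (names invented for illustration):

    Foo.app/
        Foo                (the executable, or a launch script)
        FooIcon.tiff       (icon in a predefined place)
        Resources/         (config, documentation, data files)
        Frameworks/        (private copies of any bundled libraries)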

    The important point is: Joe Average never needs to know what's inside a bundle. The filesystem GUI treats them like single files. To install a program, double click the tarball and a window opens with a bundle icon. Drag the bundle icon to /Local/Apps and tadaaah! one program installed. To run the program, double click the app bundle.

    Now isn't that a bit nicer?

    (hint: GNUstep is already using this, and it should be fairly trivial to configure the misc binary support to run the launch script on execution of an app bundle)
  • There is an undeniable niceness to grabbing a zipfile, unloading it into a temp directory, running the program for a while, deciding whether to keep it, or to delete the directory.

    You forgot "and digging out files from the system directory, and figuring out which system-critical DLLs have been written over, and clearing out the buried registry entries..."

    Win32 does have an easier install process, but uninstalling is a bitch. I'm loath to just "try out" some package because who knows what state my system will be in when I uninstall it again...

  • ldd `which ln`
    /lib/libNoVersion.so.1 => /lib/libNoVersion.so.1 (0x40014000)
    libc.so.6 => /lib/libc.so.6 (0x40020000)
    /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
    ldd `which mkdir`
    /lib/libNoVersion.so.1 => /lib/libNoVersion.so.1 (0x40014000)
    libc.so.6 => /lib/libc.so.6 (0x40020000)
    /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

    (sorry if that came out really ugly)

    Anyway, how am I to recreate /lib without these tools?

    I wouldn't call rpm and deb proprietary formats, either. They're cpio and ar archives respectively.

    Also, distributions could not install as quickly if they had to run configure for every package they were to install.

  • Everyone keeps thinking and talking about local software. Think bigger.

    LANs are getting more and more popular. I have one in my home. They are near ubiquitous in high-tech workplaces. No matter how easy *BSD ports or Debian's apt-get is, there are economy-of-scale benefits to maintaining ONE application collection, rather than a separate application set locally on each machine of a LAN. It's really a separate problem space; packaging systems like DEB and RPM make installing software easier (reducing the difficulty of installation). Distributed filesystems can reduce the number of installations required (reducing redundancy of installations). Why can't I take advantage of my friend's diligent use of apt-get just one IP address over? Why should I do redundant administration if I don't want to?

    The next revolution in Linux software distribution will be distributed-filesystem-friendly software collections; and I don't care if that distributed filesystem is Coda or Intermezzo or AFS or even lowly NFS. I just wish I knew the best place to throw my hat into the ring and work on this right now. This is the one area where Linux software collections have major room to improve.

    This is really no more than we currently have in most standard *nix install packages, i.e., ./configure; make; make install. But instead it's perl Makefile.PL; make; make install.

    I think the poster above is talking about the module CPAN, which you can execute with: perl -MCPAN -e shell

    This is very much like a package manager and, here I have to agree, very comfortable. Just type, e.g., install foo, and foo is downloaded, compiled if necessary, tested, etc. Dependencies are taken care of too; and the shell is really nice: you can use regexps if you don't know exactly what a package is called, and you can even read the README before you download.
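
    A typical session looks something like this (module name invented):

    perl -MCPAN -e shell
    cpan> readme Foo::Bar       # read the README before downloading
    cpan> install Foo::Bar      # fetch, build, test, and install
    cpan> i /Bar/               # regexp search when you don't know the name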

    That being said, I think you have a point about all the packages being at a central location. Nevertheless, I think the standardization of the module packages plays an important role too.

    Chris
  • I have to agree. I found the ports collection to be very easy to use. In fact, the FreeBSD packages are merely precompiled ports.

    One caveat, however. The purpose of ports is to allow painless compilation on FreeBSD. Since every FreeBSD system is like the next, the patches and configurations work without a hitch. But how will ports work under Linux when there are so many different distributions?

    I can't even get some configure scripts to run to completion on some distros. How in the world will ports work when every distro wants to do their own thing? Will every distro have to maintain their own ports collection?

    What we need long before we need ports, or the article's universal package manager, is a standard Linux. When the LSB is done, then we can start getting stuff to work properly.
  • "Can you uninstall programs with make uninstall?"

    Yes. Just type make uninstall.

    "Ports seems like a very ad hoc system, which isn't a great way to ensure system integrity."

    From what I can understand listening to people who know, it's much more robust than rpm but not quite as robust as apt. It's more than just a set of makefiles. It keeps track of what's installed, what they're dependent on, etc.
  • "If you installed with RPM or the Debian package manager, you still have application-created data lying around."

    I am unaware of any OS that deletes user generated files when the application is removed. Think about this and you'll realize what a Bad Thing that would be. Maybe when you uninstall a program you want to get rid of all your work as well, but some people don't.
  • I don't understand your argument against RPM/Deb's method of uninstalling.

    They work fine for me... never had a problem with lingering files. Anything related to an app (config files, whatever) is always in ~/.<appname> if it's a UNIX-like conforming app. Therefore, all you have to do if you really want to get rid of something is remove it with your package manager, and then rm -rf ~/.<app>.

    Would you prefer that the package manager erased these directories for you? I think not. Sometimes when you uninstall a package you WANT to keep this data (I do almost always). Hmm, perhaps an option to --nuke all associated files for when you want that? :)
  • Yeah, just pull it all out of context, windows is easier for most people by a long shot.

    I'm not pulling it out of context. You're missing the point by focusing on my example.

    Under Windows, there is no structured way to install, uninstall, manage dependencies, find out which programs own which files, or which programs need which files.

    Your given example of a "Windows" install is totally bogus. For one, you totally ignored the issue of 15 different ways to distribute archives. For another, every install program is just a little bit different. Going with the defaults rarely works, or if it does, yields a system which is totally unmaintainable. Uninstalling things is a nightmare, and DLL versioning is, as is so often stated, a living hell.

    I know you post to Slashdot just to have fun as a Microsoft-loving troll, but come on! You can do better than that, TummyX!

    Read RPM documentation to figure out how to use RPM.

    Bah. First of all, if the user is interested in RTFMing, they are going to have to do it anyway, regardless of platform. Second of all, if you're using GNOME or KDE, you can just double-click on the package file, and it will offer to install itself. Furthermore, there is no question as to what kind of installer it will be.

    Get obscure errors about dependencies you need.

    I knew I should have used Debian as an example. Okay, replace all instances of "rpm" with "apt-get", and your entire argument just evaporated. apt-get will automatically resolve all dependency issues for you, including downloading the needed packages from trusted sources.

    Goto redhat.com to try to find the other RPM you need.

    You forgot, "Beat your head against the wall, simply because you're a Linux user, and Linux requires you to do that." Give me a break, TummyX. Just use rpmfind and it is totally automatic.

    Manually make your KDE links to the files.

    So the packager didn't do their job. Nothing on Windows forces an installer to put links on the Start menu, either.

    execute the application only to find that it depends on some other application to get XXX feature enabled.

    Right, and of course, that doesn't ever happen on Windows or anything like that.

    Sometimes you actually give some good insight into the limitations of Linux, TummyX, but lately, you just seem to be generating noise. If you're going to troll, at least do it right!

  • su to root because RPM's db is locked

    Okay, I forgot that, but: Good! This helps keep the virus problem to a minimum! Besides, the more recent versions of GNOME and KDE take care of this nicely, by prompting you for your password.

    read all your failed dependencies

    Better yet, use Debian's apt-get tool, which automatically solves dependency problems for you.

    Relocate the RPM because of the distributor's brain-dead defaults

    While I agree that some RPMs pick rather dumb locations for things, how is relocating them any different from changing the default location in an autoconf-based install?

    7. Force install / no-deps install
    8. Pray it starts

    What about, "Beat your self with a hammer, and wonder why it hurts?" RPM is telling you that you don't meet dependencies for a reason. Don't be surprised if you ignore what it says and then things don't work.

    The only complete and easy packaging system is the absence of a packaging system,

    That doesn't manage dependencies for you.

    RPM is so much fun when you are not using the exact same Linux version as the packager.

    While RPM has its faults, I haven't found that to be one of them.

  • And why do you have to associate "microsoft-loving" with "troll" all the time?

    I don't. However, you are a Microsoft-loving troll. That is, a troll whose preferred method of trolling is to advocate heavily in favor of Microsoft, especially in Linux discussions where it is off-topic and guaranteed to raise flamage.

    Like I said, sometimes you actually raise some valid points, but it gets old after a while, and this was just pretty weak.

    If it makes you feel any better, you're one of the best trolls on Slashdot. You always keep just close enough to the truth that you don't get moderated down or ignored. You even have an account with good karma, a technique well beyond the skills of your average AC.

    So by my example I was showing how your example meant very little.

    Ah, so we're no longer trying to argue that Windows does package management better, eh? I gotta hand it to you, TummyX, you know what you're doing. Looks like you're going to lose the debate? Answer a different question! A move from the Bill Gates playbook itself.

    I was parodying your example for fun.

    At least you admit it. I appreciate that.

  • I have to admit, Debian's package system is the big thing that is drawing me towards trying out Debian. (Mainly, what I'm waiting for at this point is for "Potato" to become "officially" stable.) More automatic, more features, and a better organized package archive. Gotta love it.

    However, as a current Red Hat user, I figure I might as well put in a word for RPM. It manages dependencies, source, installs, and so on and so forth very well. The main thing it lacks is Debian's automatic package retrieval for dependency satisfaction (again, an awesome feature). But, if you are using Red Hat, be aware of the "rpmfind" command. The command "rpmfind foo" will search the net for package "foo" and offer to download it for you. Not Debian, but a heck of a lot better than a regular net search, for sure. :-)

    Just an FYI for RPM users.

    Except for Debian's extreme obsolescence (sp?) and bias towards free software.

    Debian is actually very up-to-date. They don't follow the Red Hat model of "a stable release every six months"; they use a more dynamic system where all packages are always being updated.

    And while they do favor GPL-style Open Source Software, they by no means exclude other OSS software. It just comes from a different branch of their tree.

    How about the fact that RPM names its dependencies differently across Linux distros? I have libx installed, but the package names differ...

    How about the fact that RPM doesn't depend on packages at all, it depends on files? Do you have a legit gripe here, or did you just have a bad experience with RPM as a child and you're not willing to see reason anymore?

    Some RPMs cannot be relocated.

    And some source code contains hard coded paths all over the place. A bad package is a bad package no matter how you package it.

    Some RPMs from SuSE fail on Red Hat; likewise Caldera, likewise TurboLinux, etc.

    Funny, I don't have that problem. Are you using Red Hat 3.0 or something?

    What's up with you? I mean, I know RPM isn't a perfect piece of software, but you seem determined to not like it.

  • Package formats such as deb and rpm are proprietary, not in storage format (RPMs use cpio or something), but by composition and requirement. They are composed in a format that is exclusive to their own system of doing things (having specific files in the archive with meta-data about the package).

    Could you please explain to me how else you are supposed to figure out this information? Any package is going to have to include meta-data about the package (or be damn hard to use, otherwise). It may be in English in an INSTALL file, but it is there. And computers are notoriously bad at reading English. Red Hat's .spec files and Debian's control files are ASCII text, human-readable, and well-documented. I don't see how it can get any better than that.

    They require their databases...

    Again, of course they do. The whole point of a package manager is to keep track of what belongs to what, and so on. Whether you keep that in /var/lib/rpm or a text file of installed software, you're still keeping track of it. I'd rather have the searchable database.

    They also require someone specifically construct them.

    I wasn't aware that .tar.gz archives built themselves magically. :-)

    try extracting a deb or rpm without the proper tools...

    Try extracting foo.tar.gz without tar or gzip. What are you going to do, decode the binary by hand? :-)

    My point is, there is nothing magical about .tar.gz files vs .rpm or .deb files. They are all packages. They all require tools to use them, and they all contain data not easily readable by humans. The only difference is, the newer package formats are easier for computers to work with.

  • I've never used Red Hat, just Debian. Can someone please tell me why anyone should bother with a package manager that doesn't handle dependencies?

    RPM does understand and manage dependencies. I suspect the original poster was referring to the fact that Debian's "advanced package tool" will solve dependencies for you. When installing, RPM checks for dependencies, and if anything fails, it complains and aborts. apt can actually seek out and install other packages to solve dependencies. This is a very nice bonus for Debian users, and something I (as a Red Hat user) wish I had.

  • While I generally agree whole-heartedly with what you wrote, I do have a couple minor things about RPM to post in the interest of being as helpful as possible to any RPM users in the readership. I generally agree that Debian's package system is overall superior to RPM, and I wish Red Hat would fix it.

    RedHat packages depend on files. Debian packages depend on other packages. The advantage of this for RPM is that you can install packages, if you've compiled the libs yourself...

    Additionally, this means that RPMs don't depend on specific implementations of a generic service. In other words, a properly done RPM will depend not on sendmail, but on smtpdaemon. Can Debian do this?
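
    In spec-file terms (a trimmed sketch):

    # in sendmail's spec file:
    Provides: smtpdaemon
    # in a mail client's spec file:
    Requires: smtpdaemon

    (Debian has an analogous virtual-package mechanism -- see the Depends: exim | mail-transport-agent example elsewhere in this discussion.)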

    Upgrading the system: With RedHat (maybe *RPM?), you reboot the system with the CD/disk of the new OS version, and use the "upgrade" option.

    You can do it this way, but I generally find it easier to simply mount the CD and do a "rpm --freshen -vh /mnt/cdrom/RedHat/RPMS/*.rpm". The --freshen switch tells RPM to upgrade all the specified packages, but only if they are already installed.

    Just FYI.

  • What department is this from again?

    heh.

    "Software is like sex- the best is for free"
    -Linus Torvalds
  • I personally like .deb files. They are easy to use for both installs and uninstalls. However, dpkg and dselect try to be smart and won't download a package you currently have installed if you have the most up-to-date version.

    I would like to see a package checking program. Something that will check the installed packages and verify that all required files are indeed installed, and maybe even whether they are corrupted. Then, this program would either reinstall the complete packages or at least the affected files.

    Any Ideas/Suggestions?
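
    Partial versions of this already exist, if I'm not mistaken:

    # RPM: verify size, MD5 sum, permissions, etc. of installed files
    rpm -V foo        # or rpm -Va for every installed package
    # Debian: check installed files against their recorded MD5 sums
    debsums foo

    Neither reinstalls the affected files for you automatically, though.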

    Quack
  • Much nicer command than what I was using, thanks, I will use it in the future -- however, my point still stands that it's arcane to use. You shouldn't have to feed it thus-and-such just to get information like that out. There is nothing as structured as apt that I know of; I was using it to find where a certain library file required by another RPM was located.

    In addition, yes, I'm doing an end run around RPM and yes, it's the wrong thing to do. By stating that, I was merely pointing out (without saying it, as I'm so often prone to do) that in order for RPM to be more universally accepted, it has to be better supported by the distros. LinuxPPC Dev Release 1.1 came out... last week I think, and its binutils is at 19, vs. the 27 I last checked for.

    That's all I'm really saying.
    ls: .sig: File not found.
  • My issue with the article begins with the third paragraph:

    The number of package management systems is very large, and it is neither possible nor desirable to standardize on a single one.

    Why is it neither possible nor desirable to standardize on a single package management system? I have been extremely happy with RPM as a basis for package management. It's vendor-neutral, architecture-neutral, compresses nicely, provides for both source and binary package types, and provides for building from pristine sources. What could possibly be wrong with that?

    I get the feeling that what he's shooting for here is a way to create a single specification file to be used with a tool to create binary packages for all architectures, and all package managers. In this way I could theoretically build a Linux RPM, a Linux DEB, a Solaris pkg, and a FreeBSD whatever-the-hell-package-format-they-use-when-not-using-the-ports-collection.

    My point of view is, "why bother?" It seems to me that implementing RPM (or a similar format, perhaps with extensions to handle dependencies like DEB does) is the logical way to go here. One spec file can already create packages for multiple targets.

    As an aside, I believe this paper is a perfect demonstration of how, as a community, we seem to suffer from multiple-personality syndrome when it comes to our software and tools. Do we let the various options duke it out in the "marketplace," or do we standardize for interoperability and easy configuration management? Both have their merits, but I chose RPM at my workplace because I think at this point it's the "best of breed" when it comes to package management and software distribution, and if I had to choose one package management system for every OS, RPM would be it.
  • What can't I take advantage of my friend's diligent use of apt-get just one IP address over?

    Well, right now you can do this:
    scp friend:/var/cache/apt/archives/* /var/cache/apt/archives

    Later, you might wish to have both apt sessions run through an http proxy server (such as squid). For example:
    export http_proxy=http://friend:3128/

    As for the installation questions, non-interactive debconf backends are being worked on, but even that won't be a timesaver for 2 machines. Just answer the questions :)

  • I'm unsure how this works in practice though, e.g. if I don't have a mail-transport-agent installed, does it pick one?

    Debian packages are supposed to depend on a specific packagename *or* the virtual package, for example:
    Depends: libc6, exim | mail-transport-agent

    If you didn't have a mail-transport-agent installed previously, it will install exim for you.

    The authoritative virtual package name list is here [debian.org]; it's updated from time to time.

  • Macintosh users (especially newer iMac users) are not going to put up with complex or user-intensive installers. I know that Apple is using part of the NeXT-ish .app directory idea as well as /usr, /lib, /bin, etc. directories, but they are also doing their best to hide this from the end user. Does anyone know how Apple is handling installation issues in Mac OS X? Could the same approach be used to install/package software for Linux?
  • Your points are all good, but the bottom line is that I've been in this biz for quite a while, and I find autoconf pretty daunting for the average project. That needs to change. Yes many of the problems are hard, but they need solutions that present ease-of-use for the general case. autoconf does not do this.
  • Good points; lemme see:
    a) Yes, in the beginning you would solve the same problem -- figure out where everything should go. But when you do put the 'meta-structure' into place, you won't have to do it ever again. A Red Hat distribution's meta-structure should have the same data as a Debian's, as a home-brew's. On any CPU.

    b) OK; I may have not been totally clear; no application should hold information about others. It should only contain information about itself. So, if it does make the wrong assumptions, it will only break itself. For example, say you're installing an Apache module. The module installation will go to the central repository (I favor /conf) and look for, say, /conf/apache/main.xml and then source.path or something (see the sketch below). Then it will know how to compile itself without having to be told --with-apache=...
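
    /conf/apache/main.xml might look something like this (an entirely hypothetical schema):

    <package name="apache" version="1.3.x">
        <source>
            <path>/usr/src/apache-1.3.x</path>
        </source>
        <install>
            <prefix>/usr/local/apache</prefix>
        </install>
    </package>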

    But that's just replicating existing functionality... Think of how easy it would be to build a universal GUI for *anything* on top of /conf. The module's developer wouldn't have to worry about a GTK interface or a KDE one or one in Lesstif or SVGAlib... One parser/GUI to rule them all ;-)... One library to parse config files... Take the drudge work away from the developers and let them create utilities/applications.

    The Windows way is flawed: the Registry is not human-editable (at least not easily) or intuitive. The dll's have to be centralized and get all mangled up. Unix can leapfrog Windows now: XML config files, and well, symlinks ;-)...

    What I am proposing is a redesign (which I know will be a pain in the ass during the transition period). What we have *now* is an ad hoc system --which doesn't work.



    engineers never lie; we just approximate the truth.
  • The problem with current Unix systems is not just the packaging; it's configuration. Unix's great strength is its flexibility, adaptability, and yes, its own bizarre OO design (everything's a file).

    When you have something that flexible, you need to account for all the different configurations and setups people can and will make to the system. That's what autoconf does for builds and the package managers are trying to do for installs. But that's solving the wrong problem: you're effectively solving a design issue with workarounds, with duct tape and paperclips.

    What needs to be done, is for Unix/Linux to apply what years of experience have taught 'hardware' engineers: when you have flexible configurations, you need a configuration management system. The RPM database is not enough.

    What we need, is a registry-like, centralized repository of information about the system, in a standardized language that: a) can *very* easily be read by software (a la Registry), and b) can as a last resort be edited by humans with minimum tools (a la Unix /etc files). I propose (and have, again and again) XML.

    Imagine you're working on a system that doesn't have an /etc or a /var or an RPM database. What it does have is a way for you and *all* the software on your system to introspect, and find out properties of other software on your system (via some secure mechanism, of course).

    Thus, a new package can find out where it can install itself and how to link to everything it needs, without messing with system-level software. Not only that, but since the meta-information for everything is gonna be sitting right there, the software can not only resolve dependencies but also suggest configuration changes in its dependencies! And since all that will be in a parsable structure, you should be/would be able to go on the Net and find out the answer to the exact problem.

    Just dreamin...



    engineers never lie; we just approximate the truth.
  • Debian (.deb) packages contain more than just files and a list of dependencies. For one thing, the menus in your window managers are updated automatically. On a more general level, some packages have scripts which run when they are installed. E.g. the glibc 2.1 package checks if you are upgrading from glibc 2.0 and if so it restarts the services on your machine which may be affected. I imagine rpm can do things like this, too.

    Besides, there are more incompatibilities between different distros than just package formats. Configuration files often need to be kept in different places, particularly init scripts. The Linux Standard Base may help in future, but for now the differences are there.

    I'm not saying that GNU Stow couldn't be part of a Grand Unified Solution, just that there's more to modern package management than archive formats.
  • The biggest difference in the actual *packages*:

    .deb, .rpm : package knows what other stuff needs to be installed for it to work.
    .tgz : package doesn't know anything.

    .tgz archives are often installed just by untarring them. This can make it a nightmare to de-install something. (You can't just remove every file which gets installed. If you untar A.tgz and B.tgz, both containing /lib/foo.so, and then delete all the files which were contained in A.tgz, then you'll delete /lib/foo.so and B will stop working.)
    However, this is not a deficiency of the *package*, just the install method. If you install a tgz archive using dpkg (via alien) then you don't have this problem.

    If two packages both contain files called /lib/foo.so but the two files are not identical, this causes problems. A centralised repository, like a [Debian/Red Hat] mirror, can ensure that this doesn't happen. Free software can live in repositories, and be tailored to ensure conflicts don't happen. You can't do this with proprietary software; if you install two proprietary packages from different locations, there's no way to guarantee they won't have any conflicting files, except doing away with shared libraries altogether.
  • dpkg --purge foo

    This will remove all files associated with foo, including config files. It won't remove data files you've created; this is good. (Imagine if uninstalling Word removed all your Word documents!)

    WARNING TO NEWBIES: You probably DON'T want to do this. Instead do "dpkg -r foo".
  • I think you mean ~/.<appname>
  • I don't know much about rpm. But in Debian you can get a list of available packages and highlight items on this list and the system will install them automatically.

    There are a few issues with this list. It is very long (thousands of packages) and has lots of stuff like libtiff which would only be installed because another package needed it. A newbie doesn't need to see these libraries, just the "actual programs". But all in all, it's easier to install a Debian package than a Windows program that uses InstallShield (as other people have pointed out).
  • > When you go to download some Windows software you get a single .exe to download and install.
    [...]
    > For linux software we already get to choose from half a dozen different packages

    This is a non-issue for the end user, because nearly all popular [freely-distributable] software for linux is available on your distro's CDs / ftp site. The user doesn't need to worry about the format, because the distro handles it cleanly.

    Of course, things aren't that simple if you want software which isn't freely-redistributable. But AFAICS there's no way to clear this up without abandoning shared files altogether, or risking the kind of corrupted mess which is possible with Windows packages.
  • I think the original poster meant "to (handle dependencies like Debian does)" not "(to handle dependencies) like Debian does".

    But a package manager that didn't handle dependencies would still be useful, to do clean uninstalls.
  • While I can offer absolutely no insight into a feasible way to make this situation better, I agree that the biggest aggravation in package management isn't installation, it's management of already-installed packages, including uninstallation. Now don't get me wrong: while the Windows approach to this is 'easy', I by no means consider it good. Registry entries all over, failure to remove files, overwriting of system DLLs, etc. But I for one would be ecstatic if someone would figure out a better way to manage already-installed packages.
  • Have you guys heard of makeself? It creates self-extracting, self-installing files (from a tar.gz or tar.bz2). It just tacks a 30-line Bourne shell script onto the specified archive, and voilà.

    Get it here http://www.lokigames.com/~megastep/makeself/ [lokigames.com]
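
    From memory, the usage is roughly this (file names invented):

    # bundle myapp-1.0/ into a self-extracting installer that
    # unpacks itself and then runs ./install.sh
    ./makeself.sh myapp-1.0/ myapp-1.0.run "My App 1.0" ./install.sh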

    Dom
  • I've talked about a gtk applet that would use XML backends for describing various things. It'd be an extensible control panel.

    The principle was that there is a root-run daemon that monitors ps and measures how often certain programs are run. This would allow a person to choose little-used programs to be removed.

    Another part of the add/remove "front end" for the control panel would be installation. I talked with the author of gxTar (see freshmeat for it) about the install principle. It would involve untarring to a temp dir and analyzing the output of configure --help (see the sketch below). Then the user can use "default safe" values, or change them via a wizard or dialog. For rpms, Slackware tarballs, and debs, you could just use the preexisting methods for checking files. For GNU/autoconf, you could use something like the locatedb functionality to monitor what was added to the filesystem.
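
    Scraping the option list might look roughly like this:

    # pull the --enable/--with options out of the help text
    # for presentation in a dialog
    ./configure --help | grep -E -- '--(enable|with)-'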

    This allows a nice centralised install and remove functionality, regardless of package format, and it can be extended to handle more and more package formats. It also allows you to remove what you don't use. So if you go window shopping on freshmeat and install hundreds of applets, you can prune away what you don't use after a few weeks.

    Well, just some ideas of mine :-)
  • Free software has two things going for it in this case. First, there is a long history of evolution rather than completely scrapping good software. There is no reason that the best solution can't evolve from the partial solutions that already exist, just as CVS was built as a new set of tools on top of the perfectly good RCS file format and initially even used RCS under the hood.

    Second, there are a number of good tools that already solve parts of the problem available in source. Anyone with an interest in solving this can go to it. It sounded to me like a proposal to start developing a new tool. I look forward to seeing the prototype.
  • It seems to me that implementing RPM (or a similar format, perhaps with extensions to handle dependencies like DEB does) is the logical way to go here.
    I've never used Red Hat, just Debian. Can someone please tell me why anyone should bother with a package manager that doesn't handle dependencies? ISTM without that feature we might as well stick with the good ol' .tgz.
  • I was parodying your example for fun.

    The point is both linux and windows have problems, and neither is perfect. So by my example I was showing how your example meant very little.

    And why do you have to associate "microsoft-loving" with "troll" all the time?

    This is an 'open geek' forum, and you can be a geek and like (or not hate) Microsoft at the same time. Admittedly, it is heavily biased towards Linux, but Microsoft stories always get the most postings :) and the most interesting debates.
  • Ah, so we're no longer trying to argue that Windows does package management better, eh? I gotta hand it to you, TummyX, you know what you're doing. Looks like you're going to lose the debate? Answer a different question! A move from the Bill Gates playbook itself.

    No! I really was just being facetious. Debian indeed does have a kick-ass package managing system. I don't even think Windows has what you'd call "package management". It comes with some install APIs, and has cool stuff in other areas (Windows 2000's self-healing applications, for example).

    However - I do think that it's easier to set up programs (in general) on Windows than on Linux, though.
  • by X ( 1235 ) <x@xman.org> on Tuesday February 15, 2000 @12:54PM (#1270437) Homepage Journal
    The problem isn't developing a universal format; it's getting everyone to support it. I think a really good solution is already available in the OSD [marimba.com] standard. It's a standard developed by Marimba, Microsoft, Tivoli, and Novell which has been submitted to the W3C.

    It's designed to be vendor-neutral, and it's been written by firms that know a lot about installing software (Marimba and Tivoli in particular have focused on this space).

    The other nice thing is that, because it uses XML, it's completely extensible.

    Of course, the big problem is getting everyone to support it!
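
    For the curious, an OSD manifest looks roughly like this (a sketch from memory of the draft -- element names and the comma-separated version syntax may not be exact, and the URL is invented):

    <SOFTPKG NAME="foo" VERSION="1,0,0,0">
      <TITLE>Foo</TITLE>
      <IMPLEMENTATION>
        <OS VALUE="Linux"/>
        <PROCESSOR VALUE="x86"/>
        <CODEBASE HREF="http://example.com/foo.tar.gz"/>
        <DEPENDENCY>
          <SOFTPKG NAME="libbar" VERSION="2,0,0,0"/>
        </DEPENDENCY>
      </IMPLEMENTATION>
    </SOFTPKG>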
  • This approach may be fine for you and me; we're all comfortable with: ./configure; vi Makefile; make all; su # make install

    Unfortunately, that isn't all that suitable for "naive lusers" who will react to this with a big Huh?!?

    Rather than GNU Stow, I'd think the direction of BSD Ports [freebsd.org] would be suitable; it has the merit of having already automated the process of setting up configuration info for lots of packages, which hasn't yet been done with Stow. You may want to believe that:

    Dependencies could be added to Stow by someone without a lot of trouble.
    I remain quite skeptical, as it has taken years for distributions like Red Hat, Slackware, and Debian to become richly functional.

    Note that Ports, like Stow, uses nothing that anybody gets tangled into thinking is somehow "proprietary." (Not that RPM or DPKG actually use anything proprietary; it's mostly Slackware bigots, with emphasis on bigot, not on Slackware, that claim, dishonestly, that RPM/DPKG are somehow proprietary formats...)

    But that misses the point.

    Your proposal may be suitable for you and me, albeit marginally so, as I'd much rather that the administration of package installation for the 99% of packages where "default is fine" be dealt with by someone else; it is NOT, by any stretch of the imagination, suitable for making Linux deployable in anything other than highly UNIX-literate environments.

  • Apt is a front end.

    I don't see any realistic way around the consideration that Systems Integration Is Messy.

    Whether we talk about DPKG, RPM, or BSD Ports, it's a given that getting packages integrated into a particular distribution is a somewhat messy process. In all cases, some form of patch gets applied to indicate precisely how they are to be installed.

    It is getting increasingly common for Debian packagers (i.e., the human beings who build the patches required to integrate a package in with Debian) to have some degree of involvement with the "upstream" production of the original, authoritative source code tree.

    When this happens, it is not unusual for there to be a ./debian subdirectory containing the "Debian-relevant" patches, and I've also seen ./SPECS directories with RPM .spec files. In cooperative development efforts, this is the point at which important cooperation takes place, as this means that there is some thought to systems integration in the original source code tree, which will make the job easier for everyone else.

    It's not likely that the level of effort will actually diminish to zero, but if it becomes largely automated, and the human effort can be widely distributed, that makes the task not too herculean.
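
    The per-format baggage really is small. A minimal RPM .spec of the sort you'd find in such a ./SPECS directory is only a sketch like this (package name and paths hypothetical; a real spec would also use a BuildRoot, omitted for brevity):

    Name: fizzle
    Version: 1.0
    Release: 1
    Summary: An example package
    License: GPL
    Source: fizzle-1.0.tar.gz
    %description
    Example only.
    %prep
    %setup -q
    %build
    ./configure --prefix=/usr && make
    %install
    make install
    %files
    /usr/bin/fizzle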

  • by Thomas Charron ( 1485 ) <twaffleNO@SPAMgmail.com> on Tuesday February 15, 2000 @12:56PM (#1270440) Homepage
    I thought the same thing, until I started using Debian's package manager. I still use RedHat on most of my machines, simply because Debian can tend to stay in development forever, but once they come out with Potato, I'm there..

    It's the difference between a 10-speed and a Harley. Particularly the conflict management: you install package A; when you select it, it detects problems with packages A, B, and C, which would also need to be upgraded due to conflicts, and gives you the ability to update them as well. The package manager also handles updates, in a way that puts RedHat's up2date and gnorpm's web search to shame..
  • (try extracting a deb or rpm without the proper tools...)

    bluegreen:/var/cache/apt/archives> ar t apt_0.3.18_i386.deb
    debian-binary
    control.tar.gz
    data.tar.gz

    bluegreen:/var/cache/apt/archives> ar p apt_0.3.18_i386.deb control.tar.gz | tar ztv
    drwxr-xr-x root/root 0 2000-02-13 05:01:14 ./
    -rwxr-xr-x root/root 1361 2000-02-13 05:01:03 ./postinst
    -rwxr-xr-x root/root 184 2000-02-13 05:01:03 ./prerm
    -rwxr-xr-x root/root 534 2000-02-13 05:01:03 ./postrm
    -rw-r--r-- root/root 29 2000-02-13 05:01:14 ./shlibs
    -rw-r--r-- root/root 757 2000-02-13 05:01:14 ./control
    -rw-r--r-- root/root 2707 2000-02-13 05:01:14 ./md5sums

    bluegreen:/var/cache/apt/archives> ar p apt_0.3.18_i386.deb data.tar.gz | tar ztv
    drwxr-xr-x root/root 0 2000-02-13 05:01:03 ./
    drwxr-xr-x root/root 0 2000-02-13 05:00:59 ./usr/
    drwxr-xr-x root/root 0 2000-02-13 05:01:02 ./usr/bin/
    -rwxr-xr-x root/root 50776 2000-02-13 05:01:02 ./usr/bin/apt-cache
    -rwxr-xr-x root/root 157576 2000-02-13 05:01:02 ./usr/bin/apt-cdrom
    -rwxr-xr-x root/root 11148 2000-02-13 05:01:02 ./usr/bin/apt-config
    -rwxr-xr-x root/root 129960 2000-02-13 05:01:02 ./usr/bin/apt-get
    drwxr-xr-x root/root 0 2000-02-13 05:01:02 ./usr/lib/
    drwxr-xr-x root/root 0 2000-02-13 05:00:58 ./usr/lib/apt/
    drwxr-xr-x root/root 0 2000-02-13 05:01:02 ./usr/lib/apt/methods/
    -rwxr-xr-x root/root 30288 2000-02-13 05:01:02 ./usr/lib/apt/methods/cdrom
    -rwxr-xr-x root/root 17804 2000-02-13 05:01:02 ./usr/lib/apt/methods/copy
    -rwxr-xr-x root/root 17108 2000-02-13 05:01:02 ./usr/lib/apt/methods/file
    -rwxr-xr-x root/root 65508 2000-02-13 05:01:02 ./usr/lib/apt/methods/ftp
    -rwxr-xr-x root/root 18652 2000-02-13 05:01:02 ./usr/lib/apt/methods/gzip
    -rwxr-xr-x root/root 64632 2000-02-13 05:01:02 ./usr/lib/apt/methods/http
    drwxr-xr-x root/root 0 2000-02-13 05:00:58 ./usr/lib/dpkg/
    drwxr-xr-x root/root 0 2000-02-13 05:00:58 ./usr/lib/dpkg/methods/
    drwxr-xr-x root/root 0 2000-02-13 05:00:59 ./usr/lib/dpkg/methods/apt/
    .
    .
    .
    (etc)
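
    RPMs, by contrast, do want their own tool, though rpm2cpio ships nearly everywhere (foo.rpm below is hypothetical):

    bluegreen:~> rpm2cpio foo.rpm | cpio -t    # lists the payload's file names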

    Daniel
  • by Daniel ( 1678 ) <(dburrows) (at) (debian.org)> on Tuesday February 15, 2000 @02:14PM (#1270442)
    Note that this isn't apt's fault; the problem is (perhaps) that the dependencies are incorrect. In particular, if libesd-alsa0 is a replacement for libesd0, and Conflicts: with it, it should also (..I think..) declare that it Provide:s libesd0. File a bug against libesd-alsa0 requesting that it provide libesd0 if my analysis is correct and you want this fixed in Potato.
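
    If that analysis is right, the fix is the standard three-field pattern in libesd-alsa0's control file (a sketch, following the usual Debian Policy idiom):

    Package: libesd-alsa0
    Provides: libesd0
    Conflicts: libesd0
    Replaces: libesd0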

    Thanks,
    Daniel
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday February 15, 2000 @02:11PM (#1270443)
    There are an awful lot of one-off hacks that have no real internal consistency.

    Well, part of the problem is the lack of consistency between "UNIX-compatible" platforms. In particular, take a look at Motif; "Where's Motif" is a game somewhat like "Where's Waldo", except that it's not actually fun - OK, is it in /usr/dt, or /usr, or /usr/X11, or /usr/X11R<N>, or in some random location for third-party packages (although the "I installed package XXX in some random place" problem is generally handled in autoconf with a --with-XXX=YYY option)?

    Note the quote at the beginning of the autoconf Info file:

    A physicist, an engineer, and a computer scientist were discussing the nature of God. Surely a Physicist, said the physicist, because early in the Creation, God made Light; and you know, Maxwell's equations, the dual nature of electro-magnetic waves, the relativist consequences... An Engineer!, said the engineer, because before making Light, God split the Chaos into Land and Water; it takes a hell of an engineer to handle that big amount of mud, and orderly separation of solids from liquids... The computer scientist shouted: And the Chaos, where do you think it was coming from, hmm?

    ---Anonymous

    autoconf is trying to cope with the chaos.

    A complete "capabilities" API for UNIX-like systems. In other words, a programmatic way (from the language of choice) to determine how the local system compares to a set of metrics like "do I have gcc" or "is this Red Hat 6.1 or later" or "what is the standard include directory list for C++".

    "Is this Red Hat 6.1 or later" isn't a capability; presumably the package cares because RH 6.1 or later behave differently from some other systems - but the package presumably cares about some particular difference, and that'd be the capability you'd want to check.

    The API would, of course, have to be independent of the questions you ask it, so that arbitrary questions can be answered, perhaps with "I don't know" as an answer; the set of questions a package might need to ask about the system on which it's being built/installed is open-ended, so you can't just come up with a fixed set of questions that suffice for all packages.

    Given that, either it would have to be able to somehow deduce the answers to those questions without the cooperation of the system - which means, in effect, reimplementing autoconf - or it'd have to assume that the OS and the third-party packages installed atop it would supply those answers, which would require that the OS know about this mechanism and come with answers and that third-party packages know about this mechanism and use some other API to supply answers.

    (This would also require that programmers using third-party package X for their software be able to find all the questions to which third-party package X supplies answers - and hope that they don't need to ask a question about that package to which the third party in question failed to supply an answer.)

    A configuration language (preferably built on something a little more powerful and flexible than m4) which can be used to generate headers, Makefiles and other pre-processed items by using the above API.

    Perhaps something along those lines (although not necessarily using an API of that sort) will come out of the Software Carpentry competition [codesourcery.com]. (And, if so, it'll use Python, as per the rules of the competition.)

    Realistically, your average project should not have to look like more than:


    buildmode: gnome_coding_standard
    require: c(ansi,longlong), gtk
    build_lib: fizzle(fizzle.c)
    build: fazzle(fizzle, fazzle.c)

    Unfortunately, many projects aren't necessarily "average". Ethereal, for example, doesn't "require" libpcap; it just disables packet capture if it's not present (and it has to go through some pain to try to find it). It doesn't "require" UCD or CMU SNMP; it just disables fancy dissection of SNMP packets if it finds neither, and it attempts to work with either of them. And it doesn't "require" libz either - it just disables reading compressed capture files without it - but it does need not just any libz, but a version of libz with particular functions in it, in order to support reading compressed capture files (as it doesn't just read the capture file sequentially).
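
    For what it's worth, the "optional dependency" case looks something like this in a configure.in today (standard autoconf macros; the HAVE_LIBPCAP symbol is just illustrative):

    dnl Enable packet capture only if libpcap is present; don't fail if it isn't.
    AC_CHECK_LIB(pcap, pcap_open_live,
        [LIBS="-lpcap $LIBS"
         AC_DEFINE(HAVE_LIBPCAP)],
        [AC_MSG_WARN(libpcap not found - packet capture will be disabled)])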

  • The article is on source packages, not the state of package managers.

    Bruce

  • by Samrobb ( 12731 ) on Tuesday February 15, 2000 @12:24PM (#1270445) Journal

    Much as you may not want to admit it, this is one area where Windows products literally kick the crud out of the various free os's (osii?)

    Not that there aren't any number of post-installation problems that can cause nightmares for Windows users; but generally, the installation of new software tends to go extremely smoothly. This really doesn't have as much to do with MS as it does with InstallShield being the default end-all-be-all of installer builders for WinTel software, though some of the installer support included in W2K looks exceptionally neat, and a year or two ahead of what's available on Linux.

    Your average user, when faced with RPM, DEB, tarballs, and the like, will look at you and wonder what kind of crack you were smoking to come up with all these different ways to do the same thing, when all they want is to get something onto their machine so they can do X...

  • by AmirS ( 15116 ) on Tuesday February 15, 2000 @12:49PM (#1270446)
    We already have a very good packaging system. Want to install something and everything it depends on?

    # apt-get install foo

    Want to remove some software?

    # apt-get remove foo

    Want to hack the source to something?

    $ apt-get source foo

    Want to compile your own debian package from source you've just downloaded and/or tweaked?

    $ debuild

    And given the large number of packages available, I don't even bother checking whether the package I want exists first; 80% of the time it does.
  • by DragonHawk ( 21256 ) on Tuesday February 15, 2000 @08:08PM (#1270447) Homepage Journal

    Not that there aren't any number of post-installation problems that can cause nightmares for Windows users; but generally, the installation of new software tends to go extremely smoothly.

    Not in my experience.

    Windows
    1. Download archive.
    2. Figure out if it is an archive or a self-extracting archive with a fully installed program inside or an archive or a self-extracting archive with an installer inside, or simply an all-in-one installer/archive, or maybe one of those rare single-file executables not archived at all.
    3. If needed, extract the above-mentioned archive until you find an installer to run.
    4. Run the installer.
    5. Read the welcome message.
    6. Close all your other running programs.
    7. Read the license agreement. Jump through whatever hoop is required to prove you agree to it.
    8. Click "Advanced" or "Custom" because "Typical" never works.
    9. Redirect the installer to the "Program Files" directory on the drive that actually has free space on it.
    10. Watch the pretty progress bar.
    11. Read the readme, release notes, etc., etc., it throws up without asking.
    12. Reboot.
    13. Wonder why Random Unrelated Application suddenly doesn't work anymore, until you realize that the first thing overwrote some important .DLL in the C:\WINDOWS\SYSTEM folder without asking.
    Red Hat Linux
    1. Download archive.
    2. rpm -ivh foo.rpm

    There is a key difference between perceived ease-of-use and actual ease-of-use. Just because the installer has a pretty GUI with lots of colorful icons and progress bars doesn't mean it is actually any better. Give me RPM any day.

  • by DragonHawk ( 21256 ) on Tuesday February 15, 2000 @08:14PM (#1270448) Homepage Journal

    However, there is a point where the newbies must learn how to do stuff as well, and RPM type things really don't teach much except rpm -Uvh and rpm -e :)

    While I agree, as someone who knows a lot more than how to type those commands, anything that makes my life as a system administrator easier is a Very Good Thing. If I can install a package with a single RPM command (as opposed to reading the INSTALL file, diddling with configure options, and doing three different make commands), I'll gladly take it.

  • by DarkFyre ( 23233 ) on Tuesday February 15, 2000 @12:31PM (#1270449)
    What Linux needs is a way to uninstall applications. I don't mind compiling and installing stuff, or using .deb or .rpm packages, but I want to know that when I get rid of stuff, it gets rid of stuff.

    Currently, uninstall options aren't all that promising. If you installed with 'make install', then good luck. If you still have the source around, maybe you can read the makefile and find out what went where. If you installed with RPM or the Debian package manager, you still have application-created data lying around.

    I think most people have had the experience of doing an 'ls -a' in their ~ for the first time in a while and finding megs of old config data. When I uninstall enlightenment, I want it to take all seven megs of its config info with it. Same goes for gimp or KDE.
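
    One partial workaround for the 'make install' case, while we wait for something better (a sketch; it assumes the Makefile's install target is reasonably well-behaved):

    $ make -n install | less    # dry run: prints every cp/install command without running it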
  • When I first saw autoconf, I'd already dealt with metaconfig a bit, and autoconf seemed to promise a more modular and maintainable structure. It also was (at the time) a lot less interactive, which was good for a software configuration system.

    Now, I long for what might have been if metaconfig had taken off. autoconf just isn't what it was cracked up to be. There are an awful lot of one-off hacks that have no real internal consistency. I once made the mistake of asking someone how to locate the Motif libraries in autoconf. I got several answers, from "it should be where X is" to "you'll have to write your own command-line arguments; try doing something like what EMACS does". Granted, Motif is not at the heart of free software coding, but it seemed odd that a) such a popular library was not easy to locate and b) there was no standard way to say "search in these directories, or as directed by these environment variables/command-line args, for this library containing these headers and these functions". Many pieces of this exist, but none of it is coherent or complete.

    I'd love to see two things:

    1. A complete "capabilities" API for UNIX-like systems. In other words, a programmatic way (from the language of choice) to determine how the local system compares to a set of metrics like "do I have gcc" or "is this Red Hat 6.1 or later" or "what is the standard include directory list for C++".
    2. A configuration language (preferably built on something a little more powerful and flexible than m4) which can be used to generate headers, Makefiles and other pre-processed items by using the above API.


    If someone were to ask my opinion, it should probably be based on one of the popular scripting languages (e.g. Perl, Python, Scheme, etc).

    Realistically, your average project should not have to look like more than:

    buildmode: gnome_coding_standard
    require: c(ansi,longlong), gtk
    build_lib: fizzle(fizzle.c)
    build: fazzle(fizzle, fazzle.c)


    That would indeed be sweet.
  • by Maul ( 83993 ) on Tuesday February 15, 2000 @12:36PM (#1270451) Journal
    Now, for Linux to go mainstream, it is going to need some InstallShield-type utility under X to do package management. I don't think this would be a very hard thing to do well, but it has not really been done yet because most Linux users do not need it. Most of us are happy compiling our software, and some of us just straight prefer it.

    The average computer user simply can't handle the command line, let alone compiling things or even extracting files from a tarball. If we want a Mainstream Linux Desktop, we'll need this type of install utility.

    "You ever have that feeling where you're not sure if you're dreaming or awake?"

  • by kmacleod ( 103404 ) on Tuesday February 15, 2000 @12:11PM (#1270452) Homepage

    PkgMaker [slc.ut.us] is a tool I've written that can build packages for Solaris, HP-UX, binary tars, and RedHat RPMs. It uses a very simple model and can be easily extended for other package managers.

    In writing PkgMaker I came to the same basic conclusions as Jeff did: adding a small amount of packaging information to a project's source would go a long way towards making packages easier.

  • by CapnMatt ( 120969 ) on Tuesday February 15, 2000 @12:38PM (#1270453)
    Personally, I think CPAN makes a great model for what an end-all be-all package manager should be. Anything that handles dependencies and downloads automatically would be nice, but CPAN works SO WELL...
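
    For those who haven't seen it, the whole CPAN experience is one line (the module name is hypothetical):

    $ perl -MCPAN -e 'install Foo::Bar'    # fetches, builds, tests, and installs Foo::Bar and anything it depends on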
  • by TheGratefulNet ( 143330 ) on Tuesday February 15, 2000 @12:39PM (#1270454)
    after spending over 5 yrs with linux, I wanted to broaden my viewpoint to other PC unices. I started looking at FreeBSD and found that the 'pkg' and 'ports' notion was quite nice. I'm told debian's pkg mgr is somewhat like this set of concepts.

    to the best of my knowledge, no linux distro has an entire source tree structure such that you can 'gen a system' entirely from source - and painlessly, too. I think linux could benefit from freebsd in some ways.
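
    On FreeBSD, by the way, that 'gen a system' step is essentially one command (assuming a checked-out /usr/src):

    # cd /usr/src && make world    # rebuilds and reinstalls the entire base system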

    I like having the ability to get just the binaries (pkg) as well as having the binaries be gen'd from source ON MY SYSTEM. no possibility of version skewing here!

    so since linux can't decide on a common pkg scheme, why not take a slightly more neutral approach and just adopt the freebsd pkg/ports system?

    --

  • by Scola ( 4708 ) on Tuesday February 15, 2000 @01:01PM (#1270455)
    Right idea, wrong tool. See encap at http://encap.cso.uiuc.edu. Stow and encap were developed completely independently of each other, but came up with the same right idea. The difference is that epkg, the current encap implementation, is far more featureful and far faster than stow. It's really a generation ahead.
  • by jezzball ( 28743 ) <slash2@dan k e e n . c om> on Tuesday February 15, 2000 @12:35PM (#1270456) Homepage Journal
    I use RPM on a linuxppc system. The majority of my problems come when there isn't a ppc rpm for the most recent version of, for instance, binutils. I'll do a make and make install and *boom*, suddenly it overwrites the rpm's files. Oh, but wait, some are in /usr/local/bin, some are in /usr/bin. Oh, and the rpm still thinks it's installed. Oh, and how do I now upgrade the rpm, or remove it without deleting the new binutils?

    Just a few comments (also, rpm -qpl should print a header, so I can do rpm -qpl * instead of for x in *.rpm; do rpm -qpl "$x" > "$x.lst"; done)

    Jezzie
    ls: .sig: File not found.
  • by The Code Hog ( 79645 ) on Tuesday February 15, 2000 @12:23PM (#1270457)
    ... show off one of my linux systems to a Windows-literate friend. Not a complete newbie, but someone who is used to downloading shareware and freeware utilities for Windows. Invariably they ask what the equivalent of a self-extracting .exe file is.

    Now you and I may be happy with a uuencoded shell script, or wading through the 31 flavors of rpms on rpmfind.net, but coming from Windows it looks very alien. There is an undeniable niceness to grabbing a zipfile, unpacking it into a temp directory, running the program for a while, and then deciding whether to keep it or to delete the directory.

    No dependency-foo, no Gnu-make-foo, no glibc-foo. Just unpack it and go. No silly compile from scratch and hope you have the right kernel, libraries, compiler and support packages.

    RPMs, DEBs, source distribution with autoconf all give the user a LOT of power and niceties. But it is still an order of magnitude more complex than InstallShield looks to the average user under Windows.

    Just some thought for food,
  • by oGMo ( 379 ) on Tuesday February 15, 2000 @12:10PM (#1270458)

    What needs to be done is much simpler. The currently popular packaging systems need to be dumped in favor of GNU Stow [gnu.org]. Then we don't need to change automake and autoconf at all, because they work as-is.

    Dependencies could be added to Stow by someone without a lot of trouble.

    For those who don't want to download and install it to figure out what it does (although you should! It makes life very easy if you do any source installs), GNU Stow takes "packages" that have been installed in the standard manner (things placed properly in bin, lib, man, etc.) in their own directories (such as /usr/local/stow/) and makes links to the parent directory's bin, lib, etc. You can tell with a simple ls -l which package a file belongs to. Since the links in the directories aren't the "real" files, you can delete and restore them with minimal trouble (I challenge someone with a conventional system to rm -rf /lib and restore it, without rebooting). You can even maintain multiple simultaneous versions of packages. Autoconf already makes this easy: simply supply the --prefix= parameter to your configure scripts.
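
    In practice it's three commands per package (a sketch; the /usr/local/stow location is conventional, not required):

    $ ./configure --prefix=/usr/local/stow/foo-1.0
    $ make && make install                      # installs into the private tree
    $ cd /usr/local/stow && stow foo-1.0        # symlinks it into /usr/local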

    No silly proprietary formats, no waiting for someone to come out with the latest package in your favorite format, no trying to convert foreign packages to your system. Everything you can find in a tarball is now pre-packaged and waiting for you to install...

  • by trance9 ( 10504 ) on Tuesday February 15, 2000 @01:32PM (#1270459) Homepage Journal
  • This solution is very similar to, but a little different from, the /usr/ports solution in BSD. It would be easier to build this autoconf idea on top of ports than on top of the existing package managers, because they're already very similar.

    Brief intro for those unfamiliar with *BSD: To install "gimp" on FreeBSD you do this: "cd /usr/ports/graphics/gimp ; make install" and away it goes--it downloads gimp from wherever it needs to, notices that it depends on GTK so it downloads that, etc., and builds each thing it needs in a giant make script until the whole thing is installed on the machine.

    The FreshMeat editorial makes it sound like this is a brand new cool idea--it's not; all of the *BSDs have worked this way for years. I really like it.

    I would love to see Linux support something like this. The closest is Debian's apt, which has a mode for fetching and installing from source, but it's not as simple and direct as this /usr/ports solution.

    Some comments on this way of doing things:

    -- I *love* being able to browse through the filesystem to find out what packages I could possibly install. It's a very natural thing to do: if I want to browse graphically, I do so via netscape or some file manager. Mostly, being a geek, I use "ls", "cd", and "locate" to find out what packages I might want to install.

    -- It's less to learn. If you are already going to have to learn how to do "make install" in order to get packages installed outside of your package management system (you just HAVE to have the version released yesterday) then you have already learned what you need to know to install any other package.

    -- It does support a binary package system. Binary packages amount to doing the compile stage on someone else's server; the whole install process goes exactly the same way, except that rather than compiling the binaries, you fetch them (see the one-liner after this list).

    -- It brings everyone closer to the source tree. It's natural to grow up from being a novice user, to being a bit of a code hacker. There the code is, in front of your face, begging you to look at it--many people say this will scare people off, but nothing *requires* you to look at the code; and it's incredibly tempting for the curious. I think this leads to more developers, and is the main reason why *BSD has been able to keep pace with Linux despite having many fewer users.

    -- The filesystem is a concrete, easy-to-understand organization for the packages. I can visualize where things are and how they relate to one another. With other package managers, like RPM or DEB, the dependencies seem complicated and abstract. When there is a failure, I haven't got a clue what to do (well, I do now, but I didn't used to). At least with compiling, when there is a failure I can kind of see that it is a file in this package that lives over here that is causing my problem. I may not know what to do, but I know where the problem "lives". This makes me a little more motivated to try and fix it, possibly by trying to install that other package some different way or something. In theory deb is the same, but it just doesn't *feel* the same.
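
    As promised above, the binary-package point in concrete terms on FreeBSD (as root; the -r flag tells pkg_add to fetch from the remote site):

    # pkg_add -r gimp    # fetch the prebuilt package and its dependencies, then install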

    In my opinion, the only package management approaches that anyone should seriously consider are the Debian approach (apt/dpkg) and the *BSD approach (ports, plus the package management tools that back it up). Both of these allow all kinds of fun stuff like upgrading your system live without rebooting and synchronizing on a daily basis with the most current version, and both have intricate and strong concepts of dependencies between packages.

    In theory, they are functionally equivalent--or close enough--but I prefer the filesystem based implementation that has source code at its heart. It not only seems more Unix-like to me, it seems more open.

    The big counter-argument to all of this is that source is scary to average users, many of whom don't understand the filesystem at all. I figure this is no argument at all, because you can bury the compilation under a pretty GUI just as easily as any other dependency system. And if your user can't figure out a filesystem, they won't be installing stuff using *any* package manager: it'll be pre-installed, or nothing, for them.

    Just my $0.02
