The State of Linux Package Managers
I was pointed over to an editorial that is currently running on freshmeat. The author of the editorial takes issue with the current state of package managers for Linux and proposes a way to fix their inadequacies. Here's a sample of the solution: "The solution to the problem seems to be to extend the autoconf approach to package systems. This requires providing the necessary tools, standards, and guidelines to software and/or package authors to enable a single source package to produce a wide variety of binary packages for the various platforms and packaging systems."
Re:.deb, Apt (Score:2)
it's like a makefile, but it builds
Combined with an RPM spec file, you can integrate most source with any package system happily
Re:Advantage (?) Windows (Score:2)
1. Download archive.
2. rpm -ivh foo.rpm
3. su to root cos RPM's db is locked
4. read all your failed dependencies
5. Back on net, download dependencies, repeat
6. relocate RPM cos of distributor's brain-dead defaults (KDE in
7. force install / no deps install
8. Pray it starts
9. use alien, treat it like a tarball
10. The only complete and easy packaging system
is an absence of packaging system, and autoconfed
source tarballs with the install replacement that
logs where the install puts it all.
RPM is so much fun when you are not using the
exact same Linux version as the packager.
Re:More package management vs none-at-all debate (Score:2)
Except for Debian's extreme obsolescence and bias towards free software. It takes too long for a
/*
What about, "Beat yourself with a hammer, and
wonder why it hurts?" RPM is telling you that you
don't meet dependencies for a reason.
Don't be surprised if you ignore what it says and
then things don't work.
*/
How about the fact that RPM names its dependencies differently across Linux distros? I have libx installed, but the package names differ, so on one distro it fails with a dependency warning. Force it with --nodeps and it should work. It may not -- they may be more incompatible.
Some RPM's cannot be relocated.
Some RPMS from SuSE fail on Redhat, likewise Caldera, likewise TurboLinux etc.
RPM sucks. All it does is allow uninstall. Its
dependency checking is broken.
George Russell
Re:Which manager is the best? (Score:2)
It's basically where I wrote down everything I learned about the package formats while writing alien.
--
Re:rpm/dpkg does more than just handling dependenc (Score:2)
And you can't follow Debian Policy strictly without ending up with the Debian system. So these portable packages don't follow policy -- which is bad, and is why alien isn't The Answer. Or, they do follow policy by having all the information necessary for every OS/distribution on which they are installed. But then each author needs to know about the requirements of all the operating systems, and if the OS is changed ("innovated" :) then the packages won't really be correct any longer.
I don't think there will be a magic bullet for packages until operating systems are so commoditized that there is effectively only one. And that's a long ways coming.
Re:Waste of time (Score:2)
A single binary package doesn't really relate to the problem at hand. The problem at hand is: how can you install a program on different systems, so it is installed appropriately to each operating system? A single binary package only makes alien obsolete, but solves none of the difficult problems involved.
Editing makefiles is far more difficult than editing init files. Consider how long the init file for a package is versus the length of the makefiles for the same package. The makefile is always more complicated and more fragile. Makefiles were solved first because that's the order in which things work -- programs are programmed, and only then are they worth installing. But that doesn't mean makefiles were easier.
Re:Why I like /usr/ports (Score:2)
From what advocates say, it seems like the ports system makes building a system fairly straight-forward, but I don't know how it deals with changing a system.
Can you uninstall programs with make uninstall?
Does it recognize the problem that occurs when app A requires libfoo v5, and app B requires libfoo v4? What happens when you install app A? Does it upgrade libfoo and break app B?
I really don't know the answers to these questions. Ports seems like a very ad hoc system, which isn't a great way to ensure system integrity. But as I said, I don't really know.
Re:Why not standardize on RPM? (Score:2)
I get the impression this is already a problem with the various rpm-based systems. There are certain RPMs that will break a SuSE system, for instance, no?
Using RPM won't solve much. Alien already solves that problem. And, having used alien, I can see why that solution doesn't solve the real problems involved. It doesn't suddenly make RPMs compatible with a Debian system.
Re:Solving the wrong problem... (Score:2)
Which program/package controls which package? Which one Knows more about the others, and so has the wisdom to deal with installation? The problem I see in the Windows installation process is that each application thinks it's Right and does whatever it takes to make it work, even though that application is ignorant of what is necessary to make the entire system work.
I wouldn't trust any package to be too smart -- a centralized system (like an RPM database and all that infrastructure) is restrictive but can keep the system sane and make it possible to look in from the outside and figure things out. I don't see an ad hoc system (which is what you propose) capable of doing this.
GNU Stow Webpage (Score:2)
Also, you could check out the GNU Stow webpage at http://www.gnu.org/software/stow/stow.html [gnu.org].
Re:How to use the FreeBSD port/package system (Score:2)
A Rock And A Hard Place (Score:2)
Obviously you can "overcommit."
What could we propose as an alternative?
If you decide to install Balsa, which pulls in big chunks of GNOME, that may be a bit distressing. It's hardly hidden. And if you actually want to install Balsa, you've little choice in the matter.
You can either say,
and back out, or
Remember that in the "Windows World" it also wouldn't be a 300k email package. It would be a 20MB email package that includes every library that could conceivably be necessary.
And you'd have to worry that the email client might come with some older DLLs that will overwrite ones you're using for something else, thereby toasting other applications on your system.
Where's the C? (Score:2)
Then you start having to search for the #include files that the program expected to find. Which establishes that you have to install (say) a new version of ncurses or some such thing.
It may not be a full-scale porting effort, but it does require, if you want any hope of troubleshooting, being reasonably comfortable with the tool set used for C development.
FRONT END to DPKG/RPM is what is needed. (Score:2)
It would be a downright awful idea to create an InstallShield Package Installer tool that forcibly requires user intervention. The folks at TiVO [tivo.com] have taken an interesting approach; they offer to do a system upgrade every day and this requires no user intervention.
After all, the only thing easier than moving from CLI-based utilities to X-based utilities is to move to cron-based utilities that don't require that the user do anything at all.
The Debian folk have been working on improved front ends for quite some time, and prototypes for the dselect replacement pop up occasionally.
Similar is true for RPM; if you actually look, you'll find tools that are actively being worked on.
But I'd still argue that if, as you say,
then the right answer is not to throw a GUI in front of it; it is rather to schedule a process that automatically grabs packages and installs them without there even being a GUI involved.
The only automated solution today: .deb (Score:2)
Of course, with Debian, it amounts to "Oops - you don't have the GLIBC that I need. I'll add the right library to the list of packages that I'll be downloading for you."
By the way, dselect will, after it finishes downloading all the packages you needed into /var/cache/apt/archive and installing them, ask you nicely, "Do you want to erase the installed .deb files?", to which the answer, for the average user, is probably always going to be Yes.
Re:CPAN! (Score:2)
A) Holds the source packages.
B) Perl is mostly platform independent.
This is really no more than we currently have in most standard *nix install packages, aka,
Re:We need something like the SGI IRIX package mgr (Score:2)
It also has command-line and GUI modes, using packages such as dselect, etc. I admit I haven't tried the X interfaces quite yet, but I'd imagine they are extremely similar to the command-line ones.
Re:I can try... (Score:2)
I also don't have any idea how to report bugs
Have you looked at the reportbug package? I also remember reading a reference to a Web-based bug reporter, although I have no clue where it is..
Daniel
Re:Package Status Manager (Score:2)
bluegreen:~> sudo apt-get --reinstall install hello
Reading Package Lists... Done
Building Dependency Tree... Done
0 packages upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 4 not upgraded.
Need to get 19.5kB of archives. After unpacking 0B will be used.
Do you want to continue? [Y/n]
(and it would let me reinstall hello if I hit "yes")
Daniel
Re:I can try... (Score:2)
to me like it wasn't really a release-critical bug. So I agree that removing
console-apt was probably overboard.
Second, they LEFT IN aptitude, which DOESN'T EVEN WORK!!
As the author of Aptitude, I have to take some offense at this, particularly since I use it daily for my own package management and just finished using it to purge a lot of cruft packages from my system and perform my daily upgrade. Not too much offense, though, especially since it has been known to break from time to time
Would you mind telling me what doesn't work for you -- filing a bug report with debbugs or Sourceforge would be favorite -- and seeing if you can reproduce it with the latest version? (0.0.6, available from Sourceforge or with the apt line: deb http://aptitude.sourceforge.net/debian
Thanks,
Daniel
Sorry, I typed in haste :-) (Score:2)
In any event, I agree that abstracting the packaging process out is nontrivial and maybe even impossible, but I think taking a crack at it would be nice, just because it would be so incredibly useful if it worked. (Not that I have any time to work on it.)
Daniel
Re:.deb, Apt (Score:2)
This is not about Debian source packages. This is not about RedHat source packages. This is about abstracting source packages so that one set of build files will do source installs, straight binary installs, binary
Daniel
Re:Apt is *not* a package format. (Score:2)
I'd like to add a brief note to my earlier reply -- abstracting the packaging system is difficult for large and complex packages, but those aren't the ones that it's really hard to keep up with, as there are relatively few of them. What this would really benefit would be packages that install a few files in
Autoconf already provides a mechanism for flexibly selecting where to install a package based (I think?) on system defaults -- for example, $datadir and $bindir -- and if used properly it *could* (er, theoretically) be extended to automatically generate a minimal proper package for various target systems. You might even get it to generate some simple package information -- eg, menufiles. It wouldn't cover every eventuality, but for small programs, or programs with simple needs, it'd be a start.
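The hook this could build on already exists: autoconf-generated makefiles honor the standard directory variables ($bindir, $datadir, etc.) and the DESTDIR staging convention, which is exactly how binary package builders capture a file tree. A toy demonstration, where the one-line Makefile is a stand-in for a generated one:

```shell
# Run in a scratch directory; the staged tree under ./staging mirrors
# the target layout and is what rpm/dpkg would archive into a package.
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
prefix = /usr/local
bindir = $(prefix)/bin
install: ; mkdir -p $(DESTDIR)$(bindir) && cp hello.sh $(DESTDIR)$(bindir)/hello
EOF
echo 'echo hello' > hello.sh
make DESTDIR="$PWD/staging" install   # nothing touches the live system
```

Because the staged tree already encodes the per-system paths, a package generator would only need to add the metadata on top.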
Now, the real question is, how many of these games would you trust the binaries of?
Daniel
Re:Package Status Manager (Score:2)
Daniel
Re:Debian/Apt problem [Re:I can try...] (Score:2)
One thing that concerns me about it is that while it makes things easy for the hypothetical New User[tm], you can get into serious trouble here with installing a library originally to fulfill a dependency, linking against it (or worse, since -dev packages depend on the library, downloading a binary linked against it) yourself, and then later removing the package depending on the library.
On the other hand, that's a rather esoteric failure case and might not be relevant. And I'm not in charge of libapt development anyway, I just use it
Daniel
PS - it's possible but not really necessary (I think, 5-second conclusion, may be wrong!) to refcount, since apt tracks reverse dependencies as well as dependencies -- just iterate over the newly removed package's reverse depends and see if it still fulfills any other dependencies.
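That reverse-depends walk is cheap. A toy sketch of the check over a flat "pkg dep" table -- a hypothetical format, just to show the logic; apt's real cache is a binary database:

```shell
# still_needed LIB REMOVED_PKG: succeed (exit 0) if any package other
# than the one just removed still depends on LIB, per deps.txt lines
# of the form "pkg dep".
still_needed() {
    lib=$1 removed=$2
    awk -v l="$lib" -v r="$removed" \
        '$2 == l && $1 != r { found = 1 } END { exit !found }' deps.txt
}
```

If the check fails for a library that was only installed to satisfy the removed package, it is a candidate for automatic removal -- which is the failure case the post worries about when you've linked against it yourself.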
Re:Debian/Apt problem [Re:I can try...] (Score:2)
Daniel
Re:.deb, Apt (Score:2)
Daniel
Re:I can try... (Score:2)
-> RedHat can use debs just as well as Debian can use RPMs (that is, not too well)
-> RedHat users claim they have an apt-like program. Not having used it, I can't comment on its utility, but you should be aware it exists. (rpmfind)
-> Config file handling and rebootless upgrades are cool. Oh wait, you said that already
One other note -- if you go to console-apt's bug page [debian.org] you can see why it was removed from potato -- there were evidently some release-critical bugs filed against it (segfaulting) that the maintainer didn't get to in time. Whether they really should be release-critical, and whether they were fixed..I'm not sure; I don't use console-apt, so I can't comment.
Daniel
Re:I can try... (Score:2)
Daniel
Uninstalling created data (Score:2)
--
Waste of time (Score:2)
The problem is *way* overblown (Score:2)
rpms and debs make install/uninstall simple. I mean how hard can rpm -i be? Even way back when I first installed Linux (RH 5.0), I had no problem with that. Uninstall? No problem either: rpm -e. This works just as well as InstallShield, and doesn't waste download time by putting self-extracting code in every package.
Debian does an even better job. "apt-get install foo" will auto-magically *download* and *install* foo for you, as well as any other packages that foo needs in order to work. Give me an equivalent windows command for that. Similarly, "apt-get remove foo" will uninstall it.
So, I just don't see what the problem is.
What I would like to see though, is some kind of consolidation of debs and rpms into a single universal format.
Also, a GUI config tool for packages would be very nice. Newbies can get scared away by Debian's text-mode config scripts. But progress is already being made in this area. The frozen potato (Debian 2.2) already includes a front-end for package configuration.
To sum it up, package systems can certainly use some improvement, but things are nowhere near as bad as the article would seem to imply. I would like to hear other opinions (esp. newbies') on the subject.
___
Re:No. (Score:2)
Personally, I think that encouraging binary packages is a Bad Idea for the Free Software community.
---
...while a great idea for anyone who wants to use Linux/Unix to actually get work done without screwing around with compilers.
Who should be focused on? It's hard to say, although my intuition seems to go with the latter.
- Jeff A. Campbell
- VelociNews (http://www.velocinews.com [velocinews.com])
Re:Why is this a troll? (Score:2)
Looks like we got a clueless moderator here.
---
Same here. Somebody re-moderated it as 'informative'.
I venture that more people use Linux as a tool rather than as an end in itself. I believe more people use it primarily for server usage (with desktop use rising fast) than as a platform with the main intent to code for. Of those that do code, do they make code-level alterations to the majority of items they download? Not likely. Most people have a few set things they like to hack on, and would rather just install everything else so that It Works.
Having the ability to compile stuff - which I agree is a 'big plus' - doesn't mean you should be forced to do so. Other than helping people gain ground on the 31337-ness scale, I just don't see why it's necessary for everything to be obscure and non-intuitive.
- Jeff A. Campbell
- VelociNews (http://www.velocinews.com [velocinews.com])
Re:freebsd's style? (Score:2)
FreeBSD Ports? (Score:2)
Question: why do we need pacage managers at all? (Score:2)
How about instead of fussing over package manager formats, we do instead what has been a tried and tested approach to the whole business: bundle directories. A directory with a
The important point is: Joe Average never needs to know what's inside a bundle. The filesystem GUI treats them like single files. To install a program, double click the tarball and a window opens with a bundle icon. Drag the bundle icon to
Now isn't that a bit nicer?
(hint: GNUstep is already using this, and it should be fairly trivial to configure the misc binary support to run the launch script on execution of an app bundle)
Re:This situation comes up every time I ... (Score:2)
You forgot "and digging out files from the system directory, and figuring out which system-critical DLLs have been written over, and clearing out the buried registry entries..."
Win32 does have an easier install process, but uninstalling is a bitch. I'm loathe to just "try out" some package because who knows what state my system will be in when I uninstall it again...
----
Re:No. (Score:2)
ldd `which mkdir`
libc.so.6 =>
(sorry if that came out really ugly)
Anyway, how am I to recreate
I wouldn't call rpm and deb proprietary formats, either. They're cpio and ar archives respectively.
Also, distributions could not install as quickly if they had to run configure for every package they were to install.
requirement: distributed filesystem friendliness (Score:2)
LANs are getting more and more popular. I have one in my home. They are near ubiquitous in high-tech workplaces. No matter how easy *BSD ports or Debian's apt-get is, there are economy-of-scale benefits to maintaining just ONE application collection, rather than a separate one locally on each machine of a LAN. It's really a separate problem space: packaging systems like DEB and RPM make installing software easier (reducing the difficulty of installation), while distributed filesystems can reduce the number of installations required (reducing the redundancy of installations). Why can't I take advantage of my friend's diligent use of apt-get just one IP address over? Why should I do redundant administration if I don't want to?
The next revolution in Linux software distribution will be distributed-filesystem-friendly software collections; and I don't care if that distributed filesystem is Coda or Intermezzo or AFS or even lowly NFS. I just wish I knew the best place to throw my hat into the ring and work on this right now. This is the one station where Linux software collections have major room to improve.
Re:CPAN! (Score:2)
I think the poster above is talking about the module CPAN, which you can execute with: perl -MCPAN -e shell
This is very much like a package manager and, here I have to agree, very comfortable. Just type, e.g., install foo, and foo is downloaded, compiled if necessary, tested, etc. Dependencies are taken care of too, and the shell is really nice: you can use regexps if you don't know exactly what the package is called, and you can even read the README before you download.
That being said, I think you have a point about all the packages being at a central location. Nevertheless, I think the standardization of the module packages plays an important role too.
Chris
Re:Why I like /usr/ports (Score:2)
One caveat, however. The purpose of ports is to allow painless compilation on FreeBSD. Since every FreeBSD system is like the next, the patches and configurations work without a hitch. But how will a ports system work under Linux when there are so many different distributions?
I can't even get some configure scripts to run to completion on some distros. How in the world will ports work when every distro wants to do their own thing? Will every distro have to maintain their own ports collection?
What we need long before we need ports, or the article's universal package manager, is a standard Linux. When the LSB is done, then we can start getting stuff to work properly.
Re:Why I like /usr/ports (Score:2)
Yes. Just type make uninstall.
"Ports seems like a very ad hoc system, which isn't a great way to ensure system integrity."
From what I can understand listening to people who know, it's much more robust than rpm but not quite as robust as apt. It's more than just a set of makefiles. It keeps track of what's installed, what they're dependent on, etc.
Re:Uninstall! (Score:2)
I am unaware of any OS that deletes user generated files when the application is removed. Think about this and you'll realize what a Bad Thing that would be. Maybe when you uninstall a program you want to get rid of all your work as well, but some people don't.
Re:Uninstall! (Score:2)
They work fine for me... never had a problem with lingering files. Anything related to an app (config files, whatever) is always in ~/.<appname> if it's a UNIX-like conforming app. Therefore, all you have to do if you really want to get rid of something is remove it with your package manager, and then rm -rf ~/.<app>.
Would you prefer that the package manager erased these directories for you? I think not. Sometimes when you uninstall a package you WANT to keep this data (I do almost always). Hmm, perhaps an option to --nuke all associated files for when you want that?
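That "--nuke" option could be as simple as the following sketch. This is a hypothetical helper, not a real dpkg/rpm flag, and it assumes the ~/.<appname> convention described above:

```shell
# nuke APP: after the package manager has removed APP, optionally
# clear its per-user dotdir state too.
nuke() {
    app=$1
    [ -n "$app" ] || return 1    # refuse an empty name: rm -rf "$HOME/." would be bad
    rm -rf "$HOME/.$app"
}
```

Keeping it a separate, explicitly-invoked step preserves the default the post argues for: uninstalling a package never silently destroys user data.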
Damn, I should have used Debian as an example (Score:2)
Yeah, just pull it all out of context, windows is easier for most people by a long shot.
I'm not pulling it out of context. You're missing the point by focusing on my example.
Under Windows, there is no structured way to install, uninstall, manage dependencies, find out which programs own which files, or which programs need which files.
Your given example of a "Windows" install is totally bogus. For one, you totally ignored the issue of 15 different ways to distribute archives. For another, every install program is just a little bit different. Going with the defaults rarely works, or if it does, yields a system which is totally unmaintainable. Uninstalling things is a nightmare, and DLL versioning is, as is so often stated, a living hell.
I know you post to Slashdot just to have fun as a Microsoft-loving troll, but come on! You can do better than that, TummyX!
Read RPM documentation to figure out how to use RPM.
Bah. First of all, if the user is interested in RTFMing, they are going to have to do it anyway, regardless of platform. Second of all, if you're using GNOME or KDE, you can just double-click on the package file, and it will offer to install itself. Furthermore, there is no question as to what kind of installer it will be.
Get obscure errors about dependencies you need.
I knew I should have used Debian as an example. Okay, replace all instances of "rpm" with "apt-get", and your entire argument just evaporated. apt-get will automatically resolve all dependency issues for you, including downloading the needed packages from trusted sources.
Goto redhat.com to try to find the other RPM you need.
You forgot, "Beat your head against the wall, simply because you're a Linux user, and Linux requires you to do that." Give me a break, TummyX. Just use rpmfind and it is totally automatic.
Manually make your KDE links to the files.
So the packager didn't do their job. Nothing on Windows makes an installer put links on the "Start" menu.
execute the application only to find that it depends on some other application to get XXX feature enabled.
Right, and of course, that doesn't ever happen on Windows or anything like that.
Sometimes you actually give some good insight into the limitations of Linux, TummyX, but lately, you just seem to be generating noise. If you're going to troll, at least do it right!
More package management vs none-at-all debate (Score:2)
su to root cos RPM's db is locked
Okay, I forgot that, but: Good! This helps keep the virus problem to a minimum! Besides, the more recent versions of GNOME and KDE take care of this nicely, by prompting you for your password.
read all your failed dependencies
Better yet, use Debian's apt-get tool, which automatically solves dependency problems for you.
relocate RPM cos of distributors brain dead defaults
While I agree that some RPM's pick rather dumb locations for things, how is relocating them any different from changing the default location in an autoconf-based install?
7. force install / no deps install
8. Pray it starts
What about, "Beat yourself with a hammer, and wonder why it hurts?" RPM is telling you that you don't meet dependencies for a reason. Don't be surprised if you ignore what it says and then things don't work.
The only complete and easy packaging system is an absence of packaging system,
That doesn't manage dependencies for you.
RPM is so much fun when you are not using the exact same Linux version as the packager.
While RPM has its faults, I haven't found that to be one of them.
Subtle subject change (Score:2)
I don't. However, you are a Microsoft-loving troll. That is, a troll whose preferred method of trolling is to advocate heavily in favor of Microsoft, especially in Linux discussions where it is off-topic and guaranteed to raise flamage.
Like I said, sometimes you actually raise some valid points, but it gets old after awhile, and this was just pretty weak.
If it makes you feel any better, you're one of the best trolls on Slashdot. You always keep just close enough to the truth that you don't get moderated down or ignored. You even have an account with good karma, a technique well beyond the skills of your average AC.
So by my example I was showing how your example meant very little.
Ah, so we're no longer trying to argue that Windows does package management better, eh? I gotta hand it to you, TummyX, you know what you're doing. Looks like you're going to lose the debate? Answer a different question! A move from the Bill Gates playbook itself.
I was parodying your example for fun.
At least you admit it. I appreciate that.
A word on RPM (Score:2)
I have to admit, Debian's package system is the big thing that is drawing me towards trying out Debian. (Mainly, what I'm waiting for at this point is for "Potato" to become "officially" stable.) More automatic, more features, and a better organized package archive. Gotta love it.
However, as a current Red Hat user, I figure I might as well put in a word for RPM. It manages dependencies, source, installs, and so on and so forth very well. The main thing it lacks is Debian's automatic package retrieval for dependency satisfaction (again, an awesome feature). But, if you are using Red Hat, be aware of the "rpmfind" command. The command "rpmfind foo" will search the net for package "foo" and offer to download it for you. Not Debian, but a heck of a lot better than a regular net search, for sure. :-)
Just an FYI for RPM users.
More defense for RPM (Score:2)
Except for Debian's extreme obsolescence and bias towards free software.
Debian is actually very up-to-date. They don't follow the Red Hat model of "a stable release every six months"; they use a more dynamic system where all packages are always being updated.
And while they do favor GPL-style Open Source Software, they by no means exclude other OSS software. It just comes from a different branch of their tree.
How about RPM names its dependencies differently across Linux distros? I have libx installed, but the package names differ...
How about the fact that RPM doesn't depend on packages at all, it depends on files? Do you have a legit gripe here, or did you just have a bad experience with RPM as a child and you're not willing to see reason anymore?
Some RPM's cannot be relocated.
And some source code contains hard coded paths all over the place. A bad package is a bad package no matter how you package it.
Some RPMS from SuSE fail on Redhat, likewise Caldera, likewise TurboLinux etc.
Funny, I don't have that problem. Are you using Red Hat 3.0 or something?
What's up with you? I mean, I know RPM isn't a perfect piece of software, but you seem determined to not like it.
Packages are packages (Score:2)
Package formats such as deb and rpm are proprietary, not in storage format (RPMs use cpio or something), but by composition and requirement. They are composed in a format that is exclusive to their own system of doing things (having specific files in the archive with meta-data about the package).
Could you please explain to me how else you are supposed to figure out this information? Any package is going to have to include meta-data about the package (or be damn hard to use otherwise). It may be in English in an INSTALL file, but it is there. And computers are notoriously bad at reading English. Red Hat uses .spec files and Debian uses control files, both ASCII text, human-readable, and well-documented. I don't see how it can get any better than that.
They require their databases...
Again, of course they do. The whole point of a package manager is to keep track of what belongs to what, and so on. Whether you keep that inThey also require someone specifically construct them.
I wasn't aware that .tar.gz archives built themselves magically. :-)
try extracting a deb or rpm without the proper tools...
Try extracting foo.tar.gz without tar or gzip. What are you going to do, decode the binary by hand? :-)
My point is, there is nothing magical about .tar.gz files vs .rpm or .deb files. They are all packages. They all require tools to use them, and they all contain data not easily readable by humans. The only difference is, the newer package formats are easier for computers to work with.
RPM knows dependencies, it just doesn't solve them (Score:2)
I've never used Red Hat, just Debian. Can someone please tell me why anyone should bother with a package manager that doesn't handle dependencies?
RPM does understand and manage dependencies. I suspect the original poster was referring to the fact that Debian's "advanced package tool" will solve dependencies for you. When installing, RPM checks for dependencies, and if anything fails, it complains and aborts. apt can actually seek out and install other packages to solve dependencies. This is a very nice bonus for Debian users, and something I (as a Red Hat user) wish I had.
Some minor comments on RPM (Score:2)
While I generally agree whole-heartedly with what you wrote, I do have a couple minor things about RPM to post in the interest of being as helpful as possible to any RPM users in the readership. I generally agree that Debian's package system is overall superior to RPM, and I wish Red Hat would fix it.
RedHat packages depend on files. Debian packages depend on other packages. The advantage of this for RPM is that you can install packages, if you've compiled the libs yourself...
Additionally, this means that RPMs don't depend on specific implementations of a generic service. In other words, a properly done RPM will depend not on sendmail, but on smtpdaemon. Can Debian do this?
Upgrading the system: With RedHat (maybe *RPM?), you reboot the system with the CD/disk of the new OS version, and use the "upgrade" option.
You can do it this way, but I generally find it easier to simply mount the CD and do a "rpm --freshen -vh /mnt/cdrom/RedHat/RPMS/*.rpm". The --freshen switch tells RPM to upgrade all the specified packages, but only if they are already installed.
Just FYI.
heh.... (Score:2)
heh.
"Software is like sex- the best is for free"
-Linus Torvalds
Package Status Manager (Score:2)
I would like to see a package-checking program. Something that will check the packages installed and verify all required files are indeed installed, and maybe even whether they are corrupted. Then, this program would either reinstall the complete packages or at least the affected files.
Any Ideas/Suggestions?
Quack
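For what it's worth, rpm already ships the first half of this: `rpm -Va` verifies every installed package against its database (sizes, permissions, checksums), and `rpm -V foo` checks a single package. The core of such a checker, sketched here over a plain md5sum-format manifest (a hypothetical layout, not rpm's real database), is tiny:

```shell
# verify_pkg MANIFEST: report whether every file listed in an
# md5sum-format manifest still matches its recorded checksum.
verify_pkg() {
    if md5sum --status --check "$1"; then   # --status: exit code only, no output
        echo "package OK"
    else
        echo "package damaged"   # a real tool would now reinstall the bad files
    fi
}
```

The "then reinstall the affected files" half is the part no stock tool fully automates; the manifest check above is where it would start.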
Re:Rpm works fine for me (Score:2)
In addition, yes, I'm doing an end run around RPM and yes, it's the wrong thing to do. By stating that, I was merely pointing out (without saying it, as I'm so often prone to do) that in order for rpm to be more universally accepted it has to be better supported by the distros. LinuxPPC Dev Release 1.1 came out...last week I think, and its binutils is at 19, vs. the 27 I last checked for.
That's all I'm really saying.
Why not standardize on RPM? (Score:2)
The number of package management systems is very large, and it is neither possible nor desirable to standardize on a single one.
Why is it neither possible nor desirable to standardize on a single package management system? I have been extremely happy with RPM as a basis for package management. It's vendor-neutral, architecture-neutral, compresses nicely, provides for both source and binary package types, and provides for building from pristine sources. What could possibly be wrong with that?
I get the feeling that what he's shooting for here is a way to create a single specification file to be used with a tool to create binary packages for all architectures, and all package managers. In this way I could theoretically build a Linux RPM, a Linux DEB, a Solaris pkg, and a FreeBSD whatever-the-hell-package-format-they-use-when-no
My point of view is, "why bother?" It seems to me that implementing RPM (or a similar format, perhaps with extensions to handle dependencies like DEB does) is the logical way to go here. One spec file can already create packages for multiple targets.
As an aside, I believe this paper is a perfect demonstration of how, as a community, we seem to suffer from multiple-personality syndrome when it comes to our software and tools. Do we let the various options duke it out in the "marketplace", or do we standardize for interoperability and easy configuration management? Both have their merits, but I chose RPM at my workplace because I think at this point it's the "best of breed" when it comes to package management and software distribution, and if I had to choose one package management system for every OS, RPM would be it.
Re:requirement: distributed filesystem friendlines (Score:2)
Well, right now you can do this:
cd /var/cache/apt/archives
scp friend:/var/cache/apt/archives/* .
Later, you might wish to have both apt sessions run through an http proxy server (such as squid). For example:
export http_proxy=http://friend:3128/
As for the installation questions, non-interactive debconf backends are being worked on, but even that won't be a timesaver for 2 machines. Just answer the questions :)
Re:Some minor comments on RPM (Score:2)
Debian packages are supposed to depend on a specific packagename *or* the virtual package, for example:
Depends: libc6, exim | mail-transport-agent
If you didn't have a mail-transport-agent installed previously, it will install exim for you.
The authoritative virtual packagename list is here [debian.org]; it's updated from time to time.
What about Mac OS X installers? (Score:2)
Re:autoconf, or what it could have been (Score:2)
Re:Solving the wrong problem... (Score:2)
a) Yes, in the beginning you would solve the same problem -- figure out where everything should go. But once you put the 'meta-structure' into place, you won't have to do it ever again. A RedHat distribution's meta-structure would hold the same data as a Debian's, or a home-brew's. On any CPU.
b) OK; I may have not been totally clear; no application should hold information about others. It should only contain information about itself. So, if it does make the wrong assumptions, it will only break itself. For example, say you're installing an Apache module. The module installation will go to the central depository (I favor
But that's just replicating existing functionality... Think of how easy it would be to build a universal GUI for *anything* on top of
The Windows way is flawed: the Registry is not human-editable (at least not easily) or intuitive. The DLLs have to be centralized and get all mangled up. Unix can leapfrog Windows now: XML config files, and, well, symlinks
What I am proposing is a redesign (which I know will be a pain in the ass during the transition period). What we have *now* is an ad hoc system --which doesn't work.
engineers never lie; we just approximate the truth.
Solving the wrong problem... (Score:2)
When you have something that flexible, you need to account for all the different configurations and setups people can and will make to the system. That's what autoconf does for builds and the package managers are trying to do for installs. But that's solving the wrong problem: you're effectively solving a design issue with workarounds, with duct tape and paperclips.
What needs to be done is for Unix/Linux to apply what years of experience have taught 'hardware' engineers: when you have flexible configurations, you need a configuration management system. The RPM database is not enough.
What we need is a registry-like, centralized repository of information about the system, in a standardized language that: a) can *very* easily be read by software (a la Registry), and b) can as a last resort be edited by humans with minimum tools (a la Unix
Imagine you're working on a system that doesn't have an
Thus, a new package can find out where it can install itself and how to link to everything it needs, without messing with system-level software. Not only that, but since the meta-information for everything is going to be sitting right there, the software can not only resolve dependencies but also suggest configuration changes in its dependencies! And since all of that will be in a parsable structure, you would be able to go on the Net and find the answer to your exact problem.
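To make the idea concrete, an entry in such a repository might look something like this (the element names here are invented purely for illustration; this is not any existing standard):

```xml
<!-- hypothetical entry in the central system repository -->
<package name="apache" version="1.3.12">
  <installed-prefix>/usr/local/apache</installed-prefix>
  <provides>httpd</provides>
  <config-file>/etc/apache/httpd.conf</config-file>
  <module-dir>/usr/local/apache/libexec</module-dir>
</package>
```

A new Apache module could then parse this single entry to learn where to install itself and where the config file it needs to touch lives, without guessing at paths.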
Just dreamin...
engineers never lie; we just approximate the truth.
rpm/dpkg does more than just handling dependencies (Score:2)
Besides, there are more incompatibilities between different distros than just package formats. Configuration files often need to be kept in different places, particularly init scripts. The Linux Standard Base may help in future, but for now the differences are there.
I'm not saying that GNU Stow couldn't be part of a Grand Unified Solution, just that there's more to modern package management than archive formats.
Re:Which manager is the best? (Score:2)
.deb,
.tgz : package doesn't know anything.
.tgz archives are often installed just by untarring them. This can make it a nightmare to de-install something. (You can't just remove every file which gets installed. If you untar A.tgz and B.tgz, both containing
However, this is not a deficiency of the *package*, just the install method. If you install a tgz archive using dpkg (via alien) then you don't have this problem.
If two packages both contain files called
Uninstalling and removing config files in Debian (Score:2)
dpkg --purge foo
This will remove all files associated with foo, including config files. It won't remove data files you've created; this is good. (Imagine if uninstalling Word removed all your Word documents!)
WARNING TO NEWBIES: You probably DON'T want to do this. Instead do "dpkg -r foo".
Re:Uninstall! (Score:2)
Newbies and Debian's package management (Score:2)
There are a few issues with this list. It is very long (thousands of packages) and has lots of stuff like libtiff which would only be installed because another package needed it. A newbie doesn't need to see these libraries, just the "actual programs". But all in all, it's easier to install a Debian package than a Windows program that uses InstallShield (as other people have pointed out).
A difference between distros and Windows (Score:2)
[...]
> For linux software we already get to choose from half a dozen different packages
This is a non-issue for the end user, because nearly all popular [freely-distributable] software for linux is available on your distro's CDs / ftp site. The user doesn't need to worry about the format, because the distro handles it cleanly.
Of course, things aren't that simple if you want software which isn't freely-redistributable. But AFAICS there's no way to clear this up without abandoning shared files altogether, or risking the kind of corrupted mess which is possible with Windows packages.
RPM handles dependencies (Score:2)
But a package manager that didn't handle dependencies would still be useful, to do clean uninstalls.
I agree 100% (Score:2)
Makeself?? (Score:2)
Get it here http://www.lokigames.com/~megastep/makeself/ [lokigames.com]
Dom
I've talked about this before. (Score:2)
The principle was that there is a root-run daemon that monitors the process table and measures how often certain programs are run. This would let a person pick little-used programs to be removed.
Another part of the add/remove "front end" for the control panel would be installation. I talked with the author of gxTar (see freshmeat for it) about the install principle. It would involve untarring to a temp dir and analyzing the output of configure --help. Then the user can use "default safe" values, or change them via a wizard or dialog. For rpms, slackware tarballs, and debs, you could just use the preexisting methods for checking files. For GNU autoconf sources, you could use something like the locatedb functionality to monitor what was added to the filesystem.
This allows a nice centralised install and remove functionality, regardless of package format, and can be extended to handle more and more package formats. It also allows you to remove what you don't use. So if you go window shopping on freshmeat and install hundreds of applets, you can prune away what you don't use after a few weeks.
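The configure-analysis step could start out as nothing fancier than grepping the help text for option names. A rough sketch, assuming GNU grep (the help text below is fabricated, since no real configure script is assumed to be present):

```shell
#!/bin/sh
# Sketch: pull the option names out of a configure --help dump.
# A GUI front end could turn each one into a checkbox or text field.
cat > /tmp/fake-configure-help <<'EOF'
Usage: configure [options]
  --prefix=PREFIX         install files in PREFIX [/usr/local]
  --enable-gtk            build the GTK front end
  --with-libpcap=DIR      use libpcap found in DIR
EOF

# extract every --option token, deduplicated
grep -o '\-\-[a-z-]*' /tmp/fake-configure-help | sort -u
```

Running it prints the three option names, one per line, ready to be presented in a wizard or dialog.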
Well, just some ideas of mine
---
Excellent idea! I have two points. (Score:2)
Second, there are a number of good tools that already solve parts of the problem available in source. Anyone with an interest in solving this can go to it. It sounded to me like a proposal to start developing a new tool. I look forward to seeing the prototype.
Re:Why not standardize on RPM? (Score:2)
Re:Damn, I should have used Debian as an example (Score:2)
The point is both linux and windows have problems, and neither is perfect. So by my example I was showing how your example meant very little.
And why do you have to associate "microsoft-loving" with "troll" all the time?
This is an 'open geek' forum, and you can be a geek and like (or not hate) microsoft at the same time. Admittedly, it is heavily biased toward linux, but microsoft stories always get the most postings
Re:Subtle subject change (Score:2)
No! I really was just being facetious. Debian indeed does have a kick ass package managing system. I don't even think windows has what you call "package management". It comes with some install APIs, and has cool stuff in other areas (windows 2000's self healing applications for example).
However - I do think that it's generally easier to set up programs on windows than on linux, though.
This already exists: OSD (Score:3)
It's designed to be vendor-neutral, and it's been written by firms that know a lot about installing software (Marimba and Tivoli in particular).
The other nice thing is because it uses XML it's completely extensible.
Of course, the big problem is getting everyone to support it!
Well and good for the C-literate, but... (Score:3)
Unfortunately, that isn't all that suitable for "naive lusers" who will react to this with a big Huh?!?
Rather than GNU Stow, I'd think the direction of BSD Ports [freebsd.org] would be suitable; that provides the merit of automating the process of setting up configuration info for lots of packages that hasn't yet been done with Stow. You may want to believe that
I remain quite skeptical, as it has taken years for distributions like Red Hat, Slackware, and Debian to become richly functional. Note that Ports, like Stow, uses nothing that anybody gets tangled into thinking is somehow "proprietary." (Not that RPM or DPKG actually use anything proprietary; it's mostly Slackware bigots, with emphasis on bigot, not on Slackware, who claim, dishonestly, that RPM/DPKG are somehow proprietary formats...)
But that misses the point.
Your proposal may be suitable for you and me, albeit marginally so, as I'd much rather have the administration of package installation for the 99% of packages where "default is fine" be dealt with by someone else; it is NOT, by any stretch of the imagination, suitable for making Linux deployable outside of highly UNIX-literate environments.
Apt is *not* a package format. (Score:3)
I don't see any realistic way around the consideration that Systems Integration Is Messy.
Whether we talk about DPKG, RPM, or BSD Ports, it's a given that the process of getting packages integrated into a particular distribution is a somewhat messy process. In all cases, there is some form of patch that gets applied to indicate precisely how they are to get installed.
It is getting increasingly common for Debian packagers (e.g., the human being that builds the patches required to integrate the package with Debian) to have some degree of involvement with the "upstream" production of the original, authoritative source code tree.
When this happens, it is not unusual for there to be a ./debian subdirectory containing the "Debian-relevant" patches, and I've also seen ./SPECS directories with RPM .spec files. In cooperative development efforts, this is the point at which important cooperation takes place, as this means that there is some thought to systems integration in the original source code tree, which will make the job easier for everyone else.
It's not likely that the level of effort will actually diminish to zero, but if it becomes largely automated, and the human effort can be widely distributed, that makes the task not too herculean.
Re:Rpm works fine for me (Score:3)
It's the difference between a 10-speed and a Harley. Particularly the conflict management: you install package A; when you select it, the tool detects problems with packages A, B, and C, which would also need to be upgraded due to conflicts, and gives you the ability to update them as well. And the package manager handles updates too, in a way that puts RedHat's up2date and gnorpm's web search to shame..
Re:Well and good for the C-literate, but... (Score:3)
bluegreen:/var/cache/apt/archives> ar t apt_0.3.18_i386.deb
debian-binary
control.tar.gz
data.tar.gz
bluegreen:/var/cache/apt/archives> ar p apt_0.3.18_i386.deb control.tar.gz | tar ztv
drwxr-xr-x root/root 0 2000-02-13 05:01:14
-rwxr-xr-x root/root 1361 2000-02-13 05:01:03
-rwxr-xr-x root/root 184 2000-02-13 05:01:03
-rwxr-xr-x root/root 534 2000-02-13 05:01:03
-rw-r--r-- root/root 29 2000-02-13 05:01:14
-rw-r--r-- root/root 757 2000-02-13 05:01:14
-rw-r--r-- root/root 2707 2000-02-13 05:01:14
bluegreen:/var/cache/apt/archives> ar p apt_0.3.18_i386.deb data.tar.gz | tar ztv
drwxr-xr-x root/root 0 2000-02-13 05:01:03
drwxr-xr-x root/root 0 2000-02-13 05:00:59
drwxr-xr-x root/root 0 2000-02-13 05:01:02
-rwxr-xr-x root/root 50776 2000-02-13 05:01:02
-rwxr-xr-x root/root 157576 2000-02-13 05:01:02
-rwxr-xr-x root/root 11148 2000-02-13 05:01:02
-rwxr-xr-x root/root 129960 2000-02-13 05:01:02
drwxr-xr-x root/root 0 2000-02-13 05:01:02
drwxr-xr-x root/root 0 2000-02-13 05:00:58
drwxr-xr-x root/root 0 2000-02-13 05:01:02
-rwxr-xr-x root/root 30288 2000-02-13 05:01:02
-rwxr-xr-x root/root 17804 2000-02-13 05:01:02
-rwxr-xr-x root/root 17108 2000-02-13 05:01:02
-rwxr-xr-x root/root 65508 2000-02-13 05:01:02
-rwxr-xr-x root/root 18652 2000-02-13 05:01:02
-rwxr-xr-x root/root 64632 2000-02-13 05:01:02
drwxr-xr-x root/root 0 2000-02-13 05:00:58
drwxr-xr-x root/root 0 2000-02-13 05:00:58
drwxr-xr-x root/root 0 2000-02-13 05:00:59
.
.
.
(etc)
Daniel
Re:I can try... (Score:3)
Thanks,
Daniel
Re:autoconf, or what it could have been (Score:3)
Well, part of the problem is the lack of consistency between "UNIX-compatible" platforms. In particular, take a look at Motif; "Where's Motif" is a game somewhat like "Where's Waldo", except that it's not actually fun - OK, is it in /usr/dt, or /usr, or /usr/X11, or /usr/X11R<N>, or in some random location for third-party packages (although the "I installed package XXX in some random place" problem is generally handled in autoconf with a --with-XXX=YYY option)?
Note the quote at the beginning of the autoconf Info file:
autoconf is trying to cope with the chaos.
"Is this Red Hat 6.1 or later" isn't a capability; presumably the package cares because RH 6.1 or later behave differently from some other systems - but the package presumably cares about some particular difference, and that'd be the capability you'd want to check.
The API would, of course, have to be independent of the questions you ask it, so that arbitrary questions can be answered, perhaps with "I don't know" as an answer; the set of questions a package might need to ask about the system on which it's being built/installed is open-ended, so you can't just come up with a fixed set of questions that suffice for all packages.
Given that, either it would have to be able to somehow deduce the answers to those questions without the cooperation of the system - which means, in effect, reimplementing autoconf - or it'd have to assume that the OS and the third-party packages installed atop it would supply those answers, which would require that the OS know about this mechanism and come with answers and that third-party packages know about this mechanism and use some other API to supply answers.
(This would also require that programmers using third-party package X for their software be able to find all the questions to which third-party package X supplies answers - and hope that they don't need to ask a question about that package to which the third party in question failed to supply an answer.)
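Such an open-ended query API could stay dead simple; here's a flat-file sketch in shell (the database file, its location, and the keys in it are all invented for illustration):

```shell
#!/bin/sh
# Hypothetical capability database: the OS and third-party packages
# each append "key value" lines as they are installed.
cat > /tmp/capdb <<'EOF'
has_glibc2 yes
motif_prefix /usr/X11R6
EOF

# query: print the registered value, or "unknown" if nobody answered
query() {
    awk -v k="$1" '$1 == k { print $2; found=1 }
                   END { if (!found) print "unknown" }' /tmp/capdb
}

query motif_prefix   # prints /usr/X11R6
query has_threads    # prints unknown
```

The "I don't know" case falls out naturally: a question no package has registered an answer for simply returns unknown, and the caller decides what to do about it.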
Perhaps something along those lines (although not necessarily using an API of that sort) will come out of the Software Carpentry competition [codesourcery.com]. (And, if so, it'll use Python, as per the rules of the competition.)
Unfortunately, many projects aren't necessarily "average". Ethereal, for example, doesn't "require" libpcap; it just disables packet capture if it's not present (and it has to go through some pain to try to find it). It doesn't "require" UCD or CMU SNMP; it just disables fancy dissection of SNMP packets if it finds neither, and it attempts to work with either of them. And it doesn't "require" libz; it just disables reading compressed capture files if it doesn't find it. Even then, it requires not just libz, but a version of libz with particular functions in it, in order to support reading compressed capture files (as it doesn't just read the capture file sequentially).
This is not about the "State of Package Managers" (Score:3)
Bruce
Advantage: Windows (Score:3)
Much as you may not want to admit it, this is one area where Windows products literally kick the crud out of the various free os's (osii?)
Not that there aren't any number of post-installation problems that can cause nightmares for Windows users; but generally, the installation of new software tends to go extremely smoothly. This really doesn't have as much to do with MS as it does with InstallShield being the default end-all-be-all of installer builders for WinTel software, though some of the installer support included in W2K looks exceptionally neat, and a year or two ahead of what's available on Linux.
Your average user, when faced with RPM, DEB, tarballs, and the like, will look at you and wonder what kind of crack you were smoking to come up with all these different ways to do the same thing, when all they want to do is just get something on their machine so they can do X...
.deb, Apt (Score:3)
# apt-get install foo
Want to remove some software?
# apt-get remove foo
Want to hack the source to something?
$ apt-get source foo
Want to compile your own debian package from source you've just downloaded and/or tweaked?
$ debuild
And given the large number of packages available, I don't even bother checking whether the package I want exists first, 80% of the time it does.
Advantage (?) Windows (Score:3)
Not that there aren't any number of post-installation problems that can cause nightmares for Windows users; but generally, the installation of new software tends to go extremely smoothly.
Not in my experience.
There is a key difference between perceived ease-of-use and actual ease-of-use. Just because the installer has a pretty GUI with lots of colorful icons and progress bars doesn't mean it is actually any better. Give me RPM any day.
Us "experts" like package managers, too (Score:3)
However, there is a point where the newbies must learn how to do stuff as well, and RPM type things really don't teach much except rpm -Uvh and rpm -e :)
While I agree, as someone who knows a lot more than how to type those commands: anything that makes my life as a system administrator easier is a Very Good Thing. If I can install a package in a single RPM command (as opposed to reading the INSTALL file, diddling with configure options, and doing three different make commands), I'll gladly take it.
Uninstall! (Score:3)
Currently, uninstall options aren't all that promising. If you installed with 'make install', then good luck. If you still have the source around, maybe you can read the makefile and find out what went where. If you installed with RPM or the Debian package manager, you still have application-created data lying around.
I think most people have had the experience of doing an 'ls -a' in their ~ for the first time in a while and finding megs of old config data. When I uninstall enlightenment, I want it to take all seven megs of its config info with it. Same goes for gimp or KDE.
autoconf, or what it could have been (Score:3)
Now, I long for what might have been if metaconfig had taken off. autoconf just isn't what it was cracked up to be. There are an awful lot of one-off hacks that have no real internal consistency. I once made the mistake of asking someone how to locate the Motif libraries in autoconf. I got several answers, from "it should be where X is" to "you'll have to write your own command-line arguments; try doing something like what EMACS does". Granted, Motif is not at the heart of free software coding, but it seemed odd that a) such a popular library was not easy to locate and b) there was no standard way to say "search in these directories, or as directed by these environment variables/command-line args, for this library containing these headers and these functions". Many pieces of this exist, but none of it is coherent or complete.
I'd love to see two things:
If someone were to ask my opinion, it should probably be based on one of the popular scripting languages (e.g. Perl, Python, Scheme, etc).
Realistically, your average project should not have to look like more than:
buildmode: gnome_coding_standard
require: c(ansi,longlong), gtk
build_lib: fizzle(fizzle.c)
build: fazzle(fizzle, fazzle.c)
That would indeed be sweet.
InstallShield type X utility is needed. (Score:3)
The average computer user simply can't handle the command line, let alone compiling things or even extracting files from a tarball. If we want a Mainstream Linux Desktop, we'll need this type of install utility.
"You ever have that feeling where you're not sure if you're dreaming or awake?"
PkgMaker (Score:3)
PkgMaker [slc.ut.us] is a tool I've written that can build packages for Solaris, HP-UX, binary tars, and RedHat RPMs. It uses a very simple model and can be easily extended for other package managers.
In writing PkgMaker I came to the same basic conclusions as Jeff did: adding a small amount of packaging information to a project's source would go a long way towards making packages easier.
CPAN! (Score:3)
freebsd's style? (Score:3)
To the best of my knowledge, no linux distro has an entire source tree structure such that you can 'gen a system' entirely from source - and painlessly, too. I think linux could benefit from freebsd in some ways.
I like having the ability to get just the binaries (pkg) as well as having the binaries be gen'd from source ON MY SYSTEM. no possibility of version skewing here!
so since linux can't decide on a common pkg scheme, why not take a slightly more neutral approach and just adopt the freebsd pkg/ports system?
--
Re:No. (Score:4)
Rpm works fine for me (Score:4)
Just a few comments (also, rpm -qpl should print a header, so I can do rpm -qpl * instead of: for x in *.rpm; do rpm -qpl "$x" > "$x.lst"; done)
Jezzie
This situation comes up every time I ... (Score:4)
Now you and I may be happy with a uuencoded shell script, or wading through the 31 flavors of rpms on rpmfind.net, but coming from Windows it looks very alien. There is an undeniable niceness to grabbing a zipfile, unloading it into a temp directory, running the program for a while, then deciding whether to keep it or to delete the directory.
No dependency-foo, no Gnu-make-foo, no glibc-foo. Just unpack it and go. No silly compile from scratch and hope you have the right kernel, libraries, compiler and support packages.
RPMs, DEBs, source distribution with autoconf all give the user a LOT of power and niceties. But it is still an order of magnitude more complex than InstallShield looks to the average user under Windows.
Just some thought for food,
No. (Score:5)
What needs to be done is much simpler: the currently popular packaging systems need to be dumped in favor of GNU Stow [gnu.org]. Then we don't need to change automake and autoconf at all, because they work as-is.
Dependencies could be added to Stow by someone without a lot of trouble.
For those who don't want to download and install it to figure out what it does (although you should! It makes life very easy if you do any source installs): GNU Stow takes "packages" that have been installed in the standard manner (things placed properly in bin, lib, man, etc.) in their own directories (such as /usr/local/stow/) and makes links to the parent directory's bin, lib, etc. You can tell with a simple ls -l what package a file belongs to. Since the links in the directories aren't the "real" files, you can delete and restore them with minimal trouble (I challenge someone with a conventional system to rm -rf /lib and restore it, without rebooting). You can even maintain multiple simultaneous versions of packages. Autoconf already makes this easy to use; simply supply the --prefix= parameter to your configure scripts.
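The mechanics really are just symlinks. Here is a hand-rolled imitation of what stow does, using plain ln -s (the directory names are invented, and everything lives under /tmp so no real system directories are touched):

```shell
#!/bin/sh
# Imitate stow by hand: package files live in their own tree,
# and the shared bin/ directory holds only symlinks into it.
mkdir -p /tmp/stowdemo/stow/foo-1.0/bin /tmp/stowdemo/bin
printf '#!/bin/sh\necho foo\n' > /tmp/stowdemo/stow/foo-1.0/bin/foo
chmod +x /tmp/stowdemo/stow/foo-1.0/bin/foo

# "stow foo-1.0": link the package's files into the shared tree
ln -sf /tmp/stowdemo/stow/foo-1.0/bin/foo /tmp/stowdemo/bin/foo

# ls -l shows at a glance which package owns the file
ls -l /tmp/stowdemo/bin/foo

# "unstowing" would be nothing more than: rm /tmp/stowdemo/bin/foo
```

With real stow, linking and unlinking a whole package tree is one command each; this just shows why uninstalls and multiple versions come for free.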
No silly proprietary formats, nor waiting for someone to come out with the latest package in your favorite format, no trying to convert foreign packages to your system. Everything you can find in a tarball is now pre-packaged and waiting for you to install...
Why I like /usr/ports (Score:5)
Brief intro for those unfamiliar with *BSD: To install "gimp" on FreeBSD you do this: "cd
The FreshMeat editorial makes it sound like this is a brand new cool idea--it's not, all of the *BSD's have worked this way for years. I really like it.
I would love to see Linux support something like this. The closest is Debian's apt, which has a mode for fetching and installing from source, but it's not as simple and direct as this
Some comments on this way of doing things:
-- I *love* being able to browse through the filesystem to find out what packages I could possibly install. It's a very natural thing to do: if I want to browse graphically, I do so via netscape or some file manager. Mostly, being a geek, I use "ls", "cd", and "locate" to find out what packages I might want to install.
-- It's less to learn. If you are already going to have to learn how to do "make install" in order to get packages installed outside of your package management system (you just HAVE to have the version released yesterday) then you have already learned what you need to know to install any other package.
-- It does support a binary package system. Binary packages amount to doing the compile stage on someone else's server; the whole install process goes exactly the same way, except that rather than compiling the binaries, you fetch them.
-- It brings everyone closer to the source tree. It's natural to grow up from being a novice user, to being a bit of a code hacker. There the code is, in front of your face, begging you to look at it--many people say this will scare people off, but nothing *requires* you to look at the code; and it's incredibly tempting for the curious. I think this leads to more developers, and is the main reason why *BSD has been able to keep pace with Linux despite having many fewer users.
-- The filesystem is a concrete, easy to understand organization for the packages. I can visualize where things are and how they relate to one another. With other package managers, like RPM or DEB, the dependencies seem complicated and abstract. When there is a failure, I haven't got a clue what to do (well, I do now, but I didn't used to). At least with compiling, when there is a failure I can kind of see that it is a file in this package that lives over here that is causing my problem. I may not know what to do, but I know where the problem "lives". This makes me a little more motivated to try and fix it, possibly by trying to install that other package some different way or something. In theory deb is the same, but it just doesn't *feel* the same.
In my opinion, the only package management approaches anyone should seriously consider are the Debian approach (apt/dpkg) and the *BSD approach (ports, plus the package management tools that back it up). Both allow all kinds of fun stuff like upgrading your system live without rebooting and synchronizing on a daily basis with the most current version, and both have intricate and strong concepts of dependencies between packages.
In theory, they are functionally equivalent--or close enough--but I prefer the filesystem based implementation that has source code at its heart. It not only seems more Unix-like to me, it seems more open.
The big counter-argument to all of this is that source is scary to average users, many of whom don't understand the filesystem at all. I figure this is no argument at all, because you can bury the compilation under a pretty GUI just as easily as any other dependency system. And if your user can't figure out a filesystem, they won't be installing stuff using *any* package manager: it'll be pre-installed, or nothing, for them.
Just my $0.02