Unifying Linux Package Management

Job Diogenes Ribeiro Borges writes "The Smart Package Manager is an intelligent tool that tackles the 'dependency hell' of software upgrading and installation on Linux. It works with the packaging technologies of all major distributions (APT, APT-RPM, YUM, URPMI, etc.), supporting multiple sources and formats concurrently. Yes, you can install from multiple sources, from deb, rpm, and tgz, at the same time! Smart Package Manager is being developed by Conectiva and is the tool that provides the cross-platform package management magic behind the recently announced 'Four Linux Vendors Agree On An LSB Implementation.' You can get screenshots here (Portuguese text) and a README here."
  • Oh, dandy (Score:5, Funny)

    by krog ( 25663 ) on Tuesday November 23, 2004 @03:18PM (#10901512) Homepage
    Combining the weaknesses of five different package managers will surely alleviate "dependency hell."

    I'll be over here, playing nethack on my NetBSD box and giggling.
    • Combining the weaknesses of five different package managers will surely alleviate "dependency hell." I'll be over here, playing nethack on my NetBSD box and giggling.

      I can understand the giggling, in particular since the NetBSD packages system [netbsd.org] is portable and actually works.

    • by whyne ( 784135 ) on Tuesday November 23, 2004 @03:57PM (#10902084)
      I don't understand what all this fuss is about.

        cvsup -g -L 2 /usr/ports-supfile
        cd /usr/ports/xxx/xxx
        make install clean

      There is always pkg_add -r xxx.xxx.xxx, or in a pinch:

        portsdb -Uu
        portversion -l ""
        portupgrade -arR

      with the occasional pkgdb -fu.
  • by Delphis ( 11548 ) on Tuesday November 23, 2004 @03:19PM (#10901533) Homepage
    What happens when Debian .debs and Red Hat .rpms want to install to different places? If you installed as one type, would you then be forced into using the same type of archive every time?

    For library locations, ld would probably take care of it... I can't think of any off the top of my head, but there may be programs that rely on other components being in a certain place and that barf if they are not.
    • by mrchaotica ( 681592 ) on Tuesday November 23, 2004 @03:27PM (#10901648)
      Sounds to me like one of them isn't following the Filesystem Hierarchy Standard [pathname.com]...
      • It's worse than that. A major problem would be simply the names of the packages. For example, on Fedora, the pango-* rpms depend on glib2-*. pango-devel would depend on glib2-devel. In Debian, they call the packages pango-dev and glib-2-dev or something. Package managers don't just use lists of provided resources to resolve dependencies; they also use package names.

        I haven't read much of the documentation on this project, but the only way it would work would be to implement their own (yet another) pack
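        As a rough illustration of that naming mismatch (the package names below are examples for one Fedora and one Debian release, and vary between versions), the same dependency question gets asked under different names on each side:

          # RPM side (Fedora): what does the pango development package pull in?
          rpm -q --requires pango-devel
          # ...and which installed package provides a given library?
          rpm -q --whatprovides libglib-2.0.so.0

          # Debian side: the corresponding package has a different name entirely.
          apt-cache depends libpango1.0-dev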
  • by CrackHappy ( 625183 ) on Tuesday November 23, 2004 @03:21PM (#10901554) Journal
    This tool really points to the fact that Linux distributions in general are all over the map regarding the installation of packages.

    I believe that this tool could be VERY useful to an average user, if they can manage to get it installed and configured. From what I've seen, there are many steps to getting this to work.

    Linux distributions have a big problem with package installation and management from an end user point of view. They are a MAJOR pain in the ass, even for experienced users like myself.

    Hopefully this develops further and provides us with something to aid in distributing Linux over more desktops.
  • by TWX ( 665546 ) on Tuesday November 23, 2004 @03:23PM (#10901597)
    ...Debian.

    I switched to Debian specifically because of the ease of use of its packaging, during the era when RPM still sucked massively and was fragmented between Red Hat, SuSE, and Mandrake so badly that they couldn't use each others' RPMs.

    If I want to not have dependency-based packages I use Slackware, where I use Slackware's tarred gzips or I download source and compile it. If I want a workstation where I can grab X piece of software easily, then it's Debian.

    The only thing this'll be useful for, for me anyway, is installing software that companies release RPM-only and binary-only, with no open source alternative.
    • by Anonymous Coward on Tuesday November 23, 2004 @03:30PM (#10901696)
      Not just Debian. Pretty much anything non-RPM based has no "dependency hell". Why do Debian, Gentoo, and the BSDs not have dependency hell? Because the repositories are controlled! RPM-based distributions will try to install anything from anywhere, and it's no big surprise that nothing matches up.

      It's really that simple. Dependency hell is not a software problem. It's a management problem.
      • Funny, Windows programs can be fetched from anywhere too, without a single repository, and I've never heard a Windows luser complain about "dependency hell". /flamebait
        • That's because they don't have a "Package Management System". They have an "Installer", which "installs" the program onto the system. From there on, you are on your own (good luck removing the program later).

          Oh, and what do you think a "Missing xxxx.dll. Aborting." message is? It's a lack of dependencies!
      • Ahem (Score:5, Insightful)

        by Azureflare ( 645778 ) on Tuesday November 23, 2004 @03:57PM (#10902077)
        RPM-based distributions will try to install anything from anywhere

        Actually, that should read:

        users will try to install anything from anywhere.

        If you get all your rpms from the rpm repository maintained by your distro, everything is fine. If you try mixing and matching rpms from different distributions, then you will run into problems. But keep in mind: distributions do not do this by default. This is the user thinking they can just go around installing rpms built for different systems.

        The tool that I never see mentioned is a nice and handy little tool called rpmbuild --rebuild, which you use with .src.rpms. It lets you take, say, a .src.rpm for Red Hat, rebuild it into an rpm on a Mandrake system, and install it easily.

        Often people touting dependency hell have never actually tried to go beyond the basic .i586.rpm available from different distros.
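
        A minimal sketch of that rebuild workflow, with a made-up package name and an output path that varies by distribution and architecture:

          # Grab a source RPM built for some other distro...
          wget http://example.org/SRPMS/foo-1.2-3.src.rpm
          # ...rebuild it against the compiler and libraries on this system...
          rpmbuild --rebuild foo-1.2-3.src.rpm
          # ...then install the freshly built binary RPM (the RPMS path differs per distro/arch).
          rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.2-3.i386.rpm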

        • Re:Ahem (Score:3, Interesting)

          by fireboy1919 ( 257783 )
          For a while there I was trying to use source rpms to get around dependency hell.

          The big trouble was in the little things: patches to gcc, the libraries I had, and occasionally the code I needed weren't there.

          Case in point: the latest version of Red Hat ships with a version of Bison that won't work with g++ 3.4, which also comes with Red Hat.

          Even bigger: the last version of Mandrake I used (8.2) came with a gcc compiler that couldn't compile the Mandrake-flavored kernel with the default options.
      • gentoo, et al. (Score:4, Informative)

        by mizhi ( 186984 ) on Tuesday November 23, 2004 @04:49PM (#10902766)
        I've had some nightmares with portage using gentoo before. Keep in mind, it's my distribution of choice, but that doesn't mean portage is without its problems.

        RPM does suck (/flamebait) but don't fool yourself into thinking that the other package systems are problem free.
        • Re:gentoo, et al. (Score:3, Informative)

          by antiMStroll ( 664213 )
          Were you doing any customization? I've had problems for sure, but never anything more than a poorly configured emerge failing. However, Gentoo doesn't replace binaries until a compile has successfully completed, so worst case the result is a bit of lost drive space in /var and waiting for a package fix.

          Where Gentoo really needs help is its insistence on generating new config files with every core emerge and sitting the user through a hundred questions via etc-update. Retain, replace or merge /etc/foo? At least

      • Why do Debian, Gentoo, and the BSDs not have dependency hell? Because the repositories are controlled!

        Umm, no. Debian controls their main repository, as does Red Hat. Debian has no control over the many alternative repositories listed at http://www.apt-get.org/ [apt-get.org], and Red Hat has no say about the contents of http://rpmfind.net/ [rpmfind.net].

        Any system that lets users add unofficial package sources to their management system is subject to dependency hell. RPM-based systems happen to get the lion's share of bad p

    • RPM still sucked massively and was fragmented between Red Hat, SuSE, and Mandrake so badly that they couldn't use each others' RPMs.

      Sigh. RPM didn't suck at all. The sole reason for your problems is RPM's popularity. If Ubuntu or Progeny or whoever acquires enough market share, I can guarantee you'll start to see the same issues cropping up with dpkg systems. Initially, it won't be a problem, just as Caldera and SuSE RPMs used to work fine on Red Hat and vice versa -- everyone strived to maintain compatibil

  • by Morganth ( 137341 ) on Tuesday November 23, 2004 @03:24PM (#10901610) Journal
    I'm still compiling!!

    Here's a coralized link [nyud.net] to the screenshots, too:
  • no gentoo? (Score:3, Insightful)

    by Se7enLC ( 714730 ) on Tuesday November 23, 2004 @03:24PM (#10901615) Homepage Journal
    deb...rpm...slack What? no Portage? what about my eBuilds?
    • Re:no gentoo? (Score:3, Insightful)

      by chill ( 34294 )
      Okay, I'll say it.

      Source-based packages have no place in a large scale environment. They do not scale.

      The little boxes that sit in most offices and cubes have no business with a compiler on them, nor are people interested (nor should they be) in wasting time compiling applications.

      Most office workers have a JOB to do that doesn't involve compiling software. As a corporate sysadmin I sure as hell don't want to have to use an eBuild for updating something like KDE or ANYTHING for that matter.

      Gentoo is w
    • Re:no gentoo? (Score:3, Informative)

      by eviltypeguy ( 521224 )
      ebuilds are scripts, not packages. Next.
  • This is good (Score:5, Interesting)

    by skids ( 119237 ) on Tuesday November 23, 2004 @03:24PM (#10901620) Homepage
    Projects like this create top-down pressure for packaging formats to standardize, adopt each other's features, and work on new features collaboratively. By having a developer community abstracting package formats and procedures, the package system authors get a comparative peer review, rather than just user feedback. This highlights the benefits and shortcomings of each packaging system in a much more impartial manner than any magazine review or forum discussion.

    The GUI part of it really doesn't appeal to me. Lots of my machines are headless, and even with X11 remote display I don't particularly like the idea of installing umpteen X11 toolkits and support libraries on a router/fileserver/webserver just to support package management. It's good to see that they have command-line utilities as well.

  • Wait and see (Score:2, Insightful)

    If Fedora chooses to include it in the future, I'll give it a try. Until then, however, I think I'll stick to the evil I know (yum), rather than playing musical package managers.
  • by Anonymous Coward on Tuesday November 23, 2004 @03:27PM (#10901650)
    The worst thing about package managers today is that every UI pops up windows asking for the 'root' password. This is training lusers across the world to believe that, to install even the most trivial web browser, it's OK to type in the root password whenever something prompts for it.

    Please, if someone's making a new package management system: give it the ability to run as a normal user and install in $HOME/bin, and give it the ability to run as a member of the group 'local' and install in /usr/local.
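
    No mainstream package manager offers that today; the closest thing is the old per-user source install, roughly like this (the prefix and paths are just an example):

      # Build and install into the user's home directory; no root password involved.
      ./configure --prefix=$HOME/local
      make
      make install
      # Make the per-user locations visible to the shell and the dynamic linker.
      export PATH=$HOME/local/bin:$PATH
      export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH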

    • Yes, because giving unprivileged users the right to run whatever binaries they want is a good thing...

      btw, you need to enter the root password to see this message without the sarcasm.
      • They already can. Investigate passing paths to ELF binaries to /lib/ld-linux.so.2, and if you figure out how to disable that (have a cookie if you can, it is possible!) then investigate ul_exec, LD_PRELOAD and ELF constructor functions.

        Basically if a user has shell access to a box they can run whatever code they like. Deal with it.
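
        Two concrete illustrations of that point (the binary and library names below are placeholders):

          # The dynamic loader will run an ELF binary even if its execute bit is stripped.
          /lib/ld-linux.so.2 /home/user/some-downloaded-binary
          # And LD_PRELOAD injects user-supplied code into any dynamically linked program.
          LD_PRELOAD=$HOME/hook.so ls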

  • Portage (Score:4, Insightful)

    by kaleco ( 801384 ) <greig@marshall2.btinternet@com> on Tuesday November 23, 2004 @03:28PM (#10901666)
    I know a lot of people have issues with Gentoo's focus on having the user compile packages that they download using portage, but what would be wrong with simply developing Portage and increasing the availability of binary packages?
    • Re:Portage (Score:4, Informative)

      by Balinares ( 316703 ) on Tuesday November 23, 2004 @04:46PM (#10902729)
      Let me rephrase your question:

      What would be wrong with simply developing any specific package management system?

      Portage is great, alright -- in particular, the possibility to pick your own choice of dependencies (like, ALSA but no OSS, SDL backend but no svgalib...) and have it respected all through the system, is the greatest thing since sliced bread.

      And with binary packages, you lose that possibility. Unless you provide as many packages as there are possible choices. Good luck.

      Besides, if you mix binary packages from different sources, you also get the problem of programs compiled against different specs/glibc versions than the libs they end up linking to on your system. The good old third-party-RPM recipe for a crash.

      It's an old problem, and there still isn't any clear solution in sight. Even in distros with the most painstakingly maintained package repository, like Debian, you'll generally need to recompile software to your own needs (support this or that DB backend that your company requires, add ACL support, etc...).
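
      On Debian that recompile-to-local-needs step is at least well supported; a minimal sketch, using postfix purely as an example package:

        # Fetch the packaging source and everything needed to build it...
        apt-get source postfix
        apt-get build-dep postfix
        # ...adjust debian/rules or configure flags to taste, then rebuild the .debs.
        cd postfix-*
        dpkg-buildpackage -rfakeroot -us -uc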

      There are only two paths toward solving this class of issues, to my (non exhaustive -> grain of salt, please) knowledge:

      1) Somehow provide an API, perhaps glibc-wise, that allows the relevant code paths to be disabled at runtime if the required library is not available. Yeah, I know about dlopen. No, that's not workable; dlopen needs to be designed around. What we'd need would be something as easily managed as #define _HAVE_SDL, only at runtime. There is no way to ensure its adoption if you don't make it as efficient to use as possible.

      2) Agree on a common set of libraries -- think DirectX, only system-wide, not just for games -- and have programs 1) depend on the required version of that LinuX set of libs, and 2) ship with what libs they need that aren't in it. The good thing is, if you define a given LinuX version as, simply, an empty package depending on a set of libraries with precise versions and compilation options, each distro can use its own package management to handle it. This is not without drawbacks, though. How do you handle security updates? (Ebuild-like revisions might work, admittedly...) And, just how BIG would any given LinuX-x.y be?

      We may never see a smoothly working universal Linux dependency management system, I realize. Still, it's good that there are still possibilities to think of. Perhaps, someday...
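
      For what it's worth, option 2 can be prototyped today on Debian-style systems with the equivs tool, which builds an empty package whose only job is to declare dependencies; a rough sketch with invented names and versions:

        # Generate a control-file template and edit its Package:/Version:/Depends: fields,
        # e.g. Package: linux-base, Version: 1.0, Depends: libc6 (>= 2.3), libxml2 (>= 2.6), ...
        equivs-control linux-base.control
        # Build the empty meta-package and install it.
        equivs-build linux-base.control
        dpkg -i linux-base_1.0_all.deb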
  • This is something that I've been hoping for. Developing for Linux, or rather distributing your program, is a bit of a pain. After a while you have a nice little program.tar.gz for people to use. Most of the time you'd want this to be handled by a package manager, so you create an RPM and a DEB. But wait, that's not enough. You'll need a RH8 rpm, a RH9 rpm, a RHES30 rpm, a MDK9 rpm and so forth, and this is just for i386. The permutations are too many to consider. I find this quite annoying.
    • I grab one source RPM, and "rpmbuild --rebuild" it on each platform as needed. For packages I maintain, I take care to make a single SRPM build on all the platforms I use. I can usually take an SRPM from Fedora, and build it on RH7.3 or RH9 with no problems - or maybe a few tweaks. (Unless you're talking latest Gnome GUI stuff.)

      Also, what dependency hell? Sure, running the low level RPM command makes you fetch dependencies manually. But when running the high level systems like RedCarpet, Yum, Apt-RPM

  • by mrchaotica ( 681592 ) on Tuesday November 23, 2004 @03:31PM (#10901706)
    ...instead of trying to hack together all the different kinds of package management?

    It seems to me that the way to fix this thing is to just pick one and then fix whatever shortcomings it has, instead of combining all the shortcomings of everything (except Portage, apparently).
    • Unbeknownst to most people, there is one true way to install software which eliminates dependency hell.
      1. ./configure
      2. make
      3. make install
    • ...instead of combining all the shortcomings of everything (except Portage, apparently).

      So, what.... you're suggesting we make it run really slowly, too?

    • Politics (Score:5, Insightful)

      by lakeland ( 218447 ) <lakeland@acm.org> on Tuesday November 23, 2004 @03:49PM (#10901965) Homepage
      The Debian packaging system works pretty much perfectly. There are some tiny problems (e.g. the way apt calls dpkg means an inopportune power cut could leave the system in a worse state than it really should be, and the equivs package and the meta packages are a tad crude).

      Compared to yum, Debian's system works very well, flawlessly even. So why doesn't Red Hat use it? I rather suspect that is because Red Hat didn't invent it, and Red Hat has never dropped something it invented in favor of a superior product developed elsewhere.

      Debian and the derivative distributions have this sorted perfectly. Even Gentoo has this sorted better than Red Hat; even BSD (ports) solves this much better than Red Hat.

      Yet Red Hat continues to use an inferior system, and people continue to use Red Hat. For some reason, those people think it is a problem with Linux, instead of a problem only present on RPM distributions. Oh well...
  • We should at least try to hold out some hope here. Really, it could combine the weaknesses of five-plus package managers... or it could truly unite package management.

    Honestly, this is something that is desperately needed in the Linux world... it is pretty obvious that the biggest hurdle to running Linux is the fact that it is hard to update and patch. Granny don't know apt-get -dependancyhell -fubarproofmycomputer

    Our Linux community really needs something as FUBAR-proof as Microsoft's start menu icon th
  • OSX (Score:5, Insightful)

    by minus_273 ( 174041 ) <aaaaaNO@SPAMSPAM.yahoo.com> on Tuesday November 23, 2004 @03:36PM (#10901766) Journal
    If Linux is to be truly ready for the desktop we need a system like in OS X. Something that is as intuitive and simple as dragging an icon to the Applications folder to install and then dragging it to the trash to uninstall. That should be it. I know there are arguments against this approach from geeks who talk about the waste in having redundant libraries, but this is not intended for geeks; it is for people who want software to just work.
    • Re:OSX (Score:2, Interesting)

      by Anonymous Coward
      The problem is far worse than "waste in redundant libraries". The real problem is what happens when there is a security hole in a given version of a library. Without central libraries you will have to look at every application to see where it's loading shared libraries from.
    • Re:OSX (Score:5, Interesting)

      by Mornelithe ( 83633 ) on Tuesday November 23, 2004 @04:27PM (#10902473)
      If OS X is to be truly ready for the desktop, we need a system like in Linux. Something that is as intuitive and simple as looking at a list of available applications of a type, picking the one I want, and clicking a button that says 'install'.

      I don't want to have to go to a store and buy CDs, or spend a long time searching Google for software that will run on my machine, then download an archive, finally get the delivery media opened, and drag it somewhere on my hard drive.

      Seriously, how is Drag-n-Drop easier than Select-n-Click? Of course, saying "OSX is good" is a safe bet here, because you'll automatically get modded up. I'm not saying OSX is hard, but Linux is not hard in this area either.
    • Re:OSX (Score:4, Insightful)

      by TomorrowPlusX ( 571956 ) on Tuesday November 23, 2004 @04:28PM (#10902498)
      Speaking as an OS X developer, another thing this style of packaging tends to result in is devs who *want* the app to be easily installed.

      Apple provides an "Installer" app for apps which *need* installation, and look how many people use it. Almost nobody.

      The whole design guidelines of OS X and the general mindset that comes with living in and working on OS X is that an app should be one thing, an object, which can run anywhere and shouldn't require a billion libs to run.

      And for those who gripe about not re-using libs, well, OS X apps *do* link against libs in /usr/lib and frameworks in /Library. So when OS X gets an update, everybody gets it.

      Frankly, I don't miss running configure scripts and then manually installing a half dozen obscure libs to run a single app. If I -- a developer mind you on a well maintained system -- didn't have those libs already, how many people would? Just link it statically and deal with it. Criminy.
    • Re:OSX (Score:4, Insightful)

      by Ian Bicking ( 980 ) <ianb@nOspaM.colorstudy.com> on Tuesday November 23, 2004 @05:18PM (#10903137) Homepage
      OS X installation is better suited to proprietary applications, as opposed to the more cooperative software system of a typical Linux computer. On OS X there is a clear set of system libraries that people can be expected to have. (And there's no incremental way to do major upgrades of that system software; also no free way.)

      When you only depend on system libraries, installation can be pretty simple. But there is little "system" on a Linux computer; libc is system. Is glib? libxml2? Who knows... various vendors define a base system, but it never means a whole lot, since Open Source developers aren't going to pay attention to it.

      The Open Source environment encourages little applications and lots of dependencies, so the package management has to be a lot more robust. Also, Linux package managers support legacy applications; OS X does nothing in that case. It lets them run free and unmanaged. There's a virtue in OS X in that it provides clear and strong conventions to its developers, but the actual infrastructure is far less powerful or complete than any Linux distribution's.

    • by commodoresloat ( 172735 ) on Tuesday November 23, 2004 @06:31PM (#10903918)
      Something that is as intuitive and simple as dragging an icon to the applications folder to install and then dragging it to the trash to uninstall.

      Why would you bother with clicking and dragging when you can simply edit the compile script to your liking, then ./configure with whatever tags suit you, make, make install, go through the output to figure out the dependency errors, download and install the necessary libs, re-edit your compile script, ./configure, make, make install again? That should really be all you need unless you're doing something fancy.

  • Reinventing (Score:4, Informative)

    by paugq ( 443696 ) <pgquiles&elpauer,org> on Tuesday November 23, 2004 @03:39PM (#10901822) Homepage

    Reinventing Autopackage [autopackage.org] and OpenPKG [openpkg.og] once again?

  • I understand you have to target the business desktop world, but I am not really interested in all of this business of having a unified desktop and now a unified package management system.

    I, and many others, have CHOSEN Linux because we wanted the power to choose. One man's apt is another man's yum, is another man's yast and so on.
  • I like the "Fix all problems..." option.

    But I have no idea how software can make me understand women...
  • Wrapper hell (Score:3, Interesting)

    by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Tuesday November 23, 2004 @03:50PM (#10901980) Homepage
    Somehow I get a yucky feeling about 'solutions' that are based on wrapping the underlying cruft instead of fixing it. No amount of duct tape and GUI frontends will make the Linux dependency and packaging hell really go away; it will just lead to more obscure, harder-to-track-down bugs in the end. What I would really prefer to see would be one standard way to package stuff (relocatable, and thus not tied to exactly where it has to be installed, etc.) and one standard way to handle the install. With all the distros around, there is however little chance that we will see that any time soon. The only real hope currently is the LSB: if LSB-conforming rpms become more widespread in the future, there might be a slow shift where packaging moves more and more out of the distros and into distro-independent lsb-rpms. But that will, if it ever happens, take many, many years, so I won't hold my breath...
    • The problem with any of these wrappers is that they are trying to solve an intractable problem: making all package systems play together. It's been tried.

      Tried and failed?

      Tried and died.

      The wrapper app simply doesn't know what the imported packages are going to do to each other. At least in a single-source scheme, the manager of the repository can confirm that all packages on their servers play nice with each other. The same can't be said across all package managers and repositories. People will get segf

  • I especially like that menu item...
  • by Master of Transhuman ( 597628 ) on Tuesday November 23, 2004 @05:04PM (#10902982) Homepage
    Jesus Baron von Christ, geeks, this isn't nuclear fusion science!

    Develop an XML layout standard for packages defining everything - names, file sizes, hash values for everything - in other words, IDENTIFY EVERYTHING uniquely (and where it ISN'T unique, cross-ref), then write a package manager.

    Do I have to do everything for you morons?

  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday November 23, 2004 @05:30PM (#10903307) Homepage
    I noticed one big difference between 'smart' and existing tools: it will sometimes choose an older version of a package if that's necessary to get a smooth upgrade. As they say on the web page,

    In this case, there's a package A version 1.0 installed in the system, and there are two versions available for upgrading: 1.5 and 2.0. Version 1.5 may be installed without problems, but version 2.0 has a dependency on B, which is not available anywhere.

    In this case, the best possibility is upgrading to 1.5, since upgrading to 2.0 is not an option.

    But doesn't it often happen that older versions of a package have known security holes? Until now it has been sufficient to package the newer, fixed release and let systems like apt and yum pick it up. If we have package managers that may deliberately choose an older version, there needs to be good metadata on which older versions of a package are still usable (i.e., don't have known or likely exploits).

    Indeed this is true of bugs in general, but security is the most worrying example.
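
    There's no such metadata today; the closest workaround is for the admin to pin or hold versions by hand, for example (the package name is invented):

      # Debian: freeze a package at its current, known-good version.
      echo "packagea hold" | dpkg --set-selections
      # apt can also be told to prefer a specific version range via /etc/apt/preferences:
      #   Package: packagea
      #   Pin: version 1.5*
      #   Pin-Priority: 1001
      # yum: keep a package out of automatic updates entirely (/etc/yum.conf):
      #   exclude=packagea*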
