Manage Packages Using Stow

dW writes "This article is about Stow, a software installation management utility for Linux that offers a number of advantages over the tried-and-true Red Hat and Debian package management systems. With Stow, you can package applications in standard tar files and keep application binaries logically arranged for easy access."
This discussion has been archived. No new comments can be posted.

  • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Wednesday March 12, 2003 @09:15AM (#5492931) Homepage
    We (the Tcl core developers) have had problems in the past with Stow, mainly because it relies on being able to specify the installation process at 'make-install' time instead of normal 'make' time, leading to messed up baked-in paths... :^/
    • by Anonymous Coward on Wednesday March 12, 2003 @09:18AM (#5492948)
      Try Encap. It predates Stow.

      http://encap.cso.uiuc.edu
    • by virtual_mps ( 62997 ) on Wednesday March 12, 2003 @09:28AM (#5492995)
      We (the Tcl core developers) have had problems in the past with Stow, mainly because it relies on being able to specify the installation process at 'make-install' time instead of normal 'make' time, leading to messed up baked-in paths...


      I'm not exactly sure what you're unhappy with, but it sounds like a build problem rather than a stow problem, IMHO. With stow you can build the program to expect either the "real" path, or the "stowed" path, depending on your purposes. (Using the "real" path means that multiple versions can be installed in the stow tree and used simultaneously with an explicit path, while using the "stowed" path means that things like config files are in an expected location like "/usr/local/etc".)
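
      A minimal sketch of the two options (package name and version hypothetical; assumes a well-behaved autoconf/automake package where "make install prefix=..." redirects the install location):

        # bake in the "stowed" path, so config files end up in expected
        # places like /usr/local/etc once the package is stowed...
        ./configure --prefix=/usr/local
        make
        # ...but install the files into the stow tree instead
        make install prefix=/usr/local/stow/foo-1.2

        # or: bake in the "real" per-version path, so several versions
        # can coexist and be run by explicit path
        ./configure --prefix=/usr/local/stow/foo-1.2
        make && make install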
    • We (the Tcl core developers) have had problems in the past with Stow, mainly because it relies on being able to specify the installation process at 'make-install' time instead of normal 'make' time, leading to messed up baked-in paths....
      Why is this a problem with stow? You probably shouldn't hardcode paths into the binary, as that may lead to problems like this.
      • by StormCrow ( 10254 ) on Wednesday March 12, 2003 @10:22AM (#5493340) Homepage
        A quote from http://cr.yp.to/slashpackage/finding.html

        "Software should never ever assume it knows where to get files from," someone once wrote. (He says I'm taking his quote out of context, so I won't identify him here.)
        Here was my sarcastic response:

        Yes, that's a very important principle!

        Let's take, for example, csh, which uses /etc/csh.cshrc and /dev/log and /bin/sh and many other files. The reason that all those filenames are listed in /etc/csh.conf is so that they can be changed.

        Now, some people want to move /etc/csh.conf itself. That's why csh looks for the /etc/csh.conf filename in a hashed /etc/registry.db file.

        Of course, on some machines, we need to move /etc/registry.db. That's why the registry filename is listed in a COMPILEDFREGISTRY environment variable.

        There's still the possibility of conflict with previous uses of the COMPILEDFREGISTRY variable. That's why the name of that variable is listed in /etc/fregistry_variable_name.txt.

        You say you want to move /etc/fregistry_variable_name.txt? You fool! We have billions of programs that read /etc/fregistry_variable_name.txt at the top of main(). Everything _else_ has to be configurable, obviously, but /etc/fregistry_variable_name.txt isn't going anywhere.

    • mainly because it relies on being able to specify the installation process at 'make-install' time instead of normal 'make' time,

      Nonsense. This may be the suggested usage in the documentation, but the documentation is wrong. It almost always works to specify the path at configure time. I have installed hundreds of packages (on thousands of machines) this way. The only package that has ever given me trouble is pkgconfig, because other packages put files underneath its structure - some one-time tweaking of the links fixed this for good.

    • by shellbeach ( 610559 ) on Wednesday March 12, 2003 @07:00PM (#5498764)
      We (the Tcl core developers) have had problems in the past with Stow, mainly because it relies on being able to specify the installation process at 'make-install' time instead of normal 'make' time, leading to messed up baked-in paths... :^/

      Yes, but that's only a problem with the stow documentation. Use "--prefix=" during configure and you'll have no worries at all (except that the "baked-in" paths will be '/usr/local/stow/yourpackage/etc ...', but that has never mattered to me and I have about fifty stow packages installed on my system, with everything from gtk-2.2 to lyx to rxvt).

      I have used stow for the past year and absolutely love it. It allows me to have complete control over all the software I compile by hand, as opposed to the base system installed by my distro. And since I have a bash alias for ./configure that includes an automatic prefix assignment based on the directory name I'm configuring from (which is almost always based on the name and version number of the software), I can compile a new version and

      stow -D /usr/local/stow/foo-1.2

      stow /usr/local/stow/foo-1.3

      ... without losing my old version that I know works. (so, if my new version segfaults, I can "install" the old one simply by reversing the above process)
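
      For the curious, that kind of alias is most naturally a tiny shell function; a sketch (function name hypothetical):

        # derive the stow prefix from the current directory name, e.g.
        # building in ~/src/foo-1.3 installs under /usr/local/stow/foo-1.3
        conf() {
            ./configure --prefix=/usr/local/stow/$(basename "$PWD") "$@"
        }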

      And ... I can do a simple "du" command in the /usr/local/stow directory to see exactly how much disk-space each package is using and I can easily find, modify or delete a part of a package I compiled months ago!

      Stow is one of the most fantastic pieces of software, and it's simplicity itself as well. It reports conflicts and only installs symlinks. The scary thing is that this is the first and only time I have ever seen it reach "mainstream" coverage - like most of the best Linux software, it seems to be unheard of and unused.

  • well... (Score:5, Informative)

    by virtual_mps ( 62997 ) on Wednesday March 12, 2003 @09:17AM (#5492938)
    stow is not at all a "package management utility for linux". It's a perl script and runs on almost anything. I've used it to manage local packages on IRIX, Solaris, and various flavors of Linux. IMO, the great strength of stow is exactly local packages--it's a great way to manage a shared /usr/local or such. I suggest thinking of stow as a powerful complement to your native package management scheme.
  • Whilst this is some way towards making Linux more user friendly (and ultimately gaining acceptance on the corporate desktop), what are the chances of anything being done about the crazy directory layout of a *nix system?

    If the answer is nothing, then a suitable GUI for Linux that has the objective to gain corporate desktop acceptance really has to isolate the [l]user from this - i.e. with something like "My Documents".

    I like using Linux, but even as a seasoned IT pro, the directory structure and "what goes where" of a *nix system still bugs me.
    • by 42forty-two42 ( 532340 ) <bdonlan.gmail@com> on Wednesday March 12, 2003 @09:24AM (#5492982) Homepage Journal
      "My Documents"? Ever heard of /home/$username?
    • by Sh0t ( 607838 ) on Wednesday March 12, 2003 @09:31AM (#5493023) Journal
      You have it backwards, friend. It's not a crazy scheme, it's a STANDARD and LOGICAL scheme. Programs install to certain prescribed locations so you KNOW, without having to guess, where certain binaries go, where config files go, and so on. It's a standardization and it works very well. You just aren't used to it. If you aren't installing from a system package you can put whatever you want wherever you want, but to keep things orderly you should follow the scheme. Windows does the same thing, actually: when you use InstallShield or MSI, some files go to Program Files\program name, some go to \Windows, some go to a temp dir for settings, some entries go into the registry, etc. I think the standard Unix method is much tidier overall, but it may be a bit confusing at first to those who are migrating.
      • I think the standard Unix method is much tidier overall, but it may be a bit confusing at first to those who are migrating.

        Perhaps it's just familiarity, but the Windows system is easier to me. Your stuff goes into the Program Files directory, the DLLs go into the Windows directory, and your registry gets lots of hard-to-spot entries.

        It's ugly but easy enough to understand.

        I can't really understand the Unix system, at least on Red Hat Linux, because everyone seems to use it differently.

        Some things end up in /opt, some in /usr/bin, some in /usr/apps, and so on.

        Personally, I would just put everything a program needs into one directory. Yes, even the replicated library code. You want to use lib_some_code.so in your program? Then copy that library also. Sure, it wastes hard drive space. But the alternative seems to be dependency hell.

        Windows now has turned full circle, and the OS can track and substitute different DLL versions called by different programs. Frankly, it would have been easier if we had never gone there in the first place. I'm not short of space on my hard drives, and few of us are these days, and the problems caused by incompatible versions of the same shared code just don't seem to justify the disk space savings.

        Just define your core functions (kernel, x-windowing, or desktop environment). Everything on top of that gets its own copy in the same folder as the main code. You test once on the library that you want to code with, and that is that (hopefully).

        New versions of the shared code don't change your program, because your program doesn't use the newer versions unless you feel that it needs them, and then you recode it at the developer level.

        Just my 2c worth, probably too simple a solution to work of course.

        Michael
        • I'd say it's definitely just familiarity.

          For anyone confused about the "what goes where" in a Linux system, I warmly recommend taking a look at:

          http://www.pathname.com/fhs/

          which describes the Filesystem Hierarchy Standard, part of the Linux Standards Base. It should clear things up.
      • KNOW, without having to guess, where certain binaries go, where config files go, and so on. It's a standardization and it works very well

        OK, on a system with hundreds of packages installed, how do you remove or upgrade one without remembering what files belong to what package?

        Stow solves this problem.

        And this is not a new idea with stow. Intelligent administrators have been doing this forever. Otherwise /usr/local turns into an unmanageable mess.
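
        To make that concrete (package name hypothetical): because every file stow puts under /usr/local is a symlink into /usr/local/stow/<package>, removal or rollback is a single command:

          cd /usr/local/stow
          stow -D foo-1.2    # unlink foo-1.2's files from /usr/local
          rm -rf foo-1.2     # then delete the package tree itself, if desired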

    • with something like "My Documents".

      ~username (tilde username) works for me.

      I like using Linux, but even as a seasoned IT pro, the directory structure and "what goes where" of a *nix system still bugs me.

      You could always port the hier(7) manpage from a FreeBSD system - then insist that the Linux (Solaris, IRIX, etc., etc.) crowd follow the standard.

    • I like using Linux, but even as a seasoned IT pro, the directory structure and "what goes where" of a *nix system still bugs me.

      Agreed. The FHS is a laughably weak standard, with multiple potentially correct interpretations for parts of it.

      Note that I don't have any real problem with names like usr, etc, opt - they are essentially meaningless except to programmers, which is how it should be. For users, localised VFS systems which abstract and represent the data in the filestore are the way forward IMO.

      Of course, that's assuming good old Hans Reiser doesn't tip the whole thing on its head with ReiserFS, right? ;)

    • *rofl*

      Great troll!

      Stupid statements like the one about "My Documents" are sure to keep this flame burning.
    • You must be trolling.

      Take any file on a modern Linux system. Any file at all. Explain to me its purpose and I'll tell you where you'll find it in the filesystem. A keyboard map? /usr/share. A subprogram to be executed by a program, not by the user? /usr/libexec.

      Take any file on a modern Linux system and tell me its full path. I'll tell you what it does, and I'll probably be able to tell you what package it comes from. I can also tell you if you need it or if you can get rid of it. /usr/X11R6/lib/X11/rgb.txt? Rgb color database to define textual names for colors, from XFree86. /usr/local/share/automake/elisp-comp? Support file for automake to integrate with emacs, installed by user after system installation time, from GNU automake. /usr/local/lib/libjpeg.so? Jpeg image library, installed after system installation time, from the JPEG group.

      Take any file on a modern Windows system, and you won't be able to do anything with it. C:\winnt\keyacc32.exe? Does that come from MS or from a third party and what is it used for? C:\winnt\system32\getstart.gif? An image that says "Get started with Beta 2" in the Microsoft Arial font. Is this a remnant from a win2k beta and if so, why is still here in a production post-sp3 win2k pro system? Or does it come from some third party? Try figuring out what files are necessary on a Windows system and what's cruft. You certainly won't be able to get a Windows install to fit in less than ten megs because all these files are spread out and undocumented - however, you know it's possible to get a Windows image in only a few megs because Microsoft does it (miniwindows during Windows installation). Getting a FreeBSD install to fit in only a few megs is not a problem - just did it for a compact-flash-based system and it's not hard.

      Anyway, this scheme that stow uses is very useful. Djb also has something like this in mind, but his way of doing it is not very elegant IMHO. I've been using something like stow for all my machines for the past three years or so, but it was just a 50-line shell script. Keep all software installed in /opt. Like /opt/emacs/emacs-21.1 with a symlink /opt/emacs/default which points to "current" version (that way, upgrades can be done with just changing a symlink, and downgrades are also just changing a symlink - you don't have to learn some tool's syntax, and, more importantly, the admin that replaces me won't have to learn some esoteric system because you can figure out my system in 30 seconds). All that my shell script does is make symlinks from the /opt/cvs/default/bin to /usr/local/bin, from /opt/python/default/lib to /usr/local/lib, etc. This way, when you run a poorly-written autoconf script, it will always find the requisite packages (because people always assume the required package is in /usr/local) and my users don't have to deal with $PATH. In addition, I can run this:

      find /usr/local -type f -print
      And this shows me all the stuff that I've written locally for the particular system (anything in /usr/local that's not a symlink must have been written by me).

      Never run into any dependency problems, upgrades are "ln -s," uninstalls are "rm -rf."
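
      A rough sketch of what such a script boils down to (the real one was the poster's own; directory names follow the /opt layout described above):

        #!/bin/sh
        # for each package's "default" version, link the contents of its
        # bin/lib/include/share/man subdirectories into /usr/local
        for pkg in /opt/*/default; do
            for dir in bin lib include share man; do
                [ -d "$pkg/$dir" ] || continue
                for f in "$pkg/$dir"/*; do
                    ln -sf "$f" "/usr/local/$dir/"
                done
            done
        done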

    • I like using Linux, but even as a seasoned IT pro, the directory structure and "what goes where" of a *nix system still bugs me.

      The *n*x directory structure makes much more sense to me than the directory structure (?) you are referring to, although the one you are referring to is moving towards the *n*x structure IMHO.

      My Documents -> /home

      There is one big difference though, and that's that application configurations are put there too, and the preferences of one user are contained in /home (and not in some messy registry).

      # rm -rf /home/user (plus deluser user) -> removes any trace of the user from the system
      /bin, /usr/bin -> user apps
      /sbin, /usr/sbin -> system apps
      /usr/local -> not managed by packaging systems

      It has been a while, so correct me if I'm wrong but
      c:\windows\system32 -> one big mess
      c:\Program Files -> lots of duplication and a strange concept of shared libs; there is hardly any difference between system and user programs.

      As for 'what goes where', if there is no package available (if there is, there is not really a problem), I mainly solve it this way:
      $ tar xvfz my-package.tar.gz
      $ cd my-package
      $ dh_make
      $ fakeroot dpkg-buildpackage
      $ su -p
      # dpkg -i ../my-package*.deb

      programs like tcpdump, groupdel, ... seem logical to put in an sbin directory to me, since you need to be root to use them anyway...

      Furthermore, it sounds to me that, if you use anything more than a bare-bones system, you'll find someone who tags it as a mess, no matter what the package management (be it rpm, deb, stow or even W32).

      Anyway, structure is, I guess, a matter of perspective, but how someone comes up with the W32 directory structure as being logical (or structure for that matter) beats me...
  • by salimfadhley ( 565599 ) <ip@@@stodge...org> on Wednesday March 12, 2003 @09:20AM (#5492957) Homepage Journal
    One of the reasons I switched from Red Hat 8.1/7.3 to Gentoo Linux (Beta) was the amazing emerge package management tool. It combines simple tar-based package files with cool scripts called ebuilds, which automagically fetch and compile all the components I need.

    Of course Gentoo is not for everybody... it takes longer to install than Debian (and that is before you have compiled the entire OS from scratch), but for those who are interested in that sort of thing it can be a refreshing alternative.
    • by Imran ( 4369 ) on Wednesday March 12, 2003 @09:46AM (#5493105)
      Flamebait disclaimer: I have been running Gentoo on all of my machines for over a year now, so don't take this as an anti-Gentoo comment.

      stow and ebuilds aren't really operating in the same space.

      rpm,deb,portage = full blown package managers, controlling everything under /usr. These can start with source (or pre-compiled binaries), and handle everything from installation to dependency-handling, etc (with varying degrees of efficiency).

      stow = simple symlink manager, providing an easy way to maintain order within /usr/local, for those apps I compile and install manually (and, for whatever reason, don't want to repackage as an ebuild/rpm/whatever)

      There are times when one does create one's own ebuilds (v simple) or rpms (slightly more involved). For all other occasions, stow is a helpful tool :)
      I agree, Gentoo's Portage is the best package management I've come across. Not only does it make ANY package a one-line command that will automatically "download, untar, [patch], configure, make, make test, make install", but it uses system-global optimizations for compiles, takes care of all dependencies, and places binaries, libraries, config files and startup scripts all in standard locations. And their "gentoo-sources" version of the kernel includes over 70 high-performance patches on top of the vanilla kernel.org tree (of course, you don't have to use them if you don't want to, but why not?)

      It even has a great /etc/env.d system for managing environment variables (both bash and csh flavors), so if it needs to install binaries in a non-standard location, your PATH is automatically updated to include it.

      I don't use Gentoo as a desktop platform, so I can't comment on its X/KDE/Gnome setups, but I'm sure they're just as complete and easy. And although Gentoo may be rather intimidating for a n00b initially, it does have excellent documentation and a great support community at their site [gentoo.org].

      Keeping a system up-to-date with the latest and greatest has never been this easy!
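
      For the curious, an env.d entry is just a small file of variable assignments (filename and paths here are hypothetical):

        # /etc/env.d/99mytool
        PATH="/opt/mytool/bin"
        LDPATH="/opt/mytool/lib"

        # then regenerate /etc/profile.env (and ld.so.conf) with:
        #   env-update && source /etc/profile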

      • [flame mode="high"]
        I've seen an increasing tendency for folks to use Gentoo, and I've also seen a rise in a set of problems; firstly, that even if we're both nominally running the same set of packages, it's not always possible to support each other as the packages in question (and the libraries they depend upon) may have been compiled with different options. Secondly, some Gentoo users are switching on all sorts of optimization flags ("because anything compiled with -O6 will run faster!!!") without being aware of the problems that can be caused by mis-compilation (buggy gcc or buggy application, I don't care).

        Just as I learnt my chops using an early version of Slackware, it probably is worthwhile to play around with Gentoo at some point. But unless you're prepared to manage the complexity (and most Gentoo users I've run across aren't) then I can't see how it can be recommended for general purpose use.
        [/flame]
        --
        • The perceived problems of custom-building an entire system may be a strong advantage. It requires all software to be of the highest quality or the bugs will show. A community that encounters problems encourages debugging or switching to higher-quality packages, rather than sweeping problems under the rug.
          • Oh, indeed - I'm not denying the usefulness of Gentoo in a sociological way (along with Debian in particular, but Red Hat and SuSE too in some areas). What I'm protesting is the rash of "midbies" banging on about how l33t they are for building their distro from scratch, but then wasting everyone's time trying to get dialup PPP working using some GUI that's unique to their system and that they haven't integrated properly (for example).

            --
    • Why did this article need a "use gentoo" comment?
      • Because we're everywhere. You can't escape the horde of the Gentoo users. Just give up and in - one day you, too, shall love Gentoo... why not start today?

        -revision-

        We are Locutus of Gentoo. Binary distribution is futile. Your sources will be assimilated.

        -revision-

        In A.D. 2003
        Distro-War was beginning.
        Red Hat: What happen ?
        Mandrake: Somebody set up us the shiznit
        Debian: We get signal
        Red Hat: What !
        Debian: Main irssi screen turn on
        Red Hat: It's You !!
        Gentoo: How are you gentlemen !!
        Gentoo: All your sources are belong to Portage
        Gentoo: You are on the way to irrelevancy
        Red Hat: What you say !!
        Gentoo: You have no chance to survive make your time
        Gentoo: HA HA HA HA ....
        Red Hat: Take off every 'rpm'
        Red Hat: You know what you doing
        Red Hat: mv 'rpm'
        Red Hat: For great justice
    • Agreed, ports rock.

      One of the reasons I prefer FreeBSD over Linux is its better package management. Gentoo is the only thing that comes close.

      There are so many different distros, and versions of different distros, that it is impossible to build an RPM that will not bring dependency hell.

      I wish there was a Unix equivalent to InstallShield for Windows. It would be great to have a self-extracting executable with the dependency .so files or other programs already in it. It should be the job of the OS or package installer, and not the user, to take care of this. The problem with an automatic dependency installer like ports or portage is that it may automatically update some dependencies that use different scripts than the older versions you are using. This can cause problems, not to mention the latest and greatest may have compatibility issues. For example, I hate Red Hat 8.x because of the perl 5.8 packages. I use perl 5.6 and I cannot downgrade without corrupting hundreds of packages that require 5.8.

      What a pain.

      • It would be great to have a self extracting executable with the dependancy .so or other programs already in it. It should be the job of the os or package installer and not the user to take care of this.
        I partly agree - but would add, it should be the job of the OS or package manager, not the user *and not the application being packaged* to take care of this. That means that there should _not_ be extra .so files for dependencies in the package. That way lies Windows-style DLL hell where every app includes slightly different versions of libraries and they get scattered across different places. The app should just say 'I require libfoo version 2.3' and the package manager will make sure that 2.3 or a later 2.x release is installed. This is what packaging systems like rpm and dpkg provide; unfortunately if you run them directly from the command line they will only tell you what needs to be done, not do it themselves. Tools like apt or Mandrake's urpmi will take care of downloading and installing the necessary dependencies as well.
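
        To make that concrete, the dependency declaration is a one-liner in the package metadata (names hypothetical); the package manager then resolves and fetches libfoo2 for you when you install myapp:

          # excerpt from a hypothetical debian/control file
          Package: myapp
          Depends: libfoo2 (>= 2.3), libc6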

        About your problem with downgrading to perl 5.6: what's really needed, I think, is a package manager which downloads dependencies and rebuilds from source if necessary. So when you say to rpm 'please replace perl 5.8 with 5.6', it should look at all the packages which currently depend on 5.8, get their source packages, recompile for 5.6 and then in one fell swoop replace perl 5.8 and all packages that use it with the older version. If there is some application which requires perl 5.8 or later, you would obviously have to uninstall that before downgrading to the earlier perl version.

    • by Ed Avis ( 5917 ) <ed@membled.com> on Wednesday March 12, 2003 @10:35AM (#5493448) Homepage
      I don't understand why many people seem to assume 'tar good, rpm/dpkg bad'. For example, the article says:
      Although some Linux flavors such as Red Hat and Debian come with their own package management utilities (rpm and apt-get, respectively) that are as efficient as Stow, they work only on specific packaging formats (.rpm and .deb, respectively). When it comes to managing applications simply packaged in .tar files, Stow is the best bet.

      Why is a tar file any more 'simple' than an RPM or Debian package? If you are just storing a bunch of files, then yes. But what about metadata? That is, the information on dependencies (what libraries the package needs), where to get the source code, who the packager was, which version of the software these files represent, whether this package conflicts with any other packages that might be installed, and all the other things that a decent package manager keeps track of automatically so you don't have to check them by hand, or get nasty surprises when you've installed a package and only later find a necessary library is the wrong version.
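
      All of that metadata is one query away with rpm (commands real, package file name hypothetical) - something a bare tarball can't offer:

        rpm -qpi myapp-1.0-1.i386.rpm            # version, packager, description
        rpm -qpR myapp-1.0-1.i386.rpm            # what the package requires
        rpm -qp --conflicts myapp-1.0-1.i386.rpm # declared conflicts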

      If you use tar files then you'd need to have an extra metadata file inside storing this information: then you need to decide a format for that file and write a tool to parse it. And you've then reinvented rpm or dpkg, only with the spurious 'improvement' that people on other systems can unpack the archive if they have tar installed. As if anyone running a different system would need to unpack someone else's binary packages.

      Perhaps you could argue it would have been better for rpm to be based around tar.gz format, with the package details stored in the tarball as a script of some kind. But then it would become much slower to query a large directory of packages. Maybe that is important, maybe not. Also you have digital signatures to worry about, it's not clear how to do those with a tar archive (unless you have one tarball containing another tarball plus its PGP signature). Perhaps things could have been done differently, but the mere fact that the rpm and dpkg developers chose to make their own format rather than use tar is not an excuse for committing much more serious wheel-reinvention in making a kewl Yet Another packaging format.

      I'm not meaning to knock Gentoo here, that distribution really does do something new (building everything from source), and they perhaps had good reason to make a new packaging format. (Although it might have been worthwhile to investigate whether building from source could fit in with rpm or Debian source packages.) And Slackware has used tgz packages since the beginning, and doesn't seem that bothered about automatically tracking dependencies.

      But for most uses, I just don't see why 'simple packaging with tar' is particularly simple in the long run or much of an advantage. It sounds like those Freshmeat projects which say 'this is yet another MP3 player but it has the advantage that it doesn't use GTK or Qt but implements its own user interface code instead'.

      • Well, first off, the parent poster was talking about Gentoo's portage package system. Gentoo's package system, while it *can* handle binary packages, isn't geared around that. It's a source-based package system -- an .ebuild file contains the commands needed to fetch the tarball, extract it, compile the source and install the resulting binaries.

        The .ebuild file also lists what other packages the package depends on. The Perl scripts that make Gentoo's portage, such as emerge, check these dependencies and then go out and grab and install those packages, if necessary, prior to installing the requested .ebuild.

        Also, there is a facility for adding global 'make' parameters so that you can add things like CPU-specific optimizations to the make commands portage executes. This gives you a system that feels like it's optimized for the hardware it's running on, much like Solaris on SPARC - because it *is*.

        So gentoo's package system is much more than a 'simple packaging with tar'. It's a system for building and installing stuff that is packaged with tar. :)

        • To be more general, an ebuild is a file containing the description of the operations needed to have an app installed.

          1) It checks dependencies. If the dependencies are satisfied it goes to step 2; otherwise it launches the installation of the needed packages.

          2) It retrieves the app (either sources or a binary package, in any format). Sometimes the portage system simply cannot automatically retrieve the app over the network (apps with required registration or a license agreement acceptance requirement). In this case it stops and asks the user to manually retrieve the package files, then it continues with the installation.

          3) It prepares the app for installation (compiling the sources, or simply extracting the precompiled binaries into a temp dir (obviously it can deal with every package format)).

          4) It installs the app, updating config files.

          This way of working simply separates the actual app (source or any other package) from the metadata (contained in the ebuild script). So Gentoo can handle a lot of packages written for other distros (acting as a wrapper), simply trashing the original metadata and substituting its own. As a matter of fact, writing an ebuild is easier than packaging an RPM.
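
          A sketch of what such an ebuild looks like (contents hypothetical, following the structure described above):

            # foo-1.2.ebuild
            DESCRIPTION="An example application"
            HOMEPAGE="http://www.example.org/"
            SRC_URI="http://www.example.org/${P}.tar.gz"
            LICENSE="GPL-2"
            SLOT="0"
            KEYWORDS="x86"
            DEPEND=">=dev-libs/libfoo-2.3"

            src_compile() {
                econf || die "configure failed"
                emake || die "make failed"
            }

            src_install() {
                make DESTDIR="${D}" install || die "install failed"
            }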

    • Portage is the greatest package management system, IMHO. It's exactly what I always wanted: a powerful version of ports for Linux. Also, running an optimized distro is worth the extra effort it takes to build from a stage 1. All my Gentoo boxes outperform equivalent boxes with binary distros. I can definitely see Gentoo becoming a big player in the Linux industry. If they make the install more user friendly, and perhaps make some consolidated management tools, Gentoo could easily switch from being an "elitist" distro to a "newbie" distro, because Portage really is that easy.
  • by jointm1k ( 591234 ) on Wednesday March 12, 2003 @09:20AM (#5492958)
    The article does not mention anything about dependencies. In my opinion dependencies are almost as important as keeping track of which file belongs to what application. Maybe they should do some more homework and take a look at Gentoo's Portage packaging system. This system not only compiles a tar/tar.gz/tar.bz2 package, but also retrieves the needed packages (including the dependencies) from their homepages.
    • The article does not mention anything about dependencies. In my opinion dependencies are almost as important as keeping track of which file belongs to what application.


      Stow doesn't do dependencies. IMO, that's fine for local package management, where dependency handling is often more trouble than it's worth. (A local package will typically be run on a known configuration, and can target that configuration.)

      Maybe they should do some more homework and take a look at Gentoo's Portage packaging system.


      Do some homework? You are aware that stow has been around since 1996, right? It's amazing how many gentoo fanboys don't know what else is out there.
  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Wednesday March 12, 2003 @09:20AM (#5492959) Homepage
    I've got some experience with Debian's package management system, and while it's hard to use for a novice and somewhat complex, there is one great benefit: conflict and dependency handling.

    Based on the article I didn't quite understand if Stow provides similar services. There were some hints on this, but could someone with experience shed some light on the subject?
    • Well, there's Encap [uiuc.edu], which handles dependencies but currently only among other Encap packages. It's similar in putting files in one directory and using symlinks.
    • It doesn't (Score:3, Informative)

      by ggeens ( 53767 )

      Stow has no concept of dependencies. There is no way you could build a distribution on top of it.

      I use stow on my (Debian) Linux PC at home to manage the software I build from source. If I want to upgrade a program, I can just delete the directory and install the new version in the same location. If a Debian package becomes available, I have stow remove the links in /usr/local/* and then remove the directory.

      Until now, I have been able to get all the libraries from Debian, so I never needed to work with dependencies.

    • by Anonymous Coward on Wednesday March 12, 2003 @09:41AM (#5493069)

      Stow does not handle dependencies. All it does is use symbolic links so that each package installs completely into its own directory, with symbolic links from a shared directory tree pointing back at it. This was once a standard technique that system administrators performed manually. More recently, packaging systems have gained widespread acceptance, so tools like stow have not been as amazingly handy.

      Stow still has importance, though. For example, some people would prefer to build their own application distribution area. This is of particular utility when you have a network of machines and want the same applications available everywhere. Pick a machine and have it NFS share the applications. In these situations Stow still is important. Maybe the stock packaged Perl is not good enough, maybe you want the multithreaded options and a few extra modules from CPAN. Then creating a new Perl directory and stowing it somewhere else is handy.

      Stow is not perfect. I have found that it is a bit buggy with its delete operation. I usually erase the directory with the given software and then look for symbolic links that are broken:

      ls -lL > /dev/null 2> /tmp/T
      rm `sed 's/^ls: //; s/:.*$//' /tmp/T`
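
      A more robust way to sweep up the dangling links after deleting a package directory (standard find syntax; the target tree is assumed to be /usr/local):

        # -type l matches symlinks; test -e follows them, so it fails
        # exactly for the broken ones
        find /usr/local -type l ! -exec test -e {} \; -exec rm {} \;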

  • Wow? (Score:5, Interesting)

    by j1mmy ( 43634 ) on Wednesday March 12, 2003 @09:22AM (#5492970) Journal
    This has about as much flexibility as distributing binaries in a tarball. You can't include installation/uninstallation scripts (what if my application needs to install a cron job?). Everything is forced into /usr/local/stow/PACKAGEDIR, and a mess of symbolic links is used to bump everything into the corresponding bin, lib, include, whatever directories. While it may be easier for the software to manage, it creates countless unnecessary files on your drive.

    I don't see the benefit.
    • Think of a typical /usr/local on a multiuser system. These are not typically managed by the native package management system, and have a whole mess of binaries dropped into /usr/local/bin. If you're the unfortunate sysadmin who has to figure out what package a particular binary is from, you're in for a lot of guesswork. With stow all the binaries are symlinks to descriptive package directories, so it's easy to know what files are related. If you want to get rid of something you don't have to go on a research expedition. If you do an upgrade you can simply unlink the old version and link in the new version, then quickly put it back if something breaks. This is good.

      The other alternative is to build all your local compiles as RPMs (or SysV packages or whatever), but that's usually more work than just "./configure --prefix=/usr/local/stow/foo-1.2; make; make install". Getting your junior sysadmin to build things into stow repositories is usually easier than trying to get them to handcraft Solaris packages for every little program somebody wants installed.
  • by esanbock ( 513790 ) on Wednesday March 12, 2003 @09:29AM (#5493006)
    In Windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes. Why can't someone make something like this for Linux? It would greatly improve the user experience in Linux. Instead of having to edit 8 configuration files, the user just starts setup.sh or something and the setup asks questions. This is why I like apt-get - one-line setup. But every time I download something that's not part of Debian it turns into a horrible experience I wish I had never had.
    • In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes.

      Absolutely. This type of thing is essential for Linux if it is going to gain widespread desktop use.

      I know it's complicated. But it just has to be made simpler for the end user, no excuses.

      Like everything in the OSS world, there are loads of different projects taking different approaches. Whilst this isn't a bad thing, eventually a standard needs to emerge, and the sooner the better. I think more people should help out with Autopackage [autopackage.org], which seems to be taking the right approach.
        IBM should really understand this, after having worked on OS/2 for so long.

        They also have the bucks to put a team of developers on the case to create just such an installation system, so no excuse there.

        I was expecting something completely different when I surfed to that site, but instead got a taste of yet another geek-only tool; admirable, but really not boundary-breaking stuff.

        They should at least take the lead from Ximian, and build something that a total computer illiterate can use. [asbestos] Whilst there can never be enough command line tools [/asbestos] someone, somewhere is going to have to bite the bullet and create this badly needed system, and it's going to be someone with money.

        Look how everyone has benefitted from Nautilus; the same thing has to happen with installers.
    • by IamTheRealMike ( 537420 ) on Wednesday March 12, 2003 @10:09AM (#5493246)
      In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes. Why can't someone make something like this for Linux?

      A few reasons. Firstly, these programs are tremendously complex under the hood. Almost all generic ones (even light ones like NSIS) include their own scripting language. InstallShield 6 and up has used DCOM to provide remote procedure calls between the install script and the engine (ikernel.exe if you've ever wondered what that is). They do a lot of messing around under the hood in order to make things just work.

      Even then, they are too primitive for Linux. For instance, they have only basic concepts of dependencies. The lack of proper dependency management almost brought Windows to its knees in the mid-nineties. Packaging every dependency inside one self-extracting archive is simply not possible on Linux in any scalable fashion, so we have to build dependency resolvers like apt. Windows installers tend to be GUI only. And so on.

      Now, systems like apt are pretty cool. When they work, they work really well. The problem is that they tend to be built by distro projects, and then they are relatively tied to that distro. Apt as used on Debian, for instance, is not the same as apt4rpm. URPMI is Mandrake's, and emerge is basically tied to Gentoo, though I'm sure it could be generalised.

      So, the real solution is not to build Windows style setup.exe files. The real solution is to make something like apt, but that can be easily used by everybody, so you rarely if ever come across software that doesn't use it.

      There are two approaches to solving that problem. We're trying both at once. The first is to invent a new system, independent of the existing ones. See my sig. The second is to try and standardise key interfaces in a standards body, so that apt/urpmi/emerge and others can interoperate, and so you can plug distro-neutral packages into that framework. See here [freestandards.org]. Note - most of the activity so far related to that group has been off-list; hopefully there will be action starting in a few days.

      • Even then, they are too primitive for Linux. For instance, they have only basic concepts of dependancies.

        Or, perhaps, Linux dependencies are too complex? I've lost track of the number of times I've had to do the "upgrade tango" and install a dozen different packages just to satisfy the dependencies for a program I needed. More often than not, I've decided that a stable system was more valuable than trying to figure out what an upgrade to libfoo-2.11a would break.

        Windows has "DLL Hell"; Linux has "Dependency Hell". I'd rather see a general solution to the problem of overly complex dependencies on Linux than yet another package manager. Hiding complexity is well and good when you have no other choice; hiding complexity instead of solving the problem the Right Way (whatever that is) is just putting a bandage on a more serious problem.

        • I suspect that a truly general solution is impossible. As a half-measure, Linux has all these .so files with version numbers and sub-numbers and revision numbers.

          The problem is that the different pieces are written by different people at different times. If your software works on version 5 revision 6 of a library, it will probably work on revision 7, 8, 9 ... also. But maybe not. And it is likely to not work on version 6, but it might.

          One solution is static linking. That EATS disk space, but sometimes it's the best answer. If you can do it. But you often only find out that you needed to do it because with the new libraries installed, something important doesn't work any more.

          KDE uses compatibility libs. A frequent choice is to keep older versions of libraries around. But how do you know that you need to? You guess! You (i.e., the dependency management software) make the best guess you can with the available information. And it's usually right. But sometimes not. (E.g., on Red Hat systems I've had a lot of trouble with FOX installs. It wants a library that conflicts with another library that is used by many system routines. You can override this in the install, but then some of the features I want it for aren't available. Strangely, an "install everything" bypasses this problem. Most recently I'm trying "install everything" followed by selecting individual packages, and then removing things I know I don't want. I haven't seen how well that works yet.)

          Then there are abandoned libraries. On windows such are just left to die. On Linux, some programs may depend on them, so it becomes necessary to somehow shoe-horn them into a working system. Or to rewrite the other program. I know which would be a better long term strategy... or think I do. But I also know which will get me results quickly.

          Programming is a matter of proving that perfection doesn't exist in the realm of mathematics, either.

        • by IamTheRealMike ( 537420 ) on Wednesday March 12, 2003 @12:56PM (#5494730)
          I've lost track of the number of times I've had to do the "upgrade tango" and install a dozen different packages just to satisfy the dependencies for a program I needed

          That's why we have/need dep resolvers like apt. I rarely, if ever, hear Debian users complaining that dependencies are too complex. They don't need to care.

    • In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes. Why can't someone make something like this for Linux?

      Didn't Loki [lokigames.com] write a graphical installer for Linux? I can't access the Loki site from work to check because it's blocked by websense (ha).

    • by g4dget ( 579145 ) on Wednesday March 12, 2003 @10:15AM (#5493283)
      In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes.

      That works fine for a few applications. Linux has thousands of applications, and people tend to install hundreds of them (they are free, after all, so why not). Do you want to go through hundreds of GUI installers, and then hundreds of GUI updaters? I don't.

      Why can't someone make something like this for Linux?

      There are interactive installers for Linux packages, but they are usually a nuisance compared to a normal package.

      But every time I download something that's not part of Debian it turns into a horrible experience I wish I had never had.

      Well, then don't install non-Debian packages. After all, there are plenty of Windows programs that come with horrible installers. As a Debian user, think of non-Debian packages as "programs that come with horrible installers", and then decide whether they are worth the trouble. (Note that you can usually import packages reasonably well via "alien".)

      The package system you get with Debian (or Red Hat, for that matter) is already so much better than anything you get for Windows that it isn't funny. If Linux developers adopted the equivalent of setup.exe more widely, that would be a real blow to Linux.

      • That works fine for a few applications. Linux has thousands of applications, and people tend to install hundreds of them (they are free, after all, so why not). Do you want to go through hundreds of GUI installers, and then hundreds of GUI updaters? I don't.

        Do you have to run an installer for Solitaire or Minesweeper when installing Windows? How about Internet Explorer? WordPad? And so on...

        The parent is right, in my humble opinion, about the need for an all-inclusive setup package system for Linux. That is, if you want mainstream users to use it on the desktop. There will have to be a "basic" install setup from Red Hat or whoever, and additional applications will have to be all-inclusive, one-click, step-through-the-install type installations. Users don't want to compile anything or download extra stuff (why didn't it come with it in the first place?). They just want to click and run an application.

        Even if it means the size of the original download is way bigger because it includes files that the user might not need, I'm sure most Windows users that would consider Linux would prefer that to the current mess.

        The package system you get with Debian (or Red Hat, for that matter) is already so much better than anything you get for Windows that it isn't funny.

        Why is that? If you can't just click and install an application, I'm sure there are plenty of Windows users that would disagree with you here.

        If Linux developers adopted the equivalent of setup.exe more widely, that would be a real blow to Linux.

        The only thing it would be a blow to is the egos of Linux elitists who really don't want anyone using "their" OS.

        Mark
      • The package system you get with Debian (or Red Hat, for that matter) is already so much better than anything you get for Windows that it isn't funny. If Linux developers adopted the equivalent of setup.exe more widely, that would be a real blow to Linux.
        You are painfully correct. Just look at the commercial programs available for Linux that use installer programs. I have several examples:
        1. Macromedia Flash plugin 6: It comes with a ridiculously long script that checks all kinds of special conditions, which you can tell was written by a newbie. What does it ultimately do? Install two (2) files! I simply unpacked the tarball and symlinked them into my plugins directory.
        2. RealONE (Real media player): It doesn't appear possible to install this program globally, only in your home directory. It uses a GUI installer that's heavy on flashiness, but low on usability. I didn't feel like messing with it to try and make it world-usable.
        3. Intel C Compiler: Haven't looked at this in a long time, but I heard that it's very difficult to install it on some systems; you basically have to fix the installer's broken assumptions. The old version I have had a hacked-up RPM to allow it to be installed on Debian.
        4. Sun JRE 1.4: This comes as a "self-extracting" executable. In reality, it's a shell script with a tarball tacked on the end. I don't remember why anymore, but I had to futz around with the script to make it work. In addition, it depended on an old version of libstdc++ that I had to find.
        5. Oracle: Never used this, but I hear it's a bitch to get working on some systems.
        Now granted, they all use a custom installer rather than something like InstallShield, but I see parallels to Windows setup programs here.
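
        For the Flash case in item 1, the manual route really is about two commands (filenames from memory and the plugin directory varies by browser, so treat the paths as assumptions):

          tar xzf install_flash_player_6_linux.tar.gz
          cd install_flash_player_6_linux
          ln -s `pwd`/libflashplayer.so `pwd`/flashplayer.xpt ~/.mozilla/plugins/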
    • My guess is some people do not want to download huge files. If you wanted to upgrade to the latest KDevelop, a whole Qt and KDE would come with that download. It would be well over 100 megs this way.

      While this was important in 1995, when everyone still used 14.4's on the Internet and distros only had 50-70 packages at the most, it's extremely inefficient today with Linux distros carrying 3k+ packages.

      Also, in Windows only the runtime DLLs that are dependencies are updated by a setup.exe program, not whole software packages. For example, a typical Windows setup.exe program usually includes a VB runtime DLL, an updated mfc.dll and maybe some ActiveX DLLs for an outdated system still using OLE 2.0. It does not install Visual Basic, all of the MFC classes and a new Win32 API, or a Windows-based SDK for ActiveX. Only the DLLs required to run it.

      For example, the Win32 version of the GIMP has a 6 meg install .exe and an 8 meg .exe. Because I did not want to look for GTK+ for Windows, I just downloaded the 8 meg version.

      The same should be true of Linux: two options, one lite for masochists and one bloated for everyone else.

    • Most of the apps I install under Linux these days are far simpler to install than under Windows. As I use Gentoo, I just type "emerge {name}" and everything is done for me automatically. If I want to use the original sources, I normally type ./configure && make && make install and everything just works. You can run ./configure --help to see a list of options, but the defaults normally work for me.

      Phillip.
    • But in many Linux distributions, can't you just double-click an RPM or dpkg file, press 'OK' and off it goes?

      'Every time I download something that's not part of Debian' - this is the problem. It needs to be made much easier for developers to provide Debian packages (or RPMs, etc) of their software. Ideally there should be some way to make Debian packages without needing Debian installed.
    • In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes.

      In Mandrake, I single-click an RPM and the package manager starts installing it. Is that easy enough?

      Some notes on Windows and Linux package/program installation:

      1) Windows setups handle dependencies by basically not handling them. They almost always include a bunch of system DLLs and OCXs that might not be on a user's system or that might be outdated. This obviously leads to much larger packages which for a large part contain stuff that is already on the system. It would be relatively easy on Linux to make every package include every package it depends on. These don't have to be statically linked; you could include the packages for the shared libraries within the main package and have these install automatically. I think the bloat problem would be worse on Linux than Windows, because my feeling is that open source programs tend to use a much wider variety of shared libraries than their proprietary equivalents (where everybody re-invents the wheel on a daily basis because they can't use somebody else's design).

      2) Different languages are handled in many cases on Windows by having several setup programs. The main setup.exe in these cases is just a shell that selects which one to run. This adds to package bloat. Linux fares slightly better on this, because (IMHO) i18n is easier on the programmer here.

      3) Windows only needs to consider one architecture. If it had several to worry about, we'd probably see a situation much like we have with languages.

      4) Configuration at install time on Windows is mostly just choosing which optional extras to install. Most configuration is done within the program itself. This is more-or-less true on Linux as well (for desktop programs at least).

      To get close to the Windows installation experience under Linux, what we would need to do is make every package include every sub-package it depends on, plus sub-packages for every architecture, distro and language. Then you could just download the single file, click it and get everything installed. That package would be enormous, however.

      Tools like apt-get and urpmi give a very similar experience without the overhead of downloading a bunch of stuff you already have. So long as you stick with stuff that is packaged for your distro, they are painless.
    • Yes, Linux has 'dependency hell'... Win has 'DLL hell', which IMO/IMX is far worse.

      Most (nearly all) of those InstallShield routines install the versions of DLLs that the vendor has found it needs to work, and it's standard(sic) that WinX applications install libraries into system areas.

      That's just plain ugly, and why Win32 can be a very solid system iff you stay in the realm of well-engineered server applications, and so, so unstable when you turn lusers loose on it ('Ohh, I really must have this latest whizbang screensaver or desktop doodad...').

      IMO this idiocy directly stems from the DOS/Win16 programming(sic) history, where there was no isolation at all from the hardware and many (most?) programs were coded to do all sorts of inane low-level calls to the BIOS/hardware.

      This just isn't allowed in Unix(linux,bsd ...) and it's as much a social/cultural issue as a technical one. Unix has 2 decades of enforcing the distinction between what's in the 'OS' hierarchy and what's in application space. Whether you want to discuss source-builds, defaulting to install in /usr/local/ or commercial installations which usually go into /opt, I've yet to see an application which would overwrite system libraries.

      <RANT>
      It's my observation that much of the shoddy coding that's found its way into open source in recent years is the direct result of Windows coders bringing these bad habits into Linux.

      The vast majority is being written without thought for portability, works only on Linux, and assumes the GNU toolchain (often relying on version-specific features of same). Even within Linux, the difficulty of building code from source has taken many steps back into the past. Try building Gnome directly from sources.

      A decade ago it was like this installing free/PD/GNU/OSS software on the various proprietary Unixes. Sadly, after a period of big gains in portability, software builds are moving in the other direction. The complexity (often useless/needless IMO) of much modern OSS code has benefits, but it also has drawbacks.
      </RANT>

    • In windows, I double-click setup.exe, a GUI pops up, I pick the destination and off it goes.

      There are differences:

      • Windows gives more choices.
      • Windows does not have a central installer.
      • Most Windows programmers have no clue as to how to install.
      • The Installer is a third party program.


      Windows gives more choices.

      On a general install, Windows asks where to install it. Linux follows the Un*x scheme, and gives fewer choices here. Also, Windows programs are larger, and thus there is an issue with how much to install. Linux binaries tend to be small, and so they don't bother asking what type of install you'd like. Finally, Windows programs are usually closed source, so the package lives in its own little world. Linux packages are generally open source, so installing a package is itself an implicit choice - a front-end, say, or a data file. With Windows it all comes in one closed package. In Linux they are separate packages, so choosing the package is like choosing an option.

      Windows does not have a central installer.

      This has changed recently with the Windows Installer, but it is not yet that popular. And things such as the uninstaller rely on the person programming the installer to put the appropriate entry in the registry; if they don't (and many don't), Windows has no record of it. So each program needs its own installation program. Linux distributions have a general installer that keeps track of everything. You can always query the rpm database, or the dpkg cache.
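
      For example (the file path is just an illustration), both databases can tell you what is installed and which package owns a given file:

        rpm -qa | grep mozilla     # list installed packages matching a name
        rpm -qf /usr/bin/gimp      # which package owns this file?
        dpkg -S /usr/bin/gimp      # the Debian equivalent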

      Most Windows programmers have no clue as to how to install.

      You'll have to trust me on this one. I worked for WISE, and dealt with emails during my 20 or so months there. I probably answered over 20,000 emails (at least 30 emails a day), so I have a general idea. I also dealt with the newsgroups, but those people were vastly more intelligent.

      For example: one person had a CD with tens of thousands of images on it and wanted to know how to make a link to *each* image in the start menu. I warned the person how much space this would use up due to the cluster size, and they agreed to make links only to the folders. Then there was the guy who, after making temp files (instead of letting the installer handle them with its own feature), would delete *everything* in the temp folder. I warned him as well. Oh, and there were people who just assumed the Windows directory was "C:\Windows", and people who hadn't the slightest idea what the registry was. And these people wrote programs to run on your computer!

      Thus, luckily, there are install programs for Windows. Linux does not seem to have these issues.

      The installer is a third-party program.

      Usually InstallShield, WISE, or InstallVise. So they need frills to sell. On Linux, all people care about is a packager, so frills just aren't needed.

      To sum it all up: Windows installs are more complicated and have space issues; they rely on programs to register themselves; the programs are written by the clueless; and third-party vendors charge for their installers. Thus there are installers, with choices and frills.

      Linux has no need for all of that. So, the GUI just was never required.

      Why can't someone make something like this for Linux? It would greatly improve the user experience in Linux. Instead of having to edit 8 configuration files, the user just starts setup.sh or something and the setup asks questions.

      The programs that do need config-file editing give you a great many more choices than their Windows counterparts. True, many Windows programs put these in their "options" or "preferences" dialogs, but the drawback is that you *require* a GUI to get at them, which makes editing harder, slower, more limited, and not easily distributable to other computers.

      This is why I like apt-get: one-line setup. But every time I download something that's not part of Debian, it turns into a horrible experience I wish I'd never had.

      That's what unofficial sources are for, and what /usr/local is for.
  • stow is broken (Score:5, Interesting)

    by Ender Ryan ( 79406 ) <TOKYO minus city> on Wednesday March 12, 2003 @09:35AM (#5493038) Journal
    Don't attempt to use Stow for things such as Gnome or KDE. If you do, things will get horribly broken, for a number of reasons.

    1. Stow requires you to configure each package into its own directory, which will cause problems with Gnome and KDE. Some packages are easy to configure for one directory, e.g. /usr/local, and then install into another, e.g. /usr/local/stow/packagename (see the sketch after this list). Others, not so much...

    2. Stow has a serious bug in the way it handles directories. If only one package touches a certain directory, Stow simply creates a symlink to that directory; then, if another package puts something there, it removes the symlink and creates a real directory. This is a good idea; however, Stow sometimes borks it up, which is bad.
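
    For item 1, on packages with well-behaved automake-generated Makefiles, the usual trick is to configure against the public prefix but override the prefix at install time (packagename is a placeholder):

      ./configure --prefix=/usr/local
      make
      make install prefix=/usr/local/stow/packagename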

    If you're interested, there's a program similar to Stow, called srcpkg, at tempestgames.com/ryan. Yes, I wrote it; sorry for the blatant plug. I thought it relevant because I wrote it after experiencing said problems with Stow. FYI, I use srcpkg to manage all the non-system software on my machine, including Gnome, KDE, mplayer, ogg, SDL, and a very large number of other libs and programs.

    There are also a number of similar programs on freshmeat. They're all tailored to slightly different needs, but they're all generally better than Stow.

  • by Anonymous Coward on Wednesday March 12, 2003 @09:37AM (#5493043)
    As you may or may not know, Stow relies on "make install". And as you may or may not know, "make install" has many weaknesses, maintainability and readability being two of its most glaring problems.

    The reason is readily apparent. There is no clean, high-level way to specify installation details to make. Almost always, "make install" invokes a messy ad hoc jumble of Bourne shell commands. Make has its own variables; Bourne shell has its own variables. You end up double-escaping all kinds of items and are left with $$Variables and a plethora of backticks. The consequence is that install details in a makefile tend to look like Perl's uglier cousin. Throw in line extension by escaping line ends with a reverse solidus, and you have the makings of a maintenance nightmare. Try previewing "make install" with "make -n". Not too helpful, is it?
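
    A made-up but typical fragment shows the mess ($(PROGS) and $(bindir) are hypothetical variables; $$f is make's escape for the shell's $f):

      install:
              for f in $(PROGS); do \
                      d=`dirname $$f`; \
                      mkdir -p $(DESTDIR)$(bindir)/$$d; \
                      cp $$f $(DESTDIR)$(bindir)/$$d/; \
              done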

    How to fix it? I don't know. Perhaps all the Unix vendors could agree on an "installation specification language" -- ISL. Then each vendor's "make" program could incorporate an interpreter for ISL. Other programs like Linux RPM could benefit from this too and incorporate an ISL interpreter, because RPM installation specifications are only slightly better than plain Bourne shell (although definitely a step in the right direction).

    • A better solution would be to replace automake with a totally new build system. We've been hacking around the deficiencies of make for years, and the time when compatibility with the lame commercial Unices' forms of make was an issue is long gone.

      Something like SCons [scons.org], perhaps, although I'm not sure Python is the best language for this. It's possible, easy even, to write really ugly bash, but bash is a very good language for filesystem manipulation, which is a large part of build management. There was another build system, based on bash, that was a LOT easier than autotools, but I can't remember what it was called! :(

  • Encap is better (Score:4, Informative)

    by tskirvin ( 125859 ) on Wednesday March 12, 2003 @09:39AM (#5493053) Homepage
    encap [uiuc.edu] is a better and more established system that works on the same general idea - put everything in /usr/local/encap/PACKAGE-VERSION, and symlink into place. It's mostly just used at UIUC, but good Gods it works well. I use it for absolutely everything, and essentially refuse to install anything on our systems that won't support it. And I have yet to encounter a workplace where it doesn't win over absolutely everyone with its simplicity within six months.

    Also, cpanencap [uiuc.edu] is the perfect tool for perfecting Perl's module system. All it needed was versioning.
  • by aagren ( 25051 ) on Wednesday March 12, 2003 @09:42AM (#5493078)
    I've used Stow on different Unix platforms during the last couple of years, and I think it is a great tool for maintaining software packages which aren't supported by the platform's own packaging system (deb, rpm, pkg, etc.).

    But remember one thing. If you are starting a new Stow system in, e.g., /usr/local, then be sure to create the directory structure:

    /usr/local/bin
    /usr/local/lib
    /usr/local/include
    etc

    if it doesn't exist before stowing anything. Otherwise, the following will happen. Let's assume that you have the software package in /usr/local/packages/app-1.4,

    with its own structure like:

    /usr/local/packages/app-1.4/bin
    /usr/local/packages/app-1.4/lib
    etc.

    Stowing this package without the /usr/local structure in place will result in:

    ls -l /usr/local

    bin -> packages/app-1.4/bin
    lib -> packages/app-1.4/lib
    etc.

    Then the next package you stow into /usr/local (let's call it app2-1.5) will see that, e.g., /usr/local/bin already exists and will link the files from its own bin directory into /usr/local/bin, with the result that files from app2-1.5 get linked into the /usr/local/packages/app-1.4 structure, which will mess things up.
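
    In other words, before the very first stow run, something like this (extend the list with whatever top-level directories your packages use) guards against the folded symlinks:

      mkdir -p /usr/local/bin /usr/local/lib /usr/local/include
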
    • by virtual_mps ( 62997 ) on Wednesday March 12, 2003 @09:53AM (#5493145)
      Then the next package you stow into /usr/local (let's call it app2-1.5) will see that, e.g., /usr/local/bin already exists and will link the files from its own bin directory into /usr/local/bin, with the result that files from app2-1.5 get linked into the /usr/local/packages/app-1.4 structure, which will mess things up.


      Odd. On my system it will notice that bin is a symlink to a bin dir in a different stow package, remove the symlink, create a real directory, then link the contents of *both* packages' bin dirs into the new /usr/local/bin.
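
      That is, after stowing the second package you'd expect something like this (using the parent's paths):

        ls -l /usr/local/bin
        app  -> ../packages/app-1.4/bin/app
        app2 -> ../packages/app2-1.5/bin/app2
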
  • Package Management? (Score:3, Interesting)

    by rampant mac ( 561036 ) on Wednesday March 12, 2003 @09:47AM (#5493110)
    I know this will probably come off as flamebait, but what is Linux doing to make program installation / de-installation easier?

    Sure, package management is wonderful, but it's not something I would recommend for my parents to use. They have enough trouble setting the time on their VCR, and my mother still can't grasp the concept of tabbed browsing after I set her up with Mozilla.

    Will Linux ever mature to the point where applications are bundled like they are in OS X... Where a new user can install a program by dragging one icon to install a program, and then drag that same icon to the trash to uninstall it?

    How could this be implemented?

    • Will Linux ever mature to the point where applications are bundled like they are in OS X... Where a new user can install a program by dragging one icon to install a program, and then drag that same icon to the trash to uninstall it?

      Forget appfolders, at least for now. Implementing them properly on Linux (i.e. with dependencies, system integration, etc.) is just too hard for the short to medium term. MacOS gets around these problems by hiding from them, or, because it's a proprietary OS, they simply aren't relevant; that's not really a route Linux should take.

      As for drag and drop packages, now that is certainly possible. The UI for software installation need not be directly related to how it's implemented. How does this sound?

      You browse to a web page in say Moz/Ephy/Konqueror/whatever. These pages are XHTML - inside them is a small set of namespaced XML elements which interfaces with X Drag and Drop. It basically places an icon of arbitrary size (think SVG) with a caption, just like in a file manager, into a web page.

      This icon represents an object. It wouldn't have to be a package, but let's focus on that use case for the moment. The user goes to a web page, reads about this cool new software. They see the icon. They drag the icon out of the webpage (leaving an empty container or something behind) and onto a panel. A panel, in case you're not aware, is a collection of app launchers, applets, start menus, task lists and so on. KDE and GNOME provide them, see here for a couple [theoretic.com].

      The user drags the icon onto one of the panels. The icons budge over to make room (think dock here), and the user drops the icon in. It immediately fades to grayscale, and a small rotating "busy" animation [musichall.cz] appears in one corner.

      So the package starts downloading in the background, as the system resolves the package and its dependencies to the nearest mirror servers, locates the right CPU architecture for binaries, and so on.

      Meanwhile, the user gets on with their work. On a dialup connection (i.e. most connections) downloading software is slow, so people want to just get on with playing their games or whatever. Now the packages download and install in the background, automatically. If user interaction is needed, a small throbbing RHN-style system tray icon would appear; when the user clicked it, the interactions would take place, then the window would disappear and the install would continue. Most packages wouldn't have interactions, by the way.

      Clicking the icon while grayscale gives download status, speed, ETA and so on.

      Finally, when it's done, the icon goes back to being coloured, and clicking on it launches the app.

      OK, so what if you drag it to the desktop, you say? Well, the same thing, basically: the package is downloaded to your desktop (any dependencies get put into a cache) and is grayed out until done. If you click it, the installation proceeds as normal, and the package turns into a launcher.

      Note that we now use a vFolder-based menu system. With a bit of extra work, that means app launchers could be reference counted, and that means uninstalling becomes a matter of dragging the launcher to the trash can. If other users have launchers for that app on the system, the app stays installed until the refcount drops to zero, at which point it could be automatically garbage collected when the system is idle.

      Having a network-based system like that lets you do a lot of cool stuff. For instance, if you encounter a new file, the mime-type sniffers will pick up what kind of file it is (even for compound stuff like MS Office docs) and could query the network for packages that can view or edit that filetype. All this becomes possible when the current packaging system becomes unbroken, which is what we're currently working on.

      • I've been thinking along these lines for a while now myself, albeit from a different point of view. My suggestion is kind of a glorified "Start"/"Applications"-menu with hooks into the package management system (which in this case, as in your example, needs dependencies). I'll explain it with a current Gnome/Debian system in mind, for now.

        In this scenario, if the user was looking for a spreadsheet app, [s]he would go to Applications->Office. Oops, nothing there. But there would be a submenu, called something like "available" or whatever, so the user looks there and finds "Gnumeric Spreadsheet". [S]he clicks on it, and a dialog pops up saying: "This program is not currently installed on your system, but is available for download. Would you like to install it now?"
        If the user clicks yes (and, of course, supplies the root password!), apt-get has its turn at the situation, installs gnumeric with dependencies, and off we go!

        Of course, this could get pretty messy wrt the number of packages in e.g. Debian, so someone (Gnome? Debian? Me or you?) would have to create a list of sensible packages for each category, a couple for each task, which the user would appreciate. Perhaps with settings somewhere, with Beginner/Medium/Advanced-type options, which could also control which questions get asked during install, and so on.

        Your mimetype idea is a very good one for this scenario too.

        If this wouldn't end the "in windows, you can just doubleclick setup.exe and..."-garbage, I don't know what would.
        • I think the main problems with that would be overcrowded menus that are hard to keep up to date. A better approach, I think, might be to have a "Get office software..." item in the menus which opened a web browser at a page with draggable icons, mini-reviews, and perhaps a SlashCode forum to discuss them. I'd probably find that more useful than a raw list.
  • by 4of12 ( 97621 ) on Wednesday March 12, 2003 @09:47AM (#5493111) Homepage Journal

    I remember seeing mention of it a couple of years ago on the GNU site.

    Was it just that it was not completely developed, or are there other issues that are inhibiting broadscale adoption of stow?

    I'm not deliberately trolling, I just wanted to know.

    A few random things I do know are:

    • how to go from .tar.gz through configure;make;make install
    • that Red Hat's rpm [rpm.org] package manager has a 400-page manual, and I believe the learning curve looks like Mt. Everest
    • Debian folks swear by apt-get [debian.org]
    • writing autoconf [lanl.gov] macros [gnu.org] makes me weary
    • gar [lnx-bbc.org] (of Linux BBC fame) looked like an interesting superpackager
    • Well to cover a few of your points..

      >how to go from .tar.gz through configure;make;make install

      Simple, but damn near impossible to tell whether a package is installed. If the basic prereqs are installed, it can be the easiest...

      >that Red Hat's rpm package manager has a 400-page manual, and I believe the learning curve looks like Mt. Everest

      Well, RPM is a command-line front end to a database. It wasn't supposed to be accessed by users directly. Go check out how many setups there are for dpkg, Debian's underbelly database.

      >Debian folks swear by apt-get

      It's a wrapper around the REAL database program, dpkg. The apt commands are supposed to be easy to use; they're wrappers themselves.

      >writing autoconf macros makes me weary

      Damn straight.
  • No "package-manager" (Score:5, Interesting)

    by Eivind ( 15695 ) <eivindorama@gmail.com> on Wednesday March 12, 2003 @09:47AM (#5493115) Homepage
    Stow is no replacement for a package manager. It doesn't even *attempt* to do 99% of what a package manager is for. RPM and deb do all of the following (sometimes through frontends like urpmi and apt-get); Stow doesn't do any of them:
    • Solve dependencies automatically.
    • automate configuration and building of packages if you prefer to build yourself. (with stow, building is a manual task)
    • Warn you if two packages conflict with each other.
    • Verify integrity of an installed package.
    • Handle updates to installed packages.
    In fact, all Stow does is let you deal with all of the above manually, except that you're supposed to run configure with something like --prefix=/usr/local/stow/mypackage, and only when you are finished with all this will Stow handle the amazing task of making appropriate symlinks from the application directory to some directory on your path, a task that would take all of 10 seconds to do by yourself.
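
    The manual routine Stow automates amounts to roughly this (mypackage as above; note that Stow itself creates relative links and handles subdirectories):

      ./configure --prefix=/usr/local/stow/mypackage
      make && make install
      ln -s /usr/local/stow/mypackage/bin/* /usr/local/bin/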

    Pointless utility made by someone who apparently doesn't even understand which job they're trying to do.

    The article is also full of typical misunderstandings like:

    On Linux systems, most applications are required to be installed in some specific directory (which is usually /usr/local/), to run and function properly; the requirement comes either from Linux or from the application itself.

    There exists *no* requirement "from Linux" that any application reside in any particular directory, except that the default kernel expects /sbin/init to exist.

    I'm also not aware of a single package that requires living in /usr/local in order to function.

      There exists *no* requirement "from Linux" that any application reside in any particular directory, except that the default kernel expects /sbin/init to exist. I'm also not aware of a single package that requires living in /usr/local in order to function.

      I originally thought he was referring to the fact that by default apps are configured to /usr/local, and unfortunately most apps written for Linux are not relocatable - they must be installed to the same prefix they were configured to. That's a problem we're currently working on in autopackage. But then I realised that this utility isn't meant for binaries, and you always choose the prefix yourself.

      So I don't know what to think, except maybe the author was slightly confused or used poor wording.

  • by Erpo ( 237853 ) on Wednesday March 12, 2003 @09:48AM (#5493120)
    If I understand the explanation correctly, stow gives the user the ability to keep all of an application's files together under one directory (as opposed to sprayed out across the system) while creating symlinks to simulate the files' presence where they "should" be.

    <rant mode> IMHO, this attempt at package management only goes halfway. The basic idea driving it is that while programs may need to put files into preexisting function-categorized system directories (/usr/local/bin for executables, /usr/local/etc for configuration files, etc...), it's much more convenient to have all of the files under one program-specific directory so that important files can easily be located and manipulated.

    What this says to me is that *puts on asbestos suit* the windows model for software management and installation is highly superior to the gnu/linux model in many respects. Don't get me wrong. I'm not a fan of windows, and while the actual _implementation_ of that model on windows leaves much to be desired (e.g. uninstalls are not always complete), it's a great model.

    "Things you add to the system" are divided into two categories: core system stuff (e.g. libraries) and application programs. If it's core system stuff, it gets dumped into a system directory where programs can actually find it. No need for /etc/ld.so.conf. If it's an application program, it gets its own directory and everything goes inside that. No spraying messy files all over the system and not providing an easy way to remove them (again, according to the model, not the windows implementation), no hunting down configuration files, and no guessing which application a file "belongs" to.

    Maybe there's some really important piece of functionality that the *nix model provides and the windows model doesn't, but I certainly don't see it. Perhaps someone with more experience could tell me why we shouldn't work out a better filesystem hierarchy and try to convince gnu/linux distro maintainers to adopt it? Couldn't we do better?

    </rant mode>
    • "Things you add to the system" are divided into two categories: core system stuff (e.g. libraries) and application programs.

      That works OK when there is a clear divide between system stuff and everything else. In the case of Windows or MacOS, if a component is made by Microsoft or Apple, then it's probably system stuff; otherwise, it's everything else.

      But Linux doesn't work like that. If I install RhythmBox, it might want to pull in the GStreamer multimedia framework as a dependency. GStreamer doesn't come with Redhat 8.0 (though it does in 8.1). Is it system stuff or not?

      What about GTK? It is only one of several widget toolkits in use. Is it "core" or not?

      Well, really when every component can be potentially upgraded externally, the line between system and application becomes blurred, so you have to deal with them all equally - hence package management.

    • I *like* to have all config files together under /etc, makes it much easier to backup and restore my configuration. I can just copy all relevant files to/from /etc instead of going after them in all the application directories.
    • ... the windows model for software management and installation is highly superior ... "Things you add to the system" are divided into two categories: core system stuff (e.g. libraries) and application programs.
      My experience has been the opposite. Software management in Windows has been nothing short of a nightmare! This has been especially true for my testing teams who like to wipe an OS clean of any applications and reinstall to do their testing. Clean wipes are far easier in unix than windows.

      Secondly you only mentioned two categories, system libs and apps, but you neglected to talk about configuration information. With windows you'll have some app configuration info in the registry and possibly some in a .ini file in the app directory. At least with unix there is some consistency with having an ascii, editable configuration file - of course its location may vary but at least you can find and edit it, unlike the windows registry.

      Lastly, unix adds executables to /bin, /usr/bin, etc. so that they can be used from the command line without creating a god awful long path. Windows completely ignores this since most programs only work with a gui, so they force the user to start it by clicking on the icon. If you want to start a windows program from the command line, you have to put it in your path or create a shortcut into an existing 'pathed' directory.

      I think windows installation was created with the notion of your grandmother being able to install new apps. For uninstall, your grandma is more likely to just go buy a new machine since the current one is 'full'.

      Organization in unix can be improved upon, but it still is light years ahead of windows in real usability.
  • Excuse me? (Score:4, Insightful)

    by arvindn ( 542080 ) on Wednesday March 12, 2003 @09:54AM (#5493160) Homepage Journal
    Could somebody please enlighten me? I don't get the point at all.
    ... a number of advantages over the tried-and-true Red Hat and Debian package management systems. With Stow, you can package applications in standard tar files
    "A standard tar file" is just a bunch of files. The reason rpm and other packaging formats are used is to do dependency tracking and management. There is no way you can figure out the dependencies from just the tar file. So comparing stow with rpm is like comparing apples and oranges. Stow is not an alternative to rpm. (Of course I agree that if we had a single universal packaging format it would be great. But the answer is not to throw all the features overboard.)
    ... and keep application binaries logically arranged for easy access.
    Wtf? What do you mean, access a binary? When is the last time you did "vim /bin/ls"? The only thing you do with binaries is to execute them, and putting them in /bin/ or /usr/bin/ etc. is perfectly adequate.
    gives users the freedom to store or install the software package at any desired location
    Excuse me, but "configure --prefix=dir" already does that?
    Imagine installing an application that accidentally overwrites a file belonging to another application, and then you have to replace the file.
    Has anyone ever encountered this? It seems somewhat contrived to me.
    Or imagine, before uninstalling and deleting an application, trying to determine which files belong to that application.
    Any half-decent package manager allows you to list all the files belonging to an application.
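
    For instance (packagename is a placeholder for any installed package):

      rpm -ql packagename     # list every file the package installed
      dpkg -L packagename     # the Debian equivalent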

    The UNIX way of placing applications is well thought out, mature, and perfectly fine. Needlessly playing around with it is likely to cause more problems than it solves.

    Yes, the package management scene on Linux sucks right now. But that is because of dependency management, and has nothing to do with keeping all the files of an application in a nice folder.

    • Re:Excuse me? (Score:3, Informative)

      > Could somebody please enlighten me? I don't get the point at all.

      I'll try to give it a go. I have several Debian boxes, and mostly use apt, but every now and then I need to install something for which there is no .deb, and stow is perfect for that. I have an m68k cross-compiler version of gcc (for Palm(tm) development), a locally modified version of the GTK canvas, and a few other obscure, specialized bits of software, all of which are "stow"-ed. I have realized many of the advantages described in the article -- I can uninstall these things cleanly, and move between versions just by unstowing the old one and stowing the new one.

      I do think the article (and much of the commentary here) overstates the role of Stow. It's not a substitute for a package manager, and the way it works makes it unsuitable for system-level software that, for instance, might need to set up cron jobs, require scripts in /etc/init.d, or be configured from a file in the /etc directory. But it *is* very useful for those occasional, obscure bits of software which primarily consist of libraries and include headers, or non-system executables.
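
      Switching versions really is that simple (version numbers hypothetical; -D is Stow's delete/unstow flag):

        cd /usr/local/stow
        stow -D app-1.0    # remove the old version's symlinks
        stow app-1.1       # link the new version into place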

      Stow itself is not new, and, interestingly, is packaged by Debian -- I got it with "apt-get install stow"...
  • Another way (Score:2, Interesting)

    by tallniel ( 534954 )
    I think more people should take a look at Tclkit (http://www.equi4.com/tclkit) and the concept of starkits (http://www.equi4.com/starkit). This is a great concept where an application is delivered as one self-contained file (compressed, with an internal virtual file system). This gets rid of the problem of "installation" altogether.

    Very cool stuff.
    • "an application is delivered as one self-contained file [...] [t]his gets rid of the problem of "installation" all together."

      Sure. It also gets rid of the concept of shared libraries.

  • For French readers, there is a Stow tutorial in this month's GNU Linux Magazine France (http://www.linuxmag-france.org/). The article is not available online.

    Here is the author's web site: http://hocwp.free.fr/ln_local/index.html
    However, I don't recommend his ln_local tool (a simple Stow replacement), as it is seriously flawed: the shell script doesn't escape spaces (and other, more dangerous shell characters) in the filenames it handles.

    Stow is here : http://www.gnu.org/software/stow/stow.html
    See also XStow : http://xstow.sourceforge.net/

    Dolmen.
  • by wct ( 45593 ) on Wednesday March 12, 2003 @10:25AM (#5493359)

    Checkinstall [izto.org] automatically produces native packages (rpm, deb, slackware tgz) from a standard make install. I've found this gives the best of both worlds - easy, consistent package management coupled with flexible/optimized source configuration.
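
    The usual workflow, as I understand it, just swaps checkinstall in for the final step; it runs "make install" for you and captures the result as a package:

      ./configure && make
      checkinstall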

    • Because checkinstall uses tar+gz instead of Slackware's package building tool (makepkg), it produces broken Slackware packages. It should be fixed to use makepkg -- then I'd actually recommend it.

      Slackware packages are not simply tar+gz. It's important that the files are stored into the tar archive in a certain way, the correct version of tar is used, and the symbolic links are moved into the installation script properly, otherwise the package can't be effectively managed. You wouldn't try to make an rpm or deb with tar/cpio/bzip2/gzip/etc, so why people think they can tar up some files and call it a Slackware package is beyond me.
  • by vinod ( 2092 ) on Wednesday March 12, 2003 @10:27AM (#5493373)
    It is good to be able to use independent directories for applications that are installed at the site (i.e. not part of the distribution), and RPM can accommodate such independent directories as well. Within the independent tree, the applications should standardize on a Unix-like directory structure: ./etc, and so on.

    That said, putting symbolic links in various directories is a bad idea. Instead, users could explicitly 'subscribe' to the directories: a special, user-specific ./bin directory can hold the subscriptions to the bin directories of subscribed packages.

    The bad thing about RPM is that it uses a centralized DB for tracking dependencies, which can't be manipulated by hand. Instead, it could evolve to (1) use an open, XML-based format, and (2) keep the dependencies as part of the package's own independent directory tree.

    In most cases, it is sufficient that dependencies be evaluated dynamically. After all, sysads know what they are doing.

    -vgk

  • SEPP [ee.ethz.ch] is a package management system that, like Stow and similar tools, allows you to separate packages into their own directories, but in addition:
    • solves the distribution problem by letting packages be mounted over NFS, using the automounter to make the applications available under a standard path (/usr/pack/PACKAGE)
    • provides for each application a wrapper script that takes care of all the necessary environment setup so that users don't need to edit their bashrc
    • supports installation of multiple versions of the same application by installing version-tagged binaries in addition to the normal binaries. I can, for example, run mozilla-1.1 or just mozilla, in which case I get the "default" version. This is very important for, say, a Ph.D. student who wants to finish his thesis with Matlab 5.3.
    • generates web documentation automatically (have a look here [ee.ethz.ch])
    • logs usage via syslog
    • tracks dependencies
  • bah (Score:3, Funny)

    by Illserve ( 56215 ) on Wednesday March 12, 2003 @11:10AM (#5493731)
    I don't need help managing my package, thank you very much.
  • See the opt_depot [utexas.edu] page for one, and for links to another dozen or so packages that do the same sort of thing in varying ways.

    Stow is really a rather GNU-come-lately entry to the Depot arena.

  • I've been ranting about the stupidity of the FHS and the /, /usr, /usr/local directories for a long time, but all I get is vacant stares and comments along the lines that I'm crazy for ever wanting to store applications in their *own* directories, as opposed to littering their contents horizontally across the filesystem. But now this simple idea is illustrated in a developerWorks article and all of a sudden it's "obvious". argh </vent>
  • With Stow, you can
    package applications in standard tar files and keep application binaries logically arranged for easy access.


    I'm sure we all remember that Eric Wright (RIP), when asked why he stows his package like that, said "for easy access, baby."

    -Peter
