Rage Against the File System Standard

pwagland submitted a rant by Mosfet on file system standards. I think he's somewhat oversimplified the whole issue and wrongly assigned blame, but it definitely warrants discussion. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?
  • hmmmm.... (Score:3, Informative)

    by Ender Ryan ( 79406 ) <TOKYO minus city> on Wednesday November 21, 2001 @10:51AM (#2595683) Journal
    My /usr/bin has ~1,500 files in it. A whole bunch of it is gnome stuff, because Slack 7.1 didn't put gnome in a completely separate dir. But then there are also all kinds of crap that I have absolutely no clue about. Just looking at some of the filenames I think I know what they are for, but I have other utilities on my machine that do the same thing.

    So, I'd say yes, it probably is partly because of lazy distro package management, but then again some people might still use some of this stuff and expect it to be there.

    On most new distributions I've seen, this is actually getting better. The latest Slack at least completely separates gnome by putting it in /opt/gnome.

    In any case, though, I think there are more important things to worry about, such as all-purpose configuration tools, or at least lumping the existing ones together into a single graphical management tool. You should be able to configure everything from sound/video to printers all in the same place.

  • by Hektor_Troy ( 262592 ) on Wednesday November 21, 2001 @10:59AM (#2595721)
    Most people haven't read the article it seems. Allow me to copy the follow-up:

    A few followups
    The response to this commentary has been large and I've gotten a ton of emails (mostly positive). A few things I think I should clarify. First of all, this seems to only be an issue on RH-based systems - many Slackware and SuSE users emailed me to say that their systems try to do the right thing. Second of all, a few angry people questioned my qualifications to make the above commentary, and one person even called me a novice! Many people know who I am and that I've been involved in Linux for years, but I figure since most editorials state the author's experience I might as well, too. I'm a Unix and Windows developer, have certifications in HP-UX Systems Administration and Tru64 cluster management (TruCluster), and have been either a Unix admin or a developer since college. I've worked on free software for about 3 years and have been a Linux user since the 0.9x days. Last of all, a few users say I should just use RPM, usually stating something along the lines that I'm stupid and don't know how to use it. Nothing could be further from the case: I have a lot of experience with RPM, both as a user and from creating quite a few RPMs for Linux distributions in the past. Just because you have a package manager is no excuse for sloppy and lazy directory management.
  • Re:The Alternative? (Score:5, Informative)

    by Anonymous Coward on Wednesday November 21, 2001 @11:00AM (#2595732)
    I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH



    Here's what every unix administrator I know (including myself) does:

    1. everything is installed in /opt, in its own directory:

      example$ ls /opt
      apache emacs krb5 lsof mysql openssl pico ucspi-tcp
      cvs iptables lprng make-links openssh php qmail

      (pico is for the PHBs, by the way)
    2. Every version of every program gets its own directory

      example$ ls /opt/emacs
      default emacs-21.1

    3. Each directory in /opt has a 'default' symlink to the version we're currently using

      example$ ls -ld /opt/emacs/default
      lrwxrwxrwx 1 root root 10 Oct 23 16:33 /opt/emacs/default -> emacs-21.1

    4. You write a small shell script that links everything in /opt/*/default/bin to /usr/local/bin, /opt/*/default/lib to /usr/local/lib, etc.

    Uninstalling software is 'rm -rf' and a find command to delete broken links. Upgrading software is making one link and running the script to make links again. No need to update anyone's PATH on a multi-user system and no need to mess with ld.so.conf. You can split /opt across multiple disks if you want. NO NEED FOR A PACKAGE MANAGER. This makes life much easier, trust me.
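
    For what it's worth, a minimal sketch of the step-4 script might look something like this (directory names assumed; a real version would also cover sbin, share, etc.):

      #!/bin/sh
      # relink /usr/local/{bin,lib,man} to whatever each /opt/<pkg>/default provides
      for dir in bin lib man; do
          for file in /opt/*/default/$dir/*; do
              [ -e "$file" ] && ln -sf "$file" "/usr/local/$dir/"
          done
      done
      # prune any symlinks whose targets have gone away (e.g. after an rm -rf)
      find /usr/local/bin /usr/local/lib /usr/local/man \
          -type l ! -exec test -e {} \; -exec rm -f {} \;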
  • Re:The Alternative? (Score:2, Informative)

    by El Prebso ( 135671 ) on Wednesday November 21, 2001 @11:04AM (#2595759) Homepage
    There is actually a Package Manager that does all this for you, only it makes everything a lot easier.

    http://pack.sunsite.dk/
  • by tjwhaynes ( 114792 ) on Wednesday November 21, 2001 @11:05AM (#2595765)

    The unix system doesn't really dump all the files in /usr/bin. These are, almost without exception, executable files. For each executable, support files are usually installed into one or more directory trees, such as /usr/share/executable_name/. The main benefit of having all the main binaries in one place (or two - I usually try to leave system binaries in /usr/bin and my own installations in /usr/local/bin) is convenience when searching paths for the binaries.

    However, this paradigm is pretty ugly if you are browsing through your files graphically. It would be nice if each application/package installed into one directory tree, so you could reorganise the system simply by moving applications around. For example,

    /usr/applications/

    /usr/applications/games/

    /usr/applications/games/quake3/

    .. this dir holds all quake 3 files ...

    ...etc..

    /usr/applications/graphics/

    /usr/applications/graphics/gimp/

    ... this dir holds all gimp files

    ...etc...

    If this appeals to you, you might like to check out the ROX project [sourceforge.net]. This sort of directory tree layout was the standard on the Acorn Risc OS and made life extremely easy for GUI organisation. It makes a lot of sense to use the directory tree to categorise the apps and files.

    Cheers,

    Toby Haynes

  • by Daniel Serodio ( 74295 ) <dserodio@gmailPASCAL.com minus language> on Wednesday November 21, 2001 @11:05AM (#2595766) Homepage
    No need to do the dirty work by hand, that's what GNU Stow [gnu.org] is for. Quoting from the Debian package's description:
    GNU Stow helps the system administrator organise files under /usr/local/ by allowing each piece of software to be installed in its own tree under /usr/local/stow/, and then using symlinks to create the illusion that all the software is installed in the same place.
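
    A typical session, assuming a package was built with --prefix=/usr/local/stow/vim-6.0, might look like:

      # cd /usr/local/stow
      # stow vim-6.0
      # stow -D vim-6.0

    The first stow call creates the symlinks under /usr/local; stow -D tears them down again.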
  • Um, so? (Score:3, Informative)

    by bugzilla ( 21620 ) on Wednesday November 21, 2001 @11:06AM (#2595771) Homepage
    Much better to have a few thousand files in one dir than to have so many dirs that need to be in your $PATH that some shells will barf.

    For instance, the POSIX standard (I believe) specifies a minimum of 1024 characters for $PATH. My users at work sometimes need much longer $PATHs. Some OS vendors say: OK, 1024 is the minimum for POSIX compliance, that's what we're doing. Some, like HP-UX (believe it or not), have increased this at user request to 4K.
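
    (If you're curious how close you are to a limit like that, a quick check from any POSIX shell is:

      $ printf '%s' "$PATH" | wc -c

    which prints the length of your current $PATH in characters.)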

    In any case, this all seems pretty petty. It's not like our current and future filesystems can't handle it, and package managers are pretty good and know what they put where.
  • by Anonymous Coward on Wednesday November 21, 2001 @11:12AM (#2595803)
    There are probably some very valid reasons for the way UNIX does things. For example, many application binaries and related files are shared between different applications and reused by others, because software is often cooperatively developed between vendors rather than in isolation the way Windows stuff is typically developed. As a result, sharing of common resources, libraries, etc., is much easier to achieve.

    Also, many complex sites use different kinds of partial NFS mounts on a file system. For example, all of "/usr/share" may come from a single master NFS server, but /usr/bin might come from a CPU-architecture-specific machine. To do this with /opt/packagename would mean all kinds of NFS 'micro-mounts' for portions of each application's tree. Having a common set of directory trees for applications, rather than package-specific ones, makes it much easier to organize role-specific network mounts.
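
    As a rough sketch, the role-specific mounts might look like this in /etc/fstab (hostnames and export paths are made up):

      share-master:/export/usr/share  /usr/share  nfs  ro  0 0
      i386-server:/export/usr/bin     /usr/bin    nfs  ro  0 0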

    Of course, most of the current package management systems do not seem to understand the concept of role-specific filesystem mounts. It would be nice if I could install an rpm's /usr/share portion on my master NFS server without it also installing bin, etc., and install the /usr/bin (or whatever) portion on workstations or my CPU-specific NFS servers without it installing /usr/share and such. A master config file in /etc that could explain this kind of usage to the package management systems set up on these machines would make that much easier to accomplish.
  • sounds like Encap (Score:5, Informative)

    by _|()|\| ( 159991 ) on Wednesday November 21, 2001 @11:17AM (#2595830)
    I think it is better to install all your programs' binaries under a subdirectory, then symlink the executables

    You want the Encap package management system [uiuc.edu]. From the FAQ [uiuc.edu]:

    When you install an Encap package, the files are placed in their own subdirectory, usually under
    /usr/local/encap. For example, if you install GNU sed version 3.02, the following files will be included:
    • /usr/local/encap/sed-3.02/bin/sed
    • /usr/local/encap/sed-3.02/man/man1/sed.1
    Once these files have been installed, the Encap package manager will create the following symlinks:
    • /usr/local/bin/sed -> ../encap/sed-3.02/bin/sed
    • /usr/local/man/man1/sed.1 -> ../../encap/sed-3.02/man/man1/sed.1
    The normal user will have /usr/local/bin in his PATH and /usr/local/man in his MANPATH, so he will not even know that the Encap system is being used.
    The technique is essentially compatible with RPM, but Encap goes so far as to define its own package format, which probably is not. If you like RPM, you might do better to simply follow the same convention.
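
    For the curious, the links Encap maintains are nothing magic; made by hand they would just be:

      # ln -s ../encap/sed-3.02/bin/sed /usr/local/bin/sed
      # ln -s ../../encap/sed-3.02/man/man1/sed.1 /usr/local/man/man1/sed.1

    The point of the tool is simply that it creates and prunes them for you.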
  • by Ranalou ( 200662 ) on Wednesday November 21, 2001 @11:20AM (#2595846)
    To pick nits a touch, the reason X got its own sub directory was that it was often on a separate file system from the rest of /usr. In the long, long ago X was of such astounding size relative to the limited and expensive disk space of the day that special considerations had to be made upon its installation. It had little to do with any other sort of organization.

    As for the rest of the rant, simply calling the current practice of file organization horrendous behavior, sloppiness, or laziness, without ample argument or demonstrable advantages for breaking every package into a separate subdirectory, is damaging to the cause at best. Had the rant claimed that there are an unacceptable number of namespace clashes, or that simply doing an 'ls' in one of these directories blows away the filename cache mechanisms in the kernel, forever making certain optimizations useless, or anything of that sort, it would hold more weight than unsupported bashing.

    The author laments the inability to manage these subdirectories effectively with standard tools, but as I see it, the option to not use package management has been there all along. Roll your own, putting things where you want them. Or, I might suggest broadening the concept of 'standard tools' to include the package management system installed, should the former option seem ludicrous.

    Not having to muck around with the PATH - and more so, not having to support users mucking around with their own PATHs - far outweighs the disadvantages of not being able to use 'standard tools'. What time I lose learning and using my package management system I make up tenfold in not supporting the very issues which I foresee the author's solution creating.

    --Rana
  • no. (Score:1, Informative)

    by Anonymous Coward on Wednesday November 21, 2001 @11:22AM (#2595856)
    PATH is always explicit.
  • by Angry White Guy ( 521337 ) <CaptainBurly[AT]goodbadmovies.com> on Wednesday November 21, 2001 @11:23AM (#2595865)
    Don't know about the rest, but slack does the same thing if you let it. The nice thing about slack is that 9 times out of 10, you have to build from source, and get a much cleaner install. I find it easier to track broken symlinks than to remember the name of the binaries which run all the software on my server.

    Combat End User Ignorance - Tell them they're useless and can be replaced by VAX!

    AWG
  • Re:The Alternative? (Score:3, Informative)

    by AndyElf ( 23331 ) on Wednesday November 21, 2001 @11:24AM (#2595869) Homepage
    Section 4.1 of FHS:

    Large software packages must not use a direct subdirectory under the /usr hierarchy.
  • by TilJ ( 7607 ) on Wednesday November 21, 2001 @11:24AM (#2595875) Homepage
    On a Secure Computing Sidewinder (BSD based):
    % ls -l /usr/bin | wc -l
    258

    On an OpenBSD 2.8 server, minimal install + gcc stuff:
    $ ls -l /usr/bin | wc -l
    344

    On an OpenBSD 2.8 server, full install (including X):
    $ ls -l /usr/bin | wc -l
    373

    On a Mandrake 8.0 server:
    $ ls -l /usr/bin | wc -l
    1136

    On a RedHat 7.1 system with a fairly typical installation:
    $ ls -l /usr/bin | wc -l
    2203

    I want /opt (with subdir's per app) back ;-)

    It seems to me that there's a lot of overlap/duplication in the tool set on Linux distributions versus the centrally managed BSD distributions. A crowded /usr/bin might be a consequence of the "choice is good" Linux philosophy.

    Not that I'm saying I disagree with "choice is good" ...
  • stow (Score:2, Informative)

    by Anonymous Coward on Wednesday November 21, 2001 @11:34AM (#2595945)
    For "package-managing" stuff you compiled yourself, "stow" is really useful. Install all your apps in their own directory, like :
    /usr/local/apps/foo
    /usr/local/apps/bar

    and then use stow to create the relevant links to /usr/local/bin , lib, man..

    You still end up with bloated /usr/local/* directories, but it's only links. To remove an application, just unstow it and then rm -rf the application directory.
    Another benefit is that you can keep several versions of the same app that way.
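
    Switching between two installed versions is then just a couple of commands (version-suffixed directory names assumed):

      # cd /usr/local/apps
      # stow -D foo-1.0
      # stow foo-1.1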
  • # rm -ff /usr/bin (Score:3, Informative)

    by ellem ( 147712 ) <ellem52.gmail@com> on Wednesday November 21, 2001 @11:41AM (#2595984) Homepage Journal
    The final solution to this mess.

    Unless you are hand-writing each file in /usr/bin, who cares how many files there are in there?

    And Windows != /usr

    Program Files == /usr
  • Re:The Alternative? (Score:2, Informative)

    by -brazil- ( 111867 ) on Wednesday November 21, 2001 @11:46AM (#2596000) Homepage
    Um... You still want to RUN those applications, yes? And you don't want to remember and type in the full path for 2000 executables, yes? So you want your shell to find the executables for you. Which is what the PATH variable is for.
  • Look at opt_depot (Score:4, Informative)

    by jonabbey ( 2498 ) <jonabbey@ganymeta.org> on Wednesday November 21, 2001 @11:50AM (#2596025) Homepage

    Many years ago, we wrote a set of Perl utilities for automating symlink maintenance called opt_depot [utexas.edu].

    It's similar to the original CMU Depot program, but has built in support for linking to a set of NFS package volumes, and can cleanly interoperate with non-depot-managed files in the same file tree.

  • Re:The Alternative? (Score:1, Informative)

    by Anonymous Coward on Wednesday November 21, 2001 @11:50AM (#2596026)
    In Windows (tested with 2k and XP, using NTFS), argv[0] does contain the full path to the executable. This is for console programs, using cmd.exe as the console (not command.com, which may act differently and should be ignored anyway due to crapness).
  • by DGolden ( 17848 ) on Wednesday November 21, 2001 @11:54AM (#2596042) Homepage Journal
    HURD, and AmigaOS had a similar system called "assignments".
  • by Jack Auf ( 323064 ) on Wednesday November 21, 2001 @11:55AM (#2596043) Homepage
    Most major distros install quite a bit of stuff by default that 1) you will probably never use, 2) you probably don't know what it is, and 3) if it's a server, you don't need anyway.

    This is one of the reasons I created Beehive Linux [www.beehive.nu]. It aims to be secure, stable, clean, FHS compliant, and optimized for hardware built in this century. The current version is 0.4.2, with 0.4.3 out in a week or so.

    On one point, however, I must disagree with Mosfet:

    The most obvious thing is separate the big projects like desktop projects into their own folders under /usr

    The FHS states: /opt is reserved for the installation of add-on application software packages. A package to be installed in /opt must locate its static files in a separate /opt/<package> directory tree, where <package> is a name that describes the software package.

    Beehive puts large packages like apache, mysql, and kde2 under /opt in their own subdirectories, i.e. /opt/kde2. I think this is a much better solution than cluttering up the /usr hierarchy, and it makes it very simple to test a new version of a package without destroying the current setup.
  • Re:sounds like Encap (Score:3, Informative)

    by jonabbey ( 2498 ) <jonabbey@ganymeta.org> on Wednesday November 21, 2001 @11:58AM (#2596062) Homepage

    There have actually been many, many implementations of this basic idea, each with their own frills and features. I have a comprehensive listing of these programs on our opt_depot [utexas.edu] page.

    Take a look, if you're interested in that sort of thing.. I can think of relatively few ideas that have been implemented and re-implemented so many times.

  • Re:The Alternative? (Score:3, Informative)

    by ader ( 1402 ) on Wednesday November 21, 2001 @12:00PM (#2596072) Homepage
    Correct: this is not rocket science, people. It's called a software depot (at least it is now - see The Practice of System and Network Administration by Limoncelli & Hogan, chapter 23).

    How many directories in /usr does Mosfet want? One for X11, KDE, GNOME ... TeX, StarOffice, Perl, GNU, "misc", etc?? How large a PATH will that create?

    Actually, it's perfectly possible to use a separate directory for every single package - right down to GNU grep - if you:
    1. symlink all the relevant subdirectories for every package into a common set that is referred to in the various PATHs;
    2. manage those symlinks in some automated fashion.

    For the latter, try GNU Stow or (my favourite) Graft (available via Freshmeat). These tools could even be easily run as part of a package management post-install procedure.
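
    As a sketch of that idea, an RPM that installs everything under its own prefix could (hypothetically) call graft from its scriptlets, along the lines of:

      %post
      graft -i %{name}-%{version}

      %preun
      graft -d %{name}-%{version}

    assuming the package's install prefix matches the directory name graft expects.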

    The depot approach has a number of advantages, not least of which is the ease of upgrading package versions and maintaining different versions concurrently. And it's obvious what's installed and which files each package provides.

    The challenge is in encouraging the vendors to embrace such a model as an integral part of their releases; that would require some significant reworking.

    Ade_
    /
  • by \/\/ ( 49485 ) on Wednesday November 21, 2001 @12:00PM (#2596073)
    I agree that this is a Linux-related issue that mostly stems from laziness. I have been using the modules [modules.org] approach for tool management for years with very good results - even half a decade ago this was more advanced than any Linux approach out there today.

    With this approach each tool/version-combination gets its own directory, including subdirectories for libraries, source code, configuration files etc.

    You can then use the "module" command to dynamically change your PATH, MANPATH, ... environment to reflect the tools you want to use (note that this supports the usage of a tool; it is therefore not a replacement for package management tools like rpm, which are mainly concerned with installation).

    Each tool/version combination comes with an associated modulefile (which has a tcl-like syntax) where you can influence a user's system environment upon loading/unloading the module. It is also possible to e.g. create new directories, copy default configurations or do platform-specific stuff for a tool (which greatly helps users less fluent in Unix, since they do not have to care about stuff like shell-specific syntax for setting environment variables).

    It also allows you to give tool-specific help, e.g.
    $ module whatis wordnet
    wordnet: Lexical database for English, inspired by psycholinguistic theories of human memory.
    $ module add wordnet


    This is also very helpful if you want to keep different versions of the same tool (package, library) around and switch between them dynamically, e.g. for testing purposes (think different jdks, qt-libraries, etc.). With modules, you can e.g. do a simple
    module switch jdk/1.2.2 jdk/1.3.1
    and run your tests again. And you never have to worry about overwriting libraries, configuration files etc. even if they have the same name (since they are kept in a subdirectory for each version).

    For our institute I've set up a transparent tool management system that works across our Linux/Solaris/Tru64 platforms. All tools are installed this way (except the basic system commands, which still go into /bin etc.).

    Of course, it's a lot of work to start a setup like this, but in a complex environment it is really worth it, especially in the long run.
  • another tool: graft (Score:3, Informative)

    by opus ( 543 ) on Wednesday November 21, 2001 @12:01PM (#2596083)
    The tool I use (and prefer to GNU stow) to manage the stuff that isn't managed by a package manager is graft [gormand.com.au].

    For stuff that uses GNU-style configure scripts to build, it's simply a matter of, e.g.

    $ ./configure --prefix=/usr/local/vim-6.0
    $ make
    # make install
    # graft -i vim-6.0

    The files themselves are stored in /usr/local/vim-6.0, and graft creates symlinks in /usr/local/bin, /usr/local/man, etc.

    Removing the software simply involves:

    # graft -d vim-6.0
    # rm -rf /usr/local/vim-6.0

    That said, I usually rely on the package manager, and don't really have a problem with 2000 files in /usr/bin.
  • GNU stow (Score:2, Informative)

    by ggeens ( 53767 ) <ggeens AT iggyland DOT com> on Wednesday November 21, 2001 @12:02PM (#2596087) Homepage Journal

    There is a package which does just that: GNU stow [mit.edu]. I use that to organize /usr/local. Very easy to use.

    You install each package under /usr/local/stow/<packagename>, and then you run stow <packagename> to make the links. After an upgrade, you do stow --restow <packagename>.

    (To me, having all binaries in /usr/bin is not a problem: the package manager takes care of them. And stow is sufficient to handle the things I install locally.)

  • by jonabbey ( 2498 ) <jonabbey@ganymeta.org> on Wednesday November 21, 2001 @12:06PM (#2596120) Homepage

    Have you actually had to manage a system that works like this? It's a royal pain in the ass.

    Yup, I have. In fact, we've managed all of our UNIX systems that way for the last 8 years or so. It's not a pain in the ass at all... in fact, with the opt_depot [utexas.edu] scripts we wrote, we support automagic NFS sharing of packages for all Solaris systems in our laboratory. Individual system administrators can choose to use a particular package off of their choice of NFS servers, or they can simply copy the package's directory to their local system.

    Using symlinks gives you complete location independence.. all you need is a symlink from your PATH directory to the binaries, and a symlink from the canonical package location (e.g., /opt/depot/xemacs-21.5) to the actual location of the package directory, be it local or be it NFS.

    There's a group at NLM [nih.gov] who is working on tools and standard practices for managing NFS package archives using RPM, and then using the opt_depot scripts to integrate the package archives with each local system automatically.

  • One way to do it. (Score:1, Informative)

    by fungai ( 133594 ) on Wednesday November 21, 2001 @12:07PM (#2596131)
    Here's what I do on server systems (workstations I handle differently). I only install the minimum amount of packages during system install time. From that point onwards I only install under /usr/local/package_name. All my source goes into /usr/local/package_name/src. I always compile from source where possible. /opt is symlinked to /usr/local as well.


    $ cd /usr/local
    $ ls
    apache djbdns firewall info mysql redhat share
    bin doc freeswan kernel openldap rsync squid
    cvs e2fsprogs games lib openssh samba src
    cyrus etc imp man openssl saprouter wget
    dhcp exim include mathopd portfwd sbin wu-imap
    $


    I use ksh, so in /etc/profile I have:


    LOCALPATH=""
    for DIR in $( ls -d /usr/local/*/*bin )
    do
        if [[ -z $LOCALPATH ]]
        then
            LOCALPATH=$DIR
        else
            LOCALPATH=$LOCALPATH:$DIR
        fi
    done
    PATH=/root/bin:$PATH:$LOCALPATH

    MANPATH=/usr/man
    for DIR in $( ls -d /usr/local/*/man /usr/local/man/* )
    do
        MANPATH=$MANPATH:$DIR
    done


    And my PATH looks like:


    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/openssh/bin:/usr/X11R6/bin:/usr/local/apache/bin:/usr/local/apache/cgi-bin:/usr/local/cvs/bin:/usr/local/cyrus/bin:/usr/local/cyrus/sbin:/usr/local/dhcp/sbin:/usr/local/djbdns/bin:/usr/local/e2fsprogs/INSTALL.dllbin:/usr/local/e2fsprogs/INSTALL.elfbin:/usr/local/exim/bin:/usr/local/firewall/bin:/usr/local/freeswan/bin:/usr/local/mathopd/bin:/usr/local/mysql/bin:/usr/local/openldap/bin:/usr/local/openldap/sbin:/usr/local/openssh/bin:/usr/local/openssh/sbin:/usr/local/openssl/bin:/usr/local/rsync/bin:/usr/local/samba/bin:/usr/local/samba/sbin:/usr/local/saprouter/bin:/usr/local/squid/bin:/usr/local/wget/bin


    Which may look ugly, but I never actually look at it, and it works just fine. I've never noticed a speed decrease because of the long PATH... YMMV
  • not even close (Score:2, Informative)

    by David Jao ( 2759 ) <djao@dominia.org> on Wednesday November 21, 2001 @12:13PM (#2596167) Homepage
    If it's even close to the standard of the windows uninstaller, it'll leave a ton of files lying around

    It's not even close.

    The windows uninstaller, as far as I know, provides no way for you to:

    • list what files belong in a package
    • for a given file, list what package that file belongs to
    • list all other packages that a package depends on
    • list all other packages that depend on a given package
    Unix package managers do allow these things, so you can see exactly what it is doing and make sure that it works right.
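
    For example, with rpm (dpkg has equivalents such as dpkg -L and dpkg -S; package names here are just examples):

      $ rpm -ql openssh                  # list the files in a package
      $ rpm -qf /usr/bin/ssh             # which package owns this file?
      $ rpm -qR openssh                  # what does this package require?
      $ rpm -q --whatrequires openssh    # what requires it?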

    Also, in windows there is no centralized package management app -- Add/Remove programs pretends to give you central control, but behind the scenes it really just runs the uninstall app provided by the 3rd party vendor. In Unix, the situation is quite different -- there is a central program to manage packages (rpm on redhat, dpkg on debian, ports on BSD), and so there is no opportunity for lazy 3rd party developers to screw things up.

  • by rnturn ( 11092 ) on Wednesday November 21, 2001 @12:29PM (#2596266)

    We do the same thing on our Tru64 boxen. All 3rd party software goes in /opt or /usr/opt. 3rd party executables go in /usr/local/bin. Some executables live in an app-specific subdirectory under /opt and the symlink in /usr/local/bin points to the physical location. It makes OS upgrade time tons simpler. And the first step of our DR plan is to backup OS-related stuff and backup software on special tapes. Those get restored first so that we get a bootable system in a hurry. Then the rest of the software and data can be restored using the 3rd party backup software. None of this would be as easy to do if we had 2000 programs all living under /usr/bin. If Mosfet has a point it's that some distribution vendors make a mess out of the directory structure by dumping way, way too much stuff under, say, /usr/bin.

    \begin{rant}
    RedHat, are you listening? I like your distribution but the layout of the files you install sucks big time. Anyone who updates their applications (Apache, PostgreSQL, PHP, etc.) from the developer's sites has to undo the mess you guys create. Either that or don't install the versions on your CDs at all and just go with the source tars.
    \end{rant}

    (OK, I feel better now...)

  • Offtopic pedantry (Score:2, Informative)

    by tjgoodwin ( 133622 ) on Wednesday November 21, 2001 @12:43PM (#2596356) Homepage

    Possessive "its" has no apostrophe.

    The word "it's", being a contraction of "it is", has an apostrophe.

    The word "its", meaning "belonging to it", has no more apostrophes than "his".

    I know this is boring pedantry, which will be modded off topic. But the article in question misapostrophises "its" 9 (nine!) times, compared with 5 correct uses.

  • Re:The Alternative? (Score:2, Informative)

    by Icy ( 7612 ) on Wednesday November 21, 2001 @12:45PM (#2596367) Homepage
    You would not have 2000 PATH entries, only a few dozen. That would go completely against what the author is trying to say. You would group programs by what they do or whose program they belong to. I recently noticed that on my FreeBSD server, the postgresql port now installs its files into /usr/local/ while it used to install everything into /usr/local/pgsql/ by default. Although this makes it much easier with paths and such, it makes it much harder to, say, back up everything, or even to just know what programs are included (although you can just make a package or use pkg_info). The one thing that I don't like is that it's harder to have two different versions of a program, since spreading it through /usr/local makes installing the new version clobber the old.
  • by Marasmus ( 63844 ) on Wednesday November 21, 2001 @12:46PM (#2596372) Homepage Journal
    You're right - Slackware, Debian, and SuSE (relatively older players in the Linux game than RedHat) did do this heavily in older versions. However, there has been some work in each of these distributions to remedy this. For example, in Slackware 8, all GNOME default-install stuff is in /opt/gnome (which is sensible and clean), all KDE default-install stuff is in /opt/kde (likewise), and contrib packages normally get installed in /usr/local (the semi-official place for things you compile yourself) or /opt (more sensible, since these are still distro packages).

    As far as commercial UNIXes go, they really *are* better organized than the average Linux distribution. I'm speaking mainly from Solaris experience, but BSD/OS and HP/UX also keep a pretty good level of modularity to the filesystem structure.

    RedHat certainly didn't start this fiasco, but then again they haven't been very proactive in fixing these problems either. I can't speak for GNOME or KDE on RedHat (since I only use RedHat for servers without X), but the contrib packages practically all get thrown into /usr and make things a real nightmare to manage. Add atop that dependency conflicts, where Program A needs library 2.3.4 while Program B needs library 2.4.5, and the system approaches unmanageability at a very high rate of speed.

    A little more modularity in the file organization department wouldn't hurt us. It could also help the dependency problems if the package maintainers use a more modular file structure to their advantage.
  • by mattdm ( 1931 ) on Wednesday November 21, 2001 @12:50PM (#2596394) Homepage
    Check it out (from: http://www.fywss.com/plan9/intro.html [fywss.com]):


    Plan 9 has "union directories": directories made of several directories all bound to the same name. The directories making up a union directory are ordered in a list. When the bindings are made (see bind (1)), flags specify whether a newly bound member goes at the head or the tail of the list or completely replaces the list. To look up a name in a union directory, each member directory is searched in list order until the name is found. A bind flag specifies whether file creation is allowed in a member directory: a file created in the union directory goes in the first member directory in list order that allows creation, if any.

  • Stow! (Score:2, Informative)

    by Urban Garlic ( 447282 ) on Wednesday November 21, 2001 @12:50PM (#2596396)
    Because of its openness, Linux is also self-repairing. Clutter in main directories is a real problem, and instead of ranting about it, at least one person went ahead and built a work-around by writing "stow".

    This is a utility that allows you to put executables for packages where you like, and then automatically creates symlinks from the main /usr/bin, /usr/lib, etc. directories into the right place. It destroys those links when you unstow, and minimizes link count by putting the links in at as high a level as possible. Unlike package-maintenance schemes, it doesn't rely on a package database, so there's no danger of the database being incomplete or out of date.
  • Re:Application Paths (Score:2, Informative)

    by Make ( 95577 ) <max.kellermann@gm[ ].com ['ail' in gap]> on Wednesday November 21, 2001 @12:58PM (#2596440) Homepage
    Ever heard of ./configure --prefix=/where/ever/you/like? You've got the choice: use rpm/dpkg, or configure.

    And, hm, about your lovely windows: do your apps ask you in which c:\windows\system32 they should put their dll trash?
  • by mjh ( 57755 ) <(moc.nalcnroh) (ta) (kram)> on Wednesday November 21, 2001 @01:12PM (#2596506) Homepage Journal
    Score:+1, Insightful (Virtual Moderator Point)

    As a debian user, I'm a big proponent of using a well thought out package system. But you're entirely correct. If you have a core system component (like a library) and the packaged version doesn't provide a piece of functionality that you need, you are completely screwed.

    Installing that one library from source doesn't solve the problem. The package mgmt system doesn't see the lib that you installed so it still doesn't install the prog that you want.

    So you end up with two choices: install everything from source, or install everything from the package manager.

    Debian uses the equivs package to resolve this problem. Basically, you use equivs to create an entry in the package database for everything that you install from source. So let's say you install libFoo from source, and the package bar depends on libFoo. You create and install an equivs package that provides "libFoo". Now you can install the prepackaged bar and everything works.
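
    Roughly, the steps look like this (package and version names are made up):

      $ equivs-control libfoo
      $ editor libfoo                 # set Package: libfoo, Version: 1.0, etc.
      $ equivs-build libfoo
      # dpkg -i libfoo_1.0_all.deb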

    The other alternative is to add an additional step every time you compile from source: create a package for the system you're operating on. Sometimes this is easy, sometimes this is very difficult.

    My point is that there are ways of making a packaging system interoperate with programs that are installed from source.

    Hope this is helpful.
  • by kanelephant ( 142254 ) on Wednesday November 21, 2001 @01:41PM (#2596735)
    I use this method and it works nicely for binaries/paths. But it does not work well for shared libraries because ldconfig does not follow symbolic links. Currently I have to add each directory to /etc/ld.so.conf. Is there a good way round this?

    Or am I just being stupid?

    -K
  • Re:The Alternative? (Score:4, Informative)

    by psamuels ( 64397 ) on Wednesday November 21, 2001 @01:56PM (#2596839) Homepage
    The system does not go through all of the directories in the path every time you type a command. No shell that I know of is stupid enough to do that.

    True, but it doesn't help in the situation where you have a short shell script running - the shell that runs the script has to hash all those directories.

    It just adds to the overhead of running a shell script, and that is something I am opposed to on principle. (It's also why I use ash for /bin/sh rather than bash.)

    Now, I believe the truly intelligent shells do not pre-emptively cache your whole path; they just add entries to the cache as needed. Either way, though, having a long path is harmful to performance - and a short-running shell (running a short script, say) is penalised more than a long-running shell because it gets less benefit from the cache.
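
    (You can watch this caching with the shell's hash builtin: plain hash lists what the shell has located so far, and hash -r empties the table, which you need after changing PATH mid-session.)

      $ hash       # show commands the shell has already located
      $ hash -r    # forget them, e.g. after editing $PATH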

    As an aside, I believe the only things that belong in bin dirs are binaries a user or administrator might ever actually want to run. In this regard, I think Debian packages sometimes go overboard - daemons, in my opinion, should go in /usr/lib/{subdir} or something rather than /usr/sbin, since you should really be invoking them via /etc/init.d/* scripts.

  • by DrSkwid ( 118965 ) on Wednesday November 21, 2001 @04:47PM (#2597824) Journal
    yes, that's exactly how it works

    instead of a really long $path you just have

    PATH=/bin

    and then in termrc (for example)

    bind /bin/$CPUTYPE /bin # cpu specific exes
    bind /usr/$user/bin/bin /bin # my exes
    bind /usr/$user/bin/rc /bin # my shell scripts
    bind /usr/someapp/bin /bin # some app I want

    the namespace is built on a per-process-group basis, so I can pick and choose the exes (or anything else) on a per-process basis

    To compile a program with the C library from July 16, 1992:

    %mount /srv/boot /n/dump dump
    %bind /n/dump/1992/0716/mips/lib/libc.a /mips/lib/libc.a
    %mk

    you can have a different set of libs per window
    (or run the window manager INSIDE one of its own windows and set one namespace for that whole group)

    plan9 has no symlinks

    because "everything is a file" this even works for remote servers & network stacks.

    import helix /net
    telnet tcp!ai.mit.edu

    more [bell-labs.com]
  • Re:The Alternative? (Score:3, Informative)

    by SlickMickTrick ( 443214 ) on Wednesday November 21, 2001 @05:19PM (#2597989)
    SuSe actually does this. On my /opt path I have:

    /opt/kde
    /opt/kde2
    /opt/gnome

    And they have bin directories under that. Funny, until now I've only ever heard people slam SuSe for doing it (something about not being Linux Standard Base compliant).

    I personally like it. The only thing is, whenever you compile a KDE program, you have to add --prefix=/opt/kde2 to the ./configure command.
