Rage Against the File System Standard

pwagland submitted a rant by Mosfet on file system standards. I think he's somewhat oversimplified the whole issue, and definitely assigned blame in the wrong place, but it definitely warrants discussion. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?
  • and just install in /?

Who in their right mind places stuff outside of a program-specific folder if it's not gonna be used by multiple programs (like shared libraries)?
  • Is it really that bad? Would I not have much control over where programs get installed to?
    I would think that even without a package handler to do it for me, the program itself would allow me to say where it should be installed...or is that just the Windows user in me talking?
  • The Alternative? (Score:4, Redundant)

    by Mike Connell ( 81274 ) on Wednesday November 21, 2001 @10:45AM (#2595647) Homepage
    I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH

    Mike
    • Re:The Alternative? (Score:2, Interesting)

      by dattaway ( 3088 )
Is there such a thing as a recursive PATH directive for executables? Something like ls -R, that searches into subdirectories?
    • Re:The Alternative? (Score:3, Interesting)

      by kaisyain ( 15013 )
You would only need 2000 path entries if you expect your shell to have the same exact semantics that it does today. There is no reason whatsoever that PATH couldn't mean "for every entry in my PATH environment variable, look for executables in */bin". A smart shell could even hide all of this behind the scenes for you and provide a shell variable SMART_PATH that gets expanded to the big path for legacy apps.

      Or you could do what DJB does with /command and symlink everything to one place. Although I'm not sure if that solves the original complaint. Actually, I'm not sure what the original complaint is, having re-read the article.
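For illustration, a hedged sketch of how such an expansion could be emulated in plain Bourne shell today (the SMART_PATH entries and directory names are hypothetical):

    # Expand each SMART_PATH entry's */bin subdirectories into the
    # legacy colon-separated PATH that existing programs expect.
    SMART_PATH="/opt:/usr/pkg"
    PATH=""
    IFS=:
    for entry in $SMART_PATH; do
        for dir in "$entry"/*/bin; do
            [ -d "$dir" ] && PATH="${PATH:+$PATH:}$dir"
        done
    done
    unset IFS
    export PATH

A shell that did this natively could re-expand the list lazily on every lookup instead of once at login, which is the part a dumb script can't emulate.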
    • by Meleneth ( 104287 ) on Wednesday November 21, 2001 @10:52AM (#2595688) Homepage
      *sigh*

      has anyone heard of symlinks? the theory is very simple - install the app into /opt/foo or wherever, then symlink to /usr/local/bin. yawn.

      or is that one of those secrets we're not supposed to tell the newbies?
Uh huh. And when something goes terribly wrong, how do you determine what went wrong? Our production servers (HP-UX, Solaris, AIX) have in /usr/* only what the system supplied. Everything else gets put in its "proper place" - either /opt, or /usr/local (its own filesystem), or similar. The paths are not so bad - and the system is healthy and clean. The alternative? A system easily attacked with a trojan horse.
      • by rnturn ( 11092 ) on Wednesday November 21, 2001 @12:29PM (#2596266)

We do the same thing on our Tru64 boxen. All 3rd party software goes in /opt or /usr/opt. 3rd party executables go in /usr/local/bin. Some executables live in an app-specific subdirectory under /opt, and the symlink in /usr/local/bin points to the physical location. It makes OS upgrade time tons simpler. And the first step of our DR plan is to back up OS-related stuff and the backup software on special tapes. Those get restored first so that we get a bootable system in a hurry. Then the rest of the software and data can be restored using the 3rd party backup software. None of this would be as easy to do if we had 2000 programs all living under /usr/bin. If Mosfet has a point, it's that some distribution vendors make a mess of the directory structure by dumping way, way too much stuff under, say, /usr/bin.

        \begin{rant}
RedHat, are you listening? I like your distribution, but the layout of the files you install sucks big time. Anyone who updates their applications (Apache, PostgreSQL, PHP, etc.) from the developers' sites has to undo the mess you guys create. Either that, or don't install the versions on your CDs at all and just go with the source tars.
        \end{rant}

        (OK, I feel better now...)

    • Re:The Alternative? (Score:5, Informative)

      by Anonymous Coward on Wednesday November 21, 2001 @11:00AM (#2595732)
      I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH



      Here's what every unix administrator I know (including myself) does:

      1. everything is installed in /opt, in its own directory:

        example$ ls /opt
        apache emacs krb5 lsof mysql openssl pico ucspi-tcp
        cvs iptables lprng make-links openssh php qmail

        (pico is for the PHBs, by the way)
      2. Every version of every program gets its own directory

        example$ ls /opt/emacs
        default emacs-21.1

      3. Each directory in /opt has a 'default' symlink to the version we're currently using

        example$ ls -ld /opt/emacs/default
        lrwxrwxrwx 1 root root 10 Oct 23 16:33 /opt/emacs/default -> emacs-21.1

      4. You write a small shell script that links everything in /opt/*/default/bin to /usr/local/bin, /opt/*/default/lib to /usr/local/lib, etc.

      Uninstalling software is 'rm -rf' and a find command to delete broken links. Upgrading software is making one link and running the script to make links again. No need to update anyone's PATH on a multi-user system and no need to mess with ld.so.conf. You can split /opt across multiple disks if you want. NO NEED FOR A PACKAGE MANAGER. This makes life much easier, trust me.
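A hedged sketch of the link script from step 4 - the /opt layout is as described above, everything else is assumed:

    #!/bin/sh
    # For each package's current version, link the contents of its
    # bin/ and lib/ directories into /usr/local, so a single PATH
    # entry (and ld.so.conf entry) covers every package.
    for kind in bin lib; do
        mkdir -p "/usr/local/$kind"
        for f in /opt/*/default/"$kind"/*; do
            [ -e "$f" ] && ln -sf "$f" "/usr/local/$kind/"
        done
    done

Note that the created links point *through* each package's "default" symlink, so repointing that one symlink upgrades every linked binary at once.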
      • Re:The Alternative? (Score:2, Informative)

        by El Prebso ( 135671 )
There is actually a package manager that does all this for you, only it makes everything a lot easier.

        http://pack.sunsite.dk/
      • Look at opt_depot (Score:4, Informative)

        by jonabbey ( 2498 ) <jonabbey@ganymeta.org> on Wednesday November 21, 2001 @11:50AM (#2596025) Homepage

        Many years ago, we wrote a set of Perl utilities for automating symlink maintenance called opt_depot [utexas.edu].

It's similar to the original CMU Depot program, but it has built-in support for linking to a set of NFS package volumes, and it can cleanly interoperate with non-depot-managed files in the same file tree.

      • Re:The Alternative? (Score:3, Informative)

        by ader ( 1402 )
        Correct: this is not rocket science, people. It's called a software depot (at least it is now - see The Practice of System and Network Administration by Limoncelli & Hogan, chapter 23).

        How many directories in /usr does Mosfet want? One for X11, KDE, GNOME ... TeX, StarOffice, Perl, GNU, "misc", etc?? How large a PATH will that create?

        Actually, it's perfectly possible to use a separate directory for every single package - right down to GNU grep - if you:
        1. symlink all the relevant subdirectories for every package into a common set that is referred to in the various PATHs;
        2. manage those symlinks in some automated fashion.

        For the latter, try GNU Stow or (my favourite) Graft (available via Freshmeat). These tools could even be easily run as part of a package management post-install procedure.

The depot approach has a number of advantages, not least of which is the ease of upgrading package versions and maintaining different versions concurrently. And it's obvious what's installed and which files each package provides.

        The challenge is in encouraging the vendors to embrace such a model as an integral part of their releases; that would require some significant reworking.

        Ade_
I don't recall LFS saying you couldn't use "/usr/appname", so the article title is a bit misleading, but you certainly don't need 2000 entries in your path. The best solution I can see is for coders of multi-binary applications to take a leaf out of Windows' book and use the equivalent of "C:\Program Files\Common Files": an application- (or environment-, or vendor-) specific directory for programs that only other programs need to use. The best arrangement I can see would be "/usr/appname/" for binaries and "/usr/lib/appname/" for libraries.
    • by jcostom ( 14735 ) on Wednesday November 21, 2001 @12:22PM (#2596221) Homepage
      The alternative? Simple. /opt.

Mosfet's not talking about a new directory for every little application. He's talking about moving out stuff like KDE and GNOME. So instead of just having /usr/bin in your $PATH, you would also include /opt/gnome/bin and/or /opt/kde/bin. Yes, this makes your path a bit larger, but unmanageable? Hardly.

      I just checked on one of my PCs that has KDE2 installed (from the RH 7.2 RPMs), and there are over 200 files that match /usr/bin/k*. The only one that wasn't a part of KDE was ksh. My /usr/bin has 1948 files in it. There's a 10% reduction with one change. I don't have GNOME installed on this box, so a similar comparison isn't really possible. However, I imagine that the number would be similar if not greater for GNOME.

      It's not like he's suggesting we sacrifice goats in the street. He's suggesting we actually implement what the FSS says.

SuSE actually does this. On my /opt path I have:

        /opt/kde
        /opt/kde2
        /opt/gnome

And they have bin directories under that. Funny, until now I've only ever heard people slam SuSE for doing it (something about not being Linux Standard Base compliant).

I personally like it. The only thing is that whenever you compile a KDE program, you add --prefix=/opt/kde2 to the ./configure command.
  • by TechnoVooDooDaddy ( 470187 ) on Wednesday November 21, 2001 @10:45AM (#2595648) Homepage
    imo, we need a better command path system thingy that allows easier categorization of executables and other stuff... Win32 has the System32 (or System) directory, *nix has /usr/bin, /usr/share/bin, /usr/local/bin etc...

    I don't have a solution, but i'll devote a few idle cycles to it...
    • c:\windows\system...

oh yes, this is the way to go. Hundreds of applications, each storing different versions of the same needed system or application DLLs in one dir, overwriting the one version that worked....
      </sarcasm>

      There is a reason that binaries are spread over different partitions on Real Operating Systems....

      btw, it's nice to see that html-formatting is actually making sense in my first line..: <br><br> :-)

    • What we need is a *limited* way to have a single $PATH definition that will address arbitrary packages. I was thinking about

      PATH="$PATH /opt/*/bin"

      This would look in /opt once and cache the dirread so the hit for this only happens once.

      Of course this adds the problem of ordering (/opt/a/bin/foo vs. /opt/b/bin/foo).
  • by kaisyain ( 15013 ) on Wednesday November 21, 2001 @10:45AM (#2595652)
Anyone who claims that RedHat started the use of /usr/bin as a dumping ground can't be taken seriously. Pretty sure Slackware and SLS did the same thing. Same goes for Solaris, AIX, A/UX, SunOS, Irix, and HP-UX.

    It's not about lazy distributors. It's about administrators who are used to doing things this way and distributors going along with tradition.
did you look at HER site? he is a SHE, and from the looks of it she likes to get freaky :-)... pretty damn hot for a geek girl.
Anyone who claims that RedHat started the use of /usr/bin as a dumping ground can't be taken seriously. Pretty sure Slackware and SLS did the same thing. Same goes for Solaris, AIX, A/UX, SunOS, Irix, and HP-UX.

      Agreed, but does that make it right?

For the last few years, this is the kind of thing that has really been nagging me. All OSes seem to suffer from the same problem. Why are we so stuck in the mindset that traditions of the past shouldn't be challenged? Can't we, as "brilliant" computer scientists, start solving these problems and move on?

I recently demoed a good Linux distro to a friend, and it finally dawned on me. When you load KDE, you are literally overwhelmed with options. My friend asked, "What is the difference between tools and utilities?" I didn't know. I tried to show him StarOffice, and it took me a few minutes of digging in different menus.

      No, I don't use Linux on a daily basis, and no, I'm not the smartest person in the world. But I think I see the problem. Everything seems to be an imitation of something else (with more bells and whistles). Where is the true innovation? Our computers and software are not significantly different than they were 20 years ago.

      Why are we still using $PATH?

    • by Marasmus ( 63844 ) on Wednesday November 21, 2001 @12:46PM (#2596372) Homepage Journal
      You're right - Slackware, Debian, and SuSE (relatively older players in the Linux game than RedHat) did do this heavily in older versions. However, there has been some work in each of these distributions to remedy this. For example, in Slackware 8, all GNOME default-install stuff is in /opt/gnome (which is sensible and clean), all KDE default-install stuff is in /opt/kde (likewise), and contrib packages normally get installed in /usr/local (the semi-official place for things you compile yourself) or /opt (more sensible, since these are still distro packages).

As far as commercial UNIXes go, they really *are* better organized than the average Linux distribution. I'm speaking mainly from Solaris experience, but BSD/OS and HP-UX also keep a pretty good level of modularity in the filesystem structure.

RedHat certainly didn't start this fiasco, but then again they haven't been very proactive in fixing these problems either. I can't speak for GNOME or KDE on RedHat (since I only use RedHat for servers without X), but the contrib packages practically all get thrown into /usr and make things a real nightmare to manage. Add on top of that dependency conflicts, where Program A needs library 2.3.4 while Program B needs library 2.4.5, and the system approaches unmanageability at a very high rate of speed.

      A little more modularity in the file organization department wouldn't hurt us. It could also help the dependency problems if the package maintainers use a more modular file structure to their advantage.
While Red Hat is certainly a major offender, HP-UX 11.0 has device log files in the /etc hierarchy, the runlevels still under /sbin, and every "optional software" dumping ground ever invented (share, contrib, usr/local, opt, and more), as well as a totally brain-dead depot system that makes RPM look inspired.
I've said it before - and I'm not the first or last to notice - HP-UX is a *train wreck* of a unix. HP puts Fibre Channel controllers that are necessary for the system to BOOT in the /opt folder!
      --Charlie
  • by nll8802 ( 536577 ) on Wednesday November 21, 2001 @10:46AM (#2595657) Homepage
I think it is better to install each program's binaries under its own subdirectory, then symlink the executables into /bin, /usr/bin, or /usr/local/bin. This gives you a much easier way to remove programs that don't include an uninstall script, and it is a lot more organized.
    • You'd still have to clean up all the symlinks, so you're not really buying yourself anything.

      It's true that having all the files associated with a given package in a single location makes it easy to see what-all you've got and which files belong to which package, but you'll still require something that will clean up all the symlinks that point off to nowhere.
      • Yes, but dead symlinks are easy to see (on my system they make an annoying blinking action) and scripts can be written that recurse down the directory tree looking for invalid links. Another positive argument in favor of this approach is that many packages include several binaries, only one or two of which are ever going to be called directly from the command line in a situation where using a full path is not convenient. This also makes version control a lot more obvious (and having simultaneous multiple versions a lot easier, too).
      • by Daniel Serodio ( 74295 ) <dserodio@gmailPASCAL.com minus language> on Wednesday November 21, 2001 @11:05AM (#2595766) Homepage
        No need to do the dirty work by hand, that's what GNU Stow [gnu.org] is for. Quoting from the Debian package's description:
        GNU Stow helps the system administrator organise files under /usr/local/ by allowing each piece of software to be installed in its own tree under /usr/local/stow/, and then using symlinks to create the illusion that all the software is installed in the same place.
        • another tool: graft (Score:3, Informative)

          by opus ( 543 )
          The tool I use (and prefer to GNU stow) to manage the stuff that isn't managed by a package manager is graft [gormand.com.au].

          For stuff that uses GNU-style configure scripts to build, it's simply a matter of, e.g.

          $ ./configure --prefix=/usr/local/vim-6.0
          $ make
          # make install
          # graft -i vim-6.0

          The files themselves are stored in /usr/local/vim-6.0, and graft creates symlinks in /usr/local/bin, /usr/local/man, etc.

          Removing the software simply involves:

          # graft -d vim-6.0
          # rm -rf /usr/local/vim-6.0

          That said, I usually rely on the package manager, and don't really have a problem with 2000 files in /usr/bin.
      • but you'll still require something that will clean up all the symlinks that point off to nowhere.

A combination of "ls -l", "cut", and grepping for the subfolder you just "rm -rf"'d, fed into "rm", perhaps? It shouldn't be too difficult to work out the regexp to sort out the "symlink -> target" bit at the end, but it's late in my day, so I'll leave that as an exercise for the reader... ;)
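For what it's worth, a hedged alternative that skips the ls parsing entirely (the directory is just an example):

    # With -L, find follows symlinks; anything still reported as
    # type "l" is a link whose target no longer exists.
    find -L /usr/local/bin -type l -print                # list the dead links
    find -L /usr/local/bin -type l -exec rm -f {} \;     # then remove them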

    • Could this not be done with some kind of auto mirroring?

Each application would have its own tree and could have bin, sbin, lib, and/or other directories.
These directories would be marked or registered so that they would appear as if they were part of /bin or /sbin, etc. That way we only need a short path, but we still maintain application separation.
    • I'd go one step further. Chroot the programs and hard link the required libraries into the chroot directory. Then you don't have to worry about annoying upgrade problems when one package insists on one set of libraries and another package insists on another. Also, when the hard link count goes down to 1 (the one in the master /lib directory), you can delete the file.
    • sounds like Encap (Score:5, Informative)

      by _|()|\| ( 159991 ) on Wednesday November 21, 2001 @11:17AM (#2595830)
      I think it is better to install all your programs binaries under a subdirectory, then symlink the executables

      You want the Encap package management system [uiuc.edu]. From the FAQ [uiuc.edu]:

      When you install an Encap package, the files are placed in their own subdirectory, usually under
      /usr/local/encap. For example, if you install GNU sed version 3.02, the following files will be included:
      • /usr/local/encap/sed-3.02/bin/sed
      • /usr/local/encap/sed-3.02/man/man1/sed.1
      Once these files have been installed, the Encap package manager will create the following symlinks:
      • /usr/local/bin/sed -> ../encap/sed-3.02/bin/sed
      • /usr/local/man/man1/sed.1 -> ../../encap/sed-3.02/man/man1/sed.1
      The normal user will have /usr/local/bin in his PATH and /usr/local/man in his MANPATH, so he will not even know that the Encap system is being used.
The technique itself is essentially compatible with RPM, but Encap goes so far as to define its own package format, which probably is not. If you like RPM, you might do better to simply follow the same convention.
      • Re:sounds like Encap (Score:3, Informative)

        by jonabbey ( 2498 )

        There have actually been many, many implementations of this basic idea, each with their own frills and features. I have a comprehensive listing of these programs on our opt_depot [utexas.edu] page.

        Take a look, if you're interested in that sort of thing.. I can think of relatively few ideas that have been implemented and re-implemented so many times.

  • Package Management (Score:4, Insightful)

    by Fiznarp ( 233 ) on Wednesday November 21, 2001 @10:47AM (#2595659)
    ...makes this unnecessary. When I can use RPM to verify the purpose and integrity of every binary in /usr/bin, I don't see a need for separating software into a meaningless directory structure.

    DOS put programs in different folders because there was no other way to tell what package the software belonged to.
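To make the parent's point concrete, a hedged example of the sort of query RPM supports (the package name and version are hypothetical):

    # Which package owns this binary?
    $ rpm -qf /usr/bin/gimp
    gimp-1.2.1-5
    # Verify every file that package installed; no output means
    # everything is still intact.
    $ rpm -V gimp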
    • by kramerj ( 161379 )
And then you get into naming conflicts down the road. MS has this problem now and is dealing with it partly via the newfangled "private packages" or whatever in XP - basically unsharing shared libraries. There DOES need to be separation that can be controlled more than it can be now, or we are going to see problems in the future. Have you ever installed a package and a file was already there? Were they the same file? Do you know? Same version? It's a bad idea to clump everything together... What we need is either a path statement extension that basically says /usr/bin/*/ to allow everything one directory down, OR to let packages register their own paths in their install directories (i.e. a file that gets installed and then pointed to, saying "search here for executables as well"). Make it a config in /etc that points to these other little files containing places to look, then at boot time enumerate it all out and build a tree of the executables. Fast and easy to manage.

      Jay
No, you don't. It's the package manager's job to avoid any conflicts. Windows has these problems because each piece of software comes with its own installation program and does not know anything about the others.
in the dark old unixish days, whenever you bought a bit of commercial software (remember that? buying? :) it'd install itself into /usr/local/daftname/ or /opt/daftname/ or somewhere. This meant there'd be a huge path variable to manage, which was a nightmare. The reason the Windows equivalent isn't a problem is that Windows is not command-line based - users access programs through a link in a start menu (gross oversimplification, but you get the idea). This simply doesn't translate to the command-line paradigm. So, a simple answer: nice path variables, neat directory structures, usable command-line interfaces - pick any two. ~mocktor
  • Linux From Scratch (Score:4, Interesting)

    by MadCamel ( 193459 ) <spam@cosmic-cow.net> on Wednesday November 21, 2001 @10:48AM (#2595664) Homepage
This is _EXACTLY_ why I use LinuxFromScratch [linuxfromscratch.org]. You do not HAVE to use the package management system; you can install anything *just* the way *you* want it. X applications in /usr/bin? No way, Jose! (My apologies to anyone named Jose, I'm sure you are sick of hearing that one.) /usr/X11 it is! If you are not happy with the standards, make your own; it just takes a little time and in-depth knowledge.
  • Response (Score:3, Insightful)

    by uslinux.net ( 152591 ) on Wednesday November 21, 2001 @10:50AM (#2595679) Homepage
    You have to use the package manager.


And you should, normally. If your system installs binutils as an RPM, DEB, or Sun/HP/SGI package, well, you _should_ use the package manager to upgrade or remove it. After all, if you don't, you're going to start breaking your dependencies for other packages. That's why package managers exist!


In some respects, Linux is better than many commercial unices. SGI uses /usr/freeware for GNU software. Solaris created /opt for "optional" packages (what the hell is an optional package? isn't that what /usr/local is for?!?!). At least all your system software gets installed in /usr/bin (well, unless you're using Caldera, which puts KDE in /opt... go figure), and if you use a package manager as intended, it's easy to clean packages up. The difference between Windows and Linux/Unix is that the Linux/Unix package managers ARE SMART ENOUGH not to remove shared libraries unless NOTHING ELSE IS DEPENDING ON THEM! In Windows (and I haven't used it since 98 and NT 4), if you remove a package and there's a shared library (DLL), you have the option of removing it or leaving it - but you never KNOW if you can safely remove it, overwrite it, etc.


    I agree that there should be a new, standard directory structure, but I disagree that every package in the world should have its own directory. If you're using a decent package manager, included with ANY distro or commercial/free Unix variant, there's little need to do so.

    • Re:Response (Score:3, Insightful)

      by brunes69 ( 86786 )

OK, we all hate Windows, but spreading FUD is useless and makes you look as bad as they do. Every Windows app I have _EVER_ uninstalled (and there have been a lot!) _ALWAYS_ says something along the lines of "This is a shared DLL. The registry indicates no other programs are using it. I will delete it now unless you say otherwise." This sounds pretty much like it knows what's being used and what isn't. Unless you get your registry corrupted, which wouldn't be any different from having your package database (RPM or dpkg) corrupted.

  • hmmmm.... (Score:3, Informative)

    by Ender Ryan ( 79406 ) <MONET minus painter> on Wednesday November 21, 2001 @10:51AM (#2595683) Journal
My /usr/bin has ~1,500 files in it. A whole bunch of them are GNOME stuff, because Slack 7.1 didn't put GNOME in a completely separate dir. But then there is also all kinds of crap where I have absolutely no clue what it does. Just from looking at some of the filenames I think I know what they are for, but I have other utilities on my machine that do the same thing.

    So, I'd say yes, it probably is partly because of lazy distro package management, but then again some people might still use some of this stuff and expect it to be there.

On most new distributions I've seen, this is actually getting better. The latest Slack at least completely separates GNOME by putting it in /opt/gnome.

In any case, though, I think there are more important things to worry about, such as all-purpose configuration tools - or at least lumping the existing ones together into one graphical management tool. You should be able to configure everything from sound/video to printers all in the same place.

  • by codexus ( 538087 ) on Wednesday November 21, 2001 @10:52AM (#2595687)
The database-like attribute/index features of the BeOS filesystem could be an interesting solution to the problem of the PATH variable.

BeOS keeps a record of all executable files on the disk and is able to find which one to use to open a specific file type. You don't have to register anything with the system; if it's on the disk it will be found. That makes it easy to install BeOS applications in their own directories. BeOS doesn't use this mechanism to replace the PATH variable in the shell, but one could imagine a system that does just that.
  • by Haeleth ( 414428 ) on Wednesday November 21, 2001 @10:53AM (#2595694) Journal
    This is somewhat parallel to the situation common in Windows, where every new application tries to place its shortcuts in a separate folder off Start Menu/Programs. It's common to see start menus that take up two screens or more, whereas everything could be found much faster if properly categorised. MS made things worse in Win98 by having the menu nonalphabetical by default.

    Limiting bad organisation to Red Hat is silly. The only Linux distros I've tried are Red Hat and Mandrake, both of which are equally poor in this regard. Nor, I have to say, does the FSS make it any easier to organise a hard drive properly. Is the /usr/local distinction useful, for example? Wouldn't it make more sense to have a setup like /usr/apps, /usr/utils, /usr/games, /usr/wm, and so on - to categorise items by their function, rather than by who compiled them?

    The whole /home thing is equally confusing to a Windows migrant. Yes, *nix is a multi-user OS. But is that a useful feature for the majority of home users? Providing irrelevant directories is a sure-fire way to confusion.

    It's impossible to have a perfectly organised hard disk, of course. You can't fight entropy.
  • Ah, yes... (Score:5, Funny)

    by Corgha ( 60478 ) on Wednesday November 21, 2001 @10:55AM (#2595698)
    /opt/LINWgrep/bin/grep
    /opt/LINWsed/bin/sed
    /opt/LINWdate/bin/date....
  • Why? (Score:5, Insightful)

    by DaveBarr ( 35447 ) on Wednesday November 21, 2001 @10:59AM (#2595719) Journal
The one thing this guy fails to answer is "why is it bad that I have 2000 files in /usr/bin?". There are no tangible benefits I can see to splitting things up, other than perhaps a mild performance gain and satisfying someone's overeager sense of order.

    Failing to answer that, I think his whole discussion is pointless.

Blaming it on laziness - on not wanting to muck with PATH - is wrong. Managing your PATH is a real issue, something an administrator with any experience should understand. In the bad old days we came up with ludicrous schemes that people would run in their dot files to manage users' PATHs. I'm glad those days are over. Not having to worry about PATH is a tangible benefit. Forcing package maintainers to use a clear and concise standard on where to put programs is a tangible benefit.

    Perhaps I'm biased because these past many years I've always worked with operating systems (Solaris, Debian, *BSD) that have package management systems. I don't care where they get installed, as long as when I install the package and type the command it runs. This is a Good Thing.
  • by Baki ( 72515 ) on Wednesday November 21, 2001 @11:00AM (#2595733)
    ~> ls /usr/bin | wc -l
    403
    ~> ls /bin | wc -l
    36
    ~> ls /sbin | wc -l
    91
    ~> ls /usr/sbin | wc -l
    220
    ~> ls /usr/local/bin | wc -l
    796

This is FreeBSD, which installs a relatively clean OS under /usr and puts all the extra stuff in /usr/local (sometimes the executable is in /usr/local/bin, sometimes in /usr/local/<package>/bin).

I like that much more; it is the old UNIX way of separating the essential OS from the optional stuff. It really is a pity that most Linux distros dump everything directly in /usr.

As for my Slackware, I installed only the minimum and roll my own packages for everything I consider not to be "core Linux"; all these packages go under /usr/local. It can be done, and it keeps things tidy and clean.
  • Have a standard directory structure for every application. Put all the applications in /opt then require every application to have the subdirectory /bin so if you want to find the binaries of all applications you look through all the /opt/[app name]/bin directories. You could also have other dirs like /opt/[app name]/lib for libraries, etc... You don't need to know the specific name of each application to search all the /bin dirs, you just open /opt and get a list of the directories, then append /bin to all the names and try and open those, then search in those for the binaries.

    This keeps all the application files in one directory. If you want to remove an application, you just rm -rf that one directory. Upgrading applications is much simpler since you just point to that one dir and put the files there. You can also have multiple versions of an application installed just by renaming their root directory.

Applications shouldn't spread themselves all over the system; they should be placed in one spot with a specific directory structure and be modular with respect to the rest of the system.
  • Tradeoffs/union fs (Score:2, Insightful)

    by apilosov ( 1810 )
    Here, the tradeoff is being able to quickly determine the files belonging to a particular package/software vs time spent managing PATH/LD_LIBRARY_PATH and all sorts of other entries.

    Also, the question is how should the files be arranged? By type (bin, share/bin, lib, etc) or by package?

In Linux (RedHat/FSSTND), the emphasis was placed on arranging files by type, and file management was declared a separate problem, with rpm (or other package managers) as the solution.

    There is another solution which combines best points of each:

Install each package under /opt/packagename. Then use unionfs to join all the /opt/packagename trees under /usr. Thus you will still be able to figure out which package owns which files without using any package manager, but at the same time you get a unified view of all installed packages.

Unfortunately, unionfs never worked on Linux, and on other operating systems it's very tricky. (For example: how do you ensure that the underlying directories will not have files with the same name? If they do, which one will be visible? What do you do when a file is deleted? Etc.)
  • by Pseudonym ( 62607 ) on Wednesday November 21, 2001 @11:02AM (#2595743)

    Even better would be if Linux had a translucent file system. Simply mount all the path directories on top of each other and let the OS do the rest.

    For the uninitiated, a translucent file system lets you mount one filesystem on top of another filesystem, the idea being that if you tried to open a file the OS would first search the top filesystem, then the bottom one. In conjunction with non-root mounting of filesystems (e.g. in the Hurd) it removes the need for $PATH because you can just mount all the relevant directories on top of each other.

Wait until KDE 3 / GNOME 2 come out with Xrender support, and we can all have translucent filesystems!

      HAR HAR!

    • QNX has it (Score:3, Interesting)

      QNX has a package filesystem [qnx.com] like what you describe; it looks like it solves Mosfet's problem and keeps PATH simple.
  • by Steve Mitchell ( 3457 ) <steve@coOOOmponica.com minus threevowels> on Wednesday November 21, 2001 @11:03AM (#2595752) Homepage
I wish Unix/Linux had a mechanism where a directory could be marked executable, such that executing the directory would internally call some default dot file (such as .name_of_directory) within it, with some environment variable (like $THIS_PATH) set to the directory and passed to the application process.

Maintenance for applications like these would be a no-brainer. Just move the directory, and all the associated preference files and whatnot travel with the app.

    -Steve
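No such mechanism exists, but a hedged user-space approximation is a tiny wrapper script (the name and dot-file convention are hypothetical, following the parent's description):

    #!/bin/sh
    # rundir: "execute" a directory by invoking its default dot file,
    # passing the directory's own location in THIS_PATH.
    dir=$1
    shift
    name=`basename "$dir"`
    THIS_PATH=$dir
    export THIS_PATH
    exec "$dir/.$name" "$@"

So "rundir /opt/foo --bar" would run /opt/foo/.foo with THIS_PATH=/opt/foo, and the app could find its preference files relative to that.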
  • by vrt3 ( 62368 ) on Wednesday November 21, 2001 @11:05AM (#2595760) Homepage

I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files. Regarding applications, there are two ways to do it: you can store all the files (binaries, config files, man pages, etc.) of the same application in the same directory, or you can store all the files of the same type from different applications in their respective directories (all config files in /etc, man pages in /usr/share/man (I think), etc.).

Both approaches have their advantages. The problem with hierarchical file systems is that we have to choose one of them. I would love to see a storage system where we can use both ways _at the same time_: a system that groups files depending on the relationships they have, such that 'ls /etc' gives me all the config files for all apps, and 'ls /usr/local/mutt' shows me all mutt-related files, including its config file(s).

    I have no idea how to implement such a beast. I'm thinking about a RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

To a certain extent this can be done via symlinks, but it should be much easier to deal with. You shouldn't have to do much manual work.
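The symlink version of that last example would look something like this (paths hypothetical):

    $ mkdir -p ~/pictures/summer2000 ~/summer2000
    $ ln -s ~/pictures/summer2000 ~/summer2000/pictures
    # the same files now appear under both trees

which works, but every new pair of categories needs another hand-made link - exactly the manual work the parent wants to avoid.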

    • Isn't this what symbolic links are for...?

    • by droleary ( 47999 ) on Wednesday November 21, 2001 @12:53PM (#2596414) Homepage

I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files.

      You could also trace it back to the hierarchical database article [slashdot.org], which is when I started making a lot of posts on the subject. It seems there is finally a lot of interest being generated about this sort of thing.

      I have no idea how to implement such a beast. I'm thinking about a RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

      This is exactly the sort of thing I'm doing with my Meta Object Manager (MOM) software called Mary. Metadata in the form of attributes and values is associated with each file/object and you can do a query (both textually and graphically) on that metadata. For simple paths like you describe, it is a value query irrespective of a particular attribute, but there is support for a more structured "path" (I actually call it a "focus" as it restricts your focus to a subset of the objects on the system) like /type=picture/location=Hawaii/year=2000. Because the focus items are metadata attributes, order is not significant. With such a system, there are no directories or symbolic links; it's all dynamically structured based on what your metadata focus is at any particular time.

Mary is just in the alpha stages at this point, but it already works well on the command line for the type of things you describe, and I'm using it myself to manage nearly 350,000 objects that have flowed through my system. I'm not exactly sure when it'll be ready for public consumption, and it'll require a GNUstep [gnustep.org] port to get it working on Linux systems (I'm doing development on Mac OS X). I was hoping for year end, but I don't think I'll have the time. Summer 2002 has a nice ring to it, though. :-)

I've been hacking on this idea in my head. It seems to make the most sense. It is a sort of multidimensional file system, where every file has to be placed in the dimensions to which it belongs. The tree is used only as a single representation of a single dimension.

There are three reasons I can think of for this.

• Package management (checking out program configs, etc. without surfing the whole directory hierarchy)
• System maintenance (splitting volumes, managing space, and performance tweaking)
• User friendliness!!! (users can hit rm -rf and never have to worry about messing anything up!)

I figure if MS did something like this, it would save them from their drive-letter hell and solve one of their greatest disadvantages compared to UNIX... and the impact of such a scheme on UNIX would be minimal.

      Database systems would probably be the best place to start looking for methods to do this sort of thing.

  • by tjwhaynes ( 114792 ) on Wednesday November 21, 2001 @11:05AM (#2595765)

The unix system doesn't really dump all the files in /usr/bin; what goes there is, almost without exception, executable files. For each executable, support files are usually installed into one or more directory trees, such as /usr/share/executable_name/. The main benefit of having all the main binaries in one place (or two - I usually try to leave system binaries in /usr/bin and put my own installations in /usr/local/bin) is convenience when searching paths for binaries.

    However, this paradigm is pretty ugly if you are browsing through your files graphically. It would be nice if each application/package installed into one directory tree, so you could reorganise the system simply by moving applications around. For example,

/usr/applications/
/usr/applications/games/
/usr/applications/games/quake3/ (this dir holds all Quake 3 files)
...
/usr/applications/graphics/
/usr/applications/graphics/gimp/ (this dir holds all GIMP files)
...

If this appeals to you, you might like to check out the ROX project [sourceforge.net]. This sort of directory tree layout was the standard on Acorn RISC OS and made life extremely easy for GUI organisation. It makes a lot of sense to use the directory tree to categorise the apps and files.

    Cheers,

    Toby Haynes

  • RiscOS... (Score:4, Interesting)

    by mirko ( 198274 ) on Wednesday November 21, 2001 @11:06AM (#2595767) Journal
In RISC OS, applications are directories which contain several useful files (besides the app binaries and conf or data files):
• !Sprites[mode] contains the icons used for the app and for whichever file types are associated with it
• !boot contains directives (associations, global variables, etc.) to be executed the first time a Filer window containing the app is opened (the app is "seen" by the Filer)
• !run describes the action to be taken on a double-click on the app icon

There's also a single shared-modules directory in the System folder.

This system is at least 10 to 15 years old (not sure Arthur was as modular, though) and has certainly proved to be an excellent way to deal with this problem...
  • Um, so? (Score:3, Informative)

    by bugzilla ( 21620 ) on Wednesday November 21, 2001 @11:06AM (#2595771) Homepage
    Much better to have a few thousand files in one dir than to have so many dirs that need to be in your $PATH that some shells will barf.

For instance, the POSIX standard (I believe) requires support for $PATH values of at least 1024 characters. That's a minimum. My users at work sometimes need much longer $PATHs. Some OS vendors say: OK, 1024 is the minimum for POSIX compliance, so that's what we're doing. Others, like HP-UX (believe it or not), have increased this at user request to 4K.

    In any case, this all seems pretty petty. It's not like our current and future filesystems can't handle it, and package managers are pretty good and know what they put where.
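A hedged quick check of how close a given machine gets to such a limit (plain shell; note wc -c counts the trailing newline too):

    $ echo "$PATH" | wc -c                  # length of PATH in characters
    $ echo "$PATH" | tr ':' '\n' | wc -l    # number of PATH entries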
  • Six of one... (Score:2, Insightful)

Half a dozen of the other. Of course there are pros and cons to both ways: having all executables in one location (or O(1) locations) makes finding programs O(1), with a PATH length of O(1). Having one dir/"folder" for each program (or O(X) directories) means O(X) search time for a particular program and O(X) entries in your PATH. On the other hand, finding and deleting entire packages becomes much harder if not all the filenames belonging to a package are known. Personally, I think it doesn't matter either way.
  • by jilles ( 20976 ) on Wednesday November 21, 2001 @11:09AM (#2595783) Homepage
This is only part of the problem, and it is characteristic of the way unix has evolved. The whole problem is that there are no standards, just conventions, of which most unix programmers are only partly aware. I imagine the whole reason for putting all binaries in a single directory was that you then only have to add one directory to the path variable. In other words, out of genuine laziness you have around 2000 executables in your /usr/bin directory. Of course, adding all 2000 program directories to the path is not the right solution either (that would be moving the problem rather than solving it). Obviously the path variable itself is not a very scalable solution and needs to be reconsidered.

To sum it up, UNIX programs all have their own sets of parameters, their own semantics for those parameters, and their own config files with their own syntax. Generally, a program's related files are scattered throughout the system. Just making things consistent would hugely improve the usability of unix and reduce system administrator training costs. Most of the art of maintaining a unix system goes into memorizing command-line parameters, configuration file locations and syntax, and endless man pages. Basically, the ideal system administrator is not too bright (after all, it is quite simple work), works very precisely, and has memorized every manpage he ever encountered. The not-too-bright part is essential, because otherwise he'll get a good job offer and be gone in no time.

Here's a sample of a better solution to the problem (inspired by Mac OS X packages): give each app its very own directory structure with, e.g., the directories bin, man, and etc for binaries, documentation, and configuration. In the root of each package, specify a meta-information file (preferably XML-based) with information about how to integrate the program with the system (e.g. commands that should be in the path, menu items, etc.). Standardize this format and make sure that the OS automatically integrates the program (i.e. adds the menu items, puts the right binaries in a global path, integrates the documentation with the help system). Of course you can elaborate greatly on these concepts, but the result would be that you no longer need package managers, except perhaps for assisting with configuration.
  • I came away thinking "this man is insane".

    1. He claims DOS had a better way of organizing applications. This is a red herring. I don't want to organize my applications. Ever. I want to organize my data. I don't remember many applications in DOS that were compatible with the same type of data. If there had been, the limitations of the DOS structure would have been readily made apparent. First, CD into the directory where your audio recording utility is and make a .wav file. Then, move the .wav file into the directory where your audio editing utility is and edit it. It works, but why not keep the data in one place and run programs on it as you see fit without regard for their location on your hard drive, and without having a 10-second seek through your PATH variable?

    2. Besides which, DOS had c:\msdos50 (or whichever version you used). That was DOS's variation on /bin. Ever look in that directory and attempt to hand-reduce the number of binaries in it to save disk space? I did. A package management system would have made that doable.

    3. You can have all the localized application directories you want in /usr/local. The point of /usr/local is to hold larger packages which are local to the system. (hmm... /usr/local/games/UnrealTournament, /usr/local/games/Quake3, /usr/local/games/Terminus, /usr/local/games/RT2...) And as a bonus, thanks to the miracle of symbolic links you can have your cake and eat it too - as long as the application knows where the data files are installed you can make a symlink of the binary to /usr/local/bin and run it without editing your PATH variable too! Isn't UNIX grand?

  • by ivan256 ( 17499 ) on Wednesday November 21, 2001 @11:10AM (#2595788)
    How many of those 1500 binaries do you run, hmm?

Many distributions install lots of packages you don't need nowadays. Uninstall some, or switch to a more minimalist distribution. Try installing Debian with only the base packages. Then, whenever you need a program you don't have, apt-get it. It'll make for an annoying few weeks, perhaps, but at the end you'll have a system with just what you need on it. I'll bet you end up with only around 600 binaries (unless you install GNOME... that's like 600 binaries on its own).

What does it matter anyway? If you have 1500 programs, it's no better to have them in their own directories than to have them all in one place. Also, it's not as if you're dealing with all of them at once.
  • by Waffle Iron ( 339739 ) on Wednesday November 21, 2001 @11:10AM (#2595792)
The root problem in all of this seems to be the limits of a hierarchical data organization such as a file system. The debate is whether the hierarchy should be organized by application (as the article proposes), by file type (all binaries in 'bin'), or by some broad attribute of the application ('/usr' vs '/usr/local', 'bin' vs 'sbin').

There probably is no way to solve all of the issues simultaneously in one hierarchical scheme. Symlinks can help, because they crosslink the tree. Package managers add a more sophisticated database of relations. These relations are much more useful, but unfortunately they are accessible only through the package manager program.

    All in all, though, it seems that organizing by package makes the most intuitive sense, and the helpers like package managers should be responsible for figuring out how to run the app when you type it on the command line.

  • 1. package managers should make it easy to move things around. I should be able to install the latest perl-xxx.rpm in a test location, test my scripts against it, and then reinstall it in the canonical place.

2. this needs to include all the files in /etc, so app installers need to support flexible package management. Also note: the #! shebang line is totally broken in this sort of environment.

    3. "the canonical places" (/usr, /etc, etc. :) should be a family of canonical places. The sysadmin group might not want to upgrade their perl scripts at the same time as the dbadmin group. decoupling their interdependency will lead to much more flexibility and quicker overall upgrading.

    4. we can achieve this best if / is no longer / but is instead /root so there could be a /root1 and /root2 . Think of this, one file system containing two different distros that don't wrassle with one another.

do not evaluate this on whether you think it's a good idea. The point is that software allows soft parameterization, reentrancy, soft configuration, etc. So why can't we have it? Programmers need to stop hard-coding shit and binding locations to one place.

    I'd love to upgrade my workstation from RedHat 7.1 to RedHat 7.2 by installing onto the same partition without trashing the old. Then, over the course of the week I could work out the kinks and delete the old, knowing that at any time I could reboot the old to send a fax or whatever. There are 1000s of corporate uses for this type of environment too... how many times have you heard "we're taking the mailserver down to upgrade it overnight" and then heard "um... it didn't come back up..."

  • Unless you can set a recursive PATH, I don't think it would be viable to split things into their own directories... Could you imagine how long (and how slow) it would be to have 20, 40, 60 directories or more listed in your PATH?

    With package management software, who cares if it's all in one place? That's fine with me...

    Besides, anything *I* add to the system, depending, usually ends up in /usr/local - which is a further distinction.

Having KDE binaries in /usr/bin completely destroys the possibility of simultaneously having KDE 2.x and KDE 3 on the same system (say, a server with dozens of users where you want to slowly migrate from one environment to the other). Having them in /usr/kde2 and /usr/kde3, or even /opt, sounds much saner to me. (Shared resources may stay in a common place, but it's up to the upstream maintainers to make these "shared resources" work as expected.)

One workaround to remain LSB-compliant and still keep them separated would be throwing them into /usr/lib/kde2 and /usr/lib/kde3 - but that's an ugly hack. Then again, so is arbitrarily breaking the standard by placing them in the correct place. Ugh.

To pick nits a touch: the reason X got its own subdirectory was that it was often on a separate file system from the rest of /usr. In the long, long ago, X was of such astounding size relative to the limited and expensive disk space of the day that special considerations had to be made upon its installation. It had little to do with any other sort of organization.

As for the rest of the rant: simply calling the current practice of file organization horrendous behavior, sloppiness, or laziness, without ample argument or demonstrable advantages as to why every package should be broken into a separate subdirectory, is damaging to the cause at best. Had the rant claimed that there are an unacceptable number of namespace clashes, or that simply doing an 'ls' in one of these directories blows away the filename cache mechanisms in the kernel, forever making certain optimizations useless, or anything of that sort, it would hold more weight than unsupported bashing.

    The author laments the inability to manage these subdirectories effectively with standard tools, but as I see it, the option to not use package management has been there all along. Roll your own, putting things where you want them. Or, I might suggest broadening the concept of 'standard tools' to include the package management system installed, should the former option seem ludicrous.

Not having to muck around with the PATH - and, more so, not having to support users mucking around with their own PATHs - far outweighs the disadvantages of not being able to use 'standard tools'. What time I lose learning and using my package management system, I make up tenfold in not supporting the very issues I foresee the author's solution creating.

    --Rana
  • FreeBSD (Score:4, Interesting)

    by sirket ( 60694 ) on Wednesday November 21, 2001 @11:22AM (#2595859)
    The file systems on a Unix system make a lot of sense, when people use them correctly.

    /bin for binaries needed to boot a corrupted system.

    /sbin for system binaries needed to boot a system.

    /usr/bin for userland binaries installed with the base system.

/usr/sbin for system binaries installed with the base system. These are not programs required to boot the system.

    /usr/local/bin for locally installed user binaries such as minicom, mutt, or bitchx.

    /usr/local/sbin for locally installed system binaries such as apache.

Large locally installed programs such as WordPerfect get installed in a subdirectory of /usr/local, but they put a single executable in /usr/local/bin so that you do not need to change your path.

    FreeBSD has only about 400 programs in a complete /usr/bin. Other programs are spread about the file system in sensible locations or are user installed. Possibly the only directory that does not make a whole lot of sense is /usr/libexec (where most of the internet daemons are kept).

    -sirket
Well... looking at my Debian system:
/sbin contains stuff that requires superuser privileges - stuff specific to maintaining the hardware, etc.

/bin contains solid, standard system binaries needed for the system to work (bash, grep, chmod, z-tools, gzip, etc.). Stuff that you basically need.

/usr/bin contains... userland stuff. Software installed/removed for general use. I don't know the right way to describe it.

/usr/local/bin contains nothing. This is where, generally, I choose to put things I compile myself, so as not to confuse the package management system.

If we look at, say, systems where many things are mounted over NFS, /usr/bin is one of those. /usr/local/bin is for things local to your machine.
  • by TilJ ( 7607 )
    On a Secure Computing Sidewinder (BSD based):
    % ls -l /usr/bin | wc -l
    258

    On an OpenBSD 2.8 server, minimal install + gcc stuff:
    $ ls -l /usr/bin | wc -l
    344

    On an OpenBSD 2.8 server, full install (including X):
    $ ls -l /usr/bin | wc -l
    373

    On a Mandrake 8.0 server:
    $ ls -l /usr/bin | wc -l
    1136

    On a RedHat 7.1 system with a fairly typical installation:
    $ ls -l /usr/bin | wc -l
    2203

    I want /opt (with subdir's per app) back ;-)

It seems to me that there's a lot of overlap/duplication in the tool set on Linux distributions versus the centrally managed BSD distributions. A crowded /usr/bin might be a consequence of the "choice is good" Linux philosophy.

    Not that I'm saying I disagree with "choice is good" ...
  • by kune ( 63504 ) on Wednesday November 21, 2001 @11:25AM (#2595880)
From my .zshenv; it works in .profile too. It could also be used for other path variables. It works on any operating system with a reasonable Bourne shell.

    export PATH

reset_path() {
    NPATH=''
}

set_path() {
    if [ -d "$1" ]; then
        if [ -n "$NPATH" ]; then
            NPATH="$NPATH:$1"
        else
            NPATH="$1"
        fi
    fi
}

    reset_path
    set_path $HOME/bin
    set_path /usr/local/gcc-2.95.2/bin
    set_path /opt/kde/bin
    set_path /usr/lib/java/bin
    set_path /usr/X11R6/bin
    set_path /usr/local/samba/bin
    set_path /usr/local/ssl/bin
    set_path /usr/local/bin
    set_path /usr/local/bin/gnu
    set_path /usr/bin
    set_path /bin
    set_path /usr/local/sbin
    set_path /usr/sbin
    set_path /sbin
    set_path /usr/ucb
    set_path /usr/bin/X11
    set_path /usr/ccs/bin
    PATH="$NPATH:."

    unset reset_path set_path
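    A quick way to eyeball the result, one entry per line (plain POSIX tr, nothing exotic):

        $ echo "$PATH" | tr ':' '\n'

    Since set_path silently skips directories that don't exist, the same snippet can ship unchanged to every machine.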
  • Clueless... (Score:3, Insightful)

    by LunaticLeo ( 3949 ) on Wednesday November 21, 2001 @11:30AM (#2595918) Homepage
    Mosfet is an emotionally unstable GUI hacker. His knowledge of the long history and tradition of UNIX administration is pathetic. He ignores simple observables, like the fact that PATH searches are more expensive than lookups in a single bin directory. One executable dir per app would be FAR SLOWER than 2000 executables in a single dir.

    This is another classic example of not letting programmers, especially GUI programmers, be involved in OS design.

    For those of you who might be swayed by his foolish arguments, please read the FHS, and the last decade of USENIX and LISA papers. Unix systems organization has been openly and vigorously debated for 15 years. It has not been dictated by mere programmers from on high, like at MS. And Red Hat is to be applauded for properly implementing the FHS, which is a standard; others like SuSE should be encouraged to become compliant (/sbin/init.d ... mindless infidels :).
  • by ACK!! ( 10229 ) on Wednesday November 21, 2001 @11:33AM (#2595937) Journal
    I have been lazy before with my linux box and let package management systems lay out files all over the freakin' place.

    I have done things the "right" way (according to my mentor admin anyway :->) with my Solaris box and followed this standard:

    /usr/bin - sh*t Sun put in.

    Let pkgadd throw your basic gnu commands into: /usr/local/bin

    Compile from source all major apps and services (database services, web servers, etc.) and put them into /opt:
    /opt/daftname

    symlink any executable needed by users into /usr/local/bin
    (if you think like a sysadmin you realize most users do not need to automatically run most services)

    Any commercial software goes to /opt and put the damn symlink in /usr/local/bin.

    Yes, it is extra work, but it keeps your PATH short and fat and your users happy. This is not a problem with distros or package management systems as much as it is an issue of poor system administration.

    I also understand it is a mixed approach, with some things put under separate directory structures for each program and some things in a common /usr/local base.

    Common users do NOT need access to the Oracle or Samba bin. Give them a symlink to sqlplus and they are happy. Even though it is mixed if you stay consistent across all your boxes then the users are happy.
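    The mechanics are a one-liner per exposed binary (paths here are hypothetical):

        ln -s /opt/oracle/bin/sqlplus /usr/local/bin/sqlplus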

    I understand it is tough, but we have control in *nixes to put things where we want; the deal is to use it.

    PATH=/usr/bin:/usr/ucb:/usr/local/bin:.
    export PATH

    All a regular user needs.
  • # rm -ff /usr/bin (Score:3, Informative)

    by ellem ( 147712 ) <{moc.liamg} {ta} {25melle}> on Wednesday November 21, 2001 @11:41AM (#2595984) Homepage Journal
    The final solution to this mess.

    Unless you are hand writing each file in /usr/bin, who cares how many files there are?

    And Windows != /usr

    Program Files == /usr
  • by GISboy ( 533907 ) on Wednesday November 21, 2001 @11:44AM (#2595995) Homepage
    When you consider that /usr or /usr/local was similar in purpose to "Program Files" (or progra~1, if you want to be specific), it had the best of intentions.
    Well, we all know where the road paved with good intentions leads.

    At any rate, part of the "problem" is that at a certain point a section of the file system gets unmanageable. Where that point is, quite frankly, varies.

    RedHat has impressed me with its compatibility, but it does so with static libs. There are times when, god forbid, you wish to compile something and get gripe messages that your window manager was built under lib set X, your theme manager under Y's libs, and your shared libs are of version Z.
    That is just trying to update the WM, god forbid you wish to compile a kernel.
    And with the static libs, the performance hit is astounding.

    The other side, as with Slackware, is shared libraries can be as unforgiving as well.
    Heh, as a newbie I deleted a link to a ld.so.X.
    Hint: never, ever do this! ls, ln, mv et al stop working...oops.
    Stupidity on my part, but, hey, I was a newbie. (finger; fire; burn; learn. simple.)
    Back on track. Slack is fast, configurable but through sheer will, accident, or stupidity can be broken a lot faster (and in some cases fixed a lot faster).

    Windows...well the sword cuts both ways. It impresses and suffers *both* of the good and bad points of RH/SL (or static and dynamic libs).

    And, if the above does not either blow your mind or make you nod off, consider OS X.1.1 (.1.1.1....)

    Under OS X's packages system a 'binary/folder/application' (oye) can and does contain static libs. Ok, that can be good/bad.
    Here is the kicker (and cool part): if it finds *better* or more *up to date* libs it can use them and ignore what *it* has.
    If the new libs break the app, or cause problems, the application can be "told" or "made" to use only its own libs, or update the newer libs.

    Most will see where that is going. It will be good to keep "static" then use "dynamic" or update the "dynamic/shared" libs.
    The down side is the potential to fix one application and break 10+ others.

    This has not happened...yet. However, the *ability* to make or break is there, just no information is given until a spec/CVS set of rules is fleshed out.

    I will be the first to admit that the "binary folder" or "fat binary" (arstechnica.com article) idea sounded "less than thrilling"...until you realize the headaches it cures with this kind of file system bloat.

    Think about it: You have an app, that is really a folder, that you can't see inside/manipulate/fix/break unless you know how *and* have a reason to.

    In all three cases there are limits to even the most intelligent of design. Knowing this truth is easy to accept. Finding where it lies and where it breaks down...that is another discussion.
  • by renehollan ( 138013 ) <rhollan@@@clearwire...net> on Wednesday November 21, 2001 @11:48AM (#2596015) Homepage Journal
    This is one of the things that FHS tries to address. I used FHS 2.1 in Teradyne to manage a custom GNU/Linux distro for one of their products [If you purchased NetFlare from them, you should have all the updated GPL goodies and additions I put there on a source companion CD].

    While not perfect, it addressed the following issues:

    1) separating the O/S from "other" packages;

    2) maintaining a sane place to put different packages;

    3) supporting the notion of linking to specific package directories from a common place to keep PATH small;

    4) staying compatible with a number of "traditional" conventions.

    Of course, FHS 2.1 has this concept of the "operating system" files and "other" files. Presumably the "operating system" is that which the distro bundler provides... so Red Hat would be free to put as much as it wants under /usr. But this causes a problem if you look at a common standard base for several distros, like the LSB.

    Do you have a "standard base" part, and a "distro part", and then a "local part"? Clearly what's needed is a hierarchical way of taking an existing "operating system" and customizing it to a "custom operating system". Right now, FHS allows this for distro bundler and end user, but there is no support for the process iterating.

    Of course, my experience was with FHS 2.1, and I have since moved on to employment elsewhere, so perhaps newer versions of the FHS address these issues.

  • by Bazman ( 4849 ) on Wednesday November 21, 2001 @11:51AM (#2596029) Journal
    The reason windows apps can happily install binaries in any directory is that they then go install their shortcuts in the Start menu, or on the desktop. Of course, if you want to run one from a command line interpreter, you're pretty stuck.

    So now my windows Start menu has 1000 items in it, but at least they are arranged hierarchically in 850 vendor program groups...

    Baz
  • by Jack Auf ( 323064 ) on Wednesday November 21, 2001 @11:55AM (#2596043) Homepage
    Most major distros install quite a bit of stuff by default that 1) you will probably never use, 2) you probably don't know what it is, and 3) if it's a server, you don't need anyway.

    This is one of the reasons I created Beehive Linux [www.beehive.nu]. It aims to be secure, stable, clean, FHS compliant, and optimized for hardware built in this century. Current version is 0.4.2, with 0.4.3 out in a week or so.

    On one point however I must disagree with Mosfet:

    The most obvious thing is separate the big projects like desktop projects into their own folders under /usr

    The FHS states: /opt is reserved for the installation of add-on application software packages. A package to be installed in /opt must locate its static files in a separate /opt/<package> directory tree, where <package> is a name that describes the software package.

    Beehive puts large packages like apache, mysql, kde2 under /opt in their own subdirectory, i.e. /opt/kde2. I think this is a much better solution than cluttering up the /usr hierarchy, and it makes it very simple to test a new version of a package without destroying the current setup.
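    The version-testing trick is just a symlink flip (version numbers and paths here are hypothetical):

        /opt/kde-2.2.1/    # current production tree
        /opt/kde-2.2.2/    # new version under test
        # point the well-known name at the new tree; flip it back if things break:
        ln -sfn /opt/kde-2.2.2 /opt/kde2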
  • by Anonymous Coward on Wednesday November 21, 2001 @11:57AM (#2596061)
    One of the major points of the FSS is to organize files by type. What I mean by that is executables are placed together, configuration files are placed together, man pages are placed together, etc. This is important for a number of reasons:

    - systems may need a small partition with all files needed to boot
    - configuration files need to be on a RW filesystem, while executables can be RO (see the fstab sketch after this list)
    - many other reasons (read the FSS)
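    For instance, the RO/RW split is a one-line affair in /etc/fstab (device names here are hypothetical):

        /dev/sda1  /     ext2  rw  1 1   # root, including /etc, stays read-write
        /dev/sda5  /usr  ext2  ro  1 2   # executables can live on a read-only mount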

    That doesn't mean all executables need to be in a single directory like /usr/bin. I agree it would be nice to come up with a good way to allow subdirectories and change the FSS accordingly. Just don't argue that all files related to a given piece of software should be in a single directory, as some have requested. That will make the life of an administrator of large systems even more difficult. My wife works in a place that does that, and their system is nearly impossible to maintain.

    Sure the FSS isn't perfect, but I have yet to see another system that does as good a job. Don't throw it away simply because you don't understand it, or even worse, because its biggest fault is a directory with 2000 entries.

    -- YAAC (Yet Another Anonymous Coward)

  • by \/\/ ( 49485 ) on Wednesday November 21, 2001 @12:00PM (#2596073)
    I agree that this is a Linux-related issue that mostly stems from laziness. I have been using the modules [modules.org] approach to tool management for years with very good results - even half a decade ago this was more advanced than any Linux approach out there today.

    With this approach each tool/version-combination gets its own directory, including subdirectories for libraries, source code, configuration files etc.

    You can then use a "module" command to dynamically change your PATH, MANPATH, ... environment to reflect the tools you want to use (note that this supports the usage of a tool; it is therefore not a replacement for package management tools like rpm, which are mainly concerned with installation).

    Each tool/version combination comes with an associated modulefile (which has a tcl-like syntax) where you can influence a user's system environment upon loading/unloading the module. It is also possible to e.g. create new directories, copy default configurations or do platform-specific stuff for a tool (which greatly helps users less fluent in Unix, since they do not have to care about stuff like shell-specific syntax for setting environment variables).

    It also allows you to give tool-specific help, e.g.
    $ module whatis wordnet
    wordnet: Lexical database for English, inspired by psycholinguistic theories of human memory.
    $ module add wordnet


    This is also very helpful if you want to keep different versions of the same tool (package, library) around and switch between them dynamically, e.g. for testing purposes (think different jdks, qt-libraries, etc.). With modules, you can e.g. do a simple
    module switch jdk/1.2.2 jdk/1.3.1
    and run your tests again. And you never have to worry about overwriting libraries, configuration files etc. even if they have the same name (since they are kept in a subdirectory for each version).

    For our institute I've set up a transparent tool management system that works across our Linux/Solaris/Tru64 platforms. All tools are installed this way (except the basic system commands, which still go into /bin etc.).

    Of course, it's a lot of work to start a setup like this, but in a complex environment it is really worth it, especially in the long run.
  • Specialize! (Score:3, Insightful)

    by rice_burners_suck ( 243660 ) on Wednesday November 21, 2001 @02:59PM (#2597215)

    The biggest problem with Linux is, in my opinion, the fact that people try to solve all the problems of the world with a single solution. Red Hat is a worthwhile cause, but I don't think a single distro can handle every possible use of Linux. I thought Linux was about choice. In that case, there should be many smaller distributions aimed at specific (or at least more specific) purposes.

    No, I'm not a luser, nor am I a newbie. I know that there are countless distros out there, which fit on a single floppy, six CDs, and everything in between. (I've purchased so many distributions for myself and for others that I'm drowning in Linux CDs.) But everybody and his uncle uses Red Hat. (I personally like SuSE a LOT better, because it is far better organized in my opinion.)

    Many common problems make the file system layout and package management suck. I don't mean to start a flamewar, but this problem is far smaller on FreeBSD, where the file system layout is a lot better organized than that of a Red Hat Linux system. (It's even better organized than a SuSE system.) The ports and packages collection, which works through Makefiles, makes installation and removal of many programs very easy, with dependency checks. Unless I'm imagining things, it does find dependencies that you install manually, as long as they're where the system expects them. However, glitches still exist, mainly in the removal of software, that require user intervention to remove some remaining files and directories.

    When it comes down to it, I think that package management systems--whether they're Debian's system, RPMs, or the *BSDs' ports and packages--are supposed to serve as a shortcut for the system administrator, who still knows how to manage programs manually. The Linux community seems to have forgotten this, and expects package management to be a flawless installation system for any user with any amount of experience. Unfortunately, this is not the case, and it would be extremely difficult, maybe impossible, to make such a system. I believe this doesn't matter.

    Skilled admins need control and flexibility over their programs. This is especially true for critical servers, but also applies to workstations. If the setup they want can be achieved with a package manager, they'll use it. If not, they can opt to build the program from source, or, if this installation takes place often, they might make their own package, perhaps customizing paths or configuration files for site-specific purposes. A well-organized hierarchy is very important.

    Novice users are very different. They just want to install this thing called Linux from the CD and surf the web or burn some MP3s. For them, the solution isn't a great package management system, because a novice user probably doesn't know where to obtain programs. In some cases, there are hundreds of similar programs to choose from--novices can't handle all that choice! The solution for them is a distro that supports a very specific set of programs, and supports them well:

    • Everything should be managed through clickable graphical dialogs. Enabling web serving or whatnot would take one click on a checkbox.
    • The installation would be extremely simple:
      • Where possible, there are no choices. You simply install the distro and get all the "standard" programs, precompiled, preconfigured and ready to use.
      • During installation, a preconfigured image of a 500 megs (or so) partition would just be copied verbatim onto a partition on the user's hard drive.
      • Another partition, taking up the remaining available space, would be mounted on /home.
      • Installation could happen in 5 minutes flat.
    • A single desktop environment would be present. Novice users shouldn't have to try ten different window managers and docking programs and whatnot. Choose something and put it on this distro. If you want to support multiple desktop environments, package multiple distros.
    • The same rule holds true for all programs that would come with the installation. Instead of making one huge distro that supports everything from 10,000 text editors to biological analysis programs, make 10 different distros. One would be for "Home" use and would include stuff like a word processor and spreadsheet, a banking program, web browser, email client, calendar program, MP3 player, video editing software, and whatever else you want to include. These don't even need to be 100% free software. Put some quality programs on the CD and charge for them.
    • To make a long story short, limit the user's exposure to problems. Every choice you present to the user is a possible problem. We're talking about people who don't know where the "any" key is for crying out loud.

    Finally, I would recommend that in the spirit of giving back to the community, any admin who makes his own packages should submit them back to the developer for distribution to others. (Unless these packages are designed for site-specific purposes, of course.)

    Oh yeah, and I almost forgot the obligatory "oh well."

Lots of folks confuse bad management with destiny. -- Frank Hubbard
