## Rage Against the File System Standard

Posted by CmdrTaco
pwagland submitted a rant by Mosfet on file system standards. I think he's somewhat oversimplified the whole issue, and definitely assigned blame wrongly, but it definitely warrants discussion. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?

• #### Why not go the extra step (Score:2, Insightful)

and just install in /?

Who in their right mind places stuff outside of a program-specific folder if it's not gonna be used by multiple programs (like shared libraries)?
• #### Still new to GNU/Linux (Score:2, Interesting)

Is it really that bad? Would I not have much control over where programs get installed to?
I would think that even without a package handler to do it for me, the program itself would allow me to say where it should be installed...or is that just the Windows user in me talking?
• #### The Alternative? (Score:4, Redundant)

on Wednesday November 21, 2001 @10:45AM (#2595647) Homepage
I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH.

Mike

• #### Re:The Alternative? (Score:2, Interesting)

Is there such a thing as a recursive PATH directive for executables? Like ls -R or something, for searching into subdirectories?

• #### Re:The Alternative? (Score:3, Interesting)

You would only need 2000 path entries if you expect your shell to have the same exact semantics that it does today. There is no reason whatsoever that PATH couldn't mean "for every entry in my PATH environment variable, look for executables in */bin". A smart shell could even hide all of this behind the scenes for you and provide a shell variable SMART_PATH that gets expanded to the big path for legacy apps.

Or you could do what DJB does with /command and symlink everything to one place. Although I'm not sure if that solves the original complaint. Actually, I'm not sure what the original complaint is, having re-read the article.

• #### Re:The Alternative? (Score:5, Insightful)

on Wednesday November 21, 2001 @10:52AM (#2595688) Homepage

*sigh* Has anyone heard of symlinks? The theory is very simple: install the app into /opt/foo or wherever, then symlink to /usr/local/bin. Yawn. Or is that one of those secrets we're not supposed to tell the newbies?

• #### Re:The Alternative? No Alternative! (Score:2, Interesting)

Uh huh. And when something goes terribly wrong, how do you determine what went wrong? Our production servers (HPUX, Solaris, AIX) have in /usr/* only what the system supplied. Everything else gets put in its "proper place": either /opt/, or /usr/local/ (its own filesystem), or similar. The paths are not so bad, and the system is healthy and clean. The alternative? A system easily attacked with a trojan horse.

• #### Re:The Alternative? No Alternative! (Score:5, Informative)

on Wednesday November 21, 2001 @12:29PM (#2596266)

We do the same thing on our Tru64 boxen. All 3rd party software goes in /opt or /usr/opt. 3rd party executables go in /usr/local/bin. Some executables live in an app-specific subdirectory under /opt and the symlink in /usr/local/bin points to the physical location. It makes OS upgrade time tons simpler.

And the first step of our DR plan is to back up OS-related stuff and backup software on special tapes. Those get restored first so that we get a bootable system in a hurry. Then the rest of the software and data can be restored using the 3rd party backup software. None of this would be as easy to do if we had 2000 programs all living under /usr/bin.

If Mosfet has a point, it's that some distribution vendors make a mess out of the directory structure by dumping way, way too much stuff under, say, /usr/bin.

\begin{rant} RedHat, are you listening? I like your distribution but the layout of the files you install sucks big time. Anyone who updates their applications (Apache, PostgreSQL, PHP, etc.) from the developers' sites has to undo the mess you guys create. Either that, or don't install the versions on your CDs at all and just go with the source tars. \end{rant}

(OK, I feel better now...)

• #### Re:The Alternative? (Score:5, Informative)

by Anonymous Coward on Wednesday November 21, 2001 @11:00AM (#2595732)

I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH

Here's what every unix administrator I know (including myself) does:

1. everything is installed in /opt, in its own directory:

example$ ls /opt
apache  emacs     krb5   lsof        mysql    openssl  pico  ucspi-tcp
cvs     iptables  lprng  make-links  openssh  php      qmail

(pico is for the PHBs, by the way)

2. Every version of every program gets its own directory

example$ ls /opt/emacs
default emacs-21.1

3. Each directory in /opt has a 'default' symlink to the version we're currently using

example$ ls -ld /opt/emacs/default
lrwxrwxrwx 1 root root 10 Oct 23 16:33 /opt/emacs/default -> emacs-21.1

4. You write a small shell script that links everything in /opt/*/default/bin to /usr/local/bin, /opt/*/default/lib to /usr/local/lib, etc.

Uninstalling software is 'rm -rf' and a find command to delete broken links. Upgrading software is making one link and running the script to make links again. No need to update anyone's PATH on a multi-user system, and no need to mess with ld.so.conf. You can split /opt across multiple disks if you want. NO NEED FOR A PACKAGE MANAGER. This makes life much easier, trust me.

• #### Re:The Alternative? (Score:2, Informative)

There is actually a Package Manager that does all this for you, only it makes everything a lot easier. http://pack.sunsite.dk/

• #### Look at opt_depot (Score:4, Informative)

<jonabbey@ganymeta.org> on Wednesday November 21, 2001 @11:50AM (#2596025) Homepage

Many years ago, we wrote a set of Perl utilities for automating symlink maintenance called opt_depot [utexas.edu]. It's similar to the original CMU Depot program, but has built-in support for linking to a set of NFS package volumes, and can cleanly interoperate with non-depot-managed files in the same file tree.

• #### Re:The Alternative? (Score:3, Informative)

Correct: this is not rocket science, people. It's called a software depot (at least it is now - see The Practice of System and Network Administration by Limoncelli & Hogan, chapter 23).

How many directories in /usr does Mosfet want? One for X11, KDE, GNOME ... TeX, StarOffice, Perl, GNU, "misc", etc.? How large a PATH will that create? Actually, it's perfectly possible to use a separate directory for every single package - right down to GNU grep - if you:

1. symlink all the relevant subdirectories for every package into a common set that is referred to in the various PATHs;
2. manage those symlinks in some automated fashion.
For the latter, try GNU Stow or (my favourite) Graft (available via Freshmeat). These tools could even easily be run as part of a package management post-install procedure.

The depot approach has a number of advantages, not least of which is the ease of upgrading package versions and maintaining different versions concurrently. And it's obvious what's installed and which files each package provides. The challenge is in encouraging the vendors to embrace such a model as an integral part of their releases; that would require some significant reworking.

Ade_ /

• #### Re:The Alternative? (Score:2)

I don't recall LFS saying you couldn't use "/usr/appname", so the article title is a bit misleading, but you certainly don't need 2000 entries in your path. The best solution for the problem that I can see is for coders of multi-binary applications to take a leaf out of Windows' book and use the equivalent of "C:\Program Files\Common Files": an application- (or environment-, or vendor-) specific directory for programs that only other programs need to use. The best layout I can see would be "/usr/appname/" for binaries and "/usr/lib/appname/" for libraries.

• #### Re:The Alternative? (Score:3, Informative)

Section 4.1 of FHS: Large software packages must not use a direct subdirectory under the /usr hierarchy.

• #### Re:The Alternative? (Score:4, Insightful)

on Wednesday November 21, 2001 @12:22PM (#2596221) Homepage

The alternative? Simple: /opt. Mosfet's not talking about a new directory for every little application. He's talking about moving out stuff like KDE and GNOME. So instead of just having /usr/bin in your $PATH, you would also include /opt/gnome/bin and/or /opt/kde/bin. Yes, this makes your path a bit larger, but unmanageable? Hardly.

I just checked on one of my PCs that has KDE2 installed (from the RH 7.2 RPMs), and there are over 200 files that match /usr/bin/k*. The only one that wasn't a part of KDE was ksh. My /usr/bin has 1948 files in it. There's a 10% reduction with one change. I don't have GNOME installed on this box, so a similar comparison isn't really possible. However, I imagine that the number would be similar if not greater for GNOME.

It's not like he's suggesting we sacrifice goats in the street. He's suggesting we actually implement what the FSS says.
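
A rough sketch of the link-making script from the "/opt with a 'default' symlink" scheme described a few comments up. This is one possible implementation, not any poster's actual script; the function name and the bin/lib subdirectory choice are assumptions:

```shell
#!/bin/sh
# make_links OPTDIR LOCALDIR: for every package under OPTDIR, symlink the
# contents of <pkg>/default/bin and <pkg>/default/lib into LOCALDIR/bin
# and LOCALDIR/lib, after pruning symlinks whose targets have vanished.
make_links() {
    opt=$1 local_root=$2
    mkdir -p "$local_root/bin" "$local_root/lib"

    # Delete broken links left behind by uninstalled packages.
    for link in "$local_root"/bin/* "$local_root"/lib/*; do
        if [ -L "$link" ] && [ ! -e "$link" ]; then
            rm -f "$link"
        fi
    done

    # Link each package's current ('default') version into place.
    for pkg in "$opt"/*/; do
        for sub in bin lib; do
            if [ -d "${pkg}default/$sub" ]; then
                for f in "${pkg}default/$sub"/*; do
                    [ -e "$f" ] || continue
                    ln -sf "$f" "$local_root/$sub/${f##*/}"
                done
            fi
        done
    done
}

# Upgrading a package is then two steps: repoint the symlink and relink, e.g.
#   ln -sfn emacs-21.2 /opt/emacs/default && make_links /opt /usr/local
```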

• #### Re:The Alternative? (Score:3, Informative)

SuSE actually does this. On my /opt path I have:

/opt/kde
/opt/kde2
/opt/gnome

And they have bin directories under that. Funny, until now I've only ever heard people slam SuSE for doing it (something about not being Linux Standard Base compliant).

I personally like it. The only thing is that whenever you compile a KDE program, you add --prefix=/opt/kde2 to the ./configure command.
• #### better command path system? (Score:3, Insightful)

on Wednesday November 21, 2001 @10:45AM (#2595648) Homepage
imo, we need a better command path system thingy that allows easier categorization of executables and other stuff... Win32 has the System32 (or System) directory, *nix has /usr/bin, /usr/share/bin, /usr/local/bin etc...

I don't have a solution, but i'll devote a few idle cycles to it...
• #### Re:better command path system? (Score:2)

c:\windows\system...

oh yes, this is the way to go. Hundreds of applications, each storing different versions of the same needed system or application dll's in one dir, overwriting the one version that worked....
</sarcasm>

There is a reason that binaries are spread over different partitions on Real Operating Systems....

btw, it's nice to see that html-formatting is actually making sense in my first line..: <br><br> :-)

• #### Re:better command path system? (Score:3, Insightful)

What we need is a *limited* way to have a single $PATH definition that will address arbitrary packages. I was thinking about PATH="$PATH /opt/*/bin"

This would look in /opt once and cache the dirread so the hit for this only happens once.

Of course this adds the problem of ordering (/opt/a/bin/foo vs. /opt/b/bin/foo).
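
A one-level sketch of that idea, run once at login rather than per lookup (the function name is invented, and a real PATH is colon-separated rather than space-separated as written above). Shells expand globs in sorted order, which settles the /opt/a vs. /opt/b ordering alphabetically:

```shell
#!/bin/sh
# opt_path ROOT: print a colon-separated list of every ROOT/*/bin directory.
# Meant to run once (e.g. from a profile), not on every command lookup --
# the "cache the dirread" idea from the comment above.
opt_path() {
    root=${1:-/opt}
    frag=
    for dir in "$root"/*/bin; do
        [ -d "$dir" ] || continue
        frag=${frag:+$frag:}$dir
    done
    printf '%s\n' "$frag"
}

# Hypothetical profile usage:
#   PATH="$PATH:$(opt_path /opt)"
```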
• #### he's pretty far off base (Score:5, Interesting)

on Wednesday November 21, 2001 @10:45AM (#2595652)
Anyone who claims that RedHat started the use of /usr/bin/ as a dumping ground can't be taken seriously. Pretty sure slackware and SLS did the same thing. Same goes for Solaris, AIX, AUX, Sun/OS, Irix, and HPUX.

• #### Re:he's pretty far off base (Score:2, Funny)

did you look at HER site? he is a SHE and from the looks of it she likes to get freekaaaayyy :-)......pretty damn hot for a geek girl.
• #### Re:he's pretty far off base (Score:3, Interesting)

Anyone who claims that RedHat started the use of /usr/bin/ as a dumping ground can't be taken seriously. Pretty sure slackware and SLS did the same thing. Same goes for Solaris, AIX, AUX, Sun/OS, Irix, and HPUX.

Agreed, but does that make it right?

For the last few years, this is the kind of thing that has really been nagging me. All OSes seem to suffer from the same problem. Why are we so stuck in the mindset that traditions of the past shouldn't be challenged? Can't we, as "brilliant" computer scientists, start solving these problems and move on?

I recently demo'ed a good Linux distro to a friend and it finally dawned on me. When you load KDE, you are literally overwhelmed with options. My friend asked, "What is the difference between tools and utilities?". I didn't know. I tried to show him StarOffice and it took me a few minutes of digging in different menus.

No, I don't use Linux on a daily basis, and no, I'm not the smartest person in the world. But I think I see the problem. Everything seems to be an imitation of something else (with more bells and whistles). Where is the true innovation? Our computers and software are not significantly different than they were 20 years ago.

• #### Re:Translucent file system (Score:3, Funny)

Wait until KDE 3 / GNOME 2 come out with XRender support, and we can all have translucent filesystems!

HAR HAR!

• #### QNX has it (Score:3, Interesting)

QNX has a package filesystem [qnx.com] like what you describe; it looks like it solves Mosfet's problem and keeps PATH simple.
• #### I wish unix had this... (Score:3, Interesting)

<steve@nOSPAM.componica.com> on Wednesday November 21, 2001 @11:03AM (#2595752) Homepage
I wish Unix/Linux had a mechanism where a directory could be marked executable, and executing the directory would internally call some default dot file (such as .name_of_directory) within the directory, with some environment variable (like $THIS_PATH) set to the directory and passed to the application process. Maintenance for applications like these would be a no-brainer. Just move the directory, and all the associated preference files and whatnot travel with the app.

-Steve

• #### Related to yesterday's story (Score:5, Interesting)

on Wednesday November 21, 2001 @11:05AM (#2595760) Homepage

I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files. Regarding applications, there are two ways to do it: you can store all files (binaries, config files, man pages, etc.) of the same application in the same directory, or you can store all files of the same type from different applications in their respective directories (all config files in /etc, man pages in /usr/share/man (I think), etc.). Both approaches have their advantages. The problem with hierarchical file systems is that we have to choose one of them.

I would love to see a storage system where we can use both ways _at the same time_. A system that groups files depending on the relationships they have. Such that 'ls /etc' gives me all config files for all apps, and 'ls /usr/local/mutt' shows me all mutt-related files, including its config file(s).

I have no idea how to implement such a beast. I'm thinking about an RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.
To a certain extent this can be done via symlinks, but it should be much easier to deal with. You shouldn't have to do much manual work.

• #### Re:Related to yesterday's story (Score:2)

Isn't this what symbolic links are for...?

• #### Re:Related to yesterday's story (Score:5, Interesting)

on Wednesday November 21, 2001 @12:53PM (#2596414) Homepage

I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files.

You could also trace it back to the hierarchical database article [slashdot.org], which is when I started making a lot of posts on the subject. It seems there is finally a lot of interest being generated about this sort of thing.

I have no idea how to implement such a beast. I'm thinking about an RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

This is exactly the sort of thing I'm doing with my Meta Object Manager (MOM) software called Mary. Metadata in the form of attributes and values is associated with each file/object, and you can do a query (both textually and graphically) on that metadata. For simple paths like you describe, it is a value query irrespective of a particular attribute, but there is support for a more structured "path" (I actually call it a "focus", as it restricts your focus to a subset of the objects on the system) like /type=picture/location=Hawaii/year=2000. Because the focus items are metadata attributes, order is not significant. With such a system, there are no directories or symbolic links; it's all dynamically structured based on what your metadata focus is at any particular time.
Mary is just in the alpha stages at this point, but it already works well on the command line for the type of things you describe, and I'm using it myself to manage nearly 350,000 objects that have been flowing through my system. I'm not exactly sure when it'll be ready for public consumption, and it'll require a GNUstep [gnustep.org] port to get working on Linux systems (I'm doing development on Mac OS X). I was hoping year end, but I don't think I'll have the time. Summer 2002 has a nice ring to it, though. :-)

• #### Re:Related to yesterday's story (Score:3, Insightful)

I've been hacking with this idea in my head. It seems to make the most sense. It is a sort of multidimensional file system, where every file has to be placed in the dimensions in which it belongs. The tree is used only as a single representation of a single dimension. There are three reasons I can think of for this:

• Package management (checking out program configs etc. without surfing the whole directory hierarchy)
• System maintenance (splitting volumes, managing space and performance tweaking)
• User friendliness!!! (users can hit rm -rf and never have to worry about messing anything up!)

I figure if MS does something like this, it would save them from their drive-letter hell, and solve one of their greatest disadvantages when compared to UNIX... the impact of such a scheme on UNIX would be minimal. Database systems would probably be the best place to start looking for methods to do this sort of thing.

• #### Keeping one application's files in one place (Score:5, Informative)

on Wednesday November 21, 2001 @11:05AM (#2595765)

The unix system doesn't really dump all the files in /usr/bin. These are, almost without exception, executable files. For each executable, support files are usually installed into one or more directory trees, such as /usr/share/executable_name/.
The main convenience gained by having all the main binaries in one place (or two - I usually try to leave system binaries in /usr/bin and my own installations in /usr/local/bin) is convenience in searching paths when looking for the binaries. However, this paradigm is pretty ugly if you are browsing through your files graphically. It would be nice if each application/package installed into one directory tree, so you could reorganise the system simply by moving applications around. For example:

/usr/applications/
/usr/applications/games/
/usr/applications/games/quake3/ ... this dir holds all quake 3 files ...
...etc...
/usr/applications/graphics/
/usr/applications/graphics/gimp/ ... this dir holds all gimp files ...
...etc...

If this appeals to you, you might like to check out the ROX project [sourceforge.net]. This sort of directory tree layout was the standard on the Acorn Risc OS and made life extremely easy for GUI organisation. It makes a lot of sense to use the directory tree to categorise the apps and files.

Cheers, Toby Haynes

• #### RiscOS... (Score:4, Interesting)

on Wednesday November 21, 2001 @11:06AM (#2595767) Journal

In RiscOS, applications are directories which contain several useful files (besides the app binaries, conf or data files):

• !Sprites[mode] contains the icons to be used with the app, and with whichever files are associated with it by filetype
• !boot, which contains directives (associations, global variables, etc.) to be executed the first time the Filer window that contains this app is opened (the app is "seen" by the Filer)
• !run, which describes any action to be associated with a double-click on the app icon

There's also a unique shared modules directory in the System folder. This system is at least 10 to 15 years old (not sure Arthur was as modular, though) and sure proved to be an excellent way to deal with this problem...

• #### Um, so? (Score:3, Informative)

on Wednesday November 21, 2001 @11:06AM (#2595771) Homepage

Much better to have a few thousand files in one dir than to have so many dirs that need to be in your $PATH that some shells will barf.

For instance, the POSIX standard (I believe) is 1024 characters for $PATH statements. That's a minimum. My users at work sometimes have need for much longer$PATH's. Some OS vendors say, ok, 1024 is the minimum for POSIX compliance, that's what we're doing. Some, like HP-UX (believe it or not) have increased this at user request to 4K.

In any case, this all seems pretty petty. It's not like our current and future filesystems can't handle it, and package managers are pretty good and know what they put where.
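
Whatever limit a particular OS actually imposes, it's easy to measure how close a given $PATH is to it. A small sketch (the 1024/4K figures above are the poster's; the function name here is invented):

```shell
#!/bin/sh
# path_stats PATHSTRING: print the string's length and its entry count,
# for comparison against whatever $PATH limit the OS documents.
path_stats() {
    p=$1
    printf 'length: %s\n' "${#p}"
    old_ifs=$IFS
    IFS=:
    set -- $p    # deliberately unquoted: split on the colons
    IFS=$old_ifs
    printf 'entries: %s\n' "$#"
}

# e.g. path_stats "$PATH"
```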
• #### Six of one... (Score:2, Insightful)

Half a dozen of the other. Of course there are pros/cons both ways: having all executables in one (or O(1)) locations makes finding programs also O(1), and gives a PATH length of O(1). Having one dir/"folder" for each program (or O(X) directories) would then have O(X) search time for a particular program, and O(X) entries in your PATH. On the other hand, finding and deleting entire packages becomes much harder if not all filenames belonging to that package are known. Personally I think it doesn't matter either way.
• #### UNIX is a mess in multiple ways (Score:4, Troll)

on Wednesday November 21, 2001 @11:09AM (#2595783) Homepage
This is only part of the problem, and characteristic of the way unix has evolved. The whole problem is that there are no standards, just conventions, which most unix programmers are only partly aware of. I imagine the whole reason for putting all binaries in a single directory was that you then only have to add one directory to the path variable. In other words, because of genuine laziness you have around 2000 executables in your /usr/bin directory. Of course, adding all 2000 program directories to the path is not the right solution either (that would be moving the problem rather than solving it). Obviously the path variable itself is not a very scalable solution and needs to be reconsidered.

To sum it up, UNIX programs all have their own sets of parameters, their own semantics for those parameters, their own config files with their own syntax. Generally a program's related files are scattered throughout the system. Just making things consistent would hugely improve the usability of unix and reduce system administrator training cost. Most of the art of maintaining a unix system goes into memorizing command-line parameters, configuration file locations and syntax, and endless man pages. Basically the ideal system administrator is not too bright (after all, it is quite simple work), can work very precisely, and has memorized every manpage he ever encountered. The not-too-bright part is essential, because otherwise he'll get a good job offer and be gone in no time.

Here's a sample better solution for the problem (inspired by mac os X packages): give each app its very own directory structure with e.g. the directories bin, man, etc for binaries, documentation and configuration. In the root of each package specify a meta information file (preferably xml based) with information about how to integrate the program with the system (e.g. commands that should be in the path, menu items, etc.). Standardize this format and make sure that the OS automatically integrates the program (i.e. adds the menu items, adds the right binaries to a global path, integrates the documentation with the help system). Of course you can elaborate greatly on these concepts but the result would be that you no longer need package managers except perhaps for assisting with configuration.
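
The meta-information idea above can be sketched in a few lines of shell. Everything here is hypothetical: the comment suggests an XML format, but a line-based 'meta' file keeps the sketch shell-only, and the file layout and function name are invented for illustration:

```shell
#!/bin/sh
# Suppose each package directory ships a 'meta' file listing the commands it
# wants on the global path, one per line (an invented format, not a standard):
#
#   command: bin/gimp
#   command: bin/gimptool
#
# integrate PKGDIR LOCALBIN reads that file and symlinks each listed command
# into LOCALBIN -- the "OS automatically integrates the program" step.
integrate() {
    pkgdir=$1 localbin=$2
    mkdir -p "$localbin"
    while IFS= read -r line; do
        case $line in
            "command: "*)
                rel=${line#command: }
                ln -sf "$pkgdir/$rel" "$localbin/${rel##*/}"
                ;;
        esac
    done < "$pkgdir/meta"
}
```

Menu items, documentation, and the rest would just be further keys handled the same way.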
• #### I read this last night... (Score:2, Interesting)

I came away thinking "this man is insane".

1. He claims DOS had a better way of organizing applications. This is a red herring. I don't want to organize my applications. Ever. I want to organize my data. I don't remember many applications in DOS that were compatible with the same type of data. If there had been, the limitations of the DOS structure would have been readily made apparent. First, CD into the directory where your audio recording utility is and make a .wav file. Then, move the .wav file into the directory where your audio editing utility is and edit it. It works, but why not keep the data in one place and run programs on it as you see fit without regard for their location on your hard drive, and without having a 10-second seek through your PATH variable?

2. Besides which, DOS had c:\msdos50 (or whichever version you used). That was DOS's variation on /bin. Ever look in that directory and attempt to hand-reduce the number of binaries in it to save disk space? I did. A package management system would have made that doable.

3. You can have all the localized application directories you want in /usr/local. The point of /usr/local is to hold larger packages which are local to the system. (hmm... /usr/local/games/UnrealTournament, /usr/local/games/Quake3, /usr/local/games/Terminus, /usr/local/games/RT2...) And as a bonus, thanks to the miracle of symbolic links you can have your cake and eat it too - as long as the application knows where the data files are installed you can make a symlink of the binary to /usr/local/bin and run it without editing your PATH variable too! Isn't UNIX grand?

• #### Don't install so much stuff! (Score:3)

on Wednesday November 21, 2001 @11:10AM (#2595788)
How many of those 1500 binaries do you run, hmm?

Many distributions install lots of packages you don't need nowadays. Uninstall some, or switch to a more minimalist distribution. Try installing debian with only the base packages. Then whenever you need a program you don't have, apt-get it. It'll make for an annoying few weeks perhaps, but at the end you'll have a system with just what you need on it. I'll bet you will end up with only around 600 binaries in the end (unless you install gnome... that's like 600 binaries on its own.)

What does it matter anyway? If you have 1500 programs, it's no better to have them in their own directories than to have them in one place. Also, it's not like you're dealing with all of them at once.
• #### Hierarchy (Score:3)

on Wednesday November 21, 2001 @11:10AM (#2595792)
The root problem for all of this seems to be the limits of a hierarchical data organization such as a file system. The debate is whether the hierarchy should be organized by application (as the article proposes), file type (all binaries in 'bin'), or some broad attribute of the application ('/usr' vs '/usr/local', 'bin' vs 'sbin').

There probably is no way to solve all of the issues simultaneously in one hierarchical scheme. Symlinks could help because they crosslink the tree. Package managers add a more sophisticated database of relations. These relations are much more useful, but unfortunately are accessible only through the package manager program.

All in all, though, it seems that organizing by package makes the most intuitive sense, and the helpers like package managers should be responsible for figuring out how to run the app when you type it on the command line.

• #### the problem is deeper (Score:2)

1. package managers should make it easy to move things around. I should be able to install the latest perl-xxx.rpm in a test location, test my scripts against it, and then reinstall it in the canonical place.

2. this needs to include all the files in /etc so app installers need to support flexible package management. Also note, the #!/shebang is totally broken in this sort of environment.

3. "the canonical places" (/usr, /etc, etc. :) should be a family of canonical places. The sysadmin group might not want to upgrade their perl scripts at the same time as the dbadmin group. Decoupling that interdependency will lead to much more flexibility and quicker overall upgrading.

4. we can achieve this best if / is no longer / but is instead /root, so there could be a /root1 and /root2. Think of this: one file system containing two different distros that don't wrassle with one another.

Do not evaluate this on whether you think it's a good idea. The point is that software allows soft parameterization, reentrancy, soft configuration, etc. So why can't we have it? Programmers need to stop hard-coding shit, binding locations to one place.

I'd love to upgrade my workstation from RedHat 7.1 to RedHat 7.2 by installing onto the same partition without trashing the old. Then, over the course of the week I could work out the kinks and delete the old, knowing that at any time I could reboot the old to send a fax or whatever. There are 1000s of corporate uses for this type of environment too... how many times have you heard "we're taking the mailserver down to upgrade it overnight" and then heard "um... it didn't come back up..."

• #### Recursive PATH (Score:2)

Unless you can set a recursive PATH, I don't think it would be viable to split things into their own directories... Could you imagine how long (and how slow) it would be to have 20, 40, 60 directories or more listed in your PATH?

With package management software, who cares if it's all in one place? That's fine with me...

Besides, anything *I* add to the system, depending, usually ends up in /usr/local - which is a further distinction.
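
For what it's worth, a "recursive PATH" can be approximated by walking the tree once at login and collecting the bin directories; the function name and the choice to match directories literally named 'bin' are assumptions:

```shell
#!/bin/sh
# recursive_path ROOT: print every directory named 'bin' under ROOT as one
# colon-separated string. Run once at login; the result is a plain $PATH,
# so per-command lookup stays as fast as it is today.
recursive_path() {
    root=$1
    find "$root" -type d -name bin 2>/dev/null | sort | tr '\n' ':' | sed 's/:$//'
}

# e.g. PATH="$PATH:$(recursive_path /usr/local)"
```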

• #### KDE binaries in /usr/bin (Score:2)

Having KDE binaries in /usr/bin completely destroys the possibility of simultaneously having KDE 2.x and KDE 3 on the same system (say, a server with dozens of users where you want to slowly migrate from one environment to another). Having them in /usr/kde2 and /usr/kde3, or even /opt, sounds much saner to me. (Shared resources may stay at a common place, but it's up to the upstream maintainers to make these "shared resources" work as expected.)

One workaround to remain LSB-compliant while still keeping them separated would be throwing them in /usr/lib/kde2 and /usr/lib/kde3 -- but it's an ugly hack. Then again, so is arbitrarily breaking the standard by placing them in the correct place. Ugh.

• #### So, what's the reason to do this again? (Score:2, Informative)

To pick nits a touch, the reason X got its own subdirectory was that it was often on a separate file system from the rest of /usr. In the long, long ago, X was of such astounding size relative to the limited and expensive disk space of the day that special considerations had to be made upon its installation. It had little to do with any other sort of organization.

As for the rest of the rant: simply calling the current practice of file organization horrendous behavior, sloppiness, or laziness, without ample argument or demonstrable advantages for breaking every package into separate subdirectories, is damaging to the cause at best. Had the rant contained any sort of claim that there are an unacceptable number of namespace clashes, or that simply doing an 'ls' in one of these directories blew away the file name cache mechanisms in the kernel, forever making certain optimizations useless, or anything of that sort, it would hold more weight than unsupported bashing.

The author laments the inability to manage these subdirectories effectively with standard tools, but as I see it, the option to not use package management has been there all along. Roll your own, putting things where you want them. Or, I might suggest broadening the concept of 'standard tools' to include the package management system installed, should the former option seem ludicrous.

Not having to muck around with the PATH -- and more so, not having to support users mucking around with their own PATHs -- far outweighs the disadvantage of not being able to use 'standard tools'. What time I lose learning and using my package management system I make up tenfold in not supporting the very issues which I foresee the author's solution creating.

--Rana
• #### wolfenstein test does it right! (Score:2)

Last night, when I installed the Wolfenstein test, it put it all under /usr/local/games/wolf and made symlinks to /usr/bin for the executables. I wish more apps did that.

Imagine every app installed on your machine did this... so much more manageable....
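A minimal sketch of that install pattern in shell (paths are placeholders rooted in a scratch directory rather than the real /usr tree):

```shell
# Keep everything under one per-app directory, expose one symlink on PATH.
root=$(mktemp -d)
mkdir -p "$root/usr/local/games/wolf" "$root/usr/bin"

# The package's files all live in its own directory...
printf '#!/bin/sh\necho running wolf\n' > "$root/usr/local/games/wolf/wolf"
chmod +x "$root/usr/local/games/wolf/wolf"

# ...and a single relative symlink makes the executable visible in bin.
ln -s ../local/games/wolf/wolf "$root/usr/bin/wolf"

"$root/usr/bin/wolf"
```

Removal then amounts to deleting the app's directory and one dangling symlink.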
• #### FreeBSD (Score:4, Interesting)

on Wednesday November 21, 2001 @11:22AM (#2595859)
The file systems on a Unix system make a lot of sense, when people use them correctly.

/bin for binaries needed to boot a corrupted system.

/sbin for system binaries needed to boot a system.

/usr/bin for userland binaries installed with the base system.

/usr/sbin for system binaries installed with the base system. These are not programs required to boot the system.

/usr/local/bin for locally installed user binaries such as minicom, mutt, or bitchx.

/usr/local/sbin for locally installed system binaries such as apache.

Large locally installed programs such as Word Perfect get installed in a sub directory of /usr/local but they put a single executable in /usr/local/bin so that you do not need to change your path.

FreeBSD has only about 400 programs in a complete /usr/bin. Other programs are spread about the file system in sensible locations or are user installed. Possibly the only directory that does not make a whole lot of sense is /usr/libexec (where most of the internet daemons are kept).

-sirket
• #### My take. (Score:2)

Well...looking at my Debian system...
/sbin contains stuff that requires superuser privileges. Stuff specific to maintaining the hardware, etc.

/bin contains solid, standard system binaries needed for the system to work (bash, grep, chmod, z-tools, gzip, etc.). Stuff that you basically need.

/usr/bin contains... userland stuff: software installed/removed for general use. I don't know the right way to describe it.

/usr/local/bin.. contains nothing. This is where, generally, I choose to put things I compile myself, so as not to confuse the package management system.

If we look at, say, systems where many things are mounted over NFS, /usr/bin is one of these. /usr/local/bin is for things local to your machine.
• #### /usr/bin for different OSen (Score:2, Informative)

On a Secure Computing Sidewinder (BSD based):
% ls -l /usr/bin | wc -l
258

On an OpenBSD 2.8 server, minimal install + gcc stuff:
$ ls -l /usr/bin | wc -l
344

On an OpenBSD 2.8 server, full install (including X):
$ ls -l /usr/bin | wc -l
373

On a Mandrake 8.0 server:
$ ls -l /usr/bin | wc -l
1136

On a RedHat 7.1 system with a fairly typical installation:
$ ls -l /usr/bin | wc -l
2203

I want /opt (with subdir's per app) back ;-)

It seems to mean that there's a lot of overlap/duplication in the tool set on Linux distributions versus the centrally managed BSD distributions. A crowded /usr/bin might be a consequence of the "choice is good" Linux philosophy.

Not that I'm saying I disagree with "choice is good" ...
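One caveat about the figures above: `ls -l | wc -l` also counts the "total" summary line that `ls -l` prints first, so each number is one higher than the actual entry count. A quick demonstration in a scratch directory:

```shell
# Three files: `ls -l` yields four lines, a plain `ls` yields three.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

with_total=$(ls -l "$dir" | wc -l)   # three entries plus the "total" line
names_only=$(ls "$dir" | wc -l)      # just the three names
echo "$with_total $names_only"
```

It doesn't change the comparison, though, since every count above is off by the same one.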
• #### How to handle PATH... (Score:3, Interesting)

on Wednesday November 21, 2001 @11:25AM (#2595880)
From my .zshenv, works in .profile too. Could be used also for other path variables. Works for all Operating Systems with a reasonable Bourne Shell.

export PATH

reset_path() {
NPATH=''
}

set_path() {
  if [ -d "$1" ]; then
    if [ -n "$NPATH" ]; then
      NPATH="$NPATH:$1"
    else
      NPATH="$1"
    fi
  fi
}

reset_path
set_path $HOME/bin
set_path /usr/local/gcc-2.95.2/bin
set_path /opt/kde/bin
set_path /usr/lib/java/bin
set_path /usr/X11R6/bin
set_path /usr/local/samba/bin
set_path /usr/local/ssl/bin
set_path /usr/local/bin
set_path /usr/local/bin/gnu
set_path /usr/bin
set_path /bin
set_path /usr/local/sbin
set_path /usr/sbin
set_path /sbin
set_path /usr/ucb
set_path /usr/bin/X11
set_path /usr/ccs/bin
PATH="$NPATH:."
unset reset_path set_path
• #### Clueless... (Score:3, Insightful)

on Wednesday November 21, 2001 @11:30AM (#2595918) Homepage
Mosfet is an emotionally unstable GUI hacker. His knowledge of the long history and tradition of UNIX administration is pathetic. He ignores simple observables, like the fact that PATH searches are more expensive than bin lookups: one executable dir per app would be FAR SLOWER than 2000 executables in a single dir. This is another classic example of not letting programmers, especially GUI programmers, be involved in OS design. For those of you who might be swayed by his foolish arguments, please read the FHS and the last decade of USENIX and LISA papers. Unix systems organization has been openly and vigorously debated for 15 years. It has not been dictated from on high by mere programmers, as at MS. And RedHat is to be applauded for properly implementing the FHS, which is a standard; others like SuSE should be encouraged to become compliant (/sbin/init.d ... mindless infidels :).
• #### I have played both sides of this arg (Score:5, Insightful)

on Wednesday November 21, 2001 @11:33AM (#2595937) Journal
I have been lazy before with my linux box and let package management systems lay out files all over the freakin' place. I have also done things the "right" way (according to my mentor admin anyway :->) with my Solaris box and followed this standard:

/usr/bin - sh*t Sun put in.

Let pkgadd throw your basic gnu commands into /usr/local/bin.

Compile from source all major apps and services (database services, web servers, etc.) and put them into /opt: /opt/daftname

Symlink any executable needed by users into /usr/local/bin (if you think like a sysadmin, you realize most users do not need to automatically run most services).

Any commercial software goes to /opt; put the damn symlink in /usr/local/bin.

Yes, it is extra work, but it keeps your PATH short and your users happy.
This is not a problem with distros or package management systems as much as it is an issue of poor system administration. I also understand it is a mixed approach, with some things put under separate directory structures for each program and some things in a common /usr/local base. Common users do NOT need access to the Oracle or Samba bin. Give them a symlink to sqlplus and they are happy. Even though it is mixed, if you stay consistent across all your boxes, the users are happy. I understand it is tough, but we have the control in *nixes to put things where we want; the deal is to use it.

PATH=/usr/bin:/usr/ucb:/usr/local/bin:.
export PATH

All a regular user needs.
• #### # rm -ff /usr/bin (Score:3, Informative)

<ellem52NO@SPAMgmail.com> on Wednesday November 21, 2001 @11:41AM (#2595984) Homepage Journal
The final solution to this mess. Unless you are hand writing each file in /usr/bin, who cares how many files there are?

And Windows != /usr
Program Files == /usr
• #### In some respects, I agree. (Score:3, Interesting)

on Wednesday November 21, 2001 @11:44AM (#2595995) Homepage
When you consider it, /usr (or /local) was similar in purpose to "program files" (or progra~1 if you want to be specific) and had the best of intentions. Well, we know about which road goes where based on good intentions. At any rate, part of the "problem" is that at a certain point a section of the file system gets unmanageable. Where that point is, quite frankly, varies. RedHat has impressed me with its compatibility, but it does so with static libs. There are times when, god forbid you should wish to compile something, you get gripe messages that your window manager was done under X set of libs, your theme manager under Y's libs, and your shared libs are of version Z. That is just trying to update the WM; god forbid you wish to compile a kernel. And with the static libs, the performance hit is astounding. The other side, as with Slackware, is that shared libraries can be just as unforgiving.
Heh, as a newbie I deleted a link to an ld.so.X. Hint: never, ever do this! ls, ln, mv et al stop working... oops. Stupidity on my part, but, hey, I was a newbie. (finger; fire; burn; learn. simple.)

Back on track. Slack is fast and configurable, but through sheer will, accident, or stupidity it can be broken a lot faster (and in some cases fixed a lot faster). Windows... well, the sword cuts both ways. It enjoys and suffers *both* the good and bad points of RH/SL (or static and dynamic libs).

And, if the above does not either blow your mind or make you nod off, consider OS X.1.1 (.1.1.1....). Under OS X's package system a 'binary/folder/application' (oye) can and does contain static libs. Ok, that can be good/bad. Here is the kicker (and cool part): if it finds *better* or more *up to date* libs, it can use them and ignore what *it* has. If the new libs break the app, or cause problems, the application can be "told" or "made" to use only its own libs, or to update the newer libs. Most will see where that is going. It will be good to keep "static", then use "dynamic", or update the "dynamic/shared" libs. The down side is the potential to fix one application and break 10+ others. This has not happened... yet. However, the *ability* to make or break is there; just no information is given until a spec/CVS set of rules is fleshed out.

I will be the first to admit that the "binary folder" or "fat binary" (arstechnica.com article) idea sounded "less than thrilling"... until you realize the headaches it cures with this kind of file system bloat. Think about it: you have an app, that is really a folder, that you can't see inside/manipulate/fix/break unless you know how *and* have a reason to.

In all three cases there are limits to even the most intelligent of design. Knowing this truth is easy to accept. Finding where it lies and where it breaks down... that is another discussion.
• #### FHS 2.x - Filesystem Hierarchy Standard (Score:3, Interesting)

<rhollan@cleaAUDENrwire.net minus poet> on Wednesday November 21, 2001 @11:48AM (#2596015) Homepage Journal
This is one of the things that the FHS tries to address. I used FHS 2.1 at Teradyne to manage a custom GNU/Linux distro for one of their products [if you purchased NetFlare from them, you should have all the updated GPL goodies and additions I put there on a source companion CD]. While not perfect, it addressed the following issues: 1) separating the O/S from "other" packages; 2) maintaining a sane place to put different packages; 3) supporting the notion of linking to specific package directories from a common place to keep PATH small; 4) staying compatible with a number of "traditional" conventions.

Of course, FHS 2.1 has this concept of "operating system" files and "other" files. Presumably the "operating system" is that which the distro bundler provides... so Red Hat would be free to put as much as it wants under /usr. But this causes a problem if you look at a common standard base for several distros, like the LSB. Do you have a "standard base" part, and a "distro part", and then a "local part"? Clearly what's needed is a hierarchical way of taking an existing "operating system" and customizing it into a "custom operating system". Right now, FHS allows this for the distro bundler and end user, but there is no support for the process iterating.

Of course, my experience has been with FHS 2.1, and I have since moved on to employment elsewhere, so perhaps the current FHS addresses these issues.
• #### Why windows doesnt... (Score:5, Funny)

on Wednesday November 21, 2001 @11:51AM (#2596029) Journal
The reason windows apps can happily install binaries in any directory is because they then go install their shortcuts in the Start menu, or on the desktop. Of course, if you want to run one from a command line interpreter, you're pretty stuck.
So now my windows Start menu has 1000 items in it, but at least they are arranged hierarchically in 850 vendor program groups...

Baz
• #### I think he is quite correct (Score:3, Informative)

on Wednesday November 21, 2001 @11:55AM (#2596043) Homepage
Most major distros install quite a bit of stuff by default that 1) you will probably never use, 2) you probably don't know what it is, and 3) if it's a server, you don't need anyway.

This is one of the reasons I created Beehive Linux [www.beehive.nu]. It aims to be secure, stable, clean, FHS compliant, and optimized for hardware built in this century. Current version is 0.4.2, with 0.4.3 out in a week or so.

On one point, however, I must disagree with Mosfet:

The most obvious thing is separate the big projects like desktop projects into their own folders under /usr

The FHS states:

/opt is reserved for the installation of add-on application software packages. A package to be installed in /opt must locate its static files in a separate /opt/<package> directory tree, where <package> is a name that describes the software package.

Beehive puts large packages like apache, mysql, and kde2 under /opt in their own subdirectory, i.e. /opt/kde2. I think this is a much better solution than cluttering up the /usr hierarchy, and it makes it very simple to test a new version of a package without destroying the current setup.
• #### He missed a major point of the FSS (Score:3, Insightful)

by Anonymous Coward on Wednesday November 21, 2001 @11:57AM (#2596061)
One of the major points of the FSS is to organize files by type. What I mean by that is executables are placed together, configuration files are placed together, man pages are placed together, etc. This is important for a number of reasons:

- systems may need a small partition with all files needed to boot
- configuration files need to be on a RW filesystem, while executables can be RO
- many other reasons (read the FSS)

That doesn't mean all executables need to be in a single directory under /usr/bin.
I agree it would be nice to come up with a good way to allow subdirectories and change the FSS accordingly. Just don't argue that all files related to a given piece of software should be in a single directory, as some have requested. That will make the life of an administrator of large systems even more difficult. My wife works in a place that does that, and their system is nearly impossible to maintain.

Sure, the FSS isn't perfect, but I have yet to see another system that does as good a job. Don't throw it away simply because you don't understand it, or even worse, because its biggest fault is a directory with 2000 entries.

-- YAAC (Yet Another Anonymous Coward)
• #### Look at the "modules" project (Score:3, Informative)

on Wednesday November 21, 2001 @12:00PM (#2596073)
I agree that this is a Linux-related issue that mostly stems from laziness. I have been using the modules [modules.org] approach for tool management for years with very good results; even half a decade ago it was more advanced than any Linux approach out there today.

With this approach, each tool/version combination gets its own directory, including subdirectories for libraries, source code, configuration files, etc. You can then use a "module" command to dynamically change your PATH, MANPATH, ... environment to reflect the tools you want to use. (Note that this supports the usage of a tool; it is therefore not a replacement for package management tools like rpm, which are mainly concerned with installation.) Each tool/version combination comes with an associated modulefile (which has a tcl-like syntax) where you can influence a user's system environment upon loading/unloading the module. It is also possible to e.g. create new directories, copy default configurations, or do platform-specific stuff for a tool (which greatly helps users less fluent in Unix, since they do not have to care about stuff like shell-specific syntax for setting environment variables).
It also allows you to give tool-specific help, e.g.

$ module whatis wordnet
wordnet: Lexical database for English, inspired by psycholinguistic theories of human memory.

This is also very helpful if you want to keep different versions of the same tool (package, library) around and switch between them dynamically, e.g. for testing purposes (think different jdks, qt-libraries, etc.). With modules, you can e.g. do a simple
module switch jdk/1.2.2 jdk/1.3.1
and run your tests again. And you never have to worry about overwriting libraries, configuration files, etc. even if they have the same name (since they are kept in a subdirectory for each version).

For our institute I've set up a transparent tool management system that works across our Linux/Solaris/Tru64 platforms. All tools are installed this way (except the basic system commands, which still go into /bin etc.).

Of course, it's a lot of work to start a setup like this, but in a complex environment it is really worth it, especially in the long run.
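The core of what a `module switch` does can be sketched in plain shell: swap which version-specific bin directory sits at the front of PATH. (The real modules tool also adjusts MANPATH, handles unloading and per-shell syntax, and much more; the jdk paths and stub binaries below are assumptions for the sketch, set up in a scratch directory.)

```shell
# Each tool/version combination gets its own directory with its own bin/.
root=$(mktemp -d)
mkdir -p "$root/jdk/1.2.2/bin" "$root/jdk/1.3.1/bin"
printf '#!/bin/sh\necho 1.2.2\n' > "$root/jdk/1.2.2/bin/java"
printf '#!/bin/sh\necho 1.3.1\n' > "$root/jdk/1.3.1/bin/java"
chmod +x "$root/jdk/1.2.2/bin/java" "$root/jdk/1.3.1/bin/java"

# "Loading" a module prepends its bin directory; "switching" prepends the
# other version's instead, so the same command name resolves differently.
before=$(PATH="$root/jdk/1.2.2/bin:$PATH"; java)
after=$(PATH="$root/jdk/1.3.1/bin:$PATH"; java)
echo "$before -> $after"
```

Since each version keeps its files in its own tree, nothing is overwritten when the other version is "loaded".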
• #### Specialize! (Score:3, Insightful)

on Wednesday November 21, 2001 @02:59PM (#2597215)

The biggest problem with Linux is, in my opinion, the fact that people try to solve all the problems of the world with a single solution. Red Hat is a worthwhile cause, but I don't think a single distro can handle every possible use of Linux. I thought Linux was about choice. In that case, there should be many smaller distributions aimed at specific (or at least more specific) purposes.

No, I'm not a luser, nor am I a newbie. I know that there are countless distros out there, which fit on a single floppy, six CDs, and everything in between. (I've purchased so many distributions for myself and for others that I'm drowning in Linux CDs.) But everybody and his uncle uses Red Hat. (I personally like SuSE a LOT better, because it is far better organized in my opinion.)

Many common problems make the file system layout and package management suck. I don't mean to start a flamewar, but this problem is far smaller on FreeBSD, where the file system layout is a lot better organized than that of a Red Hat Linux system. (It's even better organized than a SuSE system.) The ports and packages collection, which works through Makefiles, makes installation and removal of many programs very easy, with dependency checks. Unless I'm imagining things, it does find dependencies that you install manually, as long as they're where the system expects them. However, glitches still exist, mainly in the removal of software, that require user intervention to remove some remaining files and directories.

When it comes down to it, I think that package management systems--whether they're Debian's system, RPMs, or the *BSDs' ports and packages--are supposed to serve as a shortcut for the system administrator, who still knows how to manage programs manually. The Linux community seems to have forgotten this, and expects package management to be a flawless installation system for any user with any amount of experience. Unfortunately, this is not the case, and it would be extremely difficult, maybe impossible, to make such a system. I believe this doesn't matter.

Skilled admins need control and flexibility over their programs. This is especially true for critical servers, but also applies to workstations. If the setup they want can be achieved with a package manager, they'll use it. If not, they can opt to build the program from source, or, if this installation takes place often, they might make their own package, perhaps customizing paths or configuration files for site-specific purposes. A well-organized hierarchy is very important.

Novice users are very different. They just want to install this thing called Linux from the CD and surf the web or burn some MP3s. For them, the solution isn't a great package management system, because a novice user probably doesn't know where to obtain programs. In some cases, there are hundreds of similar programs to choose from--novices can't handle all that choice! The solution for them is a distro that supports a very specific set of programs, and supports them well:

• Everything should be managed through clickable graphical dialogs. Enabling web serving or whatnot would take one click on a checkbox.
• The installation would be extremely simple:
• Where possible, there are no choices. You simply install the distro and get all the "standard" programs, precompiled, preconfigured and ready to use.
• During installation, a preconfigured image of a 500 megs (or so) partition would just be copied verbatim onto a partition on the user's hard drive.
• Another partition, taking up the remaining available space, would be mounted on /home.
• Installation could happen in 5 minutes flat.
• A single desktop environment would be present. Novice users shouldn't have to try ten different window managers and docking programs and whatnot. Choose something and put it on this distro. If you want to support multiple desktop environments, package multiple distros.
• The same rule holds true for all programs that would come with the installation. Instead of making one huge distro that supports everything from 10,000 text editors to biological analysis programs, make 10 different distros. One would be for "Home" use and would include stuff like a word processor and spreadsheet, a banking program, web browser, email client, calendar program, MP3 player, video editing software, and whatever else you want to include. These don't even need to be 100% free software. Put some quality programs on the CD and charge for them.
• To make a long story short, limit the user's exposure to problems. Every choice you present to the user is a possible problem. We're talking about people who don't know where the "any" key is for crying out loud.

Finally, I would recommend that in the spirit of giving back to the community, any admin who makes his own packages should submit them back to the developer for distribution to others. (Unless these packages are designed for site-specific purposes, of course.)

Oh yeah, and I almost forgot the obligatory "oh well."
