
Rage Against the File System Standard

pwagland submitted a rant by Mosfet on file system standards. I think he's somewhat oversimplified the whole issue, and definitely assigned blame wrongly, but it warrants discussion all the same. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?
  • by Hektor_Troy ( 262592 ) on Wednesday November 21, 2001 @10:43AM (#2595641)
    and just install in /?

    Who in their right mind places stuff outside of a program specific folder, if it's not gonna be used in multiple programs (like shared libraries)?
  • by TechnoVooDooDaddy ( 470187 ) on Wednesday November 21, 2001 @10:45AM (#2595648) Homepage
    imo, we need a better command path system thingy that allows easier categorization of executables and other stuff... Win32 has the System32 (or System) directory, *nix has /usr/bin, /usr/share/bin, /usr/local/bin etc...

    I don't have a solution, but i'll devote a few idle cycles to it...
  • by nll8802 ( 536577 ) on Wednesday November 21, 2001 @10:46AM (#2595657) Homepage
    I think it is better to install all your programs' binaries under a subdirectory, then symlink the executables into the /bin, /usr/bin or /usr/local/bin directories. This gives you a much easier way to remove programs that don't have an uninstall script included, and is a lot more organized.
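
    Something like this, assuming a hypothetical package "foo" (paths illustrative):

    # install everything under the package's own prefix
    mkdir -p /usr/local/foo
    tar -xzf foo-1.0.tar.gz -C /usr/local/foo

    # expose only the executable on the default PATH
    ln -s /usr/local/foo/bin/foo /usr/local/bin/foo

    # removal then needs no uninstall script
    rm -rf /usr/local/foo
    rm -f /usr/local/bin/foo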
  • Package Management (Score:4, Insightful)

    by Fiznarp ( 233 ) on Wednesday November 21, 2001 @10:47AM (#2595659)
    ...makes this unnecessary. When I can use RPM to verify the purpose and integrity of every binary in /usr/bin, I don't see a need for separating software into a meaningless directory structure.

    DOS put programs in different folders because there was no other way to tell what package the software belonged to.
  • by TheM0cktor ( 536124 ) on Wednesday November 21, 2001 @10:47AM (#2595661) Homepage
    in the dark old unixish days whenever you bought a bit of commercial software (remember that? buying? :) it'd install itself into /usr/local/daftname/ or /opt/daftname/ or somewhere. This meant there'd be a huge path variable to manage, which was a nightmare. The reason the windows equivalent isn't a problem is that windows is not command-line based - users access programs through a link in a start menu (gross oversimplification but you get the idea). This simply doesn't translate to the command line paradigm. So a simple answer - nice path variables, neat directory structures, usable command line interfaces: pick any two. ~mocktor
  • Response (Score:3, Insightful)

    by uslinux.net ( 152591 ) on Wednesday November 21, 2001 @10:50AM (#2595679) Homepage
    You have to use the package manager.


    And you should, normally. If your system installs binutils as an RPM, DEB, or Sun/HP/SGI package, well, you _should_ use the package manager to upgrade/remove it. After all, if you don't, you're going to start breaking your dependencies for other packages. That's why package managers exist!


    In some respects, Linux is better than many commercial unices. SGI uses /usr/freeware for GNU software. Solaris created /opt for "optional" packages (what the hell is an optional package? isn't that what /usr/local is for?!?!) At least all your system software gets installed in /usr/bin (well, unless you're using Caldera, which puts KDE in /opt... go figure), and if you use a package manager as intended, it's easy to clean them up. The difference between Windows and Linux/Unix is that the Linux/Unix package managers ARE SMART ENOUGH not to remove shared libraries unless NOTHING ELSE IS DEPENDING ON THEM! In Windows (and I haven't used it since 98 and NT 4), if you remove a package and there's a shared library (DLL), you have the option of removing it or leaving it - but you never KNOW if you can safely remove it, overwrite it, etc.


    I agree that there should be a new, standard directory structure, but I disagree that every package in the world should have its own directory. If you're using a decent package manager, included with ANY distro or commercial/free Unix variant, there's little need to do so.
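
    For what it's worth, a sketch of that workflow in rpm terms (package and file names illustrative):

    # which package owns this binary?
    rpm -qf /usr/bin/ld

    # verify the integrity of everything the package installed
    rpm -V binutils

    # removal consults the dependency database first; it fails
    # with an error if anything still depends on the package
    rpm -e binutils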

  • by codexus ( 538087 ) on Wednesday November 21, 2001 @10:52AM (#2595687)
    The database-like features of attributes/index of the BeOS filesystem could be an interesting solution to the problem of the PATH variable.

    BeOS keeps a record of all executable files on the disk and is able to find which one to use to open a specific file type. You don't have to register it with the system or anything; if it's on the disk it will be found. That makes it easy to install BeOS applications in their own directories. However, BeOS doesn't use this system to replace the PATH variable in the shell, but one could imagine a system that does just that.
  • by Meleneth ( 104287 ) on Wednesday November 21, 2001 @10:52AM (#2595688) Homepage
    *sigh*

    has anyone heard of symlinks? the theory is very simple - install the app into /opt/foo or wherever, then symlink to /usr/local/bin. yawn.

    or is that one of those secrets we're not supposed to tell the newbies?
  • by Haeleth ( 414428 ) on Wednesday November 21, 2001 @10:53AM (#2595694) Journal
    This is somewhat parallel to the situation common in Windows, where every new application tries to place its shortcuts in a separate folder off Start Menu/Programs. It's common to see start menus that take up two screens or more, whereas everything could be found much faster if properly categorised. MS made things worse in Win98 by making the menu non-alphabetical by default.

    Limiting bad organisation to Red Hat is silly. The only Linux distros I've tried are Red Hat and Mandrake, both of which are equally poor in this regard. Nor, I have to say, does the FSS make it any easier to organise a hard drive properly. Is the /usr/local distinction useful, for example? Wouldn't it make more sense to have a setup like /usr/apps, /usr/utils, /usr/games, /usr/wm, and so on - to categorise items by their function, rather than by who compiled them?

    The whole /home thing is equally confusing to a Windows migrant. Yes, *nix is a multi-user OS. But is that a useful feature for the majority of home users? Providing irrelevant directories is a sure-fire way to confusion.

    It's impossible to have a perfectly organised hard disk, of course. You can't fight entropy.
  • Why? (Score:5, Insightful)

    by DaveBarr ( 35447 ) on Wednesday November 21, 2001 @10:59AM (#2595719) Journal
    The one thing this guy fails to answer is "why is it bad that I have 2000 files in /usr/bin?". There are no tangible benefits I can see to splitting things up, other than perhaps a mild performance gain, and satisfying someone's overeager sense of order.

    Failing to answer that, I think his whole discussion is pointless.

    Blaming it on laziness, on not wanting to muck with PATH, is wrong. Managing your PATH is a real issue, something an administrator with any experience should understand. In the bad old days we came up with ludicrous schemes that people would run in their dot files to manage users' PATHs. I'm glad those days are over. Not having to worry about PATH is a tangible benefit. Forcing package maintainers to use a clear and concise standard on where to put programs is a tangible benefit.

    Perhaps I'm biased because these past many years I've always worked with operating systems (Solaris, Debian, *BSD) that have package management systems. I don't care where they get installed, as long as when I install the package and type the command it runs. This is a Good Thing.
  • by ichimunki ( 194887 ) on Wednesday November 21, 2001 @11:00AM (#2595734)
    Yes, but dead symlinks are easy to see (on my system they make an annoying blinking action) and scripts can be written that recurse down the directory tree looking for invalid links. Another positive argument in favor of this approach is that many packages include several binaries, only one or two of which are ever going to be called directly from the command line in a situation where using a full path is not convenient. This also makes version control a lot more obvious (and having simultaneous multiple versions a lot easier, too).
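
    A sketch of such a sweep; the first form assumes GNU find, the second is portable:

    # list dangling symlinks under /usr/local/bin (GNU find)
    find /usr/local/bin -xtype l

    # portable equivalent: print links whose targets don't resolve
    find /usr/local/bin -type l ! -exec test -e {} \; -print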
  • Tradeoffs/union fs (Score:2, Insightful)

    by apilosov ( 1810 ) on Wednesday November 21, 2001 @11:02AM (#2595741) Homepage
    Here, the tradeoff is being able to quickly determine the files belonging to a particular package/software vs time spent managing PATH/LD_LIBRARY_PATH and all sorts of other entries.

    Also, the question is how should the files be arranged? By type (bin, share/bin, lib, etc) or by package?

    In Linux (Red Hat/FSSTND), the emphasis was placed on arranging files by type, and file management was declared a separate problem, with rpm (or other package managers) as the solution.

    There is another solution which combines best points of each:

    Install each package under /opt/packagename. Then use unionfs to join all the /opt/packagename trees under /usr. Thus you will still be able to figure out which package has which files without using any package manager, but at the same time you are given a unified view of all installed packages.

    Unfortunately, unionfs never worked on Linux, and on other operating systems it's very tricky. (For instance: how do you ensure that the underlying directories won't have files with the same name? And if they do, which one is visible? What happens when a file is deleted? etc.)
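
    (Much later, Linux's overlayfs implemented exactly this kind of read-only join; a sketch with illustrative /opt paths, where earlier lowerdir entries win on name collisions:)

    # merge several package trees into one read-only view
    mount -t overlay overlay \
        -o lowerdir=/opt/foo:/opt/bar:/opt/baz /usr/merged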
  • Re:Response (Score:3, Insightful)

    by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Wednesday November 21, 2001 @11:02AM (#2595744)

    Ok, we all hate windows, but spreading FUD is useless, and makes you look as bad as they do. Every windows app I have _EVER_ uninstalled (and there have been a lot!) _ALWAYS_ says something along the lines of "This is a shared DLL. The registry indicates no other programs are using it. I will delete it now unless you say otherwise". This sounds pretty much like it knows what's being used and what isn't. Unless you get your registry corrupted, which wouldn't be any different from having your package database (RPM or dpkg) corrupted.

  • Six of one... (Score:2, Insightful)

    by Marx_Mrvelous ( 532372 ) on Wednesday November 21, 2001 @11:08AM (#2595780) Homepage
    Half a dozen of the other. Of course there are pros and cons to both ways: having all executables in one location (or O(1) locations) makes finding programs O(1), with a PATH length of O(1). Having one dir/"folder" for each program (or O(X) directories) would mean O(X) search time for a particular program and O(X) entries in your PATH. On the other hand, finding and deleting entire packages becomes much harder if not all filenames belonging to that package are known. Personally I think it doesn't matter either way.
  • Clueless... (Score:3, Insightful)

    by LunaticLeo ( 3949 ) on Wednesday November 21, 2001 @11:30AM (#2595918) Homepage
    Mosfet is an emotionally unstable GUI hacker. His knowledge of the long history and tradition of UNIX administration is pathetic. He ignores simple observables, like the fact that PATH searches are more expensive than lookups within a single bin directory. One executable dir per app would be FAR SLOWER than 2000 executables in a single dir.

    This is another classic argument for not letting programmers, especially GUI programmers, be involved in OS design.

    For those of you who might be swayed by his foolish arguments, please read the FHS, and the last decade of USENIX papers and LISA papers. Unix systems organization has been openly and vigorously debated for 15 years. It has not been dictated from on high by mere programmers, as at MS. And Red Hat is to be applauded for properly implementing the FHS, which is a standard; others like SuSE should be encouraged to become compliant (/sbin/init.d ... mindless infidels :).
  • by ACK!! ( 10229 ) on Wednesday November 21, 2001 @11:33AM (#2595937) Journal
    I have been lazy before with my linux box and let package management systems lay out files all over the freakin' place.

    I have done things the "right" way (according to my mentor admin anyway :->) with my Solaris box and followed this standard:

    /usr/bin - sh*t Sun put in.

    Let pkgadd throw your basic gnu commands into: /usr/local/bin

    Compile all major apps and services from source (database services, web servers, etc.) and put them into /opt:
    /opt/daftname

    symlink any executable needed by users into /usr/local/bin
    (if you think like a sysadmin you realize most users do not need to automatically run most services)

    Any commercial software goes to /opt and put the damn symlink in /usr/local/bin.

    Yes, it is extra work, but it keeps your PATH short and sweet and your users happy. This is not a problem with distros or package management systems as much as it is an issue of poor system administration.

    I also understand it is a mixed approach, with some things put under separate directory structures for each program and some things in a common /usr/local base.

    Common users do NOT need access to the Oracle or Samba bin. Give them a symlink to sqlplus and they are happy. Even though it is mixed if you stay consistent across all your boxes then the users are happy.

    I understand it is tough, but in *nixes we have the control to put things where we want; the deal is to use it.

    PATH=/usr/bin:/usr/ucb:/usr/local/bin:.
    export PATH

    All a regular user needs.
  • by mrsbrisby ( 60242 ) on Wednesday November 21, 2001 @11:34AM (#2595950) Homepage
    i'd like to point out that djb came up with a wonderful solution to this very problem.

    http://cr.yp.to/slash.html [cr.yp.to]

    it's not perfect, but it divides the filesystem (mostly) by maintainer - similarly to how packages are already deployed. but beyond that, it creates symlinks into one directory (in his example: /command) to keep $PATH sane.
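
    roughly the convention the linked page describes (package name and version here are illustrative):

    /package/admin/daemontools-0.76/    # everything the package ships
    /command/svscan -> /package/admin/daemontools-0.76/command/svscan
    # $PATH then only ever needs /command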

    package management _still_ makes my life easier - i don't like to hunt around packages manually. but if the filesystem mimics the packages, we have solved the three biggest problems with package management at the same time:

    • incorrect dependency names (moot: every package has a well-formed name)
    • deleting too much (packages are stacked into separate directories)
    • what happens when the package database goes "poof"

    i'm not saying don't use package management. i'm not saying don't use rpm. i'm actually agreeing with the topic for once and suggesting that we actually do need to change the filesystem.

  • I agree (Score:1, Insightful)

    by Anonymous Coward on Wednesday November 21, 2001 @11:46AM (#2596002)
    I think the way to fix the problem is the following: add subdirectories to "/usr/bin" (and "/usr/lib", etc). You would have directories like "/usr/bin/gnome", "/usr/bin/kde", "/usr/bin/X11R6". Eight to thirty-two subdirectories would yield a highly organized file structure, and you would only have to add eight to thirty-two directories to your path.

    I'd rather subdivide this way than the windows way. That is, I'd rather have:

    "/usr/bin/gnome"
    "/usr/lib/gnome"
    "/usr/share/gnome"

    than:

    "/usr/gnome/bin"
    "/usr/gnome/lib"
    "/usr/gnome/share"

    This way a smart path system could be set up, where every subdirectory under "/usr/bin" is in your path.
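
    A minimal sketch of that setup, assuming a POSIX shell run at login:

    # put /usr/bin and every one of its subdirectories on PATH
    PATH=/usr/bin
    for d in /usr/bin/*/; do
        [ -d "$d" ] && PATH="$PATH:${d%/}"
    done
    export PATH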

    -Nathan
  • by Anonymous Coward on Wednesday November 21, 2001 @11:57AM (#2596061)
    One of the major points of the FSS is to organize files by type. What I mean by that is executables are placed together, configuration files are placed together, man pages are placed together, etc. This is important for a number of reasons:

    - systems may need a small partition with all files needed to boot
    - configuration files need to be on a RW filesystem, while executables can be RO.
    - many other reasons (read the FSS)

    That doesn't mean all executables need to be in a single directory under /usr/bin. I agree it would be nice to come up with a good way to allow subdirectories and change the FSS accordingly. Just don't argue that all files related to a given piece of software be in a single directory as some have requested. That will make the life of an administrator of large systems even more difficult. My wife works in a place that does that and their system is nearly impossible to maintain.

    Sure the FSS isn't perfect, but I have yet to see another system that does as good a job. Don't throw it away simply because you don't understand it, or even worse, because its biggest fault is a directory with 2000 entries.

    -- YAAC (Yet Another Anonymous Coward)

  • Re:Why? (Score:3, Insightful)

    by hexix ( 9514 ) on Wednesday November 21, 2001 @12:02PM (#2596093) Homepage
    Windows doesn't have package management, that's why you have those problems on windows. Comparing RPM, DPKG, etc to the uninstall programs in windows just doesn't work.

    RPM and DPKG know every single file that was installed, and will remove every single file that was installed. And it actually keeps a database of dependencies so it won't let you uninstall a program if another program depends on it.

    In the windows world, a program has the option of having an uninstall available. But from what I can tell it's really just a cheesy hack to get uninstall features without going through the work of setting up a nice package manager. It seems to just have a list of the files it supposedly installed, marking some as shared, and then on uninstall it asks the user whether they want to remove the shared files, with no knowledge of whether or not they're being used by other programs.

    That's why we don't need subdirectories for programs. Although it probably wouldn't be a bad thing, because it would help people find global config files and stuff for programs. But really, if you know how to use RPM and DPKG there isn't a need, as you can ask them which files belong to a program, among other things.
  • by Galvanick Lucipher ( 52042 ) on Wednesday November 21, 2001 @12:08PM (#2596137)
    Every unix administrator I know does _not_ do it that way. That way seems crazy to me. You still end up with 1500 links in /usr/local/bin and without a package manager you have no dependency tracking, no automated update system, nada.

    I would much rather have a good package manager. I don't care if there are 2000 files in /usr/bin as long as the filesystem driver can handle it (and any good filesystem can) and I can do "rpm -qf *" (or equivalent) and see the package ownership of every file in the directory. This whole thing is a non-issue. If you do "ls /usr/bin" and get freaked out by the size of the output you need to change your preconceptions, not your filesystem.
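
    A sketch of that kind of audit, tallying /usr/bin by owning package (rpm syntax):

    # map every file in /usr/bin to its package, counted per package
    cd /usr/bin && rpm -qf * | sort | uniq -c | sort -rn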
  • by Anonymous Coward on Wednesday November 21, 2001 @12:20PM (#2596213)
    i think it would be much better to have apps in separate directories and links to the binaries somewhere like /usr/bin (or /usr/links ?)

    so to install an app you just unpack it and make the links,
    and to uninstall you delete the app dir and the invalid symlinks. it should be easy to automate both with a simple shell script.

    sry for bad english (i'm not native speaker)
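
    a sketch of that automation (the app name and the /usr/links location are illustrative):

    # install: unpack under /usr/apps, link the binaries
    APP=foo-1.0
    mkdir -p /usr/apps/$APP
    tar -xzf $APP.tar.gz -C /usr/apps/$APP
    for f in /usr/apps/$APP/bin/*; do
        ln -s "$f" /usr/links/
    done

    # uninstall: remove the app dir, then sweep dangling links
    rm -rf /usr/apps/$APP
    find /usr/links -type l ! -exec test -e {} \; -exec rm {} \;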

  • by jcostom ( 14735 ) on Wednesday November 21, 2001 @12:22PM (#2596221) Homepage
    The alternative? Simple. /opt.

    Mosfet's not talking about a new directory for every little application. He's talking about moving out stuff like KDE and GNOME. So instead of just having /usr/bin in your $PATH, you would also include /opt/gnome/bin and/or /opt/kde/bin. Yes, this makes your path a bit larger, but unmanageable? Hardly.

    I just checked on one of my PCs that has KDE2 installed (from the RH 7.2 RPMs), and there are over 200 files that match /usr/bin/k*. The only one that wasn't a part of KDE was ksh. My /usr/bin has 1948 files in it. There's a 10% reduction with one change. I don't have GNOME installed on this box, so a similar comparison isn't really possible. However, I imagine that the number would be similar if not greater for GNOME.

    It's not like he's suggesting we sacrifice goats in the street. He's suggesting we actually implement what the FSS says.

  • by MarkCC ( 40181 ) on Wednesday November 21, 2001 @12:23PM (#2596224)

    The system does not go through all of the directories in the path every time you type a command. No shell that I know of is stupid enough to do that.

    Shells do a lot of caching. The most common strategy these days is to automatically regenerate the path cache every time you change your PATH. Many shells also have a way of manually directing them to rebuild the cache.

    With an intelligently designed cache, the memory-use difference between caching binaries from a small number of huge directories and from a huge number of small directories is small to zero.

    That said, I still disagree with Mosfet. I've also done time as a sysadmin. Personally, I think that having the binaries stored together is preferable, because I'm capable of using a package manager to manage my applications, but many of my users find it extremely difficult to deal with paths. (Not to mention the degree of sensitivity it produces when you change a system. If I use RPM to install a new version of something, then the RPM database is modified with information about the new version. If I install something in a way that modifies the directory hierarchy, then I have to make sure that every user of my system correctly modifies their path.)

    Personally, I think RPM-style package managers are a huge step forward, and they make the admin's job a lot easier. Why should I care that there are thousands of files in my /usr/bin, as long as I have a useful tool for managing them?

    Now, data files are a different matter... But they get separate directories in the current style. So that's not a problem.

  • by mendepie ( 228850 ) <mende@@@mendepie...com> on Wednesday November 21, 2001 @12:24PM (#2596230) Homepage
    What we need is a *limited* way to have a single $PATH definition that will address arbitrary packages. I was thinking about

    PATH="$PATH /opt/*/bin"

    This would look in /opt once and cache the dirread so the hit for this only happens once.

    Of course this adds the problem of ordering (/opt/a/bin/foo vs. /opt/b/bin/foo).
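
    Shells won't expand a glob inside $PATH during lookup, but you can expand it once at login; a sketch (POSIX sh, /opt contents illustrative):

    # expand the glob once and join the matches with colons
    # (caveat: if nothing matches, the literal pattern is appended)
    PATH="$PATH$(printf ':%s' /opt/*/bin)"
    export PATH

    The glob's alphabetical order then decides the /opt/a/bin/foo vs. /opt/b/bin/foo question, for better or worse.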
  • by BetaJim ( 140649 ) on Wednesday November 21, 2001 @12:33PM (#2596286)
    give each app its very own directory structure with e.g. the directories bin, man, etc for binaries, documentation and configuration. In the root of each package specify a meta information file (preferably xml based) with information about how to integrate the program with the system

    I use a tool that does most of those things. Check out encap [uiuc.edu] and the package manager epkg [uiuc.edu].

    I install most things from source. What I do is specify a prefix during ./configure and have the package installed to, say, /usr/local/encap/foo-1.2. Then I use epkg to symlink everything into the /usr/local directories. This makes package upgrades easy, and a simple ls shows what is installed. Very handy.
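
    A sketch of that workflow (the package name is illustrative, and the exact epkg invocation is from memory, so check its docs):

    ./configure --prefix=/usr/local/encap/foo-1.2
    make && make install
    # epkg then symlinks the encap'd tree into /usr/local
    epkg foo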

  • by heh2k ( 84254 ) on Wednesday November 21, 2001 @12:33PM (#2596289) Homepage
    NO NEED FOR A PACKAGE MANAGER

    one word: dependencies

  • by bc3-au ( 538157 ) on Wednesday November 21, 2001 @12:35PM (#2596300) Homepage
    As several people point out, install the packages into their own area (/usr/local/package ?) and then symlink the binaries you require.

    All well and good, but this sort of thing gets on my nerves:

    mkdir /usr/local/samba
    cd /usr/local/samba

    # /usr is a very stupid place to keep logfiles - /var is for dynamic stuff
    mkdir /var/log/samba
    ln -s /var/log/samba var
    # And why the &&&*&^%&& would I want manpages in their own little trees - 1 per package where I can't read them without stupid man options ?????
    ln -s /usr/local/man man

    # Now we can get to and install the bloody thing - because of the symlink the manpages will be put with the others where man and apropos can find them
    cd /usr/src/samba/source
    make install

    cd /usr/local/bin
    ln -s /usr/local/samba/bin/smbstatus smbstatus
    .... etc

    The trouble is that many packages install multiple classes of data into their trees (bloody postgres will whack a database area onto your /usr partition if you don't watch it carefully)

    This is especially a problem if you're setting up network shared partitions. (netboot anyone ?)

  • by -brazil- ( 111867 ) on Wednesday November 21, 2001 @12:45PM (#2596365) Homepage
    "Could you help me please? You see, I just wrote this new shell script, and the shell says it can't find it, but I didn't mistype it, and I did set the executable bit..."


    Imagine the above, 5 times a day, in greatly varying shades of cluefulness and politeness...

  • by omega9 ( 138280 ) on Wednesday November 21, 2001 @01:20PM (#2596566)
    I've been slowly reading these posts and it seems like, while there are a few genuine issues, there are also a lot of lazy complainers out there. Aren't we all goofy for our favorite operating system because it's 'open' and we can 'do what we want'? Well then, freakin' do it.

    The parent poster, for example, is trying so hard to impress with his knowledge of the history of /var (available in pretty much any *nix book), yet fails miserably when it comes to understanding that there is no gun to his head forcing him to keep html in /var/www. I agree, I think /var/www is an odd place for html data, but there's always the option of changing your http root directory, moving it somewhere else and symlinking back to /var/www, or a few other options.

    As for the posters mentioning how the Dark Side (Windows) does it, remember that even though Windows has the Progra~1 structure to keep things separated, if any of that stuff is needed at a system level or if any of those programs need to call proprietary libraries, many times they will dump bloat into your /winnt or /winnt/system(32) directories or add to your PATH. I hate it when I'm uninstalling and it asks something like "Would you like to remove C:\Winnt\System32\SGF32.DLL? Some programs may still be using it!" It turns into a lose-lose situation: I'm either going to break some software or create complete bloat. And with Windows you're pretty much stuck with what you've got. Even if there were symlinks in Windows you still wouldn't be able to create a makeshift /usr/bin for yourself. Most of the programs keep their own libraries in their own directories and would require a "working path" pointing to them to operate.

    There are a few people in here that seem to have found personal solutions for getting around FSS quirks. Quirks there are, but with a few symlinks, #!'s, and other toys it's possible to build a very comfortable and logical system.

    Just be careful you're not turning into Twoface. It's not really effective to preach about Open Source virtues and then turn around and bitch about something when it's those same virtues that will solve your problems.
  • by monksp ( 447675 ) <monksp@monksp.org> on Wednesday November 21, 2001 @01:38PM (#2596717) Homepage
    If you use a decent system (read Debian ;) then the package wouldn't be available to install if all its dependencies weren't available either. So you wouldn't have this problem.

    I don't think that's really what he's talking about. With both rpm and apt, if you compile a library from source, the package manager won't consider said library to be installed. So once you upgrade your library, the pm will still tell you 'Library foo.so not available' and refuse to install (or worse, install a different copy when you're not looking).

    This becomes especially nasty when a package maintainer hasn't updated their package yet, and you need a bugfix/feature that the newer version has.

  • by Dr. Evil ( 3501 ) on Wednesday November 21, 2001 @02:19PM (#2596965)

    I've been hacking with this idea in my head. It seems to make the most sense. It is a sort-of multidimensional file system, where every file has to be placed in the dimensions in which it belongs. The tree is used only as a single representation of a single dimension.

    There are three reasons I can think of for this.

    • Package management (checking out program configs etc. without surfing the whole directory hierarchy)
    • System maintenance (splitting volumes, managing space and performance tweaking)
    • User friendliness!!! ( users can hit rm -rf and never have to worry about messing anything up! )

    I figure if MS does something like this, it would save them from their drive-letter hell, and solve one of their greatest disadvantages when compared to UNIX... the impact of such a scheme on UNIX would be minimal.

    Database systems would probably be the best place to start looking for methods to do this sort of thing.

  • by Znork ( 31774 ) on Wednesday November 21, 2001 @02:30PM (#2597040)
    Definitely.

    Use. The. Package. Manager.

    If there's one thing that is a total complete pain in the ass with SysV, it is the theory of installing separate applications in /opt. Well, nice and fine when you have /opt/oracle, but when you have about 50 to 100 applications in /opt, not to mention library paths, not to mention having those 2000 symlinks in /usr/local instead, and then keeping track of them, or keeping track of and managing the paths, library paths, etc, agh... I maintain systems like this every day, and if it wasn't for esoteric things like self-contained HA cluster packages I'd throw it all in /usr and be rid of all the grief that clean separation causes.

    If someone wants to go back and do it all manually, go ahead. Hell, compile it all from source and decide exactly where to put everything, but I got over that several years ago and I'll take the rpm managed stuff in /usr/bin _please_ and my own compiled cruft in /usr/local.

    True, most people won't compile their own, but those most people _SHOULD_ be using the package manager and _nothing_ else to manage where their applications get installed, or they're gonna break 'em anyway.

    The virtual separation the package manager provides is enough.
  • Specialize! (Score:3, Insightful)

    by rice_burners_suck ( 243660 ) on Wednesday November 21, 2001 @02:59PM (#2597215)

    The biggest problem with Linux is, in my opinion, the fact that people try to solve all the problems of the world with a single solution. Red Hat is a worthwhile cause, but I don't think a single distro can handle every possible use of Linux. I thought Linux was about choice. In that case, there should be many smaller distributions aimed at specific (or at least more specific) purposes.

    No, I'm not a luser, nor am I a newbie. I know that there are countless distros out there, which fit on a single floppy, six CDs, and everything in between. (I've purchased so many distributions for myself and for others that I'm drowning in Linux CDs.) But everybody and his uncle uses Red Hat. (I personally like SuSE a LOT better, because it is far better organized in my opinion.)

    Many common problems make the file system layout and package management suck. I don't mean to start a flamewar, but this problem is far smaller on FreeBSD, where the file system layout is a lot better organized than that of a Red Hat Linux system. (It's even better organized than a SuSE system.) The ports and packages collection, which works through Makefiles, makes installation and removal of many programs very easy, with dependency checks. Unless I'm imagining things, it does find dependencies that you install manually, as long as they're where the system expects them. However, glitches still exist, mainly in the removal of software, that require user intervention to remove some remaining files and directories.

    When it comes down to it, I think that package management systems--whether they're Debian's system, RPMs, or the *BSDs' ports and packages--are supposed to serve as a shortcut for the system administrator, who still knows how to manage programs manually. The Linux community seems to have forgotten this, and expect package management to be a flawless installation system for any user with any amount of experience. Unfortunately, this is not the case, and it would be extremely difficult, maybe impossible, to make such a system. I believe this doesn't matter.

    Skilled admins need control and flexibility over their programs. This is especially true for critical servers, but also applies to workstations. If the setup they want can be achieved with a package manager, they'll use it. If not, they can opt to build the program from source, or, if this installation takes place often, they might make their own package, perhaps customizing paths or configuration files for site-specific purposes. A well-organized hierarchy is very important.

    Novice users are very different. They just want to install this thing called Linux from the CD and surf the web or burn some MP3s. For them, the solution isn't a great package management system, because a novice user probably doesn't know where to obtain programs. In some cases, there are hundreds of similar programs to choose from--novices can't handle all that choice! The solution for them is a distro that supports a very specific set of programs, and supports them well:

    • Everything should be managed through clickable graphical dialogs. Enabling web serving or whatnot would take one click on a checkbox.
    • The installation would be extremely simple:
      • Where possible, there are no choices. You simply install the distro and get all the "standard" programs, precompiled, preconfigured and ready to use.
      • During installation, a preconfigured image of a 500 megs (or so) partition would just be copied verbatim onto a partition on the user's hard drive.
      • Another partition, taking up the remaining available space, would be mounted on /home.
      • Installation could happen in 5 minutes flat.
    • A single desktop environment would be present. Novice users shouldn't have to try ten different window managers and docking programs and whatnot. Choose something and put it on this distro. If you want to support multiple desktop environments, package multiple distros.
    • The same rule holds true for all programs that would come with the installation. Instead of making one huge distro that supports everything from 10,000 text editors to biological analysis programs, make 10 different distros. One would be for "Home" use and would include stuff like a word processor and spreadsheet, a banking program, web browser, email client, calendar program, MP3 player, video editing software, and whatever else you want to include. These don't even need to be 100% free software. Put some quality programs on the CD and charge for them.
    • To make a long story short, limit the user's exposure to problems. Every choice you present to the user is a possible problem. We're talking about people who don't know where the "any" key is for crying out loud.

    Finally, I would recommend that in the spirit of giving back to the community, any admin who makes his own packages should submit them back to the developer for distribution to others. (Unless these packages are designed for site-specific purposes, of course.)

    Oh yeah, and I almost forgot the obligatory "oh well."

  • Re:Dumb Dumb Dumb (Score:3, Insightful)

    by kinkie ( 15482 ) on Wednesday November 21, 2001 @03:43PM (#2597444) Homepage
    it makes more sense to waste a little space duplicating shared libs and simply install programs into their own directories....

    Shared libs are not only about saving disk space (which we usually have plenty of). There's much more gained from them, namely sharing RAM by mapping common code pages into different processes' address spaces.

    Think if you had a duplicate libc in every damned process running in a system.
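
    A quick, illustrative way to see that sharing on a Linux box:

    # every dynamically linked program maps the same libc image...
    ldd /bin/ls | grep libc
    # ...and the read-only text pages of that mapping are shared
    # physically between processes:
    grep libc /proc/$$/maps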
  • by staeci ( 85394 ) on Wednesday November 21, 2001 @07:06PM (#2598510) Homepage Journal
    "I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH"

    Who the hell moderated that as insightful?

    how about:

    /usr/bin
    /usr/games/
    /usr/gnome/bin
    /usr/kde/bin
    /usr/java/bin
    /usr/adobe/bin
    /usr/netscape/bin
    /usr/mozilla/bin
    /usr/real/bin

    Was that so hard? It would work for me. Most programs with more than just an executable get a subdir. Suddenly it would be possible to wander around in the directories without ls scrolling off the screen.

    And all the related files which seem to live in /usr/share like all the kde config directories can go in /usr/kde too.

    And as far as "just use a package manager" goes... there is no point in using one tool to avoid a problem - the problem is still there. Package managers solve the problem of package dependencies, not messy filesystems.
  • my /usr/bin (Score:2, Insightful)

    by ultrapenguin ( 2643 ) on Wednesday November 21, 2001 @07:18PM (#2598567)
    only has 380 files in it, and I know exactly what each one of those files is for.

    I don't know about mosfet's problems, but I have no problem managing my filesystems.
