
Rage Against the File System Standard (612 comments)

pwagland submitted a rant by Mosfet on file system standards. I think he has somewhat oversimplified the whole issue, and definitely assigned the blame wrongly, but it warrants discussion all the same. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?
This discussion has been archived. No new comments can be posted.

  • by PigeonGB ( 515576 ) on Wednesday November 21, 2001 @10:43AM (#2595642) Homepage
    Is it really that bad? Would I not have much control over where programs get installed to?
    I would think that even without a package handler to do it for me, the program itself would allow me to say where it should be installed...or is that just the Windows user in me talking?
  • by kaisyain ( 15013 ) on Wednesday November 21, 2001 @10:45AM (#2595652)
    Anyone who claims that RedHat started the use of /usr/bin/ as a dumping ground can't be taken seriously. Pretty sure Slackware and SLS did the same thing. Same goes for Solaris, AIX, A/UX, SunOS, IRIX, and HP-UX.

    It's not about lazy distributors. It's about administrators who are used to doing things this way and distributors going along with tradition.
  • Linux From Scratch (Score:4, Interesting)

    by MadCamel ( 193459 ) <spam@cosmic-cow.net> on Wednesday November 21, 2001 @10:48AM (#2595664) Homepage
    This is _EXACTLY_ why I use LinuxFromScratch [linuxfromscratch.org]. You do not HAVE to use the package management system; you can install anything *just* the way *you* want it. X applications in /usr/bin? No way, Jose! (My apologies to anyone named Jose, I'm sure you are sick of hearing that one.) /usr/X11 it is! If you are not happy with the standards, make your own; it just takes a little time and in-depth knowledge.
  • Re:The Alternative? (Score:2, Interesting)

    by dattaway ( 3088 ) on Wednesday November 21, 2001 @10:49AM (#2595672) Homepage Journal
    Is there such a thing as a recursive PATH directive for executables? Something like ls -R, but for searching subdirectories?
  • Re:The Alternative? (Score:3, Interesting)

    by kaisyain ( 15013 ) on Wednesday November 21, 2001 @10:50AM (#2595676)
    You would only need 2000 path entries if you expect your shell to have exactly the same semantics it has today. There is no reason whatsoever that PATH couldn't mean "for every entry in my PATH environment variable, look for executables in */bin". A smart shell could even hide all of this behind the scenes for you and provide a shell variable SMART_PATH that gets expanded to the big path for legacy apps.

    Or you could do what DJB does with /command and symlink everything to one place. Although I'm not sure if that solves the original complaint. Actually, I'm not sure what the original complaint is, having re-read the article.
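    A rough sketch of that "every PATH entry means */bin" idea in plain Bourne shell, for illustration only (SMART_PATH is the poster's hypothetical name, and /usr/pkg and /opt are stand-in roots, not anything standard):

    # expand each root listed in SMART_PATH into its */bin subdirectories
    SMART_PATH="/usr/pkg:/opt"
    for root in $(echo "$SMART_PATH" | tr ':' ' '); do
        for dir in "$root"/*/bin; do
            [ -d "$dir" ] && PATH="$PATH:$dir"
        done
    done
    export PATH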
  • by CrazySecurityGuy ( 529210 ) on Wednesday November 21, 2001 @10:56AM (#2595704) Homepage
    Uh huh. And when something goes terribly wrong, how do you determine what went wrong? Our production servers (HP-UX, Solaris, AIX) have in /usr/* only what the system supplied. Everything else gets put in its "proper place": either /opt, or /usr/local (its own filesystem), or similar. The paths are not so bad, and the system is healthy and clean. The alternative? A system easily attacked with a trojan horse.
  • by Baki ( 72515 ) on Wednesday November 21, 2001 @11:00AM (#2595733)
    ~> ls /usr/bin | wc -l
    403
    ~> ls /bin | wc -l
    36
    ~> ls /sbin | wc -l
    91
    ~> ls /usr/sbin | wc -l
    220
    ~> ls /usr/local/bin | wc -l
    796

    This is FreeBSD, which installs a relatively clean OS under /usr and puts all extra stuff in /usr/local (sometimes the executable is in /usr/local/bin, sometimes in a package's own bin directory under /usr/local).

    I like that much more; it is the old UNIX way of separating the essential OS from optional stuff. It really is a pity that most Linux distros dump everything directly in /usr.

    As for my Slackware, I installed only the minimum and roll my own packages for everything I consider not to be 'core Linux'; all these packages go under /usr/local. It can be done, and it keeps things tidy and clean.
  • by kramerj ( 161379 ) on Wednesday November 21, 2001 @11:01AM (#2595736)
    And then you get into naming conflicts down the road. MS has this problem now, and is dealing with it partly with the newfangled "Private Packages" or whatever in XP: basically unsharing shared libraries. There DOES need to be separation that can be controlled more than it can be now, or we are going to see problems in the future. Have you ever installed a package and a file was already there? Were they the same file? Do you know? Same version? It's a bad idea to clump everything together. What we need is either a path statement extension that basically says /usr/bin/*/ to allow everything one directory down, OR to allow packages to register their own paths in their install directories (i.e., a file that gets installed and then pointed to, saying "search here for executables as well"). Make it a config in /etc that points to these other little files that contain places to look, then at boot time enumerate that all out and build a tree of the executables. Fast and easy to manage.

    Jay
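    A sketch of how the "packages register their own paths" idea might work in shell: a hypothetical /etc/paths.d directory holds one fragment file per package, each listing directories to search, and a boot or login script flattens them into PATH (the directory name and layout here are made up for illustration):

    # assemble PATH from per-package fragment files, one directory per line
    for frag in /etc/paths.d/*; do
        [ -f "$frag" ] || continue
        while read -r dir; do
            [ -d "$dir" ] && PATH="$PATH:$dir"
        done < "$frag"
    done
    export PATH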
  • by Pseudonym ( 62607 ) on Wednesday November 21, 2001 @11:02AM (#2595743)

    Even better would be if Linux had a translucent file system. Simply mount all the path directories on top of each other and let the OS do the rest.

    For the uninitiated, a translucent file system lets you mount one filesystem on top of another filesystem, the idea being that if you tried to open a file the OS would first search the top filesystem, then the bottom one. In conjunction with non-root mounting of filesystems (e.g. in the Hurd) it removes the need for $PATH because you can just mount all the relevant directories on top of each other.

  • by Steve Mitchell ( 3457 ) <steve@NOSPAM.componica.com> on Wednesday November 21, 2001 @11:03AM (#2595752) Homepage
    I wish Unix/Linux had a mechanism where a directory could be marked executable, and executing the directory would internally call some default dot file (such as .name_of_directory) within the directory, with some environment variable (like $THIS_PATH) set to the directory and passed to the application process.

    Maintenance for applications like these would be a no-brainer. Just move the directory, and all the associated preference files and whatnot travel with the app.

    -Steve
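    No such mechanism exists in Unix, but a crude approximation can be scripted: a wrapper that, given a directory, runs the dot file named after it and exports the directory as $THIS_PATH. The dot-file and variable names come from the poster's suggestion; everything else here is hypothetical:

    #!/bin/sh
    # runapp: treat a directory as an application bundle
    dir=$(cd "$1" && pwd) || exit 1
    shift
    app=".$(basename "$dir")"        # e.g. /apps/mutt -> .mutt
    THIS_PATH="$dir"; export THIS_PATH
    exec "$dir/$app" "$@"

    Usage would then be something like: runapp /apps/mutt -f inbox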
  • by vrt3 ( 62368 ) on Wednesday November 21, 2001 @11:05AM (#2595760) Homepage

    I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files. Regarding applications, there are two ways to do it: you can store all files (binaries, config files, man pages, etc.) of the same application in the same directory, or you can store all files of the same type from different applications together (all config files in /etc, man pages in /usr/share/man (I think), etc.).

    Both approaches have their advantages. The problem with hierarchical file systems is that we have to choose one of them. I would love to see a storage system where we can use both ways _at the same time_: a system that groups files depending on the relationships they have, such that 'ls /etc' gives me all config files for all apps, and 'ls /usr/local/mutt' shows me all mutt-related files, including its config file(s).

    I have no idea how to implement such a beast. I'm thinking of an RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and its subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

    To a certain extent, this can be done via symlinks, but it should be much easier to deal with. You shouldn't have to do much manual work.
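    For the record, the symlink version of the picture example is exactly the sort of manual bookkeeping being complained about:

    mkdir -p ~/pictures/summer2000 ~/summer2000
    ln -s ~/pictures/summer2000 ~/summer2000/pictures
    ls ~/summer2000/pictures     # same files as ~/pictures/summer2000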

  • RiscOS... (Score:4, Interesting)

    by mirko ( 198274 ) on Wednesday November 21, 2001 @11:06AM (#2595767) Journal
    In RiscOS, applications are directories which contain several useful files (besides the app binaries, conf or data files):
    • !Sprites[mode], which contains the icons to be used for the app and for whichever files are associated with it by filetype
    • !boot, which contains directives (associations, global variables, etc.) to be executed the first time a Filer window containing this app is opened (i.e. the app is "seen" by the Filer)
    • !run, which describes the action associated with a double-click on the app icon

    There's also a unique shared modules directory in the System folder.

    This system is at least 10 to 15 years old (not sure Arthur was as modular, though) and has certainly proved to be an excellent way to deal with this problem...
  • by cthulhubob ( 161144 ) on Wednesday November 21, 2001 @11:09AM (#2595785) Homepage

    I came away thinking "this man is insane".

    1. He claims DOS had a better way of organizing applications. This is a red herring. I don't want to organize my applications. Ever. I want to organize my data. I don't remember many applications in DOS that were compatible with the same type of data. If there had been, the limitations of the DOS structure would have been readily made apparent. First, CD into the directory where your audio recording utility is and make a .wav file. Then, move the .wav file into the directory where your audio editing utility is and edit it. It works, but why not keep the data in one place and run programs on it as you see fit without regard for their location on your hard drive, and without having a 10-second seek through your PATH variable?

    2. Besides which, DOS had c:\msdos50 (or whichever version you used). That was DOS's variation on /bin. Ever look in that directory and attempt to hand-reduce the number of binaries in it to save disk space? I did. A package management system would have made that doable.

    3. You can have all the localized application directories you want in /usr/local. The point of /usr/local is to hold larger packages which are local to the system. (hmm... /usr/local/games/UnrealTournament, /usr/local/games/Quake3, /usr/local/games/Terminus, /usr/local/games/RT2...) And as a bonus, thanks to the miracle of symbolic links you can have your cake and eat it too - as long as the application knows where the data files are installed you can make a symlink of the binary to /usr/local/bin and run it without editing your PATH variable too! Isn't UNIX grand?
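    For example (the Quake3 paths are only illustrative):

    # the game lives in its own tree; only the launcher goes into the PATH
    ln -s /usr/local/games/Quake3/quake3 /usr/local/bin/quake3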

  • FreeBSD (Score:4, Interesting)

    by sirket ( 60694 ) on Wednesday November 21, 2001 @11:22AM (#2595859)
    The file systems on a Unix system make a lot of sense, when people use them correctly.

    /bin for binaries needed to boot a corrupted system.

    /sbin for system binaries needed to boot a system.

    /usr/bin for userland binaries installed with the base system.

    /usr/sbin for system binaries installed with the base system. These are not programs required to boot the system.

    /usr/local/bin for locally installed user binaries such as minicom, mutt, or bitchx.

    /usr/local/sbin for locally installed system binaries such as apache.

    Large locally installed programs such as Word Perfect get installed in a sub directory of /usr/local but they put a single executable in /usr/local/bin so that you do not need to change your path.

    FreeBSD has only about 400 programs in a complete /usr/bin. Other programs are spread about the file system in sensible locations or are user installed. Possibly the only directory that does not make a whole lot of sense is /usr/libexec (where most of the internet daemons are kept).

    -sirket
  • by kune ( 63504 ) on Wednesday November 21, 2001 @11:25AM (#2595880)
    From my .zshenv; it works in .profile too, and could also be used for other path variables. It works on any operating system with a reasonable Bourne shell.

    export PATH

    reset_path() {
        NPATH=''
    }

    set_path() {
        if [ -d "$1" ]; then
            if [ -n "$NPATH" ]; then
                NPATH="$NPATH:$1"
            else
                NPATH="$1"
            fi
        fi
    }

    reset_path
    set_path $HOME/bin
    set_path /usr/local/gcc-2.95.2/bin
    set_path /opt/kde/bin
    set_path /usr/lib/java/bin
    set_path /usr/X11R6/bin
    set_path /usr/local/samba/bin
    set_path /usr/local/ssl/bin
    set_path /usr/local/bin
    set_path /usr/local/bin/gnu
    set_path /usr/bin
    set_path /bin
    set_path /usr/local/sbin
    set_path /usr/sbin
    set_path /sbin
    set_path /usr/ucb
    set_path /usr/bin/X11
    set_path /usr/ccs/bin
    PATH="$NPATH:."

    unset -f reset_path set_path    # -f: remove the helper functions, not variables
  • I don't mind (Score:2, Interesting)

    by Anonymous Coward on Wednesday November 21, 2001 @11:34AM (#2595943)
    having 2000 entries in /usr/bin, just so long as they are executables that do not require dependencies, or libraries that are too specific for the task. When applications grow bigger, their dependencies on other things in the filesystem increase. They'll want an /etc entry, some icons here and there, specific libraries, development include files and so on. That's when the time comes to simply mkdir /usr/local/xxx and ./configure --prefix=/usr/local/xxx. After all that, you can still have symlinks in /usr/bin, but it won't matter, because when you rm -rf /usr/local/xxx, the symlinks go dead and you can remove them.
  • by SilLumTao ( 134743 ) on Wednesday November 21, 2001 @11:34AM (#2595944) Homepage
    Anyone who claims that RedHat started the use of /usr/bin/ as a dumping ground can't be taken seriously. Pretty sure slackware and SLS did the same thing. Same goes for Solaris, AIX, AUX, Sun/OS, Irix, and HPUX.

    Agreed, but does that make it right?

    For the last few years, this is the kind of thing that has really been nagging me. All OSes seem to suffer from the same problem. Why are we so stuck with the mindset that traditions of the past shouldn't be challenged? Can't we, as "brilliant" computer scientists, start solving these problems and move on?

    I recently demoed a good Linux distro to a friend and it finally dawned on me. When you load KDE, you are literally overwhelmed with options. My friend asked, "What is the difference between tools and utilities?" I didn't know. I tried to show him StarOffice and it took me a few minutes of digging through different menus.

    No, I don't use Linux on a daily basis, and no, I'm not the smartest person in the world. But I think I see the problem. Everything seems to be an imitation of something else (with more bells and whistles). Where is the true innovation? Our computers and software are not significantly different than they were 20 years ago.

    Why are we still using $PATH?

  • Re:Response (Score:3, Interesting)

    by sfe_software ( 220870 ) on Wednesday November 21, 2001 @11:37AM (#2595963) Homepage
    I agree, Windows isn't the problem in the case of DLLs. It really is stupid for an uninstall routine to ask the user whether to delete a DLL. It should either know that the DLL isn't needed by any other program, or leave it alone. Asking the user (and really, think about your typical Windows user) about deleting system files is a mistake. I've walked too many friends and family members through reinstalls after they uninstalled some crappy shareware...

    Unfortunately this practice is common thanks to InstallShield being used by so many programs, as InstallShield always asks before deleting a so-called "shared" DLL. Keep in mind, half of the time the DLL is program-specific (i.e., not shared), and other times it's something the program itself did not install in the first place (it was already there). I don't think Windows itself is to blame here...

    Win2k still suffers from this, but if you do delete a DLL it almost always magically reappears. It's part of some scheme to protect the system from its users I believe, but it is a real pain when you actually want to remove a DLL...

    As for the Unix side, I've always wondered about the organisation (or lack thereof) of programs. Many tools do IMO belong in central locations (cat, grep, ls...) but anything larger should have its own directory. I long for the day when I can say:

    export PATH=$PATH:/usr/programs/*/bin

    or something to that effect...
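    The shell won't expand a glob inside PATH itself, but the effect can be had at login time with a loop along these lines (assuming a /usr/programs layout like the one wished for above):

    for dir in /usr/programs/*/bin; do
        [ -d "$dir" ] && PATH="$PATH:$dir"
    done
    export PATH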

    Most of your larger packages do attempt to install into their own locations; Apache by default ends up with /usr/local/apache/* though it does tend to scatter a few things around. MySQL, Qmail, and a few others generally create subdirectories for most of their files. Not perfect, but it's a step in the right direction anyway.

    I personally hate RPM, and I generally snag a tarball over an .rpm any day. I do like *BSD's ports collection quite a bit, but on RedHat, RPM is about the best we've got. RPM is fine for the initial install, and even for adding some system-level tools/packages/upgrades, but for any major software installation after that I prefer to install manually; and of course, this doesn't help the issue at hand one bit...

    Unfortunately, I have my complaints about filesystem standards, but I don't have any solutions either, really. Too much software exists that depends upon our current system, though a proposed future standard might be nice. Maybe a new POSIX recommendation is in order... and once some years go by, software vendors will slowly migrate to the new standard... of course I don't know what that standard might be...
  • Missing the point (Score:2, Interesting)

    by Birdie-PL ( 255639 ) on Wednesday November 21, 2001 @11:44AM (#2595994) Homepage
    I think that he is missing the point.
    One of the aims of all the package management tools is to make management easier. In particular this means that you don't have to care where the files of application XYZ are. So, if you wish to delete it, you ask your package manager instead of hunting down all the subdirectories created by the package. You want to save your time, so you use the tools available. The era of manually managing everything is long gone.
    Please note that under Unices most applications are not installed in a single directory: one directory is for binaries, one for documents, etc.
    Under DOS and Windows, even the apps that went into their own subdirectory had an annoying habit of creating miscellaneous temporary/configuration files all over the place. And the lack of file attributes did a lot to help this along.
  • by GISboy ( 533907 ) on Wednesday November 21, 2001 @11:44AM (#2595995) Homepage
    When you consider that /usr or /usr/local was similar in purpose to "Program Files" (or progra~1 if you want to be specific), it had the best of intentions.
    Well, we all know which road is paved with good intentions.

    At any rate, part of the "problem" is that there is a certain point at which a section of the file system gets unmanageable. Where that point is, quite frankly, varies.

    RedHat has impressed me with its compatibility, but it does so with static libs. There are times when, god forbid, you wish to compile something and get gripe messages that your window manager was built against X set of libs, your theme manager against Y's libs, and your shared libs are of version Z.
    That is just trying to update the WM; god forbid you wish to compile a kernel.
    And with the static libs, the performance hit is astounding.

    The other side, as with Slackware, is that shared libraries can be just as unforgiving.
    Heh, as a newbie I deleted a link to ld.so.X.
    Hint: never, ever do this! ls, ln, mv et al stop working...oops.
    Stupidity on my part, but, hey, I was a newbie. (finger; fire; burn; learn. simple.)
    Back on track: Slack is fast and configurable, but through sheer will, accident, or stupidity it can be broken a lot faster (and in some cases fixed a lot faster).

    Windows...well the sword cuts both ways. It impresses and suffers *both* of the good and bad points of RH/SL (or static and dynamic libs).

    And, if the above does not either blow your mind or make you nod off consider OS X.1.1 (.1.1.1....)

    Under OS X's packages system a 'binary/folder/application' (oye) can and does contain static libs. Ok, that can be good/bad.
    Here is the kicker (and cool part): if it finds *better* or more *up to date* libs it can use them and ignore what *it* has.
    If the new libs break the app, or cause problems, the application can be "told" or "made" to use only its own libs, or update the newer libs.

    Most will see where that is going. It will be good to keep "static" then use "dynamic" or update the "dynamic/shared" libs.
    The down side is the potential to fix one application and break 10+ others.

    This has not happened...yet. However, the *ability* to make or break is there, just no information is given until a spec/CVS set of rules is fleshed out.

    I will be the first to admit that the "binary folder" or "fat binary" (arstechnica.com article) idea sounded "less than thrilling"...until you realize the headaches it cures with this kind of file system bloat.

    Think about it: You have an app, that is really a folder, that you can't see inside/manipulate/fix/break unless you know how *and* have a reason to.

    In all three cases there are limits to even the most intelligent of design. Knowing this truth is easy to accept. Finding where it lies and where it breaks down...that is another discussion.
  • by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Wednesday November 21, 2001 @11:48AM (#2596015) Homepage Journal
    This is one of the things that FHS tries to address. I used FHS 2.1 in Teradyne to manage a custom GNU/Linux distro for one of their products [If you purchased NetFlare from them, you should have all the updated GPL goodies and additions I put there on a source companion CD].

    While not perfect, it addressed the following issues:

    1) separating the O/S from "other" packages;

    2) maintaining a sane place to put different packages;

    3) supporting the notion of linking to specific package directories from a common place to keep PATH small;

    4) staying compatible with a number of "traditional" conventions.

    Of course, FHS 2.1 has this concept of "operating system" files and "other" files. Presumably the "operating system" is whatever the distro bundler provides... so Red Hat would be free to put as much as it wants under /usr. But this causes a problem if you look at a common standard base for several distros, like the LSB.

    Do you have a "standard base" part, and a "distro part", and then a "local part"? Clearly what's needed is a hierarchical way of taking an existing "operating system" and customizing it to a "custom operating system". Right now, FHS allows this for distro bundler and end user, but there is no support for the process iterating.

    Of course, my experience has been with FHS 2.1, and I have since moved on to employment elsewhere, so perhaps the current FHS addresses these issues.

  • trouble... (Score:2, Interesting)

    by curtis ( 18867 ) on Wednesday November 21, 2001 @11:52AM (#2596036) Homepage Journal
    Is it me or does controversy always follow this guy around? :-)

    He does make a good point, but I think once the history of the file system's evolution is taken into account, the layout makes sense. The problem is that not every distribution adheres to the unwritten rules of the fs layout, for various reasons, and the result is a mess.

    Hopefully, the Linux Standards Base will help to address this.

  • Re:The Alternative? (Score:2, Interesting)

    by SuperQ ( 431 ) on Wednesday November 21, 2001 @11:58AM (#2596064) Homepage
    You miss one critical problem: package management doesn't just handle path issues. It handles dependencies, conflicts, version control, etc.

    Unfortunately, not all software is written to the same standards. Some code comes with Makefiles that have hard-coded paths, different build systems, hard-coded -L paths; things that make installing software on a random system painful.

    Package management is a way to standardize the way software is installed, upgraded, and removed.

    I work in a shop where we maintain our own software packages. The system was originally Slackware, but is now a custom distribution for our 200-odd Linux workstations. Some days I would like to burn it to the ground and replace it with a sane package-managed system (Debian). We have no real automated way to build the system up from source. Software that conflicts is just piled on top of other software during our automated install process. Upgrading packages is painful.

    Basically, what we do to update the software on workstations is have the workstations reformat and re-install with an automated build process once every 2 months. That way, over a period of 2 months, we get all of our software fixes out to the workstations. We also have scripts to automate emergency patches to all workstations.

    Sometimes this is nice, because when we have to make changes to the system (a new version of fvwm2 breaks old user config files), we only get calls every few days, and not all at once in the morning. And it keeps users from depending on /tmp for storage; it forces them to use their home dir for storage.

    But having two people reproduce the work of a Linux development group seems to be a waste of time in the long run. I am currently trying to come up with a good, solid sales pitch for my boss and senior admin to move us to a Debian-based system, and for us as a department to use our system maintenance time to give back to the Debian community.
  • by TomatoMan ( 93630 ) on Wednesday November 21, 2001 @12:06PM (#2596122) Homepage Journal
    Package management is a way to standardize the way software is installed, upgraded, and removed.

    It sounds very appealing. The problem is that a lot of the software I need right now (openLDAP, openSSL, etc) has packages that are a full development generation old. There isn't a 2.x package yet for openLDAP on RH 6.2, for example, and I don't think anybody in particular is in charge of building it.

    Building from source is the only way to be current, although it is often an immense pain in the ass.

    The other gripe I have is about packages failing to recognize libraries that are installed just because they weren't installed by a package manager. Yes, you can force a --nodeps sometimes and cross your fingers, but you shouldn't have to lie to the software to get it to work. Package managers should be a little smarter and be able to look around a little to satisfy dependencies.

    If the package system really worked cleanly, it would be great, but I'm still using Pine 4.20 on my box because of conflicting dependencies in the 4.3x packages. I'm about to nuke the whole thing and build Pine from source - which I'll do as soon as I can get those library dependencies solved.

    Grr.
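    For reference, the forced install being described looks something like this (the package file name is only a placeholder), trading a dependency error for crossed fingers:

    rpm -Uvh --nodeps openldap-servers.rpm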
  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday November 21, 2001 @12:10PM (#2596149) Homepage Journal
    FreeBSD comes with something called a union filesystem that is exactly what the poster described. From man (8) mount_union:

    DESCRIPTION
    The mount_union command attaches directory above uniondir in such a way
    that the contents of both directory trees remain visible. By default,
    directory becomes the upper layer and uniondir becomes the lower layer.

    Non-FreeBSD users can read an online version [freebsd.org].
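    A minimal example of using it on FreeBSD, with paths chosen purely for illustration (see the man page for the -b and -o options):

    # make /usr/local/bin visible 'through' /usr/bin
    mount_union /usr/local/bin /usr/bin
    # and to undo it
    umount /usr/bin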
  • by mattdm ( 1931 ) on Wednesday November 21, 2001 @12:31PM (#2596273) Homepage
    Yes, plan 9 pretty much revolves around this idea.

    It's my impression that recent developments in the Linux kernel (along with bind mounts [ibm.com], etc.) are moving towards making this easy to implement.
  • by mattdm ( 1931 ) on Wednesday November 21, 2001 @12:53PM (#2596410) Homepage
    Also see the plan 9 "rc" shell docs [le.ac.uk]:


    Extensive use of the $path variable is discouraged in Plan 9. Instead, use the default (. /bin) and bind what you need into /bin.
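    In Plan 9 that bind is typically a one-liner in one's profile, along the lines of (the directory is illustrative):

    bind -a $home/bin/rc /bin    # union personal rc scripts after /bin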
  • by droleary ( 47999 ) on Wednesday November 21, 2001 @12:53PM (#2596414) Homepage

    I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org]. It's a problem of how and where to store our files.

    You could also trace it back to the hierarchical database article [slashdot.org], which is when I started making a lot of posts on the subject. It seems there is finally a lot of interest being generated about this sort of thing.

    I have no idea how to implement such a beast. I'm thinking about a RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, all files relating to my vacation last year in ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

    This is exactly the sort of thing I'm doing with my Meta Object Manager (MOM) software called Mary. Metadata in the form of attributes and values is associated with each file/object and you can do a query (both textually and graphically) on that metadata. For simple paths like you describe, it is a value query irrespective of a particular attribute, but there is support for a more structured "path" (I actually call it a "focus" as it restricts your focus to a subset of the objects on the system) like /type=picture/location=Hawaii/year=2000. Because the focus items are metadata attributes, order is not significant. With such a system, there are no directories or symbolic links; it's all dynamically structured based on what your metadata focus is at any particular time.

    Mary is just in the alpha stages at this point, but it already works well on the command line for the type of things you describe, and I'm using it myself to manage the nearly 350,000 objects that have flowed through my system. I'm not exactly sure when it'll be ready for public consumption, and it'll require a GNUstep [gnustep.org] port to get it working on Linux systems (I'm doing development on Mac OS X). I was hoping for year end, but I don't think I'll have the time. Summer 2002 has a nice ring to it, though. :-)

  • QNX has it (Score:3, Interesting)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday November 21, 2001 @01:22PM (#2596585) Homepage
    QNX has a package filesystem [qnx.com] like what you describe; it looks like it solves Mosfet's problem and keeps PATH simple.
  • by Coplan ( 13643 ) on Wednesday November 21, 2001 @01:28PM (#2596624) Homepage Journal
    Maybe such a method already exists. But in case it doesn't, it would be nice to be able to integrate such information directly into the ls command (so what if it's a subcommand; it doesn't bother me). That way, you could list a directory and see ON THE FLY what program does what. I imagine performance might be an issue on large directories... but a database query can't be too processor-demanding these days, can it?

    I agree, though. We don't necessarily need to separate programs for the most part, especially since the average program in the /usr/bin directory is only one file. But in the case that a program requires several files (e.g. Mozilla), those programs are already separated for the most part. So it tends to be a non-issue anyhow. Mind you, the only real issue at hand would be that one would have to create a standard for what gets separated and what doesn't. But again, that's essentially a non-issue, as a symbolic link typically takes a program's place within the /usr/bin directory anyhow.

    So nothing new needs to be done.

  • by CynicTheHedgehog ( 261139 ) on Wednesday November 21, 2001 @01:38PM (#2596713) Homepage
    Why not store all files in some kind of root context, then augment them with attributes describing what type of file each is, what relationship it has to other files, and who is allowed to do what to it? That way everything is in the path (and I mean *everything*), but graphical filesystem browsers can organize things however they want by reading the file attributes.

    You'd run into naming conflicts, but those can be resolved transparently by the filesystem. Think of it like a database with a multi-column key: the filename and its path.
  • by Anonymous Coward on Wednesday November 21, 2001 @02:04PM (#2596890)
    I absolutely agree that this is the best way to do it, and I generally try to do it the same way myself.

    However, the files which by default go into the /usr/share... hierarchy give me problems, because the share hierarchy that comes with the application usually has its own appname/version subdirectories, and linking from /usr/local/{bin|man|lib|include..} gets clumsy (I think, at least). Comments?

    - What is the share hierarchy for, anyway?

  • by Medievalist ( 16032 ) on Wednesday November 21, 2001 @02:08PM (#2596908)
    While Red Hat is certainly a major offender, HP-UX 11.0 has device log files in the /etc hierarchy, the runlevels still under /sbin, every "optional software" dumping ground ever invented (share, contrib, usr/local, opt, and more), and a totally brain-dead depot system that makes RPM look inspired.
    I've said it before - and I'm not the first or last to notice - HP-UX is a *train wreck* of a unix. HP puts Fibre Channel controllers that are necessary for the system to BOOT in the /opt folder!
    --Charlie
  • by Spinality ( 214521 ) on Wednesday November 21, 2001 @02:29PM (#2597035) Homepage
    This is kind of a wacky solution -- eliminate hierarchical directory structure and replace it with a (presumably) hierarchical attribute structure. I cringe. But I think this idea points out one key issue. We currently use the hierarchical directory tree to represent two orthogonal properties: logical file organization versus execution path. Usually, when we use one construct to support two different requirements, it fails to be ideal for either.

    I suspect that an elegant solution to this problem could be based on the core concept proposed here by CynicTheHedgehog, viz.: Organize the complex attribute data associated with each file system object by using a multicolumn table, containing various object capabilities and properties. And stop trying to encode these properties in a single name string, plus a chunk in a path variable.

    Of course, this would require a new file system paradigm, and might have a tiny impact on existing distros. :) But the issue is worth some discussion.
  • Re:Why? (Score:3, Interesting)

    by dillon_rinker ( 17944 ) on Wednesday November 21, 2001 @02:29PM (#2597038) Homepage
    The problem is when it DOESN'T just work.

    You could theoretically (and actually, too, since you've got the sources :) glom ALL your files together in / with no problem. When you mount another file system, all the files within that system are added to the pile in /. Why not do that? Because there are benefits to a hierarchical file structure. There are benefits to hierarchies at every level, though it is possible to take it to an extreme.

    If everything works, there's actually no problem in glomming everything together in /. The problem is when something breaks and a HUMAN BEING has to analyze what's on the system. This is less of a problem on hackers' personal systems, used and administered by solitary individuals at their own whim, than it is on a business server, used and administered by many. You want as much as possible of your system to be obvious to human eyeballs when most everything on the system is broken.

    This, BTW, is why I am fundamentally morally opposed to binary storage of configuration data (a la the win32 registry) versus plain-text storage. Binary is easier for the computer to handle, which is great as long as things work. Plain text is easier for me to handle, which is useful only when things break. Since the computer can work with either, plain text is preferable.

    When things break, I must be able to go from zero knowledge about a broken system's configuration to a fully functional system as quickly as possible. Well-organized files that take full advantage of a hierarchical file structure, and plain-text config files, are a great help in this situation.

    (It just occurred to me that referring to the root directory as "/" at the end of a sentence produces an ambiguous symbol; suffice to say I don't mean slashdot.org by "/.")

    IF everything always worked, there would be no advantage to . You could eliminate the path statem
  • Interesting solution (Score:2, Interesting)

    by iamcadaver ( 104579 ) on Wednesday November 21, 2001 @02:44PM (#2597134)
    There are a few guys on LFS's [linuxfromscratch.org] blfs-discuss mailing list who suggest modifying the 'install' program to install each package as a user with the base package name.

    On first inspection, it of course makes identifying and finding what put whom where ludicrously simple.

    find / -group KDE

    On second reflection, it adds a finer layer of group/user management and of administrative delegation.
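    A sketch of how that might look done by hand with a per-package group, rather than a patched install(1) (the group name and paths are only illustrative):

    groupadd mutt
    install -g mutt -m 755 mutt /usr/local/bin/mutt
    install -g mutt -m 644 muttrc /usr/local/etc/muttrc
    find / -group mutt      # lists everything the package owns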

  • MacOS X - (Score:2, Interesting)

    by phandel ( 178702 ) on Wednesday November 21, 2001 @04:12PM (#2597627) Journal
    [localhost:~] local% uname -a
    Darwin localhost 5.1 Darwin Kernel Version 5.1: Tue Oct 30 00:06:34 PST 2001; root:xnu/xnu-201.5.obj~1/RELEASE_PPC Power Macintosh powerpc
    [localhost:~] local% ls -l /bin | wc -l
    32
    [localhost:~] local% ls -l /usr/bin | wc -l
    450
    [localhost:~] local% ls -l /usr/sbin | wc -l
    113
    [localhost:~] local% ls -l /sbin | wc -l
    58

    Not too bad, eh?
