Fedora Aims To Simplify Linux Filesystem

jfruhlinger writes "Even Linux's most passionate partisans will admit that its filesystem, which stashes vital files in a variety of arcane directories, can be baffling to users. The developers at the Fedora project want to cut the Gordian knot and consolidate all executables into /usr/bin and all libraries into /usr/lib or /usr/lib64. One downside: this system would conflict with the standards developed by the Linux Standard Base and the (rarely updated) Filesystem Hierarchy Standard."
  • The one thing that baffles my mind is that Linux filesystems still don't offer compression of specific folders or files. Seriously, Windows has had this for over a decade. There are a few experimental filesystems that can compress the whole partition, but still not individual files. Why doesn't Linux have such a simple but important filesystem feature? And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.
    • by royallthefourth ( 1564389 ) <royallthefourth@gmail.com> on Wednesday November 02, 2011 @07:05PM (#37927262)

      Make the directories you want compressed into mountpoints on the compressed partition.
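
      A minimal sketch of that approach, assuming a spare partition (/dev/sdb1 is just an example) and a filesystem with a compress mount option, btrfs here:

          mkfs.btrfs /dev/sdb1
          mount -o compress=zlib /dev/sdb1 /usr/share/doc   # the directory itself is the mountpoint
          # add a matching line to /etc/fstab to make it survive a reboot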

    • It's still possible with squashfs and aufs: squashfs is read-only, so aufs is layered on top so you can still write to the directory.

      Obviously it's not very user-friendly, however.
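
      A rough sketch of that combination (paths and image name are made up, and the aufs branch syntax can differ between versions):

          mksquashfs /usr/share/doc doc.sqfs        # build a compressed, read-only image
          mkdir -p /mnt/ro /mnt/rw
          mount -t squashfs -o loop doc.sqfs /mnt/ro
          mount -t aufs -o br=/mnt/rw=rw:/mnt/ro=ro none /usr/share/doc   # writable union on top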

    • Re: (Score:3, Interesting)

      by billcopc ( 196330 )

      How important is that, really? The only times I've used NTFS compression were for freeing up temp space on ancient servers, back when 9GB and 18GB SCSI drives were the norm. Seems like a throwaway feature to me.

      For portable usage like CD/DVD and USB flash, a full-disk compressor like squashfs is just fine.

      • by barrkel ( 806779 )

        SSD = very expensive small storage; mobility = no space for extra storage

      • by DrXym ( 126579 )
        I've used compression on laptops which are running tight on space, especially when a folder contains hundreds or thousands of infrequently used but still useful files, e.g. documentation & samples folders, apps I still need but don't run too often, and so on.
    • by Arker ( 91948 )

      And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.

      Yes, actually you do, because once you do you can.

      Anyhow, back towards the 'article' such as it is: conflicting with the LSB is no reason not to do something; the LSB has never been relevant to anything. If you want a standard file layout, just copy Slackware's - it's the most sensible and broadly compatible.

    • by SuperQ ( 431 ) * on Wednesday November 02, 2011 @07:11PM (#37927346) Homepage

      This isn't 1995. Nobody cares about filesystem level compression anymore. Go buy a 2T drive.

      • by c2me2 ( 2202232 )
        It's not your business to tell people what they should want. And not every machine or situation is right for a 2TB drive. Flash drives, laptops, tablets, etc.
      • by antifoidulus ( 807088 ) on Wednesday November 02, 2011 @07:23PM (#37927524) Homepage Journal
        Actually there are valid reasons to compress at least some of your files beyond the need for saving space, namely that the speed differential between I/O (esp. platter drives) and CPUs is continuing to grow incredibly fast. The performance gap is getting to the point that for files that tend to compress well (executables and libraries are among them), the time it takes to read the compressed file off the disk and then decompress it in memory may be less than the time it takes to read the entire uncompressed file from disk.

        Now there are tools that allow you to do this just for executables, but since they don't run on all platforms you can be in kind of a bind. By putting your executables and libraries in their own compressed file system, you can gain a lot of the advantages of executable compression while still being able to use it on pretty much any platform.
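
        A crude way to see that trade-off for a single file (the file name is an example; the cache drop needs root, and results vary a lot with hardware):

            gzip -c bigfile > bigfile.gz
            sync; echo 3 > /proc/sys/vm/drop_caches
            time cat bigfile > /dev/null        # full-size read from disk
            sync; echo 3 > /proc/sys/vm/drop_caches
            time zcat bigfile.gz > /dev/null    # smaller read plus CPU decompression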
      • by barrkel ( 806779 )

        How will I get that into my Macbook Air?

      • by smash ( 1351 )

        Yup. It's all about de-dupe. Files are already compressed at the format level, since CPU speed has outstripped I/O by a large margin - compressing your file format gets you better I/O speed. Thus, having the FS try and compress your already compressed data with some generic algorithm = LOSE.

        Block level de-dupe on the other hand, can save you HEAPS of space on a typical system. Like, cut your disk utilization by 50-60% on a typical file share storing end-user files.
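
        On a filesystem that has it, turning it on is a one-liner; a ZFS sketch (the pool/dataset name is an example, and the dedup table is famously RAM-hungry):

            zfs set dedup=on tank/share
            zpool get dedupratio tank      # how much space it is actually saving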

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.

      The way I understand it, there's really no good generic way to handle file compression at the FS level.

      Even the way NTFS does it is to create a compressed file to hold the contents of the original file, like an archive. But if you'll notice, whenever you open the compressed file, NTFS will expand the whole compressed data into another special file until you close it. Watch the disk space

      • by dmitrygr ( 736758 ) <dmitrygr@gmail.com> on Wednesday November 02, 2011 @07:38PM (#37927712) Homepage
        wrong wrong wrong wrong wrong wrong wrong wrong just tried it. NTFS filesystem, 2GB free, a file on there 30GB full size, compressed to 15GB. Opens just fine even though there isn't 30GB free there. Also opens instantly (does NOT decompress 30GB anywhere)
      • No, I think you are confusing the compressed files/folders function (right-click, send to compressed folder) with the compression filesystem property (right-click, Properties, Advanced). The former is just a userspace program, like Winzip, that makes archives. The latter is exactly what you described in your second paragraph. The filesystem transparently compresses files in small clusters, and it suffers from fragmentation problems, like you also mention.

    • by jd ( 1658 ) <imipak@ y a hoo.com> on Wednesday November 02, 2011 @07:14PM (#37927392) Homepage Journal

      They don't need to. Linux has the ability to read/write compressed files directly (zlib?) and doesn't need the filesystems to support this. Which is great because it means compressed files will work under ALL filesystems ALL of the time (if you have the library installed) and you don't have to wait for each filesystem maintainer to add it. You also have no risks of one FS maintainer deciding another's implementation sucks and not being compatible with it. Which is very likely under Windows.
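
      In practice that mostly shows up as the z* wrappers around zlib/gzip, which behave the same on any filesystem (file names are just examples):

          zcat notes.txt.gz | grep TODO        # read without unpacking to disk
          zless /usr/share/doc/foo/README.gz   # page through a compressed file
          zgrep -i error messages.1.gz         # search inside one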

        The difficulty is that (while most compressed archives are supported) you run into the situation where either the tool you are using has to have the ad-hoc ability to operate on the contents of a given archive type, or you need to decompress somewhere temporary, operate, and then compress it back up. That is sort of annoying.

        What you really want is something like Fuse-zip http://code.google.com/p/fuse-zip/ [google.com] (ideally one with support for all common archives you are going to run into; but if this is just for
    • by mysidia ( 191772 ) *

      The one thing that baffles my mind is that Linux filesystems still don't offer compression of specific folders or files. Seriously, Windows has had this for over a decade.

      It sounds like you want ZFS [zfsonlinux.org]. ZFS has supported compression for a long time: LZJB compression since early on, GZIP compression since pool version 5, ZLE compression since pool version 20...

      The only problem is.... well, on Linux it's mostly available only using FUSE. There is the ZFS on Linux port mentioned, but I suppose that's r
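
      For what it's worth, turning it on is per-dataset and transparent; a sketch (pool/dataset names are examples):

          zfs create tank/docs
          zfs set compression=gzip tank/docs   # or lzjb for a lighter CPU hit
          zfs get compressratio tank/docs      # see what it's actually saving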

    • by Gumber ( 17306 )

      Well, putting aside the fact that you are talking about filesystem internals, and the OP is talking about conventions for filesystem layout:

      Disks are really big these days. The things people tend to fill them with are images, video and audio that are already in compressed formats. So, for the average user, directory compression isn't going to be a big win.

      To put it more succinctly, this isn't an important filesystem feature.

    • My understanding is that btrfs has something in that vein. In the meantime, the ability to use FUSE opens some convenient options for special-case compression of specific areas of your system:

      fuse-zip, as its name suggests, lets you mount a zip archive as though it were a filesystem, giving arbitrary programs the ability to interact with the contents as though it were an ordinary FS, so no need for tool-specific zip support. The .zip still has to go somewhere, so this isn't really an FS for /; but if you
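
      Usage is about as simple as it sounds (archive name and mountpoint are examples):

          fuse-zip photos.zip /mnt/zip     # mount the archive
          ls /mnt/zip                      # any program can now work on the contents
          fusermount -u /mnt/zip           # unmount when done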
    • by smash ( 1351 )

      What you probably really want is de-dupe. Run ZFS. ohwait...

      IMHO, this effort is pointless. If you're poking around in the filesystem, you want it to be like any other Linux so that you don't piss off admins any more than Red Hat already does by being different. If you're an idiot who can't RTFM to find out where files are, then you probably shouldn't be poking around the filesystem outside of ~/.

      If you want to make linux user friendly, get rid of the need for users to go poking around in the fi

    • by donaldm ( 919619 ) on Wednesday November 02, 2011 @11:52PM (#37930022)

      The one thing that baffles my mind is that Linux filesystems still don't offer compression of specific folders or files.

      If you need to do this why not try a gzip'd or bzip2'd tar, rar or zip file (see the sketch at the end of this comment). You can even use a graphical file explorer to create and manage your archives (yes, you can have more than one). When I say manage, I mean you can easily (i.e. point and click) display, extract and even insert specific files.

      Windows has had this for over a decade.

      Well, you have been able to do this in Unix for over 20 years.

      Why doesn't Linux have such a simple but important filesystem feature? And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.

      Linux does, I think I already explained this above.
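
      For example (paths are illustrative):

          tar czf docs.tar.gz docs/               # create a compressed archive
          tar tzf docs.tar.gz                     # list the contents without extracting
          tar xzf docs.tar.gz docs/report.txt     # pull out a single file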

  • Don't break it. You'd think they'd want to get rid of lib64 first, since 32-bit libs on 64-bit systems will fade into obscurity sometime soon; it's only really proprietary software compiled for 32-bit that uses it anyway, so just shove everything into /usr/lib. Maybe /sbin, /bin and /usr/sbin can be consolidated into /bin, but I'd definitely leave /bin and /usr/bin separate, because at least you'll have a system that kind of works with just /bin if the /usr partition goes missing.
    • by skids ( 119237 )

      32-bit libs on 64 bit systems will fade into obscurity sometime soon

      Not in multiarch boot scenarios. The number of different lib subdirs is likely to increase over time, not decrease, and it's better for all concerned if one does not have to write two versions of "how to fix problems with libxxx", one for systems booting off multiarch media, and another for systems not carrying a multiarch library suite.

      This will be especially true of ARM, with its many variants.

    • Yeah, and Flying Spaghetti Monster forbid you accidentally type something like chmod a+rx on your consolidated /usr/bin directory. That'll be a load of fun to fix.
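
      On an RPM-based system you could probably dig yourself out with the permissions recorded in the package database; a sketch, assuming rpm's --setperms/--setugids aliases behave as documented:

          rpm --setperms -a    # reset file modes for every installed package (slow)
          rpm --setugids -a    # likewise for owners and groups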
    • by Ant P. ( 974313 )

      Fedora's excuse for this is that their distro can't boot properly with /usr on a separate partition anyway, so rather than fix that bug so it at least boots far enough to mount that filesystem — like every other distro I've installed with that partition layout seems capable of doing — they're just saying "fuck the standards" and doing their own thing.

  • But I'd like to still have a conceptual difference between /usr/bin and /usr/local/bin; perhaps support local, but mirror its contents into /usr/ using symlinks.

    I want to be able to install software from source, then wipe it all in one fell swoop if I'd like, which could be done with a mirrored /usr/local.
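
    GNU stow gets you most of the way there today (a sketch; the package name and prefix are examples):

        ./configure --prefix=/usr/local/stow/hello-2.8 && make && make install
        cd /usr/local/stow && stow hello-2.8    # symlink it into /usr/local/{bin,lib,share,...}
        stow -D hello-2.8                       # later: drop every one of its symlinks in one sweep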

    • by l2718 ( 514756 )
      /usr/local/bin is not supposed to contain any system executables, only locally installed ones. Fedora already does not install anything there. In any case, /usr/local/ seems to have been replaced by /opt/ in most UNIX installations today.
      • In case you hadn't noticed, the organizational models for /usr/local and /opt are different in a manner similar to the difference between column-major and row-major arrays.

        For example, /usr/local/bin contains executables for all local applications. This is convenient in terms of configuring such things as PATH and LD_LIBRARY_PATH environment variables but can easily lead to application name and version collisions. The /opt convention is for each application to reside in its own subdirectory, meaning th
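
        In PATH terms the difference looks roughly like this (the application names are examples):

            # /usr/local convention: one entry covers every locally installed program
            export PATH=/usr/local/bin:$PATH
            # /opt convention: one entry (or a wrapper symlink) per application
            export PATH=/opt/tomcat/bin:/opt/acroread/bin:$PATH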
  • Simple (Score:4, Funny)

    by wsxyz ( 543068 ) on Wednesday November 02, 2011 @07:05PM (#37927268)
    Just store everything in /
    What could be simpler?
  • Fedora, eh? (Score:5, Insightful)

    by turgid ( 580780 ) on Wednesday November 02, 2011 @07:06PM (#37927286) Journal

    The developers at Fedora can do whatever the heck they like. Pat knows what he's doing, and that's good enough for me.

  • I was a long time user of Slackware before switching to FreeBSD about three years back. One of the reasons why I used Slack for ....mmm.. 15 years.. was that Slackware put most things where they should be unlike what I saw happening with the various RedHats, Debians, etc. These guys would do better to clean up their own act before trying to hose everyone else because they think putting everything in the kitchen sink is "simpler".

  • by l2718 ( 514756 ) on Wednesday November 02, 2011 @07:14PM (#37927402)
    I think it's important to realize why the four directories /bin, /sbin, /usr/bin, /usr/sbin exist (and similarly why /lib is separate from /usr/lib). The reason is that once upon a time disks were small, so /usr would be mounted separately from the root partition. So /bin and /lib are small directories containing as much of the operating system as you need to get going before you mount /usr and get everything else. In particular, this means the utilities needed to mount those other filesystems and to fix errors in them (e.g. fsck). The separation between /usr/bin and /usr/sbin means that ordinary users don't have system programs (those from /sbin) in their search path.

    Today most installations have the whole system (/ and /usr) on the same partition and it seems that many users use a GUI rather than a terminal. This means that the separation is not needed. Note that this change is not about multiple-architecture situations like /usr/lib and /usr/lib64. It's about the separation between /lib and /usr/lib (or /lib64 and /usr/lib64).
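
    A concrete way to picture it (device names are examples): with only the root filesystem mounted, the tools on / are already enough to repair and bring in the rest,

        /sbin/fsck -p /dev/sda3      # check the filesystem that holds /usr
        /bin/mount /dev/sda3 /usr    # then mount it

    while a regular user's $PATH typically holds /usr/local/bin:/usr/bin:/bin but not the sbin directories, which is the other half of the split.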
    • Having the separate directories still comes in handy when rescuing systems. Something Fedora is missing here is that it's not the separate directories that confuse people, it's the abbreviations.

      In all honesty, users this new don't need to be messing around with the system at such a low level; Windows actively discourages it as well. The problem is that there are still not enough mature applications available for new users. They are not always satisfied with what comes prepackaged, so they venture out on

        I don't think it matters with separate directories. On a rescue disk you'd only have the essentials to get going anyway, right? But I agree that the abbreviations can be confusing. I'd love to see a simpler installation like OS X (NeXT) app bundles become the default. Since it's all mostly open source, or some commercial bits relying on open source frameworks (Gtk+, KDE/Qt), these app bundles would rarely need to bundle any dynamic libraries.

        I doubt a mess of symlinks is avoidable for now, though. Linux has

  • most passionate partisans will admit that its filesystem, which stashes vital files in a variety of arcane directories, can be baffling to users.

    Isn't that [directories] what filesystems are supposed to provide, so things can be well organized?

    Calling them, the current UNIX/Linux filesystem hierarchy, "arcane" baffles me. Unless you're Poettering, of course. There is a good reason for things to be where they are, and, due to the recent increase in embedded systems, a much more valid reason to split different levels of file

  • by bsDaemon ( 87307 ) on Wednesday November 02, 2011 @07:15PM (#37927420)

    the file system hierarchy makes perfectly good sense -- the absolute basics are in /bin, distribution/system stuff is in /usr/bin and anything that an administrator installs for that particular box is in /usr/local/bin. Substitute sbin for sysadmin-y binaries. I guess maybe it doesn't matter as long as it doesn't take off, since I can just not use Fedora ever again, but frankly I like things just the way they are. The weird places that Ubuntu stashes things are already enough of a hassle when you have an extremely heterogeneous environment like I do at work.

  • You can't take a hard drive built with LVM and migrate it to another machine that has a volume group with the same name -- i.e. a common operation after a hardware upgrade. I suppose it is possible to rename the volume group, but this is fairly arcane, and why should I have to do this? ext4 offers no such impediment.
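
    For the record, the rename goes something like this (names are examples; the UUID comes from vgs):

        vgs -o vg_name,vg_uuid              # both VGs show up; note the UUID of the moved one
        vgrename <that-UUID> vg_olddisk     # give the imported VG a non-clashing name
        vgchange -ay vg_olddisk             # activate its logical volumes

    ...which rather proves the "fairly arcane" point.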

  • Which users should care about where binaries go? Really, which kind of users are baffled to find several binary directories?

    I bet that most that do understand simple concepts such as $PATH and which, and are probably able to deal with there being multiple directories.

    I can see how this could be beneficial for installers and could help package maintainers that port from one distribution to another. Maybe Linux Standard Base already addresses this and this is only a moot point. This is only good if everybody does

  • Rather than move things around, how about using a union filesystem to present an optional different view of things? Heck, you could use chroot and a union fs to create a completely different singular view of the same file system for different users.

  • even simpler (Score:3, Interesting)

    by FudRucker ( 866063 ) on Wednesday November 02, 2011 @07:22PM (#37927520)
    forget /usr, just do /bin /dev /etc /home /lib /media /proc /root /sbin /sys /tmp /var
  • Putting everything under C:\Windows and C:\Windows\system turned out to be the way to go.

  • by Sipper ( 462582 ) on Wednesday November 02, 2011 @07:34PM (#37927668)

    Although this proposal sounds reasonable at first, actually implementing it is troublesome. Linux systems have an expectation that the root directory / and the /usr directory may be on different filesystems; thus /bin is expected to come with / and be available at boot time, where /usr may not be. This means that making /bin -> /usr/bin via a softlink would break that.

    Although the article summary claims that the Filesystem Hierarchy Standard (FHS) isn't used very often, some distributions such as Debian actually do try their best to follow it and even have it as one of the specs for how to build packages. Debian developers discussed the idea of trying to follow Fedora in this proposal, but it looks like it's too troublesome to be worth it. For one thing, all of the filesystem recovery tools or anything else that would be required in an emergency at the command line would need to be built into the kernel initrd images, which could be done but which doesn't seem terribly reasonable.

    As such I think most Linux distributions are going to need to wait and see how well it works out for Fedora on this effort.
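
    For reference, the end state Fedora is aiming for boils down to a handful of symlinks (a sketch of the layout, not of the migration tooling):

        /bin   -> usr/bin
        /sbin  -> usr/sbin
        /lib   -> usr/lib
        /lib64 -> usr/lib64

    which is exactly why a root filesystem without /usr mounted can no longer do much on its own, and why the recovery tools end up having to live in the initramfs.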

  • GOBO Linux (Score:5, Informative)

    by pentalive ( 449155 ) on Wednesday November 02, 2011 @07:39PM (#37927722) Journal
    http://www.gobolinux.org/ [gobolinux.org] Combines executables and necessary libraries each in their own directory under '/Programs'. Uses links to show files in traditional places.
  • Why is everyone talking about compression? Seriously, that isn't the topic or something most Linux users really give a damn about.

    Now, reorganizing the system so the files are arranged in a more logical fashion? That is great and something I have been waiting 15 years for! I understand the desire to maintain a degree of similarity and do things the way they have always been done, but the directory structure on Linux is ridiculous for most users. This is one reason I would prefer a different OS version for deskto

  • by hey ( 83763 ) on Wednesday November 02, 2011 @07:43PM (#37927774) Journal

    This makes sense to me. All those extra directories had historical reasons but time marches on.

  • by peppepz ( 1311345 ) on Wednesday November 02, 2011 @07:44PM (#37927786)
    There is nothing esoteric or baffling about the filesystem of Linux.

    It is well-specified. There's a folder for executables, a folder for libraries, a folder for configuration data, a folder for temporary stuff. And its layout hasn't changed for 20 years.
    Compare it to Windows, where the file system layout changes from one Windows version to the other, there are no documents specifying most of its organization, and it doesn't matter anyway, because since Windows NT the file system is meant to be only managed by automated installation tools, and even an expert user can not hope to fix it when things go wrong.

    What's wrong with /bin and /lib? They serve a specific purpose, and the files they contain shouldn't be directly handled by a user who gets confused because of the presence of more than a single directory in his $PATH, so who will gain from their "simplification"? Don't tell me the real reason is that Fedora's next-generation self-aware omniscient init system has grown so complex that they're no longer able to support a split /usr installation because of its dependency hell.
    Please do not turn Linux into an unmanageable mess as the one Windows has become.
    End of rant

    • by grumbel ( 592662 )

      What's wrong with /bin and /lib?

      Among other things: They force you to splatter files from different pieces of software into the same directory, making it impossible to install different versions of the same application.

      • Among other things: They force you to splatter files from different pieces of software into the same directory, making it impossible to install different versions of the same application.

        /lib, /bin, /usr/lib, /usr/bin, /sbin, /usr/sbin, /usr/local/bin, /usr/local/lib have their place. You are not "forced" to splatter files from different pieces of software into the same directory, but it makes sense to have these in a standard location if they are meant to be shared with other programs. Don't forget you can
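
        One common escape hatch is to give each version its own prefix and switch with a symlink (names are illustrative):

            /opt/foo-1.0/bin/foo
            /opt/foo-2.0/bin/foo
            ln -sfn /opt/foo-2.0 /opt/foo    # "current" points at whichever version you want
            export PATH=/opt/foo/bin:$PATH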

    • Indeed, a while back I had a Vista install go tits up because I had added new partitions and had accidentally run bootrec forgetting about what that would do to the partition labeling. And MS doesn't provide proper tools to handle a situation like that without loading up a recovery disc. By the time I had that mostly sorted out, the profiles wouldn't load.

      Right, now Linux is largely impervious to that sort of thing, so obviously, the thing to do is find ways of making it pervious to that stuff.

    • There is nothing esoteric or baffling about the filesystem of Linux.
      It is well-specified. There's a folder for executables, a folder for libraries, a folder for configuration data, a folder for temporary stuff. And its layout hasn't changed for 20 years.

      Right, because computers haven't changed much in the last twenty years probably.

      WHY do you cut every single program into little pieces and box them together?

      Look at the RHEL Tomcat RPM, compare to traditional /opt/tomcat or /usr/local/tomcat or even /tomcat

      WHY does anyone think cutting that or any application into ten different directories, mixing its private bits with private bits of completely unrelated programs is a good idea?

      I can think of reasons to seperate user configuration and data from non-user th

  • by spottedkangaroo ( 451692 ) * on Wednesday November 02, 2011 @08:08PM (#37928030) Homepage
    When linux gets so simple my mom can sysadmin it with no worries, but I can't get anything done... wtf do I switch to? Corollary: why do we keep fixing things in linux kerneled operating systems that aren't even broken. Just leave the file system alone or you'll end up with Ext5: Unity Disaster.
  • by Tom ( 822 ) on Wednesday November 02, 2011 @08:22PM (#37928202) Homepage Journal

    Is it nonsense-week on /. or what?

    "Even Linux's most passionate partisans will admit that its filesystem, which stashes vital files in a variety of arcane directories, can be baffling to users.

    Errr... no?

    First up, you rarely need to dig around in those directories, especially not during daily use. A systems administrator may need to venture into /var/lib/whatever occasionally, but a user? Another non-problem created by people confusing their audiences. A system administrator, who needs to know these things, shouldn't be baffled by it, or whatever you're paying him, it's too much. A user, who would be baffled, shouldn't need to worry about it.

    There's method to the madness - the mentioned filesystem standard, for example. If you don't grok the method, don't try to "simplify" it. Simplification only works if you understand the thing you're simplifying really, really well. Otherwise, you're just throwing stuff out, and then saying "oh, crap" when you realize much later that you shouldn't have.

    • by Shimbo ( 100005 )

      Is it nonsense-week on /. or what?

      Well, no. It appears to be bad summary week. And people replying without RTFA week. Actually, skip the article even and read Lennart Poettering's post on fedora.devel. [gmane.org]
