Fedora Aims To Simplify Linux Filesystem 803

jfruhlinger writes "Even Linux's most passionate partisans will admit that its filesystem, which stashes vital files in a variety of arcane directories, can be baffling to users. The developers at the Fedora project want to cut the Gordian knot and consolidate all executables into /usr/bin and all libraries into /usr/lib or /usr/lib64. One downside: this system would conflict with the standards developed by the Linux Standard Base, or the (rarely used) Filesystem Hierarchy Standard."
  • by SharkLaser ( 2495316 ) on Wednesday November 02, 2011 @07:01PM (#37927196) Journal
    The one thing that baffles my mind is that Linux filesystems still don't offer compression of specific folders or files. Seriously, Windows has had this for over a decade. There are a few experimental filesystems that can compress the whole partition, but still not individual files. Why doesn't Linux have such a simple but important filesystem feature? And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.
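    What "access while compressed" can look like at the application level (not the transparent, per-file filesystem feature being asked for here) is sketched below in Python; the file name and contents are made up for illustration, and the standard gzip module reads and appends text without an uncompressed copy ever landing on disk.

        # Minimal sketch: working with a gzip-compressed text file without
        # storing an uncompressed copy on disk. This is application-level
        # compression, not the filesystem feature requested above; the path
        # and contents are hypothetical.
        import gzip

        path = "notes.txt.gz"  # hypothetical file

        # Write some text, compressed on its way to disk.
        with gzip.open(path, "wt", encoding="utf-8") as f:
            f.write("first line\n")

        # Append later; the file stays compressed the whole time.
        with gzip.open(path, "at", encoding="utf-8") as f:
            f.write("second line\n")

        # Read it back, decompressing only in memory.
        with gzip.open(path, "rt", encoding="utf-8") as f:
            print(f.read())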
  • Fedora, eh? (Score:5, Insightful)

    by turgid ( 580780 ) on Wednesday November 02, 2011 @07:06PM (#37927286) Journal

    The developers at Fedora can do whatever the heck they like. Pat knows what he's doing, and that's good enough for me.

  • by Anonymous Coward on Wednesday November 02, 2011 @07:11PM (#37927354)

    And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.

    The way I understand it, there's really no good generic way to handle file compression at the FS level.

    Even the way NTFS does it is to create a compressed file to hold the contents of the original file, like an archive. But if you'll notice, whenever you open the compressed file, NTFS will expand the whole compressed data into another special file until you close it. Watch the disk space usage change and you'll see. You can easily set up a situation where you do not have enough free disk space to open a compressed file, and that is not intuitive for a user.

    What you want is an ability to compress small individual blocks of the file that can be accessed separately without having to decompress the entire file. But doing that creates all sorts of other problems such as how to efficiently allocate the space for the compressed contents which might change over time (and change size) without causing a great deal of additional fragmentation. This is rather harder to do.
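    The block-wise scheme described here can be sketched in a few lines of Python: compress fixed-size blocks independently and keep an (offset, length) index so one block can be read back without touching the rest. The block size, the in-memory index and the sample data are all illustrative assumptions; a real filesystem would also have to cope with rewrites that change a block's compressed size.

        # Toy sketch of block-level compression: each 64 KiB logical block is
        # compressed on its own and located through a small index, so a single
        # block can be decompressed without reading the whole file.
        import zlib

        BLOCK_SIZE = 64 * 1024

        def compress_blocks(data: bytes):
            """Return (blob, index) where index[i] = (offset, length) of block i."""
            blob, index = bytearray(), []
            for start in range(0, len(data), BLOCK_SIZE):
                comp = zlib.compress(data[start:start + BLOCK_SIZE])
                index.append((len(blob), len(comp)))
                blob += comp
            return bytes(blob), index

        def read_block(blob: bytes, index, i: int) -> bytes:
            """Decompress only block i."""
            offset, length = index[i]
            return zlib.decompress(blob[offset:offset + length])

        data = b"some fairly repetitive text\n" * 10000
        blob, index = compress_blocks(data)
        assert read_block(blob, index, 1) == data[BLOCK_SIZE:2 * BLOCK_SIZE]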

  • by antifoidulus ( 807088 ) on Wednesday November 02, 2011 @07:23PM (#37927524) Homepage Journal
    Actually there are valid reasons to compress at least some of your files beyond the need for saving space, namely that the speed differential between I/O (esp. platter drives) and CPUs is continuing to grow incredibly fast. The performance gap is getting to the point that for files that tend to compress well (executables and libraries are among them), the time it takes to read the compressed file off the disk and then decompress it in memory may be less than the time it takes to read the entire uncompressed file from disk.

    Now there are tools that allow you to do this just for executables, but since they don't run on all platforms you can be in kind of a bind. By putting your executables and libraries in their own compressed file system, you can gain a lot of the advantages of executable compression while still being able to use it on pretty much any platform.
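    The speed claim above is easy to try, at least crudely. The Python sketch below writes a compressible file twice, raw and zlib-compressed, then times "read raw" against "read compressed plus decompress in memory". The file names and data are made up, and on a warm page cache the difference largely disappears, so treat it as an illustration of the trade-off rather than a benchmark.

        # Rough comparison: read raw bytes vs. read compressed bytes and
        # decompress in memory. Synthetic, highly compressible data.
        import os, time, zlib

        raw = b"int main(void) { return 0; }\n" * 200000   # ~5.8 MB
        with open("raw.bin", "wb") as f:
            f.write(raw)
        with open("comp.bin", "wb") as f:
            f.write(zlib.compress(raw))

        def timed_read(path, decompress=False):
            start = time.perf_counter()
            with open(path, "rb") as f:
                data = f.read()
            if decompress:
                data = zlib.decompress(data)
            return time.perf_counter() - start

        print("raw read:          %.4f s" % timed_read("raw.bin"))
        print("read + decompress: %.4f s" % timed_read("comp.bin", decompress=True))

        os.remove("raw.bin")
        os.remove("comp.bin")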
  • by skids ( 119237 ) on Wednesday November 02, 2011 @07:28PM (#37927586) Homepage

    Add to that, in network attached storage solutions, every file you read is squeezed through something as small as a 1 Gbps pipe.

  • by SuperQ ( 431 ) * on Wednesday November 02, 2011 @07:32PM (#37927640) Homepage

    Sounds like you have a problem with file formats, not filesystems. Filesystem-level compression is a stupid idea since it doesn't have any way to apply appropriate compression methods to the files. Should I apply zlib to uncompressed audio? No, use FLAC. Should I apply zlib to log files? No, I should probably use something like LZO or Snappy that has block seeking.
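    The point about generic compressors is easy to demonstrate: zlib shrinks log-like text dramatically but does essentially nothing for data that is already compressed. In the Python sketch below, random bytes stand in for FLAC/JPEG-style content, an assumption made purely for illustration.

        # zlib helps a lot on text and not at all on already-compressed data.
        import os, zlib

        text = b"Nov  2 19:32:01 host sshd[1234]: Accepted publickey for user\n" * 5000
        already_compressed = os.urandom(len(text))  # stand-in for FLAC/JPEG content

        for name, data in [("log text", text), ("compressed media", already_compressed)]:
            ratio = len(zlib.compress(data)) / len(data)
            print("%s: %.1f%% of original size" % (name, 100 * ratio))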

  • by Sipper ( 462582 ) on Wednesday November 02, 2011 @07:34PM (#37927668)

    Although this proposal sounds reasonable at first, actually implementing it is troublesome. Linux systems have an expectation that the root directory / and the /usr directory may be on different filesystems; thus /bin is expected to come with / and be available at boot time, whereas /usr may not be. This means that making /bin a symlink to /usr/bin would break that.

    Although the article summary claims that the Filesystem Hierarchy Standard (FHS) isn't used very often, some distributions such as Debian actually do try their best to follow it and even have it as one of the specs for how to build packages. Debian developers discussed the idea of trying to follow Fedora in this proposal, but it looks like it's too troublesome to be worth it. For one thing, all of the filesystem recovery tools, or anything else that would be required in an emergency at the command line, would need to be built into the kernel initrd images, which could be done but which doesn't seem terribly reasonable.

    As such I think most Linux distributions are going to need to wait and see how well it works out for Fedora on this effort.
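    Whether the separate-/usr concern applies to a given machine is easy to check. The small Python sketch below compares the device IDs of / and /usr (they differ exactly when the two are separate mounts) and reports whether /bin is already a symlink; it only inspects the system, nothing more.

        # Is /usr a separate filesystem from /, and is /bin already a symlink?
        import os

        same_fs = os.stat("/").st_dev == os.stat("/usr").st_dev
        print("/ and /usr on the same filesystem:", same_fs)
        print("/bin is a symlink:", os.path.islink("/bin"),
              "->", os.path.realpath("/bin"))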

  • by peppepz ( 1311345 ) on Wednesday November 02, 2011 @07:44PM (#37927786)
    There is nothing esoteric or baffling about the filesystem of Linux.

    It is well-specified. There's a folder for executables, a folder for libraries, a folder for configuration data, a folder for temporary stuff. And its layout hasn't changed for 20 years.
    Compare it to Windows, where the file system layout changes from one Windows version to the next, there are no documents specifying most of its organization, and it doesn't matter anyway, because since Windows NT the file system has been meant to be managed only by automated installation tools, and even an expert user cannot hope to fix it when things go wrong.

    What's wrong with /bin and /lib? They serve a specific purpose, and the files they contain shouldn't be directly handled by a user who gets confused by the presence of more than a single directory in his $PATH, so who will gain from their "simplification"? Don't tell me the real reason is that Fedora's next-generation self-aware omniscient init system has grown so complex that they're no longer able to support a split /usr installation because of its dependency hell.
    Please do not turn Linux into an unmanageable mess as the one Windows has become.
    End of rant

  • by PrimaryConsult ( 1546585 ) on Wednesday November 02, 2011 @07:52PM (#37927850)

    NTFS has actually been updated pretty consistently (the last update being NTFS 3.1, which shipped with Windows XP). Since then the Windows OS on top of it has evolved to actually take advantage of the features of 3.1.

    What you said is akin to, "You mean, since the last time the default *nix file system was updated? I can't believe they're still using the aging ext filesystem, with all of its '90s features like 'symbolic links'."

  • by jiteo ( 964572 ) on Wednesday November 02, 2011 @08:06PM (#37928010)

    This, bizarrely, reminds me of an article on feminism, and how, having won the major gender inequality battles, the movement started picking silly fights where a rational person would see no fight to pick.

    I think something similar is happening here, and with Ubuntu's Unity and maybe GNOME 3 as well. Having achieved a big chunk of what they set out to achieve, they start looking for the next big goal, but there's no next big goal to be found, so they just invent something utterly ridiculous. The file system is fine, just leave it be. All the newbies need to know about is their home directory, and those who need to know about the entire file system are perfectly fine with how it's currently laid out.

  • by hmbJeff ( 591813 ) on Wednesday November 02, 2011 @08:52PM (#37928570)

    As I read the comments for this topic, what stands out is that there are a bunch of arbitrary and conflicting interpretations of the "right" place to put things on a Unix/Linux system, each of which is justified by some sacred "historical reasoning" (even though no one can ever agree on said reasoning).

    And that is the problem. Why should I have to know which arbitrary approach happened to be followed by a particular distro or installed package?

    I don't have touchy geek pride or a need to whip out my big nerd phallus at parties--I just want the systems I manage or use to be reasonably robust and consistent. I don't care who wins--I just don't want all the variations.

  • by SharkLaser ( 2495316 ) on Wednesday November 02, 2011 @09:09PM (#37928724) Journal
    Text files. They don't take 10TB yet; they take 1TB, but it's growing. So the Slashdot Linux users' suggestion is to go buy ten new 1TB drives instead of just compressing the data and keeping on using the same 10TB of storage?

    This is the reason why most sane people avoid OSS. No one listens to feature requests; they only tell you "you don't want to do that anyway, do this instead", while I know perfectly well that compressing them is exactly what I want to do.
  • by donaldm ( 919619 ) on Wednesday November 02, 2011 @11:52PM (#37930022)

    The one thing that baffles my mind is that Linux filesystems still don't offer compression of specific folders or files.

    If you need to do this, why not try a compressed archive: a gzip'd or bzip2'd tar, rar or zip file? You can even use a graphical file manager to create and manage your archives (yes, you can have more than one). By manage I mean you can easily (i.e. point and click) display, extract and even insert specific files.

    Windows has had this for over a decade.

    Well, you have been able to do this in Unix for over 20 years.

    Why doesn't Linux have such a simple but important filesystem feature? And no, I don't want to make an archive file, because I want to access those files and folders while they are compressed.

    Linux does, I think I already explained this above.
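    As a concrete form of the archive approach donaldm describes, Python's standard tarfile module can list a compressed tar and pull out a single member without unpacking the rest; the archive and member names below are made up for illustration.

        # Read one file out of a compressed archive without extracting it all.
        import tarfile

        with tarfile.open("notes.tar.gz", "r:gz") as tar:   # hypothetical archive
            print(tar.getnames())                            # list the contents
            member = tar.extractfile("notes/todo.txt")       # hypothetical member
            if member is not None:
                print(member.read().decode("utf-8"))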

  • by Kjella ( 173770 ) on Thursday November 03, 2011 @06:01AM (#37931856) Homepage

    So are you Wikipedia, the Gutenberg project or SourceForge? You're certainly not going to read 1TB of text unless you plan to live the next billion years or so; you must be some kind of specialty site. The point is that most people use formats appropriately compressed for their purpose. PNG beats BMP-in-a-ZIP. FLAC beats WAV-in-a-ZIP. Lossy formats like JPG, MP3 and H.264 video are already well compressed.

    Sure, compression is the way to go. But is it that vital that it's in the file system instead of working with zip files? Maybe to you. But I think you know you are an extreme minority on this one. Most people are happy having zip folders and a search engine that reads inside zip files and I know Linux has both. Or actually most people have no problem storing their text uncompressed at all because it takes up <1% of their drive.

    I guess none of the people you talked to felt this was a problem worth solving. To me it would be a bit like learning Linux has issues with >1 TB RAM or >1000 cores; even if that were so, I wouldn't exactly feel it is, or would ever become, a problem for my desktop. So yeah, it probably would get returned with "Well, if it's a problem for you feel free to do something about it, but I don't think anyone here will work on it."

    Or something less polite, depending on who you ran into and how you formulated your feature request.
