
The Linux Filesystem Challenge 654

Joe Barr writes "Mark Stone has thrown down the gauntlet for Linux filesystem developers in his thoughtful essay on Linux.com. The basic premise is that Linux must find a next-generation filesystem to keep pace with Microsoft and Apple, both of whom are promising new filesystems in a year or two. Never mind that Microsoft has been promising its "innovative" native database/filesystem (copying an idea from IBM's hugely successful OS/400) for more than ten years now. Anybody remember Cairo?"
This discussion has been archived. No new comments can be posted.

  • New FS (Score:5, Interesting)

    by stecoop ( 759508 ) on Wednesday July 28, 2004 @03:16PM (#9823969) Journal
    Linux must find a next-generation filesystem to keep pace

    What are the winds of change saying? R..E..I..S..E..R...4... [namesys.com]
    • Re:New FS (Score:5, Informative)

      by AstroDrabb ( 534369 ) on Wednesday July 28, 2004 @03:43PM (#9824325)
      Reiser4 is going to be great. Here are some of the features for those who don't like to click-n-read:

      * Reiser4 is the fastest filesystem, and here are the benchmarks [namesys.com].
      * Reiser4 is an atomic filesystem, which means that your filesystem operations either entirely occur or entirely don't, and they don't corrupt due to half-occurring. We do this without significant performance losses, because we invented algorithms to do it without copying the data twice.
      * Reiser4 uses dancing trees, which obsolete the balanced-tree algorithms used in databases (see farther down). This makes Reiser4 more space-efficient than other filesystems, because we squish small files together rather than wasting space on block alignment like they do. It also means that Reiser4 scales better than any other filesystem. Do you want a million files in a directory, and want to create them fast? No problem.
      * Reiser4 is based on plugins, which means that it will attract many outside contributors, and you'll be able to upgrade to their innovations without reformatting your disk. If you like to code, you'll really like plugins....
      * Reiser4 is architected for military-grade security (sponsored by DARPA). You'll find it is easy to audit the code, and that assertions guard the entrance to every function.

      Notice the plugin feature. This will create endless possibilities for what you can do with the file system. Want to tie a DB/SQL search function into it? Write a plugin. Want special security? Write a plugin. Tons of possibilities with ReiserFS4, and it is _very_ fast. This is hands down better than the MS "filesystem as a DB" approach. ReiserFS4 will be like Firebird: lean-n-mean-n-fast. Want more features? Grab _your_ favorite plugins!
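Reiser4 enforces its all-or-nothing guarantee inside the filesystem itself; the closest portable userspace analogue is the write-temp-then-rename idiom, which gives atomic replacement of a single file. A rough Python sketch of that idiom (illustrative only, not Reiser4's actual mechanism):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace the contents of `path` so that readers see either the old
    or the new contents, never a partial write (all-or-nothing at a
    single-file granularity)."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on the same fs
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data to disk before the rename
        os.replace(tmp, path)     # rename(2) within one fs is atomic
    except BaseException:
        os.unlink(tmp)
        raise
```

Because the final rename(2) happens within one filesystem, a crash leaves either the old file or the new one, never a half-written mix.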
      • not so fast ... (Score:3, Insightful)

        by vlad_petric ( 94134 )
        While I agree with the atomicity part, it's all great provided that the code is bug-free. IIRC reiserfs bugfixes were quite frequent in the kernel changelog a couple of years ago.

        Filesystems are so crucial to OS stability that I'd say it's worth formally verifying them to a certain extent (i.e. prove that the algorithms/code work, instead of just observing that they work in normal conditions).

        P.S. The whole thing - filesystem as a DB - is complete crap. You can't do a bunch of fs operations in a single trans

        • Re:not so fast ... (Score:3, Interesting)

          by Anonymous Coward

          P.S. The whole thing - filesystem as a DB - is complete crap. You can't do a bunch of fs operations in a single transaction and have ACID semantics on the transaction as a whole. Sure - searching is great. But database means much more than just a searching interface.

          The real killer is stored procedures. It'll be a cold day in hell before those are allowed into a kernel.

          And how do you email files with attributes or other metadata? They're not part of the regular file data, so all the usual email client

          • by cortana ( 588495 ) <sam@robots[ ]g.uk ['.or' in gap]> on Wednesday July 28, 2004 @04:24PM (#9824836) Homepage
            What's that? The ghost of Andrew Tanenbaum... mouthing the words "Microkernel, microkernel" over and over again!
            • Re:not so fast ... (Score:3, Interesting)

              by Sloppy ( 14984 ) *
              Ghost?! He's not even dead yet!

              I do think this is really funny, though. The more functionality people want to cram into the FS, the more they're going to look back at that famous Usenet thread, and reconsider... ;-)

              • Ghost?! He's not even dead yet!

                MORTICIAN: Here -- he says he's not dead!

                CUSTOMER: Yes, he is.

                DEAD PERSON: I'm not!

                MORTICIAN: He isn't.

                CUSTOMER: Well, he will be soon, he's very ill.

                DEAD PERSON: I'm getting better!

                CUSTOMER: No, you're not -- you'll be stone dead in a moment.

                MORTICIAN: Oh, I can't take him like that -- it's against regulations.

                DEAD PERSON: I don't want to go in the cart!

                CUSTOMER: Oh, don't be such a baby.

                MORTICIAN: I can't take him...

                DEAD P

          • Re:not so fast ... (Score:4, Insightful)

            by jbolden ( 176878 ) on Wednesday July 28, 2004 @05:15PM (#9825482) Homepage
            The backwards compatibility problems are insurmountable

            They aren't a problem at all. Every email system can identify file formats it doesn't know how to deal with. Most can get external plugins. The file + attributes can be seen as just a type of file (like say .att). So if you support the .att format you would see a doc plus an icon plus history plus... otherwise you just see a .att file that needs some external app to understand.
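The hypothetical ".att" container described above could be as simple as a tarball bundling the payload with a metadata sidecar; a mail client without the plugin just sees one opaque file. A sketch in Python (the format name and fields are invented for illustration):

```python
import io
import json
import tarfile

def pack_att(payload: bytes, metadata: dict) -> bytes:
    """Bundle a file and its attributes into one ".att"-style blob."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in (("metadata.json", json.dumps(metadata).encode()),
                           ("payload", payload)):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unpack_att(blob: bytes):
    """Recover (payload, metadata); an unaware app just sees a tarball."""
    with tarfile.open(fileobj=io.BytesIO(blob)) as tar:
        meta = json.loads(tar.extractfile("metadata.json").read().decode())
        payload = tar.extractfile("payload").read()
    return payload, meta
```

An app that supports the format unpacks both parts; anything else can still pass the blob around intact, which is the backwards-compatibility argument being made.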
        • Re:not so fast ... (Score:5, Insightful)

          by AstroDrabb ( 534369 ) on Wednesday July 28, 2004 @04:38PM (#9825018)
          ReiserFS 3 had bugs in its early versions, just as all software will. That is why ReiserFS was not used for production systems for a while. It will probably be the same with ReiserFS 4. I will use it at home when it first comes out, but not where I don't want to chance data corruption.
        • Re:not so fast ... (Score:5, Informative)

          by poelzi ( 177615 ) <`gro.izleop' `ta' `gro.todhsals'> on Wednesday July 28, 2004 @05:20PM (#9825527) Homepage
          P.S. The whole thing - filesystem as a DB - is complete crap. You can't do a bunch of fs operations in a single transaction and have ACID semantics on the transaction as a whole. Sure - searching is great. But database means much more than just a searching interface.


          Sorry, but you are wrong here. Reiser4 is atomic, and you can pack as many operations into one transaction as you like; you just have to use the reiser4 system call, because there is no standard system call for atomic filesystem transactions. Modern filesystems are databases built to store files and query them through filenames; reiser4 is the first filesystem where the search path can be extended through plugins, so you can index everything you want.
      • Re:New FS (Score:5, Interesting)

        by prisoner-of-enigma ( 535770 ) on Wednesday July 28, 2004 @04:11PM (#9824658) Homepage
        I've been shouted down before about this, but I'm going to keep asking for it because it's a useful feature for my company: what about per-file compression in the file system? Now before anyone has a hissy fit, let me explain.

        We output a lot of digitally-created video files that are huge (think HDTV resolution). Most of these files are output uncompressed because either (a) the file format doesn't support compression or (b) the multimedia program doesn't support compression. Either way, a few minutes of HDTV-quality uncompressed video will absolutely destroy a few hundred gigabytes of space in no time.

        We have to hold on to some of this video for quite some time, but we only need to get at it infrequently. It's too big to fit on DVD-Rs, tape is too slow, ZIPping it up hinders easy access later, and removable hard drives are expensive. Filesystem compression, on the other hand, does wonders. We routinely get 60%-80% compression on archived video files, and it has allowed us to stretch our disk capacity a long, long way.

        We've considered archiving our video in some kind of compressed streaming format like AVI, Quicktime, or MPEG-2, but none of these offer lossless codecs that are appropriate for us, and we're unwilling to accept using a lossy compressor.

        So, I ask the question again: when, if ever, is anyone going to implement file compression on a Linux file system? Or does it already exist but is buried somewhere in some arcane HOWTO or website?
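Pending in-filesystem support, the kind of transparent per-file compression requested here can be approximated in userspace with gzip. A minimal sketch (the synthetic test data below compresses far better than real video; the 60%-80% figure above depends entirely on content):

```python
import gzip
import os

def compressed_save(path, data, level=6):
    """Store `data` gzip-compressed at `path`.
    Returns (raw_size, stored_size) so the savings are visible."""
    with gzip.open(path, "wb", compresslevel=level) as f:
        f.write(data)
    return len(data), os.path.getsize(path)

def compressed_load(path):
    """Read the file back, decompressing transparently."""
    with gzip.open(path, "rb") as f:
        return f.read()
```

The point of doing this in the filesystem instead is that applications that can't be taught about compression (the poster's case) would get it for free.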
        • Re:New FS (Score:3, Informative)

          Quick googling got these. They may provide you with some leads. If you find more, please post :)

          http://sourceforge.net/projects/e2compr/ (Ext2 Compression)

          http://squashfs.sourceforge.net/ (Squashed - Read Only, don't know what that means)

          S
        • by hansreiser ( 6963 ) on Wednesday July 28, 2004 @05:44PM (#9825758) Homepage
          Reiser4 has a compression plugin coming. We got gzip to work, but it consumes too much CPU, so now we are doing LZO, which can compress at disk-drive speed. The LZO plugin has a bug; maybe next week....

          Hans

          (You can email edward@namesys.com for details).
    • Re:New FS (Score:3, Interesting)

      by vandan ( 151516 )
      Damned right.

      With Reiser3, doing `emerge -up --deep world` on my Gentoo box would usually take about 10 seconds after the progress spinner had started.

      Now with Reiser4, it takes about 2 seconds after the progress spinner starts.

      The speed really is absolutely amazing.

      And from what I've read of Reiser4, it has all the database niceties for managing files and contents of files that WinFS is promising. Of course, Reiser4 currently exists and is working on my home gaming machine and 4 machines here at wor
  • by alanw ( 1822 ) * <alan@wylie.me.uk> on Wednesday July 28, 2004 @03:16PM (#9823980) Homepage
    Hans Reiser has written a white paper [namesys.com] containing his thoughts on the design of the next major version of ReiserFS.
    • by minginqunt ( 225413 ) on Wednesday July 28, 2004 @03:26PM (#9824114) Homepage Journal
      In addition to Reiser4, there are a whole host of projects that aim to provide all or part of what BFS achieved and what Spotlight (Mac OS X Tiger) and WinFS will achieve.

      This includes Beagle/Dashboard:

      http://www.nat.org/dashboard
      http://www.gnome.org/projects/beagle/

      And of course, the ambitious Gnome Storage project, being pushed by Seth Nickell. He recently wrote a paper comparing all the technologies, found here:

      http://www.gnome.org/~seth/blog/document-indexing
      • by MouseR ( 3264 ) on Wednesday July 28, 2004 @03:49PM (#9824391) Homepage
        Apple's Spotlight isn't a file system. It's a search engine that uses and maintains metadata stored in the file system.

        HFS+ is the current OS X file system, and that of Tiger (the next revision of OS X) as well. Spotlight uses HFS+'s built-in metadata support to enhance its search capabilities. What Tiger offers more to application developers is an API to add metadata to documents, something that was limited until now.
        • What Tiger offers more to application developers is an API to add metadata to documents, something that was limited until now

          What Tiger offers is a way for application developers to DECLARE metadata in their document formats... most formats have metadata of some kind already (in an MP3, ID3 tags; in an image, resolution, etc.; in a source file, dependencies and exported symbols); what Tiger lets application developers do is tell Spotlight how to find the information that's already there.

          Now, this may lead
    • by ninejaguar ( 517729 ) on Wednesday July 28, 2004 @04:58PM (#9825299)
      Here [roninhouse.com] are further thoughts on that, from 3 years ago.

      = 9J =

  • by suso ( 153703 ) on Wednesday July 28, 2004 @03:17PM (#9823993) Journal
    Instead, try to keep up with the demands and needs of users.
    • by Anonymous Coward on Wednesday July 28, 2004 @03:20PM (#9824024)
      Don't try to keep up with Microsoft and Apple. Instead, try to keep up with the demands and needs of users.

      In this case, they're one and the same.
    • by jilles ( 20976 ) on Wednesday July 28, 2004 @03:41PM (#9824287) Homepage
      Actually that involves keeping up with the rest of the field as well. Not every feature MS adds to their OS should be duplicated. But some features are useful and should be considered.

      MS has basically announced/demonstrated most of the new features that are in Longhorn. Effectively that has given the Linux community two years to come up with competing features. Adding database features to a filesystem makes sense; BeOS demonstrated that you can do some nifty stuff with it, and both Apple and MS have announced plans to do the same.

      The Linux community, however, is divided. You can install reiserfs, and maybe develop some tools that use some of its more advanced features, but that doesn't fundamentally change anything if OpenOffice, KDE, Gnome and other programs don't coevolve to use the new features.

      The same goes for stuff like Avalon. While everybody is still talking about how such technology might be used in OSS projects like Mozilla and Gnome, MS is well on its way to implementing something that may actually work.

      Filesystems with rich metadata were already a good idea ten years ago. The OSS community has talked about them where others have implemented them. Two more years of talking would be fairly consistent. IMHO the OSS community is underperforming in picking up new technology and good ideas.
      • Not really (Score:3, Insightful)

        by Synn ( 6288 )
        IMHO the OSS community is underperforming in picking up new technology and good ideas.

        Hardly. There are a lot of OSS projects that are leading the way with new technologies and in implementing good ideas.

        But in quite a few areas it's not at all uncommon to see slow support for new tech. The community divides about how to implement the new ideas, which slows things down, but that division fosters competition and provides a base for testing out different ways of getting the new tech out the door.

        Sometimes
    • by kfg ( 145172 ) on Wednesday July 28, 2004 @03:58PM (#9824475)
      Users often demand and think they need all sorts of pointless, worthless, daft shit. Commercial companies, of course, have to cater to this, and the less ethical directly exploit it ( I'll sell you speaker cables that I've meditated over while sitting under a mystical waterfall to infuse them with energy and align their molecules, only $2000 a set. If you can't hear the difference it's because your chakras are blocked, but don't worry, I've developed a homeopathic remedy, only $20 a bottle. Oh, they only work while listening to my taped lecture series though, just $499. Remember to sit on my special magnetic pad at the same time (available to members only)).

      How's about this for a better idea: instead of trying to keep up with Microsoft, try to keep up with sound software engineering principles in designing our file systems?

      There may even come a time when the required action to implement this idea is to do nothing.

      KFG
    • by dasunt ( 249686 ) on Wednesday July 28, 2004 @04:01PM (#9824503)

      As a linux user, I don't sit back and think "this filesystem sucks". For the most part, I'm happy with ext3.

      When I do try to make a wishlist, the only things I really want are KDE's IO Slaves integrated into the system at a lower level, so that all programs can use them, and a more secure version of NFS. That's it. Perhaps some sort of revision control on certain files, but RCS works fine for me.

      I don't want data forks -- they create more problems (with transferring files) than they solve.

      For a similar reason, I don't want my filesystem to be a DB. I'm happy with files. Damn happy. I don't see what problems a database solves.

      Just my $.02.

  • by tcopeland ( 32225 ) * <tom AT thomasleecopeland DOT com> on Wednesday July 28, 2004 @03:17PM (#9823994) Homepage
    ...wrote "Open Sources", which you can read/buy here [oreilly.com]. He's a fairly savvy fellow...
  • easy answer (Score:5, Insightful)

    by dAzED1 ( 33635 ) on Wednesday July 28, 2004 @03:19PM (#9824014) Journal
    NFS4, with solid integrations for auth servers (LDAP to Active Directory, etc.).

    We live in a network-based universe. Local filesystems are already good - whether it's just continued development in Reiser, or whatever else.

    NFS4, though - it's like AFS, only without the sucky stuff. AIX now even includes NFS4 in its 5.3 release! With the Big Dog on board, we should realize there's wisdom in that direction ;)

    • Re:easy answer (Score:4, Informative)

      by Anonymous Coward on Wednesday July 28, 2004 @04:05PM (#9824572)
      Man, you totally miss the point. NFS is not a file system (don't be fooled by the name); it's a network protocol. The files provided by an NFS server have to be physically stored on some (real) filesystem, like ext3 or reiserfs.

      This is very much like saying "the future of filesystems is apache2, local filesystems are already good, now we have to concentrate on apache2".
  • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Wednesday July 28, 2004 @03:21PM (#9824045)
    Hans Reiser has some interesting ideas about the role of a modern file system. Here's a recent USENET post [google.ca] describing some of the immediately visible features of reiserfs v3. Some people have said that there was corruption in the past, but I think there are no longer any problems in recent 2.4 kernels. Namesys [namesys.com] is now developing Reiser4, which appears to be more flexible (still needs time to stabilize though). If I had to put down my money on a future filesystem though, it would be ReiserFS.
  • by valen ( 2689 ) on Wednesday July 28, 2004 @03:21PM (#9824052) Homepage

    I want a disk equivalent of top - something that'll tell me what processes are kicking the shit out of the disks, and by how much.

    If Linux could do that - it's more a VM thing than a filesystem - I'd stick with ext3 for years to come.

    Who needs a filesystem in a database when you have a database that lives on your filesystem (updatedb). Get that updating in realtime, with more things (like permissions, access times etc.) and a lot of the work is done.

    john
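A minimal version of that updatedb-with-more-fields idea: walk the tree and record path, size, permissions and mtime in SQLite. (Sketch only; the "updating in realtime" part would need a change-notification mechanism such as inotify layered on top.)

```python
import os
import sqlite3

def build_index(root, db_path=":memory:"):
    """Catalog every file under `root` in an SQLite table, updatedb-style,
    including the permissions and timestamps the poster asks for."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS files
                  (path TEXT PRIMARY KEY, size INT, mode INT, mtime REAL)""")
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            try:
                st = os.stat(p)
            except OSError:
                continue  # file vanished mid-walk; skip it
            db.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?)",
                       (p, st.st_size, st.st_mode, st.st_mtime))
    db.commit()
    return db
```

Once built, ordinary SQL answers the metadata queries ("all files modified today", "everything world-writable") that the database-filesystem camp advertises.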
    • Who needs a filesystem in a database when you have a database that lives on your filesystem (updatedb). Get that updating in realtime, with more things (like permissions, access times etc.) and a lot of the work is done.

      PR & tech journalists to the contrary, that is all that is involved in Spotlight & WinFS. Spotlight runs on HFS+. WinFS runs on NTFS. Both are databases stored as files on existing filesystems. The only difference between those databases & updatedb is that they may be using bet
    • dtrace (Score:5, Informative)

      by DarkMan ( 32280 ) on Wednesday July 28, 2004 @03:42PM (#9824303) Journal
      dtrace, due with Solaris 10, does that. It's not quite a top equivalent, but it does let you answer your questions ("What processes are kicking the shit out of the disk?" and "By how much?"), along with the also-useful "In what way?", i.e. many small writes, a huge seek-to-read ratio, or what have you.

      It is, however, expert-driven, unlike top, which is simple to use. Still, I think that dtrace shows the future of performance monitoring apps.

      Note that dtrace lives partially in the kernel - it's not portable to Linux.
  • by tikoloshe ( 515755 ) on Wednesday July 28, 2004 @03:23PM (#9824077)
    Filesystems are tools that suit different purposes. Some are good for databases, some for lots of small files, some for lots of reading, some for writing, some for networks, some for streaming.
    So developing one handy "swiss army knife" of filesystems may not be the best route. For the most part one knows what a system will be doing and can build in the most appropriate filesystem for the job.
    • by beee ( 98582 ) on Wednesday July 28, 2004 @03:41PM (#9824288) Homepage
      A good filesystem should be capable of handling all potential applications (for example, FAT32 has found its way into grandmother's desktop and production web servers). Specializing a FS is a huge mistake, and any highly-specific FS introduced to date has been a huge flop. This is not the best route to travel for Linux.
      • by Anonymous Coward on Wednesday July 28, 2004 @03:55PM (#9824444)
        for example, FAT32 has found its way into grandmother's desktop and production web servers

        Wow, your grandmother has production webservers! Cool. ;-)
      • by kasperd ( 592156 ) on Wednesday July 28, 2004 @04:43PM (#9825085) Homepage Journal
        A good filesystem should be capable of handling all potential applications
        I absolutely agree. And I actually think the current interface to filesystems is good; I don't want any major changes, because major changes would most likely lead to all new kinds of metadata that no applications know how to deal with. And whenever your files get handled by a program without this knowledge, you lose metadata, which in turn means new applications that make use of the metadata get screwed. So most of this innovation will just give us lots of compatibility problems. If anybody really wants to innovate and produce something good, they should write a clever implementation of the existing interface that works well for different cases: both small and large files, deep trees, many files per directory, few files per directory. AFAIK reiserfs and XFS are doing quite well.

        (for example, FAT32 has found its way into grandmother's desktop and production web servers).
        FAT is a horrible example, because it didn't become this widely used because of quality. Minix's FS is simpler than FAT, it has more features, and it is a lot faster for random access to large files. FAT-16 had problems with small files, because on large partitions you were forced to use large clusters, which meant lots of disk space wasted (I have seen 30% waste in real-life cases). FAT-32 did improve on the problem with small files, because now you could have much larger partitions with 4KB clusters. But since FAT-32 still uses linked lists to find data sectors (like previous FAT versions), FAT-32 is worse at handling large files than any previous filesystem. For example, seeking to the end of a 625MB file in 4KB clusters requires following 160,000 pointers. Most other filesystems use a tree structure, which means you can typically index the entire file with at most 3 or 4 levels of indirection, so you need to follow only 4 or 5 pointers. Would you try to cache the FAT table to speed up access? Good luck: you would need 4 bytes of FAT table per 4KB cluster on the disk, so for a 160GB disk you would need 160MB of your RAM just to cache the FAT. And this doesn't get rid of the CPU time required to traverse the linked list.
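The arithmetic above is easy to check. Assuming an index-tree node holds on the order of 1024 block pointers (an illustrative fanout, not any particular filesystem's), the linked-list versus tree gap looks like this:

```python
import math

def fat_seek_steps(file_bytes, cluster_bytes=4096):
    """FAT chains clusters in a linked list: reaching the last cluster
    of a file costs one pointer chase per cluster."""
    return file_bytes // cluster_bytes

def tree_seek_steps(file_bytes, cluster_bytes=4096, fanout=1024):
    """A balanced index tree reaches any cluster in about
    log_fanout(clusters) hops (fanout is an assumed figure)."""
    clusters = max(1, file_bytes // cluster_bytes)
    return max(1, math.ceil(math.log(clusters, fanout)))
```

For the 625MB (i.e. 625 * 1024 * 1024 byte) file in the comment, the FAT chain is 160,000 hops long while the tree needs only a couple of levels of indirection, which is the whole point being made.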
  • Gnome Storage (Score:5, Interesting)

    by leandrod ( 17766 ) <l@dutras . o rg> on Wednesday July 28, 2004 @03:24PM (#9824087) Homepage Journal
    Gnome Storage should be a step in the right direction, and it gets it right by not reinventing the wheel, just using PostgreSQL as its database engine.

    This way we can test the waters without messing with the kernel. When the concept is tried, we can decide if we make PostgreSQL a required part of a GNU/Linux system, or a Hurd translator, or whatever.
  • But... (Score:3, Interesting)

    by sk6307 ( 797832 ) <sk6307@btinternet.com> on Wednesday July 28, 2004 @03:24PM (#9824097)
    Is there anything that a true database filesystem offers that something like a realtime updatedb index, and maybe a background-updated glimpse index of /home, can't offer?

    I have about 18GB of files in my main home dir, and I can search it in seconds with slocate and, if I need a content search, with glimpse.

    I know that this kind of database FS provides a lot of cool opportunities in terms of meta-data, but how useful is it for non-techies, who usually don't name their files coherently, let alone correct ID3 tags or other meta-data?
  • Compatible (Score:5, Funny)

    by cubicledrone ( 681598 ) on Wednesday July 28, 2004 @03:27PM (#9824123)
    Just make sure it is incompatible with all the current applications so we can rewrite everything. Add a cool feature or something too.
  • Linux.com (Score:3, Funny)

    by maelstrom ( 638 ) on Wednesday July 28, 2004 @03:28PM (#9824142) Homepage Journal
    So Joe Barr, who writes for Linux.com, submits a story to Slashdot (which owns Linux.com), written by another OSDN guy. Good job on editorial independence.

  • File versioning (Score:4, Interesting)

    by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday July 28, 2004 @03:33PM (#9824187) Homepage
    I know that some don't like it, but we need the option of file system versioning, so that if/when you delete half the lines in your letter/program/... you can get them back from the previous copy on disk.

    There is an expectation that the application should do it, but that means extra code in each application, and they all do it slightly differently.

    OK: we'd need an O_NOVERSION flag on open(2) if the app *really* doesn't want this - e.g. a database.
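A userspace approximation of the VMS-style versioning being requested: before each overwrite, rotate the old contents to `name;1`, `name;2`, and so on. (Sketch only; the poster's point is that this belongs in the filesystem so every application gets it for free, instead of each one reimplementing it.)

```python
import os
import shutil

def versioned_write(path, data, keep=5):
    """Write `data` to `path`, first rotating the existing contents to
    VMS-style version suffixes path;1 (newest) .. path;keep (oldest)."""
    if os.path.exists(path):
        # Shift each old version down one slot; the oldest falls off.
        for n in range(keep - 1, 0, -1):
            old = f"{path};{n}"
            if os.path.exists(old):
                os.replace(old, f"{path};{n + 1}")
        shutil.copy2(path, f"{path};1")  # current contents become version 1
    with open(path, "wb") as f:
        f.write(data)
```

Deleting half the lines of your letter then becomes recoverable: the previous copy is sitting next to the file under `letter;1`.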
    • Re:File versioning (Score:4, Interesting)

      by prisoner-of-enigma ( 535770 ) on Wednesday July 28, 2004 @04:20PM (#9824781) Homepage
      I know that some don't like it, but we need the option of file system versioning, so that if/when you delete half the lines in your letter/program/... you can get them back from the previous copy on disk.

      Interestingly enough, Microsoft has implemented just that very feature in Windows Server 2003. They call it "Shadow Copy Volume" and it's accessed through a "Previous Versions Client" add-on to any file's properties. If you overwrite or delete a file on a Shadow Copy Volume-enabled network share, you can just right-click on the file, select "Properties," and go to the "Previous Versions" tab to see all the prior versions of that file. You can recover any one of them you like and save it anywhere you like. Further, the server only saves the deltas between changes, so it's very space efficient.

      This is one feature I'd *love* to see implemented on Linux. I don't think this is in Samba yet, is it?
  • by JBMcB ( 73720 ) on Wednesday July 28, 2004 @03:33PM (#9824191)
    Make the core filesystem small, robust and fast. Journalling, realtime and not much else. Make add-on modules for fancy things like ACLs, quotas, compression, encryption, compatibility, extended attributes, etc. Put in shims for calling attributes from a database (db or SQL or whatever).

    XFS comes close; ReiserFS 4 is nice, too. The most important thing is keeping the base filesystem simple and FAST. You think NTFS is fast? Try deleting a complete Cygwin install (>30K files). It takes AGES, even from the command prompt. I've deleted 15K files (that's 15 THOUSAND files) on Reiser 3 on the same machine; it took a few seconds.

    DO NOT make a database-driven filesystem. Some day we will have a true, document-based desktop paradigm (OpenDoc, anyone?) but probably not for several years; until then we need SPEED.

  • Speed and Versioning (Score:3, Interesting)

    by silas_moeckel ( 234313 ) <silas@@@dsminc-corp...com> on Wednesday July 28, 2004 @03:34PM (#9824201) Homepage
    OK, we have all these DB things that seem mostly for metadata and search; really, that's a bit secondary to a filesystem. Most filesystems are accessed by applications for, surprise surprise, files, with very few user files and lots of application files. While it might make sense to mount /home as some DB-like filesystem with piles of indexed and searchable data, so the users can be even more clueless about where anything is, the rest of the system needs to be faster all around and cluster-aware, from my point of view. Versioning in the FS a la VMS would be a nice thing as well. Disks are the slowest thing on your average system, with Gigabit Ethernet moving more data than the highest-performing single disk in the real world.
  • Next generation? (Score:5, Interesting)

    by stratjakt ( 596332 ) on Wednesday July 28, 2004 @03:36PM (#9824224) Journal
    Lets get the "this generation" filesystems working correctly, shall we?

    Solid, universal support for ACLs, and while we're at it, let's fix the whole user/group namespace mess Unix has. Let's use an SID-style ID like Windows does.

    For example: my small network at home, centrally authenticated through ldap.

    Now, Windows knows the difference between the user "jim" on local machine A, "jim" on machine B, and "jim" the domain user. They'd be shown as MACHINEA/jim, DOMAIN/jim, etc. The various SIDs take the domain (or workstation) SID and append the UID. So if his number is 100, his SID is "long-domain-sid" + uid. So when you pass around SID tokens, you know exactly which jim you're talking about.

    Now in Linux, we just have numbers for users and groups. If user 100 on machine A is "jim", user 100 could be "sally" on machine B. Moving that stuff to LDAP becomes messy: now I have to reconcile the numbering schemes of all the machines I want to migrate. Ick. And you get all kinds of screwy stuff sharing folders; if you ls the same folder on two machines, it'll show wholly different ownerships. It's the source of about a billion and one NFS security holes.

    And of course, since a file can only have one permission set - user, group, other - it sure does make for some sucky shit. The lazy among us would just run as root all the time to avoid the whole damn mess.

    I know there's a circle jerk of workarounds, patches and gotchas to avoid this, but it should never be a problem in the first place. The basic unix security model is out-of-date, and is the source of many systemic problems.
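The collision the poster describes is easy to model: a bare numeric UID only means something relative to one machine's passwd database, while a SID-style identifier carries its issuing authority along with it. A small Python illustration (machine and user names are hypothetical):

```python
from collections import namedtuple

# SID-style identity: the issuing authority is part of the identifier.
QualifiedId = namedtuple("QualifiedId", ["authority", "uid"])

def owner_of(numeric_uid, machine_passwd):
    """Bare UID lookup: the answer depends entirely on which
    machine's passwd database you happen to consult."""
    return machine_passwd.get(numeric_uid)

# Two machines disagree about who UID 100 is:
machine_a = {100: "jim"}
machine_b = {100: "sally"}

# Qualified IDs never collide, even with the same numeric suffix:
jim_a   = QualifiedId("MACHINEA", 100)
sally_b = QualifiedId("MACHINEB", 100)
jim_dom = QualifiedId("DOMAIN", 100)
```

With bare UIDs, sharing a folder between machine A and machine B silently conflates jim and sally; with qualified IDs the token itself says which jim (or sally) is meant.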
    • Re:Next generation? (Score:3, Informative)

      by jregel ( 39009 )
      I suspect that a lot of the differences between Windows and UNIX are due to their respective histories.

      UNIX has traditionally been about big systems with multiple users. Networks have been a standard feature for decades. In this sort of environment, you'd naturally use some network-oriented naming service, be it NIS or LDAP.

      Windows has grown from a PC background where everything is traditionally local. In a networked environment there is little need for the MACHINEA/user when there is a DOMAIN/user (some
    • Re:Next generation? (Score:5, Interesting)

      by mattdm ( 1931 ) on Wednesday July 28, 2004 @05:19PM (#9825516) Homepage
      It's the source of about a billion and one NFS security holes.

      Or rather, it is the source of the NFS security hole. But it's okay: NFS4 (or even 3) with Kerberos totally solves this problem, much more elegantly.

      Everyone's all excited by ACLs, but I'm sceptical of their real-world value. The "keep it simple" principle of security can't be emphasized enough. With ACLs, you have to really examine the access rights of a given object to figure out what's going on. With the standard Unix user/group system - with simple directory-based inheritance - it's completely transparent.

      And, most importantly, I've yet to see one thing worth doing with ACLs which couldn't be set up with user/group permissions instead -- and more simply.
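The directory-based inheritance mentioned above is just a setgid directory; a minimal sketch (a real setup would use a dedicated group rather than the current user's primary group):

```shell
# shared directory using only classic user/group permissions
mkdir -p shared_project
chgrp "$(id -gn)" shared_project   # a real setup would use a dedicated group
chmod 2770 shared_project          # leading 2 = setgid: new files inherit the group
stat -c '%a' shared_project        # prints 2770
```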
      • Re:Next generation? (Score:5, Interesting)

        by Malor ( 3658 ) * on Wednesday July 28, 2004 @09:07PM (#9827244) Journal
        Properly done, an ACL system will give you a MORE secure system, not a less secure one, because there are fewer chances for mistakes.

        In the NT 4.0 days, one of the better ways to handle permissions was the 'AGLP' standard. User A)ccounts go in G)lobal groups, G)lobal groups go in L)ocal groups, and local groups get P)ermissions.

        This allows a nice level of indirection. I implemented this standard by specifying that Global groups described groups of people, and that Local groups specified access privileges. I built Local groups on each server describing the kind of access privileges they offered. Generally, I would make four groups for each of my intended shares: Share NA (no access), Share RO, Share RW, and Share Admin. I would assign the appropriate ACLs in the filesystem, and then put Global groups from the domain into the proper Local groups. The Accounting group, for instance, might get RW on the Accounting share. Management might get RO, and the head of Accounting and the network admins would go into the Share Admin group.

        What this meant was that, once I set up a server, I *never again* had to touch filesystem permissions. Not ever. All I had to do was manipulate group membership with User Manager... with the caveat, of course, that affected users had to log off and on again for permissions to take effect. But this is also true with Unix, in many cases. (when group membership changes).

        Note that Windows 2K and XP have more advanced ways to handle this, so don't use this design in a Win2K+ network.... this is the beginnings of the right idea, but 2K added some new group concepts. Under Active Directory, this idea isn't quite right. (I'd be more specific but I have forgotten the details... I don't work much with Windows anymore.)

        ACLs are key to this setup, because I can arbitrarily specify permissions and assign those permissions to arbitrary groups.

        By comparison, User, Group, and Other are exceedingly coarse permissions, and it is very easy to make a mistake. What if someone from Human Resources needs access to a specific Accounting share, but nothing else? Under Unix, I can't just put them in the Accounting group, because that will give them access to everything under that Group permission. I'd probably have to make a new group, put everyone from Accounting and the one person from HR into it, then put the special shared files into a specific directory, and make sure the directory is setgid. That is a lot of steps. Everything is always done in a hurry in IT, and lots of steps are a great way to make mistakes. Messing up just one can result in security compromise.

        In my group-based ACL system, I'd still have to make a custom group, perhaps "HR People With Access to Accounting Share". But I'd only have to touch one user account, the HR person's, and wouldn't have to disrupt Accounting's normal workflow at all, or touch any filesystem permissions.

        Instead of a whole series of steps, any one of which can be done wrong, I have only three: Create new Global group, put HR person in new Global group, put Global group in the correct Local group. All done. Hard to screw this up too badly.
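The AGLP indirection described above can be modeled in a few lines (all the group and user names here are invented for illustration):

```python
# Accounts -> Global groups -> Local groups -> Permissions
global_groups = {
    "Accounting Dept": {"acct_bob"},
    "HR-to-Accounting": {"hr_jane"},   # the one-off group for the HR person
}
local_groups = {
    "Accounting RW": {"Accounting Dept", "HR-to-Accounting"},
}
share_acls = {
    "Accounting share": {"Accounting RW": "rw"},  # set once, never touched again
}

def effective_right(user, share):
    """A user holds a right if some global group containing them sits in a
    local group that carries that right on the share."""
    for local_name, right in share_acls[share].items():
        for global_name in local_groups[local_name]:
            if user in global_groups.get(global_name, set()):
                return right
    return None
```

Granting the HR person access never touches the filesystem ACLs: only group membership changes.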

        Now, I'll be the first to admit that a badly-implemented ACL setup is a complete nightmare. But a clean, well-thought-out ACL system, in a complex environment, is virtually always superior to standard Unix permissions.
  • Why????? (Score:3, Informative)

    by Libor Vanek ( 248963 ) <libor.vanek@g[ ]l.com ['mai' in gap]> on Wednesday July 28, 2004 @03:37PM (#9824233) Homepage
    Can somebody explain to me WHY we should put things like database, indexing/previewing etc. into the filesystem => KERNEL SPACE!!!! What advantage does it bring?

    Any good (XFS, JFS, ext3) filesystem now has a nice feature called Extended Attributes which is intended for STORING such data (like previews etc.). And using a user-space server it's much easier to add plug-ins for various file formats, "search" plugins etc.
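Extended attributes are already reachable from user space; a minimal sketch using Python's Linux-only os.setxattr (the attribute name is made up, and not every filesystem supports the user.* namespace, hence the fallback):

```python
import os
import tempfile

def store_preview(path, preview: bytes):
    """Stash a preview blob in a user.* extended attribute, if the fs allows it."""
    try:
        os.setxattr(path, "user.preview", preview)
        return os.getxattr(path, "user.preview")
    except (OSError, AttributeError):
        return None  # filesystem (or platform) lacks user xattr support

with tempfile.NamedTemporaryFile(dir=".", delete=False) as f:
    stored = store_preview(f.name, b"\x89PNG...")
```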
  • by dekeji ( 784080 ) on Wednesday July 28, 2004 @03:43PM (#9824315)
    All indications are that Linux, Windows, and Mac OS are moving in a common direction with filesystem innovation

    Whether or not it is useful, one thing is clear: this sort of thing is not "innovation". Databases as file systems have been around for decades, as has the question of file system metadata. The UNIX choices in this area are by design, not by an accident of history, and the motivations behind those choices are as valid today as they were several decades ago.

    Linux is a ways yet from having a fully attributed, database-driven, journaling filesystem. The direction of future development looks promising, though. Linux will certainly compete as the search wars come to the desktop. Linux's value to the enterprise depends on it.

    There are two things one needs to keep apart: what functionality do we want to support and how do we want to support it. Search clearly is important. Metadata clearly is important. However, whether a "fully attributed, database-driven, journaling filesystem" is the best way of implementing those features is an open question. There are many possible alternative designs.

    And it seems right now as if Microsoft is, in fact, not building what the author seems to think they are building, but is choosing an implementation strategy that combines a more traditional file system with user-level databases.
  • been there, had that (Score:3, Informative)

    by prgrmr ( 568806 ) on Wednesday July 28, 2004 @03:54PM (#9824441) Journal
    Never mind that Microsoft has been promising its "innovative" native database/filesystem (copying an idea from IBM's hugely successful OS/400) for more than ten years now. Anybody remember Cairo?"

    The seamless filesystem-in-a-database was created in the Multi-Valued DB structure [multivaluedatabases.com] in the mid-60's and released as the Pick OS [wikipedia.org]. It is still sold by Raining Data [rainingdata.com] and runs on Windows, Unix, and Linux.
    • OS400 native FS is still the neatest model for DB-FS integration out there. It's unique because the File system is written to allow file-member-record-field access directly from a command line call... It's similar to what was posted above with Reiser4 and having plugins. The key to AS400 success is that the "file system driver" is pushed down into hardware controller roms so the DB like access is nearly fool-proof. Stuff like queries and SQL are just "plugins" on top of that model. The only thing I see
  • Why no MS DBFS? (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday July 28, 2004 @04:11PM (#9824648) Homepage Journal
    Exactly why hasn't Microsoft released a SQL-queryable database filesystem? They validated the architecture with their marketing years ago, after IBM proved it technologically. And its advantages are obvious. In addition to better features, it offers Microsoft the opportunity to sell its SQLServer product to serious users, with a natural upgrade path. And it's an opportunity to promote the MS version of SQL across the world, raising the tide against Oracle and the rest (including MySQL). It could also make an end-run around the Samba project, which Microsoft initially helped, but now apparently fears. And of course it's a better platform on which Microsoft can offer "open yet proprietary" file formats. Is Microsoft really so incapable of actual innovation, or is there something else wrong with this picture?
  • Palm (Score:3, Insightful)

    by gveloper ( 168162 ) on Wednesday July 28, 2004 @04:13PM (#9824678) Homepage
    Doesn't Palm OS have a database/filesystem hybrid too?
  • by njdj ( 458173 ) on Wednesday July 28, 2004 @04:15PM (#9824714)
    Linux must find a next-generation filesystem to keep pace with Microsoft and Apple,

    This whole article is based on nonsense. Microsoft has a long way to go before it catches up with Linux in the filesystem area. There is no realistic prospect of Microsoft keeping pace with Linux filesystems in the foreseeable future.

    (Before dismissing me a Linux fanboy, note that the above applies only to filesystems. When it comes to understanding of GUI issues, I'd make a similar statement but with Linux and Microsoft swapped. But that would be off-topic.)

    • Well, if you look at NTFS, it has one issue: fragmentation when the disk is near full capacity.

      On the other hand, it already does almost everything ReiserFS 4 _promises_ to do, and with NTFS it actually works, has been tried in the real world, and can be trusted.

      Small files aggregation: NTFS stores small files in the MFT directly
      Plugin: NTFS reparse points
      Encryption: there for ages
      etc..

      Linux supports a lot of filesystems, but very few come close to NTFS when it comes to capabilities, scalability,...
  • EMBED VERSIONING! (Score:3, Insightful)

    by chill ( 34294 ) on Wednesday July 28, 2004 @04:36PM (#9824983) Journal
    Searching is all wonderful and that, but not the direction I believe would provide the most benefit.

    Embed versioning into the filesystem. I believe Reiser has talked about this. Imagine being able to right-click on a file, folder or even partition and choose "roll back" or "restore" from the context menu. It then presents you with a list of snap-shot points you can restore to, starting with "last change".

    Who backs up their hard drives any more? Have you thought of the problems and time involved in backing up 40, 80 or even 200 GB of data? I'd MUCH MUCH rather have this feature than some enhanced search.
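Until a filesystem offers this natively, the snapshot/roll-back idea can be modeled in a few lines (a toy in-memory sketch, not a real filesystem):

```python
class VersionedFile:
    """Toy model of FS-level versioning: every write keeps the prior contents."""
    def __init__(self, data=""):
        self.data = data
        self.snapshots = []  # older versions, oldest first

    def write(self, new_data):
        self.snapshots.append(self.data)  # snapshot point: "last change"
        self.data = new_data

    def roll_back(self, steps=1):
        for _ in range(steps):
            if self.snapshots:
                self.data = self.snapshots.pop()

doc = VersionedFile("draft 1")
doc.write("draft 2")
doc.write("draft 3")
doc.roll_back()        # back to "draft 2"
```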
  • by jmulvey ( 233344 ) on Wednesday July 28, 2004 @04:37PM (#9824995)
    I'm an architect for a large corporation that is today trying to find a replacement for NFS. Our key goals are:
    - Integration with a Kerberos SSO strategy
    - Fast performance
    - Cross-platform compatibility with Windows
    - Robust Access Control mechanisms, RBAC would be nice but DACL is probably reality.

    In my opinion, these are the primary goals that companies are looking for. Not a "journaling" file system, or built-in encryption. Sure those are nice, but let's get the basics first. Unfortunately, CIFS is still in quite a state of beta (even on the 2.6 Kernel) and there don't seem to be any real other alternatives.
  • by JohnsonWax ( 195390 ) on Wednesday July 28, 2004 @04:37PM (#9825006)
    Apple is simply adding functionality to HFS+. Everything you've read about Spotlight describes a service that sits above the file system. It takes advantage of HFS+, but there is NO database-driven FS coming out from Apple.

    Their solution is to build a service that can interact with individual files, including their native metadata (ID3 tags, pdf metadata, MS Office metadata, email headers, etc.) through metadata importers and to store the metadata indexes in a separate database. This is relatively similar to how iTunes does its thing. The service will have lots of APIs open to apps to incorporate the functionality locally.

    The obvious clue that HFS+ isn't going away is that Apple is finally pushing full HFS+ support back up to the command line utils like cp to support resource forks and whatnot in 10.4, so hopefully we can stop needing OS X specific tools like ditto.

    They've been adding improvements steadily over the years, such as journaling and most recently case sensitivity. The more obvious question to me is why doesn't the Linux community just jump all over HFS+ and build off of Apple's work since they seem more than willing to give the HFS+ support back anyway?
    • They've been adding improvements steadily over the years, such as journaling and most recently case sensitivity. The more obvious question to me is why doesn't the Linux community just jump all over HFS+

      Because the features they're adding to HFS+ are already available in other filesystems? There's nothing in HFS+ that would make linux users want to use it, and some compelling reasons why they would not. (Performance, size limits, lack of an online resizer, etc.)
  • by CAIMLAS ( 41445 ) on Wednesday July 28, 2004 @04:38PM (#9825021)
    Does anyone else find it odd that someone is comparing WinFS to the likes of reiserfs (despite my dislike for it), XFS, and hell, even ext3?

    Granted, the proposed featureset of WinFS is vastly 'superior' to that of the 3 main linux contenders, but it could be argued that WinFS is neither a filesystem itself, nor on par with any of the linux filesystems in terms of performance or stability (if NTFS5 is any indication).

    I seem to recall reading about several projects that implement WinFS-like features. I don't recall what they were, and I don't think they were kernel-space projects, but I recall thinking, "this looks nice".

    Besides, let's be honest here. What practical functionality does WinFS provide that is above and beyond the combination of 'locate', and 'file' used in conjunction? WinFS seems to me to be merely a crude hack so as to make up for the fundamental shortcomings with MS's OS design.
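The 'locate'/'file' approach amounts to something like this (demo files created on the spot; a real run would walk the actual disk, which is where the minutes go):

```shell
# stand-in for a music collection
mkdir -p demo/music
touch demo/music/summer_of_69.mp3 demo/music/notes.txt
# name-based search; every query re-walks the tree unless a locate db exists
find demo/music -iname '*.mp3'
```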
    • NTFS5 is miles ahead of most Linux filesystems (reparse points, encryption, compression,... all supported transparently), only ReiserFS 4 comes close, and it's not yet in a usable state.

      I'm wondering if you even know what WinFS is; comparing it to file and locate is laughable at best.

      Try finding all the MP3s by Bryan Adams or Whitney Houston on a 200 GB disk filled with 250,000 files with file and locate; you'll get the answer 10 minutes later.

      With WinFS, it will take you a whopping 2 seconds maximum.

      That is
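The speed difference is just "indexed metadata vs. full scan", and the indexed half is available today from a plain user-space database, e.g. SQLite (paths and artists below are invented):

```python
import sqlite3

# a user-space metadata index: the kind of lookup WinFS promises
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (path TEXT, artist TEXT)")
db.execute("CREATE INDEX idx_artist ON tracks (artist)")
db.executemany(
    "INSERT INTO tracks VALUES (?, ?)",
    [("/music/run_to_you.mp3", "Bryan Adams"),
     ("/music/hero.mp3", "Mariah Carey")],
)
# an indexed lookup touches only matching rows, not 250,000 files
hits = db.execute(
    "SELECT path FROM tracks WHERE artist = ?", ("Bryan Adams",)
).fetchall()
```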
  • EXT3 FS (Score:3, Interesting)

    by jonnystiph ( 192687 ) on Wednesday July 28, 2004 @05:30PM (#9825625) Homepage
    At work here, the previous admin installed a number of machines with the EXT3 FS on the drives. These machines (RH 8.0 - EL3) crash sporadically, often giving indications that the FS was at fault.

    While I personally believe Redhat is known to push "unstable" releases, I was surprised that from 8.0 to EL3 the EXT3 fs was still crashing and Redhat was still offering it as the default on an install.

    Anyone else had better experiences with EXT3? I am curious if anyone has more information on why this FS seems so damn unstable.

    For test purposes we run "out of the box" installs, so there should be no kernel tweaking or any other "anomalous" things going on with the installs or the boxes.
  • The first feature I would like to see added would provide the capabilities of a Partitioned Data Set from the IBM TSO days. We would not copy what IBM did, of course, because they did some stupid things meant to sell more hardware. One thing was that deleted files were not really deleted and you needed to "re-generate" the PDS when it filled up. Nice trick!

    Suppose you had an infinite number of loopback devices that were hidden and used internally by the file system, and when you started an application you could "mount" what for most intents and purposes looks like a TARBALL, and the application in question and ONLY the application in question got to see all the files in this TARBALL. Well, the files inside a "TARBALL" of this nature would probably not be compressed, but they could be if desired... That is the concept of a Partitioned Data Set.

    In the case of a user logging in, when the shell is started a mount could take place against the user's private data set. By doing this on a shared machine, file security can be guaranteed. For export and import the system could mount a "shared" dataset.

    This sort of security is far superior to ACLs and anything present file systems offer, for the very simple reason that normal people, including systems administrators, would not normally see any of the files inside one of these datasets. Consider the advantages of running an apache server where you KNOW all associated files needed by that release of apache are in a single dataset. There IS no easy way to lose a file or clobber it or accidentally delete it and so forth. Next consider that when that copy of apache starts up it _could_ simply mount a set of files each of which contains the whole website for a given domain.

    Upgrading to a new copy of apache would be as simple as copying in a new dataset and mounting it against the web datasets. If a glitch is found, simply restart the old copy.

    Backing up a set of files becomes a simple copy operation. Replication can be accomodated as well.

    Systems Administration on those old IBM mainframes was MUCH easier than with UNIX systems, and this is in large part because of the way the system handled partitioned datasets.

    ------------

    Now, with this we would want to be able to mark certain files as being external sort of like opening up a window, and through this window we could for instance access certain files which might be the executables and supporting scripts.

    Of course people will point out we can accomplish some of this with a loopback mount. The problem with the loopback mount is that it populates the directory tree, and this is what I really want to avoid. Frankly there really *IS* no reason for even a sysadmin to be able to see 90% of the files that, say, constitute a web server, or say PostFix, or PostgreSQL. We accomplish a lot if the executable which needs access to its supporting files has a "private" loopback and only this executable by default gets to see the mounted dataset.
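The plumbing for the dataset-in-a-file half of this already exists as ordinary loopback images; a rough sketch (the image name is made up, and the mkfs/mount steps are guarded since they need tools and root - the per-application private view is the part no stock kernel offers):

```shell
# create an 8 MB image to hold an application's private dataset
dd if=/dev/zero of=appdata.img bs=1M count=8 status=none
# format it; skip quietly if mkfs.ext4 isn't available here
mkfs.ext4 -q appdata.img 2>/dev/null || true
# mounting requires root:
#   mount -o loop appdata.img /mnt/appdata
ls -l appdata.img
```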

    --------------

    Next idea is versioning the way Digital Equipment Corporation did it on the VAX. Each file gets a version number appended, and a delete just creates a new version rather than destroying data. With disk drive capacities heading into the stratosphere there is no reason to be conservative.

    And this leads to the next idea which has been mentioned before... that is replication - across machines.

    I can buy for $20 bucks a machine (P1, 200 MHz) that can run a 20 GB hard drive, and in fact I think they can run 80 GB hard drives as well. Rsync is useful, but a full replicating filesystem at the kernel level, or at least at the level of a daemon close to the kernel, would mean that a machine can be backed up to another machine in perhaps another building automatically and with little effort.

    Well, I'm sure other people have other things they might like to add. This is my wish list.
