XFS merged in Linux 2.5

joib writes "According to this notice, the XFS journaling file system has been merged into Linus' BitKeeper tree, to show up in 2.5.36." Ya just know someone out there wants to have every journaling file system on one drive just 'cuz.
  • by Gabrill ( 556503 ) on Tuesday September 17, 2002 @10:29AM (#4273018)
    The round file gets all my bills. The manila one gets all my pay stubs. It works out ok.
    • by Anonymous Coward
      "The round file gets all my bills. The manila one gets all my pay stubs. It works out ok. " ...and the IRS gets everything else. Time to use that 'hidden' attribute.
  • Comparison? (Score:3, Interesting)

    by FyRE666 ( 263011 ) on Tuesday September 17, 2002 @10:32AM (#4273030) Homepage
    Does anyone have a link to any comparisons of all these journaling filesystems, showing their strengths and weaknesses? Why shouldn't I just stick with ext3 for everything?
    • Re:Comparison? (Score:3, Informative)

      by Wee ( 17189 )
      Does anyone have a link to any comparisons of all these journaling filesystems, showing their strengths and weaknesses?

      Google is always your friend [google.com].

      -B

    • Re:Comparison? (Score:5, Informative)

      by rindeee ( 530084 ) on Tuesday September 17, 2002 @10:40AM (#4273099)
      http://aurora.zemris.fer.hr/filesystems/
    • I have seen comparisons of ext3 and ReiserFS, but these have not included other JFSes.

      Basically I still see ext3 as much more full-featured than ReiserFS (it supports file attributes, which can be useful in many places on the system), but ReiserFS is faster (esp. for small files), so if you have databases, or are using your filesystem as a hierarchical database, maybe Reiser is for you.

      Now how does XFS compare to these two?
    • Re:Comparison? (Score:2, Informative)

      by rindeee ( 530084 )
      And this: http://oss.software.ibm.com/developer/opensource/jfs/project/pub/jfs040802.pdf
    • My understanding (Score:3, Informative)

      by 0x0d0a ( 568518 )
      ...is that the breakdown goes something like this:

      ext3:
      * can be told to journal everything, including data (not just metadata) -- most theoretical reliability.
      * is backwards compatible with ext2

      xfs:
      * tweaked for streaming large files to/from disk -- probably best at sequential reads/writes.

      reiserfs:
      * best performance with many, many files in a single directory.
      * Can save space on very small files with -tail option

      jfs:
      * really don't know. :-)
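
      As a rough illustration of how those trade-offs might show up at mount time, here's a hypothetical /etc/fstab sketch (device names and mount points are made up; ext3's data=journal turns on full data journalling, and ReiserFS's notail disables tail packing, trading small-file space savings for speed):

```
# Hypothetical /etc/fstab -- devices and mount points are placeholders
/dev/hda2   /          ext3      defaults,data=journal   1 1   # journal data too: max reliability
/dev/hda3   /bigfiles  xfs       defaults                0 2   # streaming large files
/dev/hda4   /var/mail  reiserfs  defaults,notail         0 2   # notail: speed over space savings
```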
      • Re:My understanding (Score:4, Interesting)

        by 4of12 ( 97621 ) on Tuesday September 17, 2002 @10:51AM (#4273174) Homepage Journal

        xfs:
        * tweaked for streaming large files to/from disk
        -- probably best at sequential reads/writes.

        Hm...would that imply that XFS would be say a really good candidate FS for building video streaming devices?

        Seems like it might fit well from the perspective of:

        1. high speed read write (good enough for 1080i?)
        2. quick reboots due to journaling (essential for consumer electronics devices)
        3. don't have a cow if there are a few bit errors in the stream
      • Re:My understanding (Score:2, Informative)

        by dmelomed ( 148666 )
        Just to note: ReiserFS is also inodeless. This means you can't run out of them, as far as I can imagine.
        • Re:My understanding (Score:3, Informative)

          by jgarzik ( 11218 )
          If reiserfs was inode-less, it would not work with Linux.

          Even NTFS has inodes, they simply call them "MFT records."

      • This comes secondhand, and is not a personal opinion of my own, but I think it's worth mentioning here.

        I had an operating systems professor that did some filesystem design work (and DBMS design work, which at a low level and especially back in the day, was pretty similar). He was pretty negative on the mass demand for journalling filesystems.

        See, what people really want is filesystems that don't get corrupt. It's also kind of nice if the recovery procedure at mount is pretty fast. So they want a filesystem that is always consistent -- it's never in a state where if the power is lost, the computer will try to mount the thing and say "hmm, this isn't a proper filesystem."

        So if you want to add a file, you can't just add an entry to the table of files, then create the file metadata, then complete the filesystem operation, because you could lose power and end up with only the entry, but no file metadata... so you have a pointer to garbage on the disk.

        You need some sort of atomic updating. You want to say "at this point the change I'm making to the FS is not active, at this point it is, and nowhere in between is the FS invalid".

        Journalling is one method of achieving atomic updates -- always write in the forwards direction on the hard drive, building a journal of all actions as you go, and just use the latest journal entry when you're reading. Journalling tends to have pretty sexy write performance, because it always writes forward and doesn't have to seek at all. It also usually has fairly lousy read performance, since you have to be sure that you're using the most up-to-date journal entries.

        To avoid some of the slower read performance, most "journalling" filesystems on Linux only journal metadata -- the lists of files in directories, permissions, times modified and so on, because the data is what you're really worried about accessing quickly, and if the data in a file gets corrupted when you lose power, you only lose that file -- not the whole filesystem.

        It's possible to use other techniques -- I believe that BSD's FFS uses a non-journalling approach to ensuring a consistent filesystem at all points in time. Despite claims both ways, I don't believe that FFS is radically faster or slower than any of Linux's journalling filesystems.

        And what's my personal preference? Well, I use ext3, because I already had an ext2 filesystem, and it's awfully easy to upgrade. ext3 used to have pretty bad performance, but now it's generally on par with ReiserFS (which was ahead for a bit), except for Reiser's strongest points (like a single directory with, say, five or six thousand files in it). That being said, I suspect that most people just use ext3 in its metadata-journalling mode, which means that it doesn't have many advantages over ReiserFS.

        Ext3 builds heavily on ext2, which is a pretty mature filesystem. I've had one roommate that screwed up his reiserfs filesystem a while back. I believe the bug that caused that was fixed, but it made me a bit leery of reiser at the time.

        The other misgiving I have about reiser is that I'm uncomfortable with the direction that the developers are going -- very heavyweight filesystem drivers, with plugins and all sorts of stuff. I'm not sure that I want my filesystem drivers to be so complex.

        On the other hand, if you have lots of very small files (not empty, just a hundred bytes or so), Reiser does a great job of keeping them from eating up more disk space than they should (normally, you have to throw 4K or so at a file, unless you've changed the block size of your FS).
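
        You can see that block overhead for yourself with nothing but coreutils (the exact allocation depends on the filesystem, but on a typical 4K-block FS a 100-byte file still claims a whole block):

```shell
# A 100-byte file's apparent size vs. the space actually allocated for it.
f=$(mktemp)
head -c 100 /dev/zero > "$f"
stat -c 'apparent size: %s bytes' "$f"              # prints: apparent size: 100 bytes
stat -c 'allocated: %b blocks of %B bytes' "$f"     # typically 8 blocks of 512 bytes = 4K
rm -f "$f"
```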

        XFS, as far as I can tell, wasn't really designed so much to be a general-purpose filesystem as a streaming video filesystem.

        And, as I've said earlier, I don't know a thing about JFS.

        Other interesting tidbits:

        * ext2 is still a pretty well-designed, fast filesystem.

        * All of the mentioned Linux filesystems beat the snot out of MS's FAT-16 and FAT-32 in performance and *particularly* fragmentation. The popular act of defragmenting your hard drive on Windows stems solely from the fact that FAT was not well designed for anything but the very smallest of filesystems, like a floppy disk.

        * I've heard stories that NTFS (MS's newer filesystem) is still worse off from a fragmentation point of view than Linux's FSes. That's secondhand, so it could be wrong.

        * I know for a fact that real-world performance on NTFS (at least in the NT 4 era) is significantly slower than on ext2. I have a strong suspicion that a fair bit of that stems from the ACL security system MS uses in their filesystems. In terms of performance, ACLs are not a good choice.
    • Re:Comparison? (Score:5, Informative)

      by auferstehung ( 150494 ) <tod@und@auferstehung bei gmail@com> on Tuesday September 17, 2002 @11:00AM (#4273236)
      You could check out Daniel Robbins' "Advanced filesystem implementor's guide" [ibm.com] over on IBM's developerworks. He covers reiserfs, ext3, and XFS and I believe there is a link to articles on JFS in the Resources section at the bottom of the page.
    • I used to run both ext3 and ReiserFS on my home machine.. my experience is that ext3 sucks..

      Every month or so, I had to sit through the following:

      "Warning: drive has been mounted more than 30 times, check forced" on the ext3 partition

      I thought the idea for journaling was to AVOID fsck's on boot?
      • # man tune2fs

        (you can turn fscks off, change the number of mounts or make it time-dependent, etc.)

      • Every month or so, I had to sit through the following:
        "Warning: drive has been mounted more than 30 times, check forced" on the ext3 partition

        This is a safety feature. Filesystem corruption can be caused by hardware funnies as well as software bugs. Your memory could be flaky, your hard drive could be on its way out, your IDE cable could be too long, your SCSI chain could be improperly terminated, your motherboard might be iffy, your CPU could be running too hot. There might be software bugs in the generic kernel, the block / scsi drivers, the ext3 code, or even some random driver that has nothing to do with filesystems or memory management.

        Because of this, ext2 and ext3 have tunable parameters for how often to force an fsck, overriding the fact that the fs is supposed to be in a known clean state. Apparently reiserfs does not have this safety feature - or does it? (I don't know.)

        If this annoys you, turn it off. 'man tune2fs', or specifically,

        tune2fs -c0 -i0 /dev/your/filesystem

        HTH..

      • And why do you reboot every day?
  • Cool (Score:2, Interesting)

    From Linux 2.6 on I won't have to repatch the kernel source with that sgi.com XFS patch every time a new kernel comes out. BTW, I still have trouble getting XFS to work on linux-2.4.19 because SGI won't update their stable XFS patch from 2.4.18.
    • Re:Cool (Score:3, Informative)

      by ShawnX ( 260531 )
      Try my patches at http://xfs.sh0n.net/2.4. They merge in XFS with 2.4.20-pre7 (current) and rmap =)

      Shawn.
  • Not just journaling (Score:5, Interesting)

    by Anonymous Coward on Tuesday September 17, 2002 @10:34AM (#4273050)
    As I understand it, XFS also offers things like extended attributes. However, I have been told that the Linux VFS does not offer any way to read or write the attribute information?

    Is this correct? Will the VFS also be extended so that you can make use of extended attributes in XFS?
    • by publius ( 69199 ) on Tuesday September 17, 2002 @10:57AM (#4273213)
      I read them, write them and delete them all the time using the attr family of commands. 64K limitation on the current value size but that's not so bad, and in the future it will be the (I think) 512K that Irix has. When you begin to think of all the cool things you can do with that, it becomes very interesting...
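
      For the curious, the attr family mentioned above works roughly like this (a sketch only; it needs a mounted XFS filesystem, and the attribute name and file here are made up):

```shell
# Set, read, list and remove an extended attribute with attr(1) on XFS.
# "checksum" and report.pdf are illustrative names.
attr -s checksum -V d41d8cd9 report.pdf   # set attribute "checksum"
attr -g checksum report.pdf               # read it back
attr -l report.pdf                        # list all attributes
attr -r checksum report.pdf               # remove it
```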
    • by IamTheRealMike ( 537420 ) on Tuesday September 17, 2002 @11:18AM (#4273491)
      Is this correct? Will the VFS also be extended so that you can make use of extended attributes in XFS?

      Cooler, if I read the tea leaves right. I believe some time ago now there was a thread on lkml about whether it'd be possible to have files as also directories (and vice-versa). The reasoning behind this was simple: we want flexible filing system attributes, but not at the expense of API bloat. You want ACLs? That'll be another API then. Extended Attributes? Another API. What, you want hierarchical extended attributes too? Well, you've just created another version of the filing system API, haven't you.

      The theory goes (and Hans Reiser, top guy, explains it much better than I can) that by altering one of the rules of the filing system, we can get lots more power and expressiveness without having to invent lots of new APIs. Let's say you want to find out the owner of file foo. You can just read /home/user/foo/owner. You can edit ACLs by doing similar operations. Now you can have something more powerful than extended attributes, but you can also manipulate that data using the standard command line tools too! Coupled with a more powerful version of locate, you can have very interesting searching and indexing facilities.

      This has implications beyond just string attributes. Now throw in plugins, so for instance the FS layer interprets JPEGs and adds extra attributes. Now you can read the colour depth of an image by doing "cat photo.jpg/colour_depth" or whatever. You can get the raw, uncompressed version of the file by doing "cat photo.jpg/raw > photo.raw". Noticed something yet? You no longer need a new API for reading JPEG data, because you are reusing the filing system API.

      But the FS is not a powerful enough concept, I hear you cry! Have no fear, for with new storage mechanisms comes new syntax too, to allow for BeFS-style live queries. If you want more info, you should really read up on this stuff at Reiser's site [namesys.com].

      That's why ReiserFS is so good at small files as well as large files. Have you ever wondered why that is? It's not just a quirk of its design, it was very deliberate. One day, Hans wants to see us store as much information as possible in a souped up version of the filing system, so reducing interfaces and increasing interconnectedness. Or something. It sounds cool anyway :) That's one thing that RFS has that the other *FSs don't - the ReiserFS team has vision.

      • by goga ( 8143 )
        This all sounds very Plan 9-ish. (Not that you can read files as directories in Plan 9.)

      • How do I use these named streams for a directory? To re-use your example, can I:

        $ cat $HOME/owner

        and get my username? Or will it be looking for a file named "owner" in $HOME?
        • I oversimplified things a bit. The current syntax has yet to be decided, but for most metadata attributes the current plan (subject to change without notice, blah de blah) is to prefix metadata attributes with double dots, so it'd be

          $ cat $HOME/..owner

          Alternatively, standard UNIX attributes may be placed in a metadata subdirectory, so:

          $ cat $HOME/..metadata/owner

          Nobody's entirely sure yet.
      • oh no, it's the plan9 from bell-labs operating system all over again :)

        reiser is just implementing what others have done long time ago:

        http://plan9.bell-labs.com/sys/doc/9.html
        • Forgive me if I'm wrong (or better, correct me)... but I didn't think Plan9 did this sort of thing. At least, not the file metadata, or plugin interfaces to files. It just placed a number of kernel interfaces into the filesystem. There's no (significant) reference to "metadata" in that document, and in the one instance where they talk about permissions, they are talking about using chmod on a process, but not the more novel "echo newGroupOwner > /processes/processID/group".

          I think what Reiser is talking about could be truly novel -- I'm sure someone has thought of it before, but I don't know that anyone's made it happen in a real OS. (Though I wouldn't be surprised to see it in an experimental OS)

  • XFS FAQ (Score:5, Informative)

    by semaj ( 172655 ) on Tuesday September 17, 2002 @10:35AM (#4273064) Journal
    There's an XFS FAQ and a load more information about it on SGI's site [sgi.com] - which points out that several large distributions have had XFS support for a while by default.

    Still, it's noteworthy that Linus has finally accepted it into his tree...

  • This is great; more filesystem support is always good in my opinion. Now if we could just get some stable NTFS read/write support I would be set.
    • Re:Excellent (Score:2, Informative)

      by psamuels ( 64397 )
      Now if we could just get some stable NTFS read/write support I would be set.

      It's on the way. Read-only NTFS (rather poor in 2.4) has been rewritten and is much improved in 2.5, and a certain subset of read-write (writing new contents to an existing file) is reported to be stable. I haven't tried it. Full read-write may or may not make 2.6.0 but you can be sure it is in active development.

    • Now if we could just get some stable NTFS read/write support...

      That will only happen if Microsoft gets a court mandate to open their specifications. MS gains far too much economically from deliberately breaking compatibility to stop doing it. They've changed the ACL portion of the FS in such a way as to break the Linux NTFS driver in every single NT-line kernel release since Linux came out with an NTFS driver.
  • Silly question (Score:5, Interesting)

    by Mr_Silver ( 213637 ) on Tuesday September 17, 2002 @10:41AM (#4273104)
    This is a silly question but ...

    When I install Linux, and it comes to anything to do with filesystems, I just go with whatever default it gives me.

    I suspect I'm not exactly alone.

    So ... what compelling reason is there for me to use any other filesystem? Being more stable or better with data loss is nice, but considering I've only ever had this problem once, it doesn't mean that I'll leap up and down going "oo oo! got to have blahFS!" any time soon.

    To give you an example, FAT16 to FAT32 was the fact you could have larger partitions. FAT32 to NTFS was because of permissions and security.

    But whatever we have now (can't remember, i barely look) to XFS? What *compelling* absolutely-must-have reason do I have to go change from whatever my installer suggests putting on for me?

    Or should I just stick with what the installer suggests from now until eternity?

    • Re:Silly question (Score:2, Interesting)

      by kelv ( 305876 )
      As a desktop user you might be able to get away with any old filesystem......

      However, if you have a server that has to have high performance and has data that you *really* care about then one of ReiserFS, XFS, EXT3, etc... becomes a *really* good idea.
    • Re:Silly question (Score:5, Informative)

      by MasterD ( 18638 ) on Tuesday September 17, 2002 @10:49AM (#4273153) Journal
      XFS supports ACLs (access control lists), which are much better than standard UNIX permissions.

      XFS is an extent based filesystem which means that you don't end up wasting tons of space having to allocate a 4K block for every small file. And you don't need to jump through tons of indirect blocks to get large files.

      XFS allocates inodes on the fly, so it grows with whatever data you put on there. Once again, not wasting space up front. And it sticks the inode near the file itself, so the head does not have to move far on the hard drive.

      XFS supports extended attributes which can be used for all kinds of extensions later on.

      XFS has been around since 1994 and is the most mature of the journalling filesystems.

      And there are many other reasons that I cannot think of right now.
      • Re:Silly question (Score:2, Informative)

        by felicity ( 870 )
        XFS also allows you to grow the filesystem live (ie: mounted). This is great for those of us who use it in conjunction with a volume manager (I use LVM). lvextend to enlarge the volume, growfs to enlarge the filesystem. No downtime required. :)

        It's also a 64-bit filesystem, so you can have extremely large files and filesystems, although my understanding is that the Linux VFS can't handle the largest sizes right now (a 1 TB maximum filesystem, for instance). XFS is the standard filesystem for SGI's IRIX, which doesn't have those restrictions. :)
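
        In case it helps, the grow sequence amounts to something like this (a sketch only; it needs root and real volumes, the names vg0/data and /data are placeholders, and on Linux the XFS grow tool is xfs_growfs):

```shell
# Enlarge a mounted XFS filesystem that lives on an LVM logical volume.
lvextend -L +10G /dev/vg0/data   # grow the logical volume by 10 GB
xfs_growfs /data                 # grow the filesystem to fill it, while mounted
```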
      • Re:Silly question (Score:5, Insightful)

        by rseuhs ( 322520 ) on Tuesday September 17, 2002 @11:20AM (#4273518)
        XFS supports ACL's (or access control lists) which are much better than standard UNIX permissions.

        Actually I think ACLs are the reason why everybody is running as Administrator in Windows. They are just too damn complicated.

        The Unix-permissions are simple. You can understand the concept of user-group-all in a few minutes and there are only 2 commands to remember (chmod, chown).

        Also, Unix-permissions have so far fit with everything I needed and in the rare case you really need something special, there is also sudo.

        ACLs are only useful for a tiny minority, IMO. I certainly don't need them.
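
        A quick sketch of how simple the traditional model is in practice (chown is shown commented out, since changing owners needs root; the alice:students names are made up):

```shell
# User/group/all in action: one chmod call covers the three triplets.
f=$(mktemp)
chmod 640 "$f"       # user: rw-, group: r--, all: ---
stat -c '%a' "$f"    # prints: 640
# chown alice:students "$f"   # would set owner and group (root only)
rm -f "$f"
```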

        • > Actually I think ACLs are the reason why everybody is running as Administrator in Windows. They are just too damn complicated.

          The reason is that they're accustomed to the DOS-based Windows series.
          For some people, the concept of a superuser and a normal user seems to be too complicated.

          >The Unix-permissions are simple.

          Great... Now how does a small group of students get read/write rights on a set of files/directories?

          >, there is also sudo.

          It's just that you end up switching into superuser mode for every little thing that's a bit out of the ordinary.
          Since you're complaining about people running Windows as Administrator, you are certainly aware of the lack of style in this.
          Not to mention that it is out of the question for any larger system (practically every system that exists outside one's home).
        • I think ACLs are only useful for a tiny minority, IMO.

          One thing they are useful for is if you are replacing NT file servers with Samba servers. If you don't use ACLs (either XFS or the EA/ACL patch [bestbits.at] with ext2/3) then your Windows users who connect to your Samba shares don't get all the fine-grained permission control to which they are accustomed; Samba fakes it. Combine this with winbind and you end up with almost a perfect drop-in replacement for your NT file servers, and you don't have to manage those users separately. Sah-weet.

        • Re:Silly question (Score:5, Interesting)

          by Jeremy Allison - Sam ( 8157 ) on Tuesday September 17, 2002 @01:36PM (#4274983) Homepage
          POSIX ACLs aren't much more complex than standard UNIX permissions and allow you to do the 2 common cases:

          1). Group finance has access + user Jill.
          2). Group finance has access, but not user Fred.

          But then again, I wrote the Samba POSIX ACL code, so I'm biased :-).

          Windows ACLs are a complete *nightmare* in comparison. I still don't understand why Sun added an incompatible variant of Windows ACLs to NFSv4 (i.e. it's close, but not the same as the real Windows ACLs). The problem is they based the spec on the Microsoft documentation of how the ACLs work. Big mistake.... :-)

          Regards,

          Jeremy Allison,
          Samba Team.
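
          For reference, those two cases map onto setfacl entries something like this (a sketch; the group and user names are made up, and it needs the acl tools plus an ACL-enabled mount):

```shell
# POSIX ACLs for the two common cases, via setfacl(1).
setfacl -m g:finance:rwx -m u:jill:rwx report.txt   # finance has access, plus Jill
setfacl -m g:finance:rwx -m u:fred:--- report.txt   # finance has access, but not Fred
getfacl report.txt                                  # inspect the resulting ACL
```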
    • Re:Silly question (Score:3, Informative)

      by fruey ( 563914 )
      Performance. Different systems are going to take more or less overhead depending on the task. Some daemons might write a lot of data to logs, you want this to be done asynchronously, you may not need the data so badly, you don't need journalling perhaps. (so use ext2??)

      Or you have a proxy, you don't care if suddenly your cached data is lost, it will soon be refilled, it's not important data, you want performance without too much security (reiserfs)?

      In fact each filesystem has inherent limits on inodes, filenames, permissions, etc... so you go with any that has a minimum for each thing you need. Journalling you don't really need unless you want to be able to step backwards or repair your filesystem in more interesting ways...

    • Re:Silly question (Score:3, Informative)

      by blakestah ( 91866 )
      1) Backup strategies. Versions of dump are available for ext2/ext3 and xfs, but not for ReiserFS (I don't know about JFS). (I don't mean to start a page cache/buffer cache debate).

      2) Journalled file systems mean fast re-boots on power outages

      3) Speed. This depends on your usage. A huge mail spool machine may use ReiserFS on the mail spool. For most people it is a wash.

      4) Ext3 can be remounted as ext2, and really good file system checking tools exist for ext2/3.

      Mostly, though, you CAN just stick with whatever the default suggests.
      • 2) Journalled file systems mean fast re-boots on power outages

        They mean faster reboots, period, because they never need to be checked on boot - so you don't get that annoying "Ahem, you've rebooted too many times, I'm going to check your hard drive while your client, who's looking over your shoulder, wonders why you reassured him you'd only have his production server down for half a minute to install the new kernel, and I'm spending 5 minutes scanning his drives."

        Of course you can turn off those checks on ext2 too, but that would be stupid.

          They mean faster reboots, period, because they never need to be checked on boot - so you don't get that annoying "Ahem, you've rebooted too many times, I'm going to check your hard drive while your client, who's looking over your shoulder, wonders why you reassured him you'd only have his production server down for half a minute to install the new kernel, and I'm spending 5 minutes scanning his drives."

          Journalling does protect against software-caused inconsistencies. It does not protect against hardware problems. Periodically, it is a VERY good idea to unmount and fsck while checking for bad blocks.
          • An occasional fsck on a production system is quite important, I agree.

            This is what scheduled downtime is for. I understand that it's a so-called "helpful measure" to automate the process, but at times, it's downright annoying. If the admin isn't bright enough to even schedule maintenance periods, then he ought to be told to clean out his desk.

            Again, I completely agree with you, but I think that any system that runs periodic maintenance for the admin is really just making things a little *too* convenient.
    • This is not intended to be a facetious answer, but if you have to ask, then you probably don't need anything other than the default filesystem provided by your distribution.

      Linux is used in an incredible variety of environments, from embedded systems without disks to seriously large servers and parallel supercomputers. As you might imagine, the default filesystem isn't always ideal. But, if you're just running an ordinary single-user workstation, and aren't experiencing any noticeable performance problems related to your disk access, then there's no reason to worry about your filesystem.

      So "stick with what the installer suggests from now until..." you run into a reason to do otherwise. That makes sense.

    • Flexibility, performance, optimization of whatever characteristics you want to optimize.

      There may be no compelling reason for you to change from the defaults (which, presumably, were chosen as defaults because they'd satisfy most people). But for someone looking to optimize for a particular application, it's one more variable they can tweak (different filesystems each having their own strengths and weaknesses).

      For example, someone doing desktop video editing (really big contiguous files, high sustained data rate needed, etc) might want a different filesystem than someone running a highly active database server (lots of small table changes scattered across the filesystem).
    • If you don't know, the installer had better be suggesting something appropriate, or you're not using a good distribution.

      There's no reason to switch from (ext3?) to XFS. But it's quite possible that the next time you install, if you're formatting a new drive, it will suggest XFS. Of course, converting an existing disk is enough of a pain that you probably don't want to do it.
    • So ... what compelling reason is there for me to use any other filesystem? Being more stable or better with data loss is nice, but considering I've only ever had this problem once, doesn't mean that i'll leap up and down going "oo oo! got to have blahFS!" any time soon.
      Well, gee, if you don't care about the technology, why not just run Windows? Linux is for pioneers.

      Anyway, the big success story for Linux is servers -- and journalling file systems make a lot of sense for servers, because they're more bulletproof. I once worked in a place with a lot of Solaris servers using a non-journaling FS. Now we had fancy UPSs so the servers could go down gracefully. But they were no help when an overloaded power main caught fire (middle of summer), sending out a gigantic surge that took out the UPSs before the power went away. It was days before all the file system repair and restore was complete.

      About a year later, I was working at a place with a lot of IRIX servers. Had a power failure there too. No surge this time -- but no UPSs either. So how long before the servers were back up? About ten minutes after the power came back. XFS, like other journalling file systems, doesn't get all inconsistent when it's interrupted.

  • by Anonymous Coward
    For those of you who don't subscribe to the Linux kernel development mailing list, it was absolutely not a case of XFS just being accepted; there was a HUGE flamewar about it, which only ended a few days ago.

    Mailing list archive [iu.edu]

    Just search in the page for XFS and you'll find the thread.
  • Questions... (Score:3, Interesting)

    by pubjames ( 468013 ) on Tuesday September 17, 2002 @10:43AM (#4273120)

    When is Linux 2.6 likely to be released? I know that there is no fixed date, but what are the criteria?

    My second question... Does it really matter when the 'official' release comes out, when distribution makers "roll-their-own" anyway?

    Sorry if these sound like dumb questions to some of you, but I'd be interested to find out.
    • Most distributions should have 2.6 a couple months after it is released, and Debian will have it by 2012.
    • Halloween is the deadline for development. After that, there will be a while tracking down bugs, and then probably a release in January or thereabouts. Linus is planning on turning things over to someone who's a better release manager (Marcelo, I think), which means that the release is likely not to drag on, and likely to actually be stable when it happens.

      I suspect that a number of distributions will include 2.6 pretty quickly this time, because it'll be handled by someone who is good at stability. Also, the distribution makers are actually pretty close to the 'official' process, and they're really in the best position to judge stability on a wide variety of systems. By the time 2.6 is declared stable, most of the distribution makers will be comfortable with it, both in the official version and with their patches.
    • IIRC, feature freeze date for 2.5 is October 31. Figure a few more months of shakeouts and bugfixes after that, we might see 2.6 sometime in first quarter of 2003.

      There is a list around of the desired features for 2.6 that was put together at the Linux Kernel Summit. A very hasty web search turns up this list [lwn.net], which doesn't seem to mention things already merged like the block-IO stuff.
    • what are the criteria?

      "When it's ready." :)

      Seriously, they'll release when the new features and changes they've made are stable and tested enough... and the release of a v2.6 is important, as it means it will be more widely used, more bugs found, etc. Most distribution makers wouldn't ship a newly 'stabilised' kernel, e.g. 2.6.0, but would wait until it had matured a little...

  • Yes! (Score:3, Informative)

    by zentec ( 204030 ) <.moc.liamg. .ta. .cetnez.> on Tuesday September 17, 2002 @10:44AM (#4273128)

    Despite being a little more resource-intensive than ext3, XFS has to be one of the better file systems available. I've used it (obviously) on SGIs and it's been outstanding; I opted for it over ext3, JFS and ReiserFS (although I believe ReiserFS is just as nifty).

    Having it accepted into the kernel makes upgrades a world easier, and hopefully I'll be able to move away from SGI's modified Red Hat installation. Although I doubt Red Hat will support it out of the box.

    The other issue that needs fixing with XFS is the lack of an emergency boot disk. XFS enabled kernels are huge, and that creates a slight problem when booting from floppy.

    • This is why I think there should be more useful rescue CDs.

      CD burners are quite widespread, a quick rescue image could be quite small.

      And yes I know not everyone has a burner, I don't either.
    • Boot partition (Score:2, Insightful)

      > XFS enabled kernels are huge, and that creates a slight problem when booting from floppy.

      I think the trick to this is to have a /boot partition and a root partition, and make them both ext2. Then you can boot from a floppy, and then boot the larger image on the boot partition. That was the reason given for having those partitions in the Linux Standard Base documents, anyway.

      But I'm an engineer, not an IT person, so I could be mistaken as I've never attempted to do it myself.
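
      As a sketch (device names hypothetical), the partition layout described above might look like this in /etc/fstab:

      ```
      # Small ext2 /boot that any bootloader can read, holding the
      # large XFS-enabled kernel image; the root filesystem itself
      # can then be whatever you like.
      /dev/hda1   /boot   ext2   defaults   0 2
      /dev/hda2   /       xfs    defaults   0 1
      ```

      The boot floppy then only needs a minimal kernel or bootloader capable of reading ext2, which loads the full image from the /boot partition.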

  • by someonehasmyname ( 465543 ) on Tuesday September 17, 2002 @10:54AM (#4273197)
    this pdf [cmu.edu] compares journaling file systems to non-journaling approaches like FFS or FreeBSD's soft updates.
  • Ya just know someone out there wants to have every journaling file system on one drive just 'cuz.

    Ya. And people want to have every ethernet card in one box just 'cuz, so there are a bunch of different drivers for ethernet interfaces.

  • by chrysalis ( 50680 ) on Tuesday September 17, 2002 @10:58AM (#4273223) Homepage
    I've been running Gentoo Linux for some time with XFS. Here's my experience with this filesystem:

    - It's extremely reliable. Filesystems never got corrupted, even after a lot of ugly reboots.

    - Recoveries after a crash are really fast. Almost immediate, better than ext3 and ReiserFS.

    - Every needed tool is available to resize filesystems, check filesystems, analyze filesystems and backup/restore filesystems.

    - _BUT_ there's something strange. Basically during disk I/O, the whole system is unresponsive. While I'm compiling something, KDE becomes slow, playing videos is not smooth at all, etc. Just as if it didn't scale at all for concurrent disk access. So I finally switched back to ReiserFS just because of this. Maybe the 2.5.x series of kernel behaves differently.

    • Just wondering, are you using the custom kernel from Gentoo? If so, have you compiled your kernel with either/both of the low latency patch and/or the preemptible kernel patch? What are your experiences with either of those two options when running XFS? I'd expect the use of either of those two to improve a system's responsiveness to user interaction when doing a lot of disk I/O, but if those don't help when using XFS, I wonder what kind of black magic is going on inside that code.


    • Your observations were to be expected. XFS was originally designed for real-time (high-speed) data streaming, namely capturing and processing video (which requires A LOT of disk space). That bias in design does not lend itself to concurrent disk access performance. Interestingly, your move back to ReiserFS plays to ReiserFS's strengths. I use XFS, and can't say I've experienced your problems, but I haven't tried compiling and watching video at the same time.

      Having said that, I can't say whether your experiences are specifically due to XFS's design or other factors, such as XFS's implementation under Linux, or your tasks requiring a lot of RAM or CPU (which applies to compiling, playing videos, and XFS). Your problems with XFS might be resolved with a faster CPU, a second CPU, or a lot more RAM.
    • by josh crawley ( 537561 ) on Tuesday September 17, 2002 @12:40PM (#4274409)
      ---"- Recoveries after a crash are really fast. Almost immedate, better than ext3 and reiserfs."

      Hmmm... I'd assume that ext3 wouldn't be as good. A fix on a fix usually sucks. And then I've heard about ReiserFS's file truncation problems. I use ReiserFS and have had no big problems.

      ---"- _BUT_ there's something strange. Basically during disk I/O, the whole system is unresponsive. While I'm compiling something, KDE becomes slow, playing videos is not smooth at all, etc. Just as if it didn't scale at all for concurrent disk access. So I finally switched back to ReiserFS just because of this. Maybe the 2.5.x series of kernel behaves differently."

      I've had the same problems on 2.2.x when I didn't tweak my HDs to DMA66 32-bit. Try doing a:

      hdparm /dev/(drive linux is on)
      hdparm -tT /dev/(drive linux is on)

      If you don't like those settings, drop into single-user mode with / mounted read-only and run:

      hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

      Now manually run fsck on that partition. If you get errors, it's a bad mode. But if it works, then redo the -tT option (it's a benchmark).

      Be aware that 2.4 does most of this for you, but sometimes picks too conservative a setting (so your performance sucks). Then again, you could have an unsupported IDE device.

      All the best..

    • Great points, except one small correction must be made: not every tool needed to resize an XFS filesystem exists. There's no way to reduce the size of an XFS volume. I needed this feature last May when I played around with XFS on top of LVM. It was my fault in the first place (better to make a working plan beforehand next time), but still, you couldn't do it. With ReiserFS it's easy, though time-consuming, to reduce a volume.
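
      For the record, growing (as opposed to shrinking) is supported, even online. A rough sketch, assuming an XFS filesystem on a hypothetical LVM volume /dev/vg0/data mounted at /mnt/data:

      ```shell
      # Enlarge the underlying logical volume first...
      lvextend -L +10G /dev/vg0/data
      # ...then grow the mounted XFS filesystem to fill it.
      # There is no corresponding shrink operation.
      xfs_growfs /mnt/data
      ```

      Shrinking instead means dumping the data, recreating the filesystem at the smaller size, and restoring.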
  • by Dan Ost ( 415913 )
    I've been reading about the differences between using journals and using soft updates and have decided that soft updates is the cleaner approach.

    Can anyone explain to me why the Linux community is so enthralled with the concept of journaling file systems while the BSD community has quietly but unanimously embraced soft updates?
  • by Kynde ( 324134 ) <kynde@NOSpAM.iki.fi> on Tuesday September 17, 2002 @11:12AM (#4273387)
    There are systems where we simply don't and won't have enough disk space and where speed is not of the essence. We have them now, and we will continue to have them in the future.

    Being a Linux developer for embedded production boxes, and given the current increasing interest in Linux for embedded, along with embedded boxes typically running _WITHOUT_ hard disks (mostly just flash chips of some sort, due to their better lifetime), I cannot help wondering why the kernel mailing list shows little or no interest in ext2 (or ext3) compression.

    JFFS and JFFS2 don't come into question in most cases as they tear through the fs layers and cannot be used with IDE flash chips for example.

    Alcatel even released it two weeks ago for 2.4.17... loads of people, like me, must have ported it to 2.4.19 by now. But to get ext2 compression to 2.5.XX, forget it... but why?

    This is a little like the lack of interest in underclocking, even though once you've overclocked your main computer to the max, you start looking for a more silent option, if not for the desktop computer then for the closet firewall. Even if you don't have the interest now, you will once you shack up with a gal.
  • by Scooby Snacks ( 516469 ) on Tuesday September 17, 2002 @11:40AM (#4273724)
    I hear that it's the only Linux filesystem that is endian-safe. IOW, you can move it from a system of one endian type to a system of the other type and it will still work. No other filesystem for Linux currently is able to make that claim.

    I find that very cool, for some reason. I guess one practical application is if you have a box that is the only one of that type (either big-endian or little-endian) that dies and you need to recover the data.

  • by jonr ( 1130 )
    I just wish to get BeFS back. It was the best FS I've ever seen. Journaled, live queries, and FAST! Palm, it's useless to you, open it up! :)
    • XFS smokes BeFS. Hell, even the open source OpenBFS BeFS implementation is heaps faster than the original beast :). The other funny thing is that BeFS was actually inspired by XFS.

      -adnans
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Tuesday September 17, 2002 @12:32PM (#4274329) Journal
    I recently installed Linux-XFS on one of my computers here, as I was having problems with the kjournald process under ext3 taking extremely unreasonable amounts of time -- and I had had wonderful experiences with XFS on our SGIs -- it's always been solid and fast. Various reviewers of ext3 had complained about the existence of kjournald -- disputing the need for a user-code daemon.

    In several places it is mentioned, though, that an XFS-enabled kernel image is very large, so much so that you can't really fit it onto a floppy (although people over-format their floppies to get 1.8 MB or so onto them, and then the kernel might just barely fit).

    I can't understand why any filesystem should be so big -- it seems that the code to run the filesystem is almost as big as the rest of Linux put together. How can this be? Is it really all code? What could that code possibly be doing?

    I studied XFS fairly extensively after I had to repair a disk that had 1 of its 23 heads fail. From the remaining 22/23rd of the disk I managed to recover almost every file and directory, by writing my own XFS filesystem interpretation code. The on-disk organization of the filesystem is fairly simple and straightforward, I can't imagine where the hundreds of K of code is going.

    I won't be shocked if the answer does lie in that kjournald daemon -- that XFS is bigger than ext3 because ext3 puts most of the bloat into a user-mode daemon instead of the kernel.

    thad
  • I lost my entire OGG collection after a powercut because it was on an XFS partition.

    I survived powercuts and brownouts just fine when everything was on ReiserFS...

  • Related question (Score:3, Interesting)

    by Quixote ( 154172 ) on Tuesday September 17, 2002 @01:01PM (#4274646) Homepage Journal
    XFS has a file size limit of 32TB (or so, I think), with a _filesystem_ limit in the EBs. But, I've heard that the Linux VFS layer has a max file size limit of 1TB. Is it possible to create files > 1TB on a Linux+XFS box ? Unfortunately, I don't have the resources to try it out just yet... :-)
    • Re:Related question (Score:3, Informative)

      by foobar104 ( 206452 )
      Just FYI, XFS on IRIX can support files up to 9 million terabytes (9 EB) and filesystems up to 18 million terabytes (18 EB).

      It's more complex under Linux. Here's the Linux-specific answer to this question from the FAQ:
      Q: Does XFS support large files (bigger than 2GB)?


      Yes, XFS supports files larger than 2GB. The large file support (LFS) is largely dependent on the C library of your computer. Glibc 2.2 and higher has full LFS support. If your C library does not support it you will get errors that the value is too large for the defined data type.

      Userland software needs to be compiled against the LFS compliant C lib in order to work. You will be able to create 2GB+ files on non LFS systems but the tools will not be able to stat them.

      Distributions based on glibc 2.2.x and higher will function normally. Note that some userspace programs like tcsh do not behave correctly even if they are compiled against glibc 2.2.x.

      You may need to contact your vendor/developer if this is the case.

      Here is a snippet of email conversation with Steve Lord on the topic of the maximum filesize of XFS under linux.

      I would challenge any filesystem running on Linux on an ia32, and using the page cache to get past the practical limit of 16 Tbytes using buffered I/O. At this point you run out of space to address pages in the cache since the core kernel code uses a 32 bit number as the index number of a page in the cache.

      As for XFS itself, this is a constant definition from the code:

      #define XFS_MAX_FILE_OFFSET ((long long)((1ULL<<63)-1ULL))

      So 2^63 bytes is theoretically possible.

      All of this is ignoring the current limitation of 2 Tbytes of address space for block devices (including logical volumes). The only way to get a file bigger than this of course is to have large holes in it. And to get past 16 Tbytes you have to use direct I/O.

      Which would mean a theoretical 8388608 TB file size. Large enough?
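
      A cheap way to check whether your own kernel, filesystem and C library agree on 64-bit file offsets is to write one byte just past the 2 GB mark (the file name here is arbitrary); the file is sparse, so it costs almost no disk space:

      ```shell
      # Seek 2 GB into a new file and write a single byte; on an
      # LFS-capable system the logical size comes out to 2147483649 bytes.
      dd if=/dev/zero of=sparsetest bs=1 count=1 seek=2147483648 2>/dev/null
      ls -l sparsetest   # logical size: 2147483649
      du -k sparsetest   # allocated size: a few KB at most
      rm -f sparsetest
      ```

      If the dd fails with a "File too large" error, some layer in that stack is still limited to 31-bit offsets.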
  • Transactions? (Score:2, Interesting)

    by ndecker ( 588441 )
    Is there any FS/API that allows ACID style transactions for applications on filesystems?

    This way it would allow cool stuff like guaranteed data consistency or rollback.

    Imagine

    $ begin_trans
    $ rm -rf /
    $ rollback_trans

