Data Storage Software IT Linux

How To Move Your Linux Systems To ext4

LinucksGirl writes "Ext4 is the latest in a long line of Linux file systems, and it's likely to be as important and popular as its predecessors. As a Linux system administrator, you should be aware of the advantages, disadvantages, and basic steps for migrating to ext4. This article explains when to adopt ext4, how to adapt traditional file system maintenance tool usage to ext4, and how to get the most out of the file system."
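
For the impatient, the conversion itself is short. A minimal sketch of the commonly documented ext3-to-ext4 switch, assuming a backed-up, unmounted partition at the hypothetical device /dev/sda1 (note that enabling extents is a one-way change):

    tune2fs -O extents,uninit_bg,dir_index /dev/sda1   # turn on the ext4 on-disk features
    e2fsck -fp /dev/sda1                               # a forced fsck is required after the feature change
    # then change the partition's /etc/fstab entry from ext3 to ext4 and remount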
  • reiser4?
  • "largely unnoticed by mere mortal Linux users and administrators" strikes me as a strange phrase to find on this IBM page. Is there some other IBM project more interesting than ext4 being revealed here?
  • by halivar ( 535827 ) <bfelger@NOsPaM.gmail.com> on Tuesday May 06, 2008 @01:46PM (#23314370)
    ext4fs is designed to be used in systems requiring many terabytes of storage and vast directory trees. It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.
    • by Anonymous Coward on Tuesday May 06, 2008 @01:50PM (#23314424)
      Do you realize how much porn some people have?
    • by Vellmont ( 569020 ) on Tuesday May 06, 2008 @01:59PM (#23314544) Homepage

      It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.

      Disk sizes are going up. In a few years you'll see a terabyte on a single drive. I'd also say that features like undelete and online defrag are important to anyone.

      So while you may not see any real performance increases, that's really beside the point.
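
      For reference, ext4's online defrag is driven from userspace; a minimal sketch, assuming the e4defrag utility that e2fsprogs ships for this purpose and a hypothetical ext4 volume mounted at /home:

        e4defrag -c /home    # report how fragmented the mounted volume is
        e4defrag -v /home    # defragment it in place, no unmount needed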
      • by Anonymous Coward on Tuesday May 06, 2008 @02:02PM (#23314580)
        Instead of waiting a few years, go to your local computer store. They should have terabyte drives now.
      • by A nonymous Coward ( 7548 ) * on Tuesday May 06, 2008 @02:04PM (#23314614)

        Disk sizes are going up. In a few years you'll see a terabyte on a single drive.
        Unlike those two 1000 GB (or is it 1024) drives I have on my desk now.
      • by Uncle Focker ( 1277658 ) on Tuesday May 06, 2008 @03:00PM (#23315406)

        Disk sizes are going up. Since last year we've seen a terabyte on a single drive.
        Fix'd it for you.
      • by LWATCDR ( 28044 ) on Tuesday May 06, 2008 @03:35PM (#23315878) Homepage Journal
        But EXT4 becomes really useful when you have many terabytes of disk storage. With just one or two, EXT3 is probably good enough.
        Now when we have ten TB drives....
        Good grief, people. Yeah, just keep a few thousand TV shows on your desktop.
        • by erlehmann ( 1045500 ) on Tuesday May 06, 2008 @06:19PM (#23318110)

          Good grief, people. Yeah, just keep a few thousand TV shows on your desktop.
          This could make for RIAA settlements on the order of the GDP of a small country!
        • Re: (Score:3, Interesting)

          by Sancho ( 17056 ) *
          Why not? When storage density gets so high and the drives get so cheap, why not rip all of your movies and store them on disk? I'm lazy, and don't want to get up to change the disk.
    • Re: (Score:3, Interesting)

      by miscz ( 888242 )
      I can't wait for faster fsck. It takes something like an hour on my 500GB ext3 partition. Terabytes of storage are not that far away.
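
      One way to see how long a check would take without touching the volume is a forced read-only fsck; a sketch, assuming a hypothetical unmounted partition /dev/sdb1:

        # -n opens the filesystem read-only and answers "no" to every repair prompt
        time e2fsck -fn /dev/sdb1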
    • Re: (Score:3, Interesting)

      by dpilot ( 134227 )
      It might be a win for me even today on my meager 300G MythTV media partition. I'm currently using xfs for that, but every now and then I hear about bad things on xfs with a power failure, and other times I hear that it can be physically hard on the hard drive. (excess head motion?) Of course other times I hear that xfs is the best thing since sliced bread, and is usable for ANY purpose with just a little tuning.

      I transcode my Myth stuff on an ext3 partition, and occasionally get complaints about the larg
      • Re: (Score:3, Informative)

        by LWATCDR ( 28044 )
        No file system likes a power failure. Get a UPS that will shut down the PC. They are cheap.
        And if you care about that data, make a backup and, even better, run a RAID.

        Remember EVERY HARD DRIVE IS GOING TO FAIL SOMEDAY.
      • Re: (Score:3, Interesting)

        by Random Walk ( 252043 )
        XFS has that nasty 'security' feature where it will zero files that were open when the power failed. Never use XFS on hardware that has no battery backup to shut down properly if you trip over the power cable.
    • ext4fs is designed to be used in systems requiring many terabytes of storage and vast directory trees. It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.

      Really? My (comparatively) cheap laptop has 640gig of storage, and when you start getting into video, 640g is NOT enough!

      You can buy 2 x 500gig desktop hds for the grand total of $150.

      At that price, a terabyte will be the "new pink" within a couple of years. Just like 2

    • by diegocgteleline.es ( 653730 ) on Tuesday May 06, 2008 @05:03PM (#23317180)
      Ext4 has a lot of performance improvements, like extents and delayed allocation. Desktop users will notice that ext4 is much faster.

      That said, ext4 is unstable. It can easily eat your data. Just say NO to moving your filesystem to ext4 - for now.
      • by Xabraxas ( 654195 ) on Tuesday May 06, 2008 @07:43PM (#23318874)

        Ext4 has a lot of performance improvements, like extents and delayed allocation. Desktop users will notice that ext4 is much faster.

        XFS has both extents and delayed allocation. I really don't know why we need Ext4. XFS has been a very solid fs for quite some time now; it's sad that more attention hasn't been paid to it by kernel hackers. The whole idea behind Ext4 seems to be more NIH syndrome than anything else. I could understand if it was radically different, but it isn't.

        • by oddfox ( 685475 ) on Tuesday May 06, 2008 @11:43PM (#23320368) Homepage

          One of many reasons right here [brillig.org]

          I messed around with Ext4 for a little while on my machine (like a couple of days, just toying with it and seeing how its performance compares to Ext3 and Reiser4) a while back, maybe a little bit before it was merged as experimental in the mainline kernel. It is fast, backwards-compatible, and extremely featureful. XFS is not a bad filesystem, but it has some problems, in my eyes: metadata-only journaling, aggressive caching that makes it a potentially dangerous choice if you don't have a UPS, and very slow metadata and deletion operations.

          That's great that XFS has a lot of features Ext4 is bringing to the playing field, and has had them for a long time. To pretend, however, that the developers of Ext4 simply have a NIH syndrome is just silly and disregards the fact that there is a lot that Ext4 already provides that XFS doesn't, and even more that it will soon. You might not see what the big deal is, but really, I can assure you that it won't be very long before the new ideas Ext4 employs are in widespread use.

          Here's an interesting article [byteandswitch.com] that really caught my eye with this: "Storage snapshot: The financial firm has more than 14 Petabytes of active storage and plans to add "several more Pbytes" within the next 12 months."

    • Re: (Score:3, Insightful)

      by glwtta ( 532858 )
      ext4fs is designed to be used in systems requiring many terabytes of storage and vast directory trees

      Well yeah, but Slashdot seems like a pretty good place to find people who administer multi-TB systems, no?

      A terabyte isn't what it used to be (hell, 1TB SATA disks are pretty common) and ext3 sucks pretty hard even on a measly TB.

      Does the sub-TB desktop crowd even care about filesystems? I mean, they all pretty much work and these days the popular ones have pretty similar performance (on a single s
  • Wikipedia entry (Score:5, Informative)

    by drgould ( 24404 ) on Tuesday May 06, 2008 @01:49PM (#23314414)
    Link to the Ext4 [wikipedia.org] entry on Wikipedia for people who aren't familiar with it (like me).
    • Re: (Score:3, Funny)

      by miscz ( 888242 )
      Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3 and we're too stupid to figure out what Ext4 might be. Come on.
      • by discord5 ( 798235 ) on Tuesday May 06, 2008 @02:29PM (#23314970)

        Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3

        Now now, don't give us too much credit

        we're too stupid to figure out what Ext4 might be

        It's like ext2 times two, stupid.

      • Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3 and we're too stupid to figure out what Ext4 might be. Come on.
        I, for one, saw the news item and immediately thought, "How to move to ext4? What's in it for me?"

        Thank you, GP, for saving me the seconds of typing with a convenient link. And shame on you for wanting to put out a candle that might be used to light the darkness.
  • by A beautiful mind ( 821714 ) on Tuesday May 06, 2008 @01:53PM (#23314470)
    Yes, Terabyte is not entirely correct according to SI, but Tebibyte just sounds lame, and language is a tool to facilitate written and oral communication.

    Of course, in this case you have to balance that against the confusion stemming from Tera, in an IT context, meaning a 1024-based unit in some cases. To be honest, the people insisting on the new naming should have come up with a sensible-sounding name and promoted that. You have to remember that language, even technical language, is for the people. There are lots of ways to craft a beautiful, logical, symmetrical language that no sane person would use because it just doesn't sound convenient.

    Maybe a linguist can pitch in to explain why tebibyte sounds so awful?
    • by Dachannien ( 617929 ) on Tuesday May 06, 2008 @02:16PM (#23314772)

      Maybe a linguist can pitch in to explain why tebibyte sounds so awful?
      Tebibyte-buh: It's bad-buh because-buh it makes-buh you sound-buh like Mushmouth-buh.

      Hey hey hey!

      • I think you meant....

        Tebibyte: Ubit's bubad bubecubause ubit mubakes yubou subound lubike Mubushmubouth.
    • I'm no linguist but the notion of anything that resembles a Tera binary byte doesn't compute all that well.
    • I think the only place you need to use it is in abbreviations, where KiB vs. KB is sort of useful
      • Yeah, but the problem is that KB is now ambiguous - it could either be 1000 bytes, or it could be 1024. Before anyone mentions HDD manufacturers: it isn't ambiguous there, either. 1 KB is 1000 bytes; yeah, that's because they're fleecing you. It sucks, but oh well.

        I just hate the mindset that comes up with all of this stuff, it reeks of the sort of person who alphabetises everything and writes into newspapers to complain that they misused the apostrophe one time on one page. I mean for god's sake, take
        • They're only fleecing you if you keep on insisting that 1 KB is 1024 bytes. If you define that 1 KB is 1 billion bytes, then they really are fleecing you. The only reason that 1024 was used as the size of a KB was because it was much easier, not because we were trying to standardize things, or because it made things simpler to understand. It completely went against all the other standards, just because it made the code a little simpler to write.
          • Re: (Score:3, Funny)

            by sconeu ( 64226 )
            If you define that 1 KB is 1 billion bytes, then they really are fleecing you

            I'd say that if you define that 1KB is 1 billion bytes, then you've got bigger problems than the marketing departments of drive manufacturers.
    • by JustinOpinion ( 1246824 ) on Tuesday May 06, 2008 @02:36PM (#23315072)

      Of course, in this case you have to balance the confusion stemming from the Tera in IT context meaning 1024 in some cases.
      It's worse than that. According to SI prefixes, "Tera" should mean 10^12 (1,000,000,000,000), but in common usage applied to computers it sometimes means 2^40 (1,099,511,627,776). But it also sometimes means "1024 Giga", where the Giga could be using either convention (and, for all you know, the "Mega" implied within could have been computed using either convention). So you can get a gradient of "mixed numbers" that conform to neither standard. You might say that only a non-professional would make such a stupid mistake... but on the other hand, if you see a column of numbers listed in "Gigabytes" and you want to convert them to Terabytes, what conversion factor would you use? How would you know what conversion factor the previous author had used? How could you guarantee that you were doing it right? Would you be able to confidently convert it into an exact number of bytes?
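
      The drift is easy to quantify with shell arithmetic; a quick sketch of the three plausible meanings of "one terabyte":

        echo $(( 10 ** 12 ))        # SI tera:            1000000000000
        echo $(( 2 ** 40 ))         # binary "tera":      1099511627776
        echo $(( 1024 * 10 ** 9 ))  # mixed "1024 giga":  1024000000000

      The largest and smallest of these differ by almost 10%, so picking the wrong conversion factor is not a rounding error.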

      Personally, I think the whole thing is a mess, and computer professionals should be working harder to enforce a consistent scheme. Unfortunately, only a minority of computer professionals seem interested in changing the status quo confusion.

      Maybe a linguist can pitch in to explain why tebibyte sounds so awful?
      I'm no linguist, but I don't think "Tebibyte" sounding silly is the real problem. I admit that I laughed when I first heard the binary prefixes. They sound lame. But who cares? "Quark" was silly when it was first coined. So was "Yahoo" and "Google" and "Linux" and "WYSIWYG" and "SCSI" and "Drupal" and so on... Silly names become second-nature once they are used enough.

      I think the real problem is that people, inherently, are loath to change. They are more apt to come up with rationalizations and justifications for doing things "the old way" rather than put in the work to learn (and code!) a new system. Sorry if this sounds harsh, but I find the people who say the binary prefixes "sound dumb" or say that "the current (inconsistent)* system works fine" are just coming up with excuses to avoid doing the work to use a properly consistent standard/notation.

      Maybe you're right, and that if the new prefixes had sounded "cooler", then adoption would have been faster... but I'm not so sure. Even if true, it doesn't absolve any of us for allowing the confusion to persist: cool or not, we (geeks especially!) should have the discipline to use proper standards.

      * The current system can be roughly described as: SI prefixes are powers of 10 everywhere except in computer science, when they become powers of 2. But only when referring to memory, and some data structure sizes, but not when referring to transmission rates or disk space (unless it's a flash drive, sometimes), and other kinds of data structures.
    • It really didn't start out that complicated, but it's the manufacturers who keep F*ing it up because they are trying to stretch the numbers. I think it's the consumers who are the victims in this.

      Hard drives have the MiB-MB problem because manufacturers wanted to be able to say 60GB instead of 54GB. When you buy a monitor, you have to look for the viewable size in much smaller print. Then there is the dithering you hear about on modern LCDs. I've also heard that early monitors were measured by their horizonta
    • Wouldn't it be easier to make manufacturers use the old MB=1024 type standard than to get the common people to understand a new prefix that they just won't remember?
    • Re: (Score:3, Interesting)

      I am not a professional linguist, but I think I can explain.

      In any spoken language, different sounds are loosely associated with different ideas. As a simple example, voiceless sounds, like p, k, t, f, and s, are well suited for pointed use, as in pejoratives; and r, especially the alveolar trill variety, is associated with intimidation or primality. These associations are made either because it sounds like something else ("rrrr" sounds like an animal's growl or roar -- notice the Rs in "growl" and "roar"?)
  • Did you see the section on timestamps? Nanosecond resolution out to 2514.

    Nanoseconds.

    We're dealing with a process whose maximum useful precision is "has the green light gone off yet", and we've got nanosecond timestamps.
    • The nanosecond resolution is there for mission critical systems that need a finer resolution than seconds.
      • by jesdynf ( 42915 )
        I'm having trouble with that one. I mean, the statement would've been true without invoking "mission critical" -- of course the ability to get resolution better than seconds is for applications that need resolution better than seconds. I'm not sure why you've invoked the specter of "mission critical" here, and I'm having a damn hard time picturing any utterly important, world-ending task that's going to (a) rely on the *timestamp in the filesystem* and (b) run on Linux and ext4fs.

        And the timestamp isn't in
        • Re:Wait, what? (Score:5, Informative)

          by Waffle Iron ( 339739 ) on Tuesday May 06, 2008 @03:06PM (#23315472)
          They're probably using a 64-bit number to hold the timestamp. That gives you 1.8e19 discrete time intervals, so you're going to get ridiculous precision, dates ridiculously far into the future, or both. I assume that they went for precision because that arguably has more potential for use in the real world than worrying about files thousands of years into the future.

          IIRC, today's PCs have high-resolution timers available that surpass the old 14.318MHz clock chip. If you can't get accurate nanoseconds out of the timers yet, they'll just round the numbers off. No big deal.

          BTW, NTFS uses 100ns timestamp granularity, and it was designed when systems were almost 100X slower than today. So it had a similar amount of overkill, but that certainly doesn't seem to have had any negative impact on the acceptance of NTFS.
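
          The "2514" figure from the article is also consistent with the widely described ext4 layout: each timestamp gets an extra 32 bits, split into 30 bits of nanoseconds and 2 bits that widen the seconds counter to 34 bits. A quick sanity check in shell:

            # 2^34 seconds expressed in average Gregorian years
            echo $(( (1 << 34) / 31556952 ))   # prints 544; 1970 + 544 = 2514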

        • you'll have a hard time convincing me that you can maintain nanosecond timing long enough for the difference between two nanosecond timestamps to be accurate down to the nanosecond.


          Not now, and not in the near future, sure. However, who's to say that it won't happen, possibly sooner than we think? The developers had the room to store times that accurate, so they probably just put it in to allow for future developments.

  • To all ext3 users... (Score:5, Informative)

    by c0l0 ( 826165 ) * on Tuesday May 06, 2008 @01:56PM (#23314514) Homepage
    ...who are on the lookout for a new fs to entrust with keeping their precious data: make sure to check out btrfs ( http://oss.oracle.com/projects/btrfs/ [oracle.com] ). It's a really neatly spec'd filesystem (with all the zfsish stuff like data checksumming and so on), developed by Oracle employees under GPLv2, which will feature a converter application for ext3's on-disk format - so you can migrate from ext3 to the much more feature-packed and modern btrfs without having to mkfs anew.

    On a related sidenote: I'm very happy with SGI's xfs right now. ext\d isn't the only player in the field, so please, go out and boldly evaluate the available alternatives. You won't be disappointed, I promise.
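
    For reference, the converter mentioned above ships as btrfs-convert in btrfs-progs. A minimal sketch against a hypothetical unmounted ext3 volume /dev/sdc1 (the tool keeps the original filesystem image around, so the conversion can be rolled back until you delete it):

      umount /dev/sdc1
      fsck.ext3 -f /dev/sdc1      # convert only a clean filesystem
      btrfs-convert /dev/sdc1
      mount -t btrfs /dev/sdc1 /mnt
      # roll back later, if desired: btrfs-convert -r /dev/sdc1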
    • Re: (Score:3, Insightful)

      by DJProtoss ( 589443 )
      I agree btrfs looks nice, but it's somewhat behind ext4 in terms of implementation and stability (which is saying something) - there's the small matter of not yet handling ENOSPC, for instance
      • btrfs is a completely new, ground-up filesystem. I would expect it to take a while longer than ext4, which is just another incremental improvement on ext2. btrfs isn't stabilized at all yet. I would consider the running-out-of-space issue a non-issue at their current stage of development.
    • Re: (Score:2, Troll)

      by swilver ( 617741 )
      There is no way I'm installing anything Oracle on my Linux system ever. I will definitely not entrust my data to them after having witnessed over the past years what a mess their flagship product is.
    • Re: (Score:3, Interesting)

      by Anonymous Coward
      I'm an XFS fan as well. I have been using it for years. I usually have my root/boot partition as ext3 (so grub works) and all data on XFS.

      XFS kills ext in terms of not losing data. I have recovered lots of data from failed drives that were XFS formatted. Not so with ext3 which tends to flake out and destroy itself when it gets bad data.

      And don't even mention ReiserFS, that has always sucked. I have lost more data to Reiser than any other filesystem (ext is a close second though). Sometimes it would co
    • by Jherek Carnelian ( 831679 ) on Tuesday May 06, 2008 @05:29PM (#23317514)
      btrfs -- How fast are deletes?

      ext3 is both so slow and so bottlenecked that mythtv had to implement a special "slow delete" mode which gradually truncates files instead of just unlinking them. Without the "slow deletes" mode, you get hiccups in any shows that are being recorded while old shows are deleted.

      On my system, deleting a 20GB file can take a minute on ext3 (and the filesystem is completely locked - all other processes are blocked), but on ntfs it is almost instantaneous.
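
      The "slow delete" trick is straightforward to sketch in shell: shrink the file in steps with truncate(1) so blocks are freed in small batches, then unlink it. This is a hypothetical illustration, not MythTV's actual code:

        f="old-recording.mpg"
        size=$(stat -c %s "$f")                    # current size in bytes (GNU stat)
        while [ "$size" -gt 0 ]; do
            size=$(( size - 100 * 1024 * 1024 ))   # drop 100 MB per step
            [ "$size" -lt 0 ] && size=0
            truncate -s "$size" "$f"
            sleep 1                                # give concurrent recordings some IO headroom
        done
        rm -- "$f"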
    • by glwtta ( 532858 ) on Tuesday May 06, 2008 @06:36PM (#23318276) Homepage
      On a related sidenote: I'm very happy with SGI's xfs right now.

      I seem to be plugging XFS in every fs thread recently, so I'll second that - I'm really surprised it's not more popular.

      ext3 may have, more or less, caught up to XFS in IO speed recently, but file operations on large filesystems are still a disaster - just try deleting a 2TB tree with a couple million files in ext3, I dare you.
  • Step 1 (Score:3, Informative)

    by vga_init ( 589198 ) on Tuesday May 06, 2008 @02:05PM (#23314628) Journal
    Step 1: Install Fedora 9

    OK, all done!
  • undelete (Score:5, Informative)

    by Nimey ( 114278 ) on Tuesday May 06, 2008 @02:08PM (#23314656) Homepage Journal
    Oh, please. ext2 had "undelete" capability, just as it had filesystem compression capability. Neither was ever implemented.
  • That's all fine and dandy, but will it allow me to somehow undelete/recover when I accidentally type rm -Rf /hugedir? Yes, I know there are other ways to delete stuff; I just find it ridiculous that all Linux file systems, with the exception of ext2, make no effort at all to be able to recover from such a common mistake. Of course, rm not giving any indication at all about how many bytes and files it is about to remove doesn't help either.
    • by mlwmohawk ( 801821 ) on Tuesday May 06, 2008 @03:08PM (#23315492)
      The whole "undelete" thing is a DOS FAT stupidity. The *only* reason why people think that you *can* undelete is that the DOS FAT file system was designed in such a way that file changes could be recovered *IF* you managed not to change the file system too much. DOS being a mainly single tasker, with the exception of the standard "indos" flag games.

      POSIX was not and should not be designed in such a way that "undelete" is reliably possible. That's like asking: can I unlight that match? Can I unbreak that egg?

      An unreliable system that may, on the odd chance that the file structure has not changed too much, recover files from a disk that have not been overwritten yet is no replacement for NOT being an idiot and being careful when you delete something.

      • by swilver ( 617741 )
        Fine, assume I'm an idiot then. No expert would ever want such a feature, or expect to be able to recover files in some way after they had made a mistake, even if that means taking the drive offline immediately and having it perform a full disk scan.
      • by Blakey Rat ( 99501 ) on Tuesday May 06, 2008 @07:00PM (#23318506)
        Please.

        The greatest feature of modern software is "Undo." Everything I can screw up on the computer should have an Undo-- that's what the Recycle Bin (or Trash Can for Mac users) is there for, although it's a bit more awkward than pressing control-Z.

        Call it stupidity if you want, but my system files (you know, the ones that file permissions actually protect from malware) are worth approximately zero, and my personal files (the ones that malware can delete, no questions asked) are worth hundreds of man-hours, if not thousands, and ten times that in dollar value.

        Windows Shadow Copy provides an exact template for how to implement it; now go implement it.

        (And yes, I keep backups, as should everybody. But there's no excuse not to use spare disk space as another layer of defense.)
    • by Hatta ( 162192 )
      That's not the file system's job; that's the tool's job. You'll find on Windows that when you use 'del' to delete something, it doesn't end up in the Recycle Bin.

      So if you want some sort of soft delete, don't use rm or del. Use a tool that 'soft deletes' a file by moving it into a trash bin, which you can 'hard delete' when you need more space. This is how Windows and KDE both work.

      Personally, I think file systems aren't aggressive enough when it comes to deleting files. When I delete something I want it
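
      A soft-delete wrapper of the kind described above is only a few lines of shell. A hypothetical sketch (not how KDE or Windows actually implement their trash):

        # move targets into a per-user trash directory instead of unlinking them
        trash() {
            mkdir -p "$HOME/.trash"
            mv -- "$@" "$HOME/.trash/"
        }
        # hard delete when space runs low:  rm -rf "$HOME/.trash"/*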
      • by swilver ( 617741 )
        I don't expect it to manage "deleted" files at all. It's that sinking feeling you get when you think you're deleting a directory that should be mostly empty, and rm is taking longer than expected (as your only indication that something is horribly wrong).

        It's not unreasonable to expect to be able to undo that action when I immediately press cancel it and make sure the drive is not written to anymore. Ext2 can do this easily. Ext3 goes out of its way to make this impossible. XFS/ZFS/ReiserFS etc.. all m
    • I think you're looking for rm -rI /hugedir then. Adding the -f option is you specifically stating that you know exactly what you're doing and do not want to be asked whether you're sure you want to remove /home and all its subdirectories.
      • by swilver ( 617741 )
        Capital I not being an option on my rm, I assume you are talking about the "interactive" mode. The problem is that rm's behaviour is ridiculous in this mode. Watch this:

        [root@MyServer 0 ~]# rm -Ri MPdeletethis
        rm: descend into directory `MPdeletethis'? y
        rm: descend into directory `MPdeletethis/Gui'? y
        rm: descend into directory `MPdeletethis/Gui/wm'? y
        rm: remove regular file `MPdeletethis/Gui/wm/ws.c'? y
        rm: remove regular file `MPdeletethis/Gui/wm/ws.h'? y
        rm: remove regular file `MPdeletethis/Gui/wm/wske

    • Re: (Score:3, Funny)

      by Anonymous Coward
      Perhaps you should try prm (pansy rm) or psh (pansy shell).
  • Ext3 tops out at 32 tebibyte (TiB) file systems and 2 TiB files, but practical limits may be lower than this depending on your architecture and system settings--perhaps as low as 2 TiB file systems and 16 gibibyte (GiB) files.

    Is this really the case? I created a 100GB file on ext3 earlier this week. It contains a virtual machine image that I am currently running under Xen. I haven't yet had a problem. I would guess that >16GB files are pretty commonly used in the world of Xen.
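
    The practical ceiling tracks the filesystem's block size: ext3 with 1 KiB blocks tops out around 16 GiB per file, while the common 4 KiB block size allows files up to about 2 TiB, which is why a 100GB image works fine on most systems. You can check which case applies; a sketch assuming a hypothetical /dev/sda1:

      tune2fs -l /dev/sda1 | grep -i 'block size'
      # "Block size: 4096" means the larger file-size limit applies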

    • Did you miss the part where it said limits "may be" lower? As in, in some cases that might be true, but not in others.
  • Why bother? (Score:4, Informative)

    by jabuzz ( 182671 ) on Tuesday May 06, 2008 @03:36PM (#23315898) Homepage
    ext4 is the biggest waste of time and effort in Linux. There are already good extent-based filesystems for Linux. Why anyone would consider using what is an experimental filesystem for a multi-TB production filesystem is beyond me.

    Whatever they do, XFS and JFS will have far more testing and use than ext4 will ever have. I just don't get the point of ext4. It would be far more useful to fix the one remaining issue with XFS, the inability to shrink the filesystem non-destructively, than to flog the dead horse that is ext2/3 even more with ext4, which is not on-disk compatible anyway.
    • Re: (Score:3, Insightful)

      by skulgnome ( 1114401 )
      It has value as an experiment, even if it ultimately doesn't turn into much. These people have ideas, and they want to implement them. They aren't maintenance programmers and shouldn't be shoehorned into that task, even by J. Random Person On Slashdot.

      Remember how reiserfs was the first filesystem to have journaling in Linux, and how some people were ready to declare that there was no need for an ext3 any more?
  • by Khopesh ( 112447 ) on Tuesday May 06, 2008 @04:35PM (#23316780) Homepage Journal

    Those features may be new to the ext line, but not to the real competitors. I see nothing that might grant an edge over JFS or XFS. The real justifications will come from performance tests.

    This reminds me of the recent NTFS article here, which actually suggested that since Hans Reiser is in jail and reiser4 is dead, we should consider NTFS. WTF? That article further justified the ludicrous suggestion of NTFS as the primary filesystem by its similar performance to ZFS, but both run in user-space (and thus perform horribly), so neither is really an option. What the heck is wrong with JFS and XFS?

    Here are some real comparisons: First, Wikipedia's Comparison of file systems [wikipedia.org] gets you started with a nice mapping of features. Second, a benchmarking of filesystems from 2006 [linuxgazette.net] which is still quite applicable (though it doesn't yet cover ext4). What we need is a comparison of EXT4 to XFS and JFS (et al), with EXT2/3 in there for reference.

    Recall that the biggest reason for using ext3 is that it is the best-supported of all the filesystems. If all hell breaks loose, even Tomsrtbt [toms.net] (an ancient rescue floppy pre-dating Knoppix) can fix it. Ext4 breaks this backwards compatibility with ext2. Therefore, I see no reason to use it. One might as well use something more stable and proven, especially while we lack numbers suggesting it performs as well or better.

    • by Sentry21 ( 8183 ) on Tuesday May 06, 2008 @05:37PM (#23317648) Journal
      If your recovery procedures involve using pre-knoppix floppy recovery tools, you shouldn't be administering any systems with important data on them.

      Aside from the fact that no non-obsolete machine I've seen in the last few years has a floppy drive, 'backwards compatibility with ext2' is a pretty lousy minimum requirement for a filesystem.

      Heck, I can do recovery on Ext2/3, ReiserFS, JFS, XFS, and more using only a few-dozen-meg Debian netinstall image. I don't even want to know what an Ubuntu or Knoppix LiveCD could recover from.
    • Re: (Score:3, Informative)

      by oddfox ( 685475 )

      Here's an Ext4/XFS/ZFS benchmark [brillig.org]

      I would like to see a more recent benchmark that includes JFS/Reiser/etc., though.
