Fedora 11 To Default To the Ext4 File System 161

ffs writes "The next release of Fedora, 11, will default to the ext4 file system unless serious regressions are seen, as reported by heise online. The LWN story has a few comments extolling the virtues of the file system. Some benchmarks have shown ext4 to be much faster than the current default, ext3. New features that matter for desktop users include a faster file system check, extents support (for efficiently storing large files and reducing fragmentation), multiblock allocation (faster writes), delayed block allocation, journal checksumming (protecting against power and hardware failures), and others. The KernelNewbies page has more information on each feature. As is the extfs tradition, mounting a current ext3 filesystem as ext4 will work seamlessly; however, most new features will not be available with the same on-disk format, meaning a fresh format with ext4 or converting the disk layout to ext4 will offer the best experience."
This discussion has been archived. No new comments can be posted.

  • EXT4 in Clusters? (Score:1, Informative)

    by BountyX ( 1227176 )
After doing research on various cluster filesystems I eventually decided on GFS (as opposed to Lustre, which seemed a bit overkill). How does EXT4 compare to GFS? Can EXT4 even be used in a clustered environment?
    • No (Score:5, Informative)

      by Anonymous Coward on Friday January 23, 2009 @09:43AM (#26574295)

Ext4 is not a SAN or distributed filesystem. GPFS/Lustre/GFS remain good choices for that.

    • by Forge ( 2456 ) <kevinforge AT gmail DOT com> on Friday January 23, 2009 @10:23AM (#26574765) Homepage Journal
      Clustered file systems and local file systems are of necessity different. Most of what makes a clustered FS useful would be pure dead weight on a local FS.

What I would like to see are clustered FSs which are easier to set up. I.e., you go to the 1st machine, start up the cluster config program, and it asks: "Is this the 1st machine in your cluster?" Once you say yes there, you go to the other machines in turn, fire up the same program, say no to that question, and enter the IP of the 1st machine.

Once all those machines are added, the next step is to select "Add local disk to cluster pool" and then pick the partitions on your local hard drive that should be in the pool. They don't have to all be the same size either.

Once you have done that for each machine (either by going from one to the next, or by using the tool on the primary node to add disks from each one, or from a whole group of them if they are already partitioned in the same way), the pool is complete.

      Then you just start mounting this virtual disk and dumping files to it.

The technology exists to do this. The problem is that each time it's done, it's a manual process tantamount to a programming job. Who wants to take up the task of tying all the pieces together to make the setup feel this simple for the user?

Additional functionality (like tuning the FS for database or email usage, and a failover hierarchy) would be added over time, and in a way that does not detract from the simplicity of that basic setup.
      • Re: (Score:3, Funny)

        by pipatron ( 966506 )
        Sounds like someone should learn perl or python and get to it!
        • by Forge ( 2456 ) <kevinforge AT gmail DOT com> on Friday January 23, 2009 @11:34AM (#26575795) Homepage Journal
          We all have our talents.

I have bartered PC repair and system admin services for competent legal advice, accounting services, and even medical care on one occasion (every desktop in my dentist's office had the "worm of the month").

          Sensible people do what they are good at and wherever possible get others to do the other things.

This little project may take a day or a few months for a Perl wizard. I'm not sure. I do know it would take me years, if it got done at all.
Have you tried any of the admin tools to do this? Say, those from the CentOS Cluster Suite, for example?
            • by Forge ( 2456 )
Yes I have, and as I mentioned in another post, it's not easy by any stretch.
          • by WNight ( 23683 )

            It needs specifying. If you had example configuration files and how they would be changed with various operations, examples of what the dialogs should be like, and other details planned out you could probably get someone to program it pretty easily. It doesn't sound like it'd be a lot of code for a CLI app that helped with some of the discovery/etc.

        • Sounds like someone should stop replying to people on slashdot and get to it!
      • Re: (Score:3, Informative)

Red Hat ships some web-based tools called Luci and Ricci which basically do all of this, with a pointy-clicky interface.

        Rich.

  • by Chemisor ( 97276 ) on Friday January 23, 2009 @09:44AM (#26574309)

    So where can I see some benchmarks showing just how much of a slowdown I can expect after switching from ext2 to ext4? All the benchmarks I see around here compare it to ext3 and to ReiserFS only. Also, is it possible to run ext4 without the journal? Any benchmarks on that? (Oh, and please, don't bother with the reliability lectures. I couldn't care less.)

    • by diegocgteleline.es ( 653730 ) on Friday January 23, 2009 @11:14AM (#26575445)

      is it possible to run ext4 without the journal?

      Yes, it is [kernel.org]. And, as you can see in the link, ext4 is faster than ext2. Even with journaling.

You may be able to make one; the following stanza goes in /etc/mke2fs.conf:

        ext4_noj = {
        features = extents,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
        inode_size = 256
        }
        # mke2fs -T ext4_noj ext4image.iso
        mke2fs 1.41.3 (12-Oct-2008)
        ext4image.iso is not a block special device.
        Proceed anyway? (y,n) y
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        31296 inodes, 125000 blocks
        6250 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=130023424
        4 block groups
        32768 blocks per group, 32768 fragments per group
        7824 inodes per group
        Superblock backups stored on blocks:
        32768, 98304

        Writing inode tables: done
        Writing superblocks and filesystem accounting information: done

But you might not be able to actually use it:

        # mount -t ext4 -o loop ext4image.iso /mnt/loop1/
        mount: wrong fs type, bad option, bad superblock on /dev/loop/0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so
        # dmesg | tail
        ext4: No journal on filesystem on loop0

        I use ext4 on my media partition with no problems.
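For what it's worth, newer e2fsprogs can also mask the journal feature explicitly rather than relying on a custom fs_types stanza; whether the running kernel will then mount the result depends on its version, as the parent's dmesg output shows. A sketch (the image file name is made up):

```shell
#!/bin/sh
# Sketch only: builds a small ext4 image without a journal by masking the
# has_journal feature. Assumes e2fsprogs >= 1.41; the file name is arbitrary.
make_journalless() {
    img=$1
    dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
    mke2fs -t ext4 -O ^has_journal -F "$img"
    # an existing ext4 image could instead be stripped with:
    #   tune2fs -O ^has_journal "$img"
}
# usage: make_journalless ext4image.iso
```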

ext4 vs ReiserFS & JFS would be particularly interesting, tbh; it's hard NOT to be faster than ext3/2.

  • by Dogun ( 7502 ) on Friday January 23, 2009 @09:47AM (#26574333) Homepage

I still haven't seen sensible benchmarks for ext4 with respect to how large directories scale, interleaved small-file read and create, and small-file write with one fsync() at the very end (the only real-world case).

At this point, I have to wonder if the emperor has no clothes, or if the people posting benchmarks are just idiots.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

For those who are not filesystem whizzes, could you expand or provide a link on why this is important? I want to improve the performance of my boxes like everyone else, but understanding the ins and outs of filesystems is a weak point of mine. Thanks.

      • by Dogun ( 7502 ) on Friday January 23, 2009 @10:28AM (#26574835) Homepage

        Because disks are buffered, and fsyncing after every call (or forgetting to do so entirely) is silly.

        I suppose somebody cares about how well they can expect their 124GB file to stream to disk, but for the rest of us mortals, we care about journalling support (check), a toolset (mostly check), and common-case performance, which in the *nix world involves a lot of reading and writing of small files.

        I'd also like to see how these things perform under load, or when multiple benchmarks are running simultaneously.
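As a sketch, the "many small files, one flush at the end" workload described above might look like this (file count and payload are arbitrary; wrapping it in `time` on different filesystems would give the comparison):

```shell
#!/bin/sh
# Illustrative small-file workload: create many files, then flush once at
# the end instead of fsync()ing after every write.
dir=$(mktemp -d)
i=0
while [ "$i" -lt 100 ]; do
    printf 'small payload %s\n' "$i" > "$dir/f$i"
    i=$((i + 1))
done
sync                  # one flush at the very end, not one fsync per file
ls "$dir" | wc -l     # number of files created
rm -r "$dir"
```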

        • maybe a silly question, but wouldn't running multiple benchmarks simultaneously cause spurious results?

          • Re: (Score:3, Interesting)

            by blueg3 ( 192743 )

            What he really means is having a new benchmark that has a combination of loads from other benchmarks -- this is closer to a real-world case than any one individual benchmark, which is some kind of extreme case.

        • by jonaskoelker ( 922170 ) <(moc.oohay) (ta) (rekleoksanoj)> on Friday January 23, 2009 @10:50AM (#26575097)

          I suppose somebody cares about how well they can expect their 124GB file to stream to disk

          I know for certain that I care about big-file performance in almost only these ways:

          Can I write the file faster than the network sends it to me?

          Can I read the file faster than the application (typically mplayer) needs to consume it?

          When I know I shouldn't sit and wait for a larger task to continue, I really don't care how long it takes as long as I can do interactive stuff with good performance and the disk won't still be rattling when I go to sleep. Five minutes? An hour?

I'd rather have effort put into usability of disk management tools: four-way on-line resizing (either end moving left or right), on-line repacking (defragmentation) and on-disk format conversion, on-line repartitioning [which goes beyond the scope of ext4, of course] and things like that. A versioning file system would be cool, and btrfs snapshots sound like they'd be nice as well.

But those are the desires for my usage pattern, and I acknowledge that there are others.

          • I know for certain that I care about big-file performance in almost only these ways:

            Can I write the file faster than the network sends it to me?

            Whether or not that's at all difficult depends on the network in question. You could just about carve the data into stone tablets with a hammer and a chisel at the speed my cable modem delivers it, but when I'm moving files between machines on the LAN (Gig-E), it's a very different story.

            • Re: (Score:3, Funny)

              by TooMuchToDo ( 882796 )
              This is the part where I tell you I have a 50Mb/s down connection from Comcast that gets close to that, and you come search me out with an ice pick.
              • :-)

Although, 50 Mbps is only a little over 6 MB/s, which is an I/O rate even an ancient laptop drive with a lousy file system can handle.

                Do they give you a decent upstream with that?

                • 7Mb/s up. Not horrible, but it's no FIOS.
                  • You should just have forked out the extra amount for the business package. I just had Comcast out today, upgrading my 8/1 connection to 50/10. 5 static IPs, no port blocking, no throttling.
                    • Did go with the business package (no total transfer caps with the business package). $199/month. I don't have a need for the static IPs, as I VPN back to datacenter space I have at Equinix.
        • by inKubus ( 199753 )

I'd like my filesystem to come with a fast way to back up incrementally without having to read all the metadata. Like a lightweight journal, a changelog only. I know Veritas and Tivoli have had journalling services that poll the filesystem for changes for a while. Would it be so hard to break the journals up into a few parts so I only have to look at changed files when I'm backing up incrementally? Or do they already do this?
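One user-space approximation of that changelog idea (it watches for changes rather than reading a real filesystem journal) is Linux's inotify interface. A hedged sketch using inotifywait from the inotify-tools package, with made-up paths:

```shell
#!/bin/sh
# Sketch: log every modify/create/delete/move under a tree so an incremental
# backup can read only the logged paths since its last run, instead of
# stat()ing every file. Paths are illustrative.
start_changelog() {
    watch_dir=$1; log=$2
    inotifywait -m -r --timefmt '%s' --format '%T %e %w%f' \
        -e modify -e create -e delete -e move \
        "$watch_dir" >> "$log" &
}
# usage: start_changelog /data /var/log/fs-changes.log
```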

    • Re: (Score:3, Informative)

      You can see some of those benchmarks in this paper [fedoraproject.org] which explains the block allocator improvements that have been done in ext4.

      • by Dogun ( 7502 )

        Alright! And NOW I feel somewhat excited about ext4. I just wish for a change /. were posting things like this instead of the article in the summary.

  • by MacColossus ( 932054 ) on Friday January 23, 2009 @09:52AM (#26574401) Journal
    I read the article and it looks like converting from ext3 to ext4 may be problematic.

    I do not propose offering migration from ext3 by default, at this point, due to bugs in that process, and extra risk involved. Perhaps an "ext4migrate" boot option could be used to expose it for further testing.

    • by radarsat1 ( 786772 ) on Friday January 23, 2009 @10:08AM (#26574593) Homepage

      Good to know. Personally I'll be happy to use ext4 on new disks or when I'm really doing a complete re-install, but I'm in no hurry to "upgrade", seeing as my current ext3 disks are working just fine. I played with different filesystems once until I got some corruption and realized that one of the advantages of ext3 is that it's been around long enough that there are lots of tools to help with recovery and checking. So I'll probably stick with what I know until I have an opportunity to try out ext4, but I'm not going to go and reformat my disks right away.

    • Re: (Score:3, Insightful)

      by incripshin ( 580256 )
      I upgraded, and eventually it erased my root directory. I'm presently trying to figure ext4 out and writing a program that should recursively recover files from /etc and my home directory. I recommend nobody use ext4 for at least five years.
      • So... you upgraded to a brand-new filesystem without first making a backup? Some glitches are to be expected at this point, just as when ext3 first came out. If everyone followed your advice and avoided ext4 "for at least five years," however, those glitches would never be found, much less fixed.

  • After several more years testing in Fedora releases?
  • Thank you Red Hat (Score:5, Interesting)

    by eparis ( 1289526 ) on Friday January 23, 2009 @10:25AM (#26574803)
I'm glad to see Red Hat and Fedora taking the hard steps to push our technology forward. Precious few organizations employ people to work on things like this, instead expecting others to do the hard work to create and integrate disruptive core technologies. I know Red Hat employs people to work full time on ext4, and they have a person working full time on btrfs (which by all early accounts is supposed to be revolutionary and kick the crap out of everything else out there, even the fabled ZFS; it pains me to say thanks to Oracle for btrfs, but one of their employees is the primary driver). Someone has to do the hard work of being a leader, putting in engineering time, and fixing the bugs before the fanboys can consume (and all too often get credit for) new technology. Thank you Fedora for both the freedom and the constant drive to be on the leading edge of technology.
And thanks to the Fedora users, apparently the first large user base that will (hopefully in full knowledge) be testing this thing for the benefit of the rest of the community (nothing against Red Hat; somebody has to take the first step).

      • by Abreu ( 173023 ) on Friday January 23, 2009 @11:58AM (#26576345)

        There is a saying in Spanish, which translated says:

        "They are braver than the first men to try oysters!"

        • Re:Thank you Red Hat (Score:4, Interesting)

          by TheLink ( 130905 ) on Friday January 23, 2009 @01:26PM (#26578035) Journal
          I wonder if many "edible discoveries" involved drunk young men daring each other to eat something.

          Stuff like: century eggs, tofu, lutefisk, casu marzu (not sure if the last is really that edible ;) ).
          • Or extreme starvation: "we dropped our fish in the fireplace, but we'll surely die if we don't eat it anyway!"

          • by mikael ( 484 )

            According to the natives of many tropical jungle tribes, they watch what the animals eat, and adjust their diets accordingly.

            • by Raenex ( 947668 )

              How did the animals learn? Check out these parrots that eat clay so they can eat poisonous nuts and seeds:

              http://www.highlightskids.com/Science/Stories/SS1201_parrotseatDirt.asp [highlightskids.com]

              • by mikael ( 484 )

Animals learn from watching each other. Perhaps these parrots were able to determine that seeds that had fallen in this layer of clay were more edible than seeds from other areas. Then when they went to other areas, they found out that they could eat the seeds there.

                Maybe they have a sense of taste/smell that can detect alkaloids and suitable antidotes. Mammals can smell salt/humidity and know instinctively that if they eat something salty, they should drink water.

              • by TheLink ( 130905 )
                Lots of birds eat grit so that they can chew their food in their gizzards.

                Maybe a parrot that accidentally ate clay one day figured it out, and it eventually became a common tradition amongst parrots (who are intelligent enough to copy each other).
    • by Eil ( 82413 )

      Thank you Fedora for both the freedom and the constant drive to be on the leading edge of technology.

      So that more mainstream distributions like Ubuntu can implement the technologies after they've been bug-tested.

      </rimshot>

    • Thank you Fedora for both the freedom and the constant drive to be on the leading edge of technology.

Thank you, Fedora users, for not abandoning Red Hat when they started demanding money for their operating system and only giving away their alpha test version. You truly provide the earliest bug reports and suffer the most damage. The whole team is truly grateful.

  • Excellent. This will be a great feature for F11. Now, if they could just get Fedora 10 booting with an nvidia fakeraid [redhat.com], I'd be happy. And, fix the performance issues with intel GMA graphics [redhat.com], that'd be dandy too.

    Fedora is my favorite distro, but this fakeraid bug is ridiculous -- keeping me from running F10 on my desktop. Sure runs nicely on my Samsung NC10, though.
  • ... They're the ones with the arrows in their backs! It's changes like this that underscore treating new distro versions as a public beta. Chances are, this or some other new feature will cause someone real pain. It's always a good idea to make sure that that someone is *not* you. Whether it's Fedora or OpenSuse, or Ubuntu, oftentimes features are added that aren't really ready for prime time. Trust no one.

    • Re: (Score:3, Informative)

      by BTG9999 ( 847188 )
Do you not know what Fedora is? Fedora is a bleeding-edge distro, one openly acknowledged by Red Hat as being their beta test bed for new technologies that might eventually make it into RHEL. So this is just a standard thing Fedora does.
  • ext3 seems to be the nicest at the moment for native linux support and painless Windows support for dual boot machines. Easier than using NTFS in Linux. Last I heard ext4 wouldn't work with Windows.
  • by the_one(2) ( 1117139 ) on Friday January 23, 2009 @12:06PM (#26576501)

    Apparently there is a serious risk of data loss at this time in case of power loss (at least in ubuntu). http://ubuntuforums.org/showthread.php?t=1040199 [ubuntuforums.org]

  • by unixluv ( 696623 ) <unixluv&gmail,com> on Friday January 23, 2009 @01:05PM (#26577633)

One of my biggest beefs with ext3 in the data center is the periodic forced fsck. Red Hat won't support JFS or XFS (which I can get from CentOS), and some vendors won't support anything that isn't on their supported platform list (IBM ClearCase, for one).

So is ext4 going to force an fsck at boot, which takes half a day with ext3 on some of my multi-TB systems? Will Red Hat finally adopt a better server filesystem? These are the questions that some of us doing professional Red Hat support are asking.

    • Re: (Score:3, Informative)

      by Per Wigren ( 5315 )

So turn off the periodic fsck then:

      tune2fs -c 0 -i 0 /dev/foo

      It's perfectly safe as long as the underlying blockdevice is safe (RAID).

      • by Gerald ( 9696 )

        It's perfectly safe as long as the underlying blockdevice is safe (RAID).

        I'd rather have a filesystem that's perfectly safe period, thankyouverymuch.

        • No filesystem in current Linux is going to save you from silent bitflips. The only way to be protected from that is to use checksumming and parity calculation. Either you implement that in the block device (classic RAID) or in the filesystem (ZFS Z-RAID or similar) or you have to live with the possibility of corrupted data.

      • It's perfectly safe as long as the underlying blockdevice is safe (RAID).

        And the filesystem driver is bug-free.

    • It WILL Help (Score:4, Informative)

      by maz2331 ( 1104901 ) on Friday January 23, 2009 @04:44PM (#26581423)

Ext4 is orders of magnitude faster than ext3 regarding fsck time. Your half-day checks will almost certainly be reduced to minutes. The developers rewrote the algorithm so phase 1 doesn't require as intensive a search.

      If it's really important to get the machines up in minimal time (even at risk of some data loss) then you can turn off the auto checks entirely.

    • by jabuzz ( 182671 )

Take LVM snapshots and do periodic background fscks; if they pass, reset the last-check time. If they fail, set it to some time in the distant past and raise an alert.
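That snapshot-then-fsck routine could be sketched roughly as below; the volume names are made up, and it assumes an LVM volume group with spare room for a snapshot:

```shell
#!/bin/sh
# Hedged sketch: fsck a read-only view of a live ext3/ext4 volume via an
# LVM snapshot, then record or force the check time accordingly.
background_fsck() {
    vg=$1; lv=$2
    lvcreate --snapshot --size 1G --name "${lv}-snap" "/dev/$vg/$lv"
    if e2fsck -fn "/dev/$vg/${lv}-snap"; then
        tune2fs -T now "/dev/$vg/$lv"       # record a fresh last-checked time
    else
        tune2fs -T 19700101 "/dev/$vg/$lv"  # force a check at next boot
        logger -p user.alert "fsck of $vg/$lv snapshot FAILED"
    fi
    lvremove -f "/dev/$vg/${lv}-snap"
}
# usage (as root): background_fsck vg0 data
```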

However I have to agree: my biggest beef with Red Hat is their boneheaded sticking with ext3, and the utter waste of effort that is ext4. It would be far more sensible to have picked either JFS or XFS (I don't care which) and used that instead.

    • One of my biggest beefs with ext3 in the data center is the required fsck periodically.

      The Ext guys need to take a lesson from UFS2 (FreeBSD 5.0, circa 2003) and perform the fsck in the background, at low priority, while the system is up and fully functional.

      I hear Btrfs is going to eventually get similar capabilities, so maybe the answer is to just keep waiting.

I've been using a trick since the ext2 days to reduce fsck times by a lot: reduce the inode count. I seem to recall ext3 allocates 1 inode for every 16kb of disk space by default. This means 20 million inodes on a 300gb partition. 2.5gb scanned, on every fsck! A lot of the time, this is overkill. I generally run my partitions with 1/5 to 1/20 this number of inodes. I don't have any partitions formatted under the defaults to compare this to, but a mostly full 300gb partition, with 1M inodes and 50k files, f
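The back-of-envelope arithmetic behind those numbers, plus the mke2fs knob involved, can be sketched like this (assumes classic 128-byte ext3 inodes and the default one-inode-per-16-KiB ratio):

```shell
#!/bin/sh
# Inode-count arithmetic for a 300 GB partition formatted with defaults,
# versus a reduced ratio of one inode per MiB.
size_gb=300
bytes=$((size_gb * 1024 * 1024 * 1024))
default_inodes=$((bytes / 16384))              # ~19.6 million inodes
table_mib=$((default_inodes * 128 / 1048576))  # ~2.4 GiB of inode tables
reduced_inodes=$((bytes / 1048576))            # with: mke2fs -i 1048576
echo "$default_inodes $table_mib $reduced_inodes"
# the knob itself: mke2fs -i <bytes-per-inode>, or -N for an absolute count
```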

  • This is good news. All those Fedora folks can be beta testers. In five years or so I'll consider going from ext3 to ext4. It's only about a year since I went to ext3. I figured it must be OK by now since there haven't been any scare stories. I used to use Reiser before ext3 was stable.

    xfs is really over-rated. I used to work on an "Enterprise" storage appliance that used xfs. It was scary. Don't go there. Also, avoid anything from IBM.

  • Is ext2 a better choice because it limits the number of writes, or is that a silly worry?
Probably a silly worry. Every SSD out there provides its own wear-leveling hardware, so the FS you use is fairly immaterial. Heck, most (all?) gear out there doesn't even provide a mechanism for direct access to the underlying storage, so you *can't* do the leveling in software (i.e., with something like JFFS, etc.) even if you wanted to.
