Benchmarking XFS, ext2, ReiserFS, FAT32 124
blakestah writes: "Well, it looks like someone on the LKML has taken it upon himself to do some benchmarking of ReiserFS, ext2, and XFS using the 2.4 kernel series. It is not a real benchmark test, but it is kind of interesting nonetheless. See the results (in Spanish) at this LUG in Mallorca. Simple runs of dd, tar, and rm are shown, and for most of the tests XFS is pretty dern fast, beating all the others. The exception is removal of a large source tree (the kernel source), for which XFS is the slowest by a fair amount. See this kernel post for the translations of important words. It will be nice to see more such open benchmarking posted, because benchmarks give developers goals." The contrast between FAT32 and XFS is particularly interesting to see.
Re:Good Job (Score:1)
If the benchmark doesn't operate similarly to the application you're going to use it for, the benchmark may be next to meaningless to you. Maybe you're going to create, and then soon after delete a small file, 10,000 times. Which FS is better for you? Maybe you don't know that your application does this, now you have even less of a clue.
There should be fewer benchmarks because 1) people trust them more than they should, and 2) their utility is low when compared to test-driving your application on the real thing.
Those "tests" aren't good benchmarks (REFORMATTED) (Score:1)
Benchmarks... bah. I don't think the ones linked to on that Spanish site are worthy of the name. For those that can't see it because it's /.ed, it makes out XFS to be quite a bit faster. Some relevant comments from the LKML... [the last one showing that XFS is not always faster]
Alan Cox
"reiserfs seems to handle large amounts of small files well, up to a point but it also seems to degrade over time. ext3 isnt generally available for 2.4 but is proving very solid on 2.2 and has good fsck tools. Ext3 does not add anything over ext2 in terms of large directories of files and other ext2 performance limits. XFS is very fast most of the time (deleting a file is sooooo slow its like using old BSD systems). Im not familiar enough with its behaviour under Linux yet."
Andi Kleen:
"On one not very scientific test: unpacking and deleting a cache hot 40MB/230MB gzipped/unzipped tar on ext2 and xfs on a IDE drive on a lowend SMP box.
XFS (very recent 2.4.4 CVS, filesystem created with mkxfs defaults)
> time tar xzf ~ak/src.tgz
real 1m58.125s user 0m16.410s sys 0m44.350s
> time rm -rf src/
real 0m50.344s user 0m0.190s sys 0m13.950s

ext2 (on same kernel as above)

> time rm -rf src/
real 0m1.085s user 0m0.160s sys 0m0.930s
Re:xfs and deletion... (Score:1)
Re:But reiserfs wastes 1/3 of disk space! (Score:1)
Re:xfs and deletion... (Score:1)
There is a reason that journaling is considered required for an enterprise level OS. I'd love to see ReiserFS or XFS ported to FreeBSD. Too bad about that GPL thing...
Re:Does it bother anyone... (Score:1)
It's not ASCII 0; 0 marks the end of the directory. (The deleted-entry marker is 0xE5, I think, not that it's that important.) And if the file has a long filename, the first letter of the short-filename entry is overwritten, but the long filename is still available. So newer undelete programs don't always need the first letter.
Anyway, when a file is deleted, each cluster of the file must be zeroed in the FAT (note that there are usually two identical copies of the FAT, both must be updated). Since the FAT is at the beginning of the disk, the write heads may need to move a large distance (between the directory entry and each FAT), so write caching will give a huge speed increase. Also, directories can only be deleted when they are empty, so this needs to be done for each file (and subdirectory), and then for the directory itself. And then the free space count has to be updated.
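To make that sequence concrete, here's a toy model in Python of the delete path described above. The constants and layout are simplified stand-ins for illustration, not the real on-disk format:

```python
# Toy model of FAT-style deletion as described above.
# Constants and layout are simplified, not the real on-disk format.
FREE = 0x000          # cluster is free
EOC = 0xFFF           # end-of-chain marker
DELETED_MARK = 0xE5   # first byte of a deleted directory entry

def delete_file(directory, fats, name):
    """Mark the directory entry deleted, then zero the cluster chain
    in every FAT copy (there are usually two identical copies)."""
    entry = directory[name]
    entry["first_byte"] = DELETED_MARK  # rest of the entry survives
    cluster = entry["first_cluster"]
    freed = 0
    while cluster != EOC:
        nxt = fats[0][cluster]          # follow the chain before freeing
        for fat in fats:                # both FAT copies must be updated
            fat[cluster] = FREE
        freed += 1
        cluster = nxt
    return freed

# A 3-cluster file whose chain is 5 -> 9 -> 12, mirrored in two FATs.
fat = [FREE] * 16
fat[5], fat[9], fat[12] = 9, 12, EOC
fats = [fat[:], fat[:]]
directory = {"SRC.TAR": {"first_byte": ord("S"), "first_cluster": 5}}

print(delete_file(directory, fats, "SRC.TAR"))  # 3 clusters freed
```

Each iteration of that loop is a FAT update, and since both FATs sit at the front of the disk, every one is a potential long seek when there's no write caching.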
My guess here is that FAT32 just marks the root source directory as deleted and then moves along.
This would not work, you'd end up with "lost clusters" and lots of missing hard drive space. You could try this with Norton Disk Editor some time if you were curious - it's good way to look at how the filesystem is stored, FAT is pretty easy to understand. Scandisk would find and fix this problem easily (it would either restore the directory, or just delete everything that used to be in it).
[Offtopic] A fun trick to play on someone is to change their disk's volume label, and then use Disk Editor to set the read-only flag on it (you'll find the volume label in the root directory). They will be unable to change the label from Windows or DOS. Unfortunately, I think Scandisk (or Norton Disk Doctor) can detect this now, but they won't think to run Scandisk for a few minutes anyway.
Re:XFS deletion performance. (Score:1)
>Only if you've got an infinite drive..
So the fact that you get one credit to your bank account each month but make lots of payments means you have an infinite balance?
lots of allocation operations, fewer delete operations. It is just that the delete operations are bigger
ReiserFS and rm -rf (Score:2)
Re:Does it bother anyone... (Score:3)
My guess here is that FAT32 just marks the root source directory as deleted and then moves along. (This is how it works under DOS/Windoze at any rate.) If this were so, FAT32 is hardly fast here -- it takes 6.7 seconds to do just one write to the FAT table!
To be honest though, I'm not sure how the various Linux FSes work, so maybe they're doing it the same way, though I doubt one FS entry would take 10 - 20 seconds.
Those "tests" aren't good benchmarks (Score:4)
Re:ReiserFS and rm -rf (Score:1)
So beware: there are no undelete utilities, last time I checked, though the authors would like to write one if you choose to pay them.
xfs and deletion... (Score:1)
- A.P.
--
Forget Napster. Why not really break the law?
Re:Why FAT32? (Score:2)
Re:Whatever happened to Tux2? (Score:4)
http://kernelnewbies.org/~phillips/
IBM's pthread work (Score:1)
The perils of tracking obscenely large drive sizes (Score:1)
<SLIGHTLY OT>
Doesn't it piss you off that disk drive manufacturers are claiming that 1 Gb = 1000 Mb in order to protect the "sanctity" of the metric system? What a load of crap. Byte magnitudes have always been on the order of 2^(n*10). Now all of a sudden they need to protect the SI naming convention? Isn't it convenient that this "protection" makes their drives sound larger without actually being that large?
</SLIGHTLY OT>
I've always been pulling for XFS. (Score:5)
This guy Dave (I forget his last name now), from sgi [sgi.com] gave a presentation to the DC-LUG [tux.org] back in 1999 and talked about XFS and how sgi wanted to release it as GPL to become a core component of Linux. He also talked about the history of XFS and how they had to invent a new size prefix to describe how large a filesystem XFS could accommodate ("exo-byte" = 1024 Gb). XFS has been used by sgi for their MIPS and Cray machines ever since 1984, and now that sgi has donated it to the Linux community, I think we'd be remiss if we didn't welcome it with open arms.
But that's just MHO. ;)
Re:FAT32 (Score:2)
I would imagine that the reason FAT32 isn't highly optimised on Linux is that it's only there as a compatibility tool. Nobody running Linux would put data on a FAT32 partition when performance was an issue. Being able to get at the filesystem at all is enough.
--
Re:Either the site's slashdotted already... (Score:2)
XFS and MOSIX -might- work, with sufficient application of brute force. Neither works with 2.4.4-ac6, and I believe there are a lot of problems with FreeSWAN's IPSEC, too.
The only patches I've found to be relatively unstressful are the International Patch, the POSIX Timer patch, and IBM's Next Generation PThread code.
Re:Either the site's slashdotted already... (Score:2)
The closest to a printable response was some sneer to the effect that the writers weren't interested in supporting Alan Cox's "private OS".
(Curiously, the patches tend to break with every release of the official kernel. Hmmm. Maybe if these coders got off their high horses long enough to see how code migrates between the official kernel and the AC series, they'd have an easier time keeping up.)
As of right now, though, you're going to have to do some of the merge by hand. And it's likely to prove difficult.
Look on the bright side, though! If you =DO= get a merged patch, it'll be YOUR patch that's used. Who wants second-best?
Re:XFS size (Score:3)
Size of Linux-2.4.4 with XFS (as taken from SGI's CVS tree): 3,612,541 lines (same algorithm) and 145,367,040 bytes.
Either the site's slashdotted already... (Score:5)
Seriously, XFS looks interesting. I did some stats on source size, and posted them to K5. :) Dropped in score so rapidly, it'll reach the Earth's core sometime in the next half hour.
Essentially, XFS is 10% of the entire kernel size, making it perhaps the single most sophisticated driver available. On the flip side, I can't help but feel that that much code is going to have -some- impact on the rest of the system.
Talking of benchmarking, how does IBM's "Next Generation PThread" code stand up? And how on Earth are you supposed to install it? It clashes with glibc, making an RPM install, ummm, of questionable safety. And once you start with RPM (or tarballs, or any other system), it's unwise to mix-and-match. Either you keep track of where things are, or the system does. Half-and-half is NOT good.
Lastly, anyone found a way to get XFS, JFS and AFS into the same kernel? (Without using a sledgehammer, preferably.)
You should not use 2.4.4 from CVS yet. Re:Those "t (Score:1)
Re:lousy metadata performance with XFS? (Score:2)
Note, also, that my test machine was only an AMD K6-2 266 w/28M RAM
Re:Asshole. (Score:1)
How about barrapuntar: "to slashdot".
- Sam
Re:Those "tests" aren't good benchmarks (Score:2)
Keep in mind that the article starts out by saying "This is a little more rigorous than what was posted to the mailing list the other day". It also says "This comparison only measures one aspect of the filesystems, and there are other important aspects to consider when deciding on a filesystem".
- Sam
Re:Well, I guess.. (Score:2)
Well, "slashdot" is Barrapunto [barrapunto.com] when translated into Spanish, so the verb would be "barrapuntar". E.g. ellos barrapuntaron la página (they slashdotted the page).
- Sam
I managed to get a copy of the page (Score:5)
First benchmark:
Writing, reading, and deleting a fairly large file (256MB). The commands used are as follows (omitting the redundant 'time' command used in all cases):

$ dd if=/dev/zero of=./prova bs=1M count=256
$ dd if=./prova of=/dev/null bs=1M count=256
$ rm -f ./prova

FS Write Read Delete
ReiserFS 18.5 23.41 0.4
Ext2FS 20.3 21.38 0.57
XFS 16.32 19.42 0.26
FAT32 43.65 27.98 1.59

Second benchmark: this benchmark was done with the source code of Linux 2.4.4. The .tar.gz file was first uncompressed, so that all the work was done on the tar file (which is larger than 100MB). The commands used on the uncompressed .tar file are as follows (with the time command omitted again):

$ cp linux-2.4.4.tar prova.tar
$ tar xf prova.tar
$ rm -f prova.tar
$ rm -rf linux

FS Copy Extract rm file.tar rm -rf dir
ReiserFS 38.48 58.44 0.45 10.09
Ext2FS 21.31 59.19 2.88 11.12
XFS 16.21 35.44 0.18 21.96
FAT32 39.76 134.19 1.2 6.7
Give the poor site a chance... (Score:4)
$ dd if=/dev/zero of=./prova bs=1M count=256
$ dd if=./prova of=/dev/null bs=1M count=256
$ rm -f ./prova

FS Write Read rm
ReiserFS 18.5 23.41 0.4
Ext2FS 20.3 21.38 0.57
XFS 16.32 19.42 0.26
FAT32 43.65 27.98 1.59

$ cp linux-2.4.4.tar prova.tar
$ tar xf prova.tar
$ rm -f prova.tar
$ rm -rf linux

FS cp extract rm rmdir
ReiserFS 38.48 58.44 0.45 10.09
Ext2FS 21.31 59.19 2.88 11.12
XFS 16.21 35.44 0.18 21.96
FAT32 39.76 134.19 1.2 6.7
[1] ReiserFS has a high (reproducible) variance when copying the kernel tarball.
All tests were run three times, with the exception of the reiserfs kernel tarball copy, which was run six times. The machine was running in single-user mode. Results were timed with time(1).
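For anyone wanting to replay the methodology (several runs, averaged wall-clock time), a rough Python harness along these lines would do it. The commands come from the tables above; everything else is an assumed sketch, and a serious rerun should also flush caches between runs and use single-user mode as the author did:

```python
import subprocess
import time

def avg_real_time(cmd, runs=3):
    """Run a shell command `runs` times and return the mean wall-clock
    seconds, roughly the averaged 'real' field of time(1)."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        total += time.perf_counter() - start
    return total / runs

# The article's first test, scaled down to 1MB so it runs anywhere:
write_t = avg_real_time("dd if=/dev/zero of=./prova bs=1M count=1")
read_t = avg_real_time("dd if=./prova of=/dev/null bs=1M count=1")
rm_t = avg_real_time("rm -f ./prova")
print(f"write {write_t:.3f}s  read {read_t:.3f}s  rm {rm_t:.3f}s")
```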
Re:Well, I guess.. (Score:1)
Re:XFS size (Score:2)
Were you counting the cmds/ dir? (Score:2)
Any chance your tallies include the whole cmds/ tree? That's all userspace stuff...
XFS size (Score:5)
Although XFS is big, it doesn't stomp on the kernel that much.
If you're looking at it from a pure "volume of code" point of view, here's some info:
The XFS patches are split into 2 parts, one which contains kernel changes, and one which is the filesystem itself.
The "core-kernel" patch is about 190k, while the actual filesystem code patch is about 4.3M.
Bear in mind that a 190k patch to the kernel does not mean 190k of new code, either, since you have to take out the context lines, the headers, and the delete lines.
Overall, the impact on the kernel isn't as big as you might think, looking at the overall size.
Now, whether or not executing code in the filesystem slows down the rest of the system, I don't have any real data on that, although I have not noticed any detrimental effect on my system.
Re:Either the site's slashdotted already... (Score:2)
Words spoken by one person? (Score:1)
That must mean that it's possible to calculate (or guestimate) the number of words spoken by one person during his/her lifetime. I wonder how much storage it would take to hold that?
Macka
Re:I've always been pulling for XFS. (Score:2)
exobyte, yottabyte, zetabyte. x-y-z.
THEN you do a lottabyte. :)
Rob
Re:No ext3? (Score:3)
I installed xfs a week ago. Yesterday I wrote about it to my ex-office mate, saying I couldn't wait for a power failure. Got my wish this morning at about 4:00 a.m. My machine took 15 seconds to reboot. My colleague's machine with ext2 took nearly 3 minutes. Cool messages in the logs:
May 10 03:55:11 musuko kernel: Start mounting filesystem: ide0(3,5)
May 10 03:55:11 musuko kernel: XFS: WARNING: recovery required on readonly filesystem.
May 10 03:55:11 musuko kernel: XFS: write access will be enabled during mount.
May 10 03:55:11 musuko kernel: Starting XFS recovery on filesystem: ide0(3,5) (dev: 3/5)
May 10 03:55:11 musuko kernel: Ending XFS recovery on filesystem: ide0(3,5) (dev: 3/5)
Re:Those "tests" aren't good benchmarks (Score:1)
Suddenly alarm rings, your syslog becomes all red and the security specialist comes storming through the door shouting "someone executed 'rm -rf /&' as root"...
At this point, would you prefer to have a filesystem that is *sooo* fast deleting that it's already gone through /bin and moved on to /home/, or would you like to have one that has barely just started? :)
Of course that is all just hypothetical, not that anyone has ever had to recover a system from a deleted...
regards: andrey
Re:Either the site's slashdotted already... (Score:2)
Well, it seems to break modules in the one patch of the 2.4.3 kernel that I've tried so far. (Unpack fresh kernel source, patch with both XFS patches, compile...XFS works [compiled directly into kernel], but every module I try to load [sound, drm driver, etc.] complains about unresolved symbols. Not sure what to do about that). Anybody else run into this one? I've been itching to try XFS on a couple of my systems...
Lastly, anyone found a way to get XFS, JFS and AFS into the same kernel? (Without using a sledgehammer, preferably.)

If this isn't bad enough, I'd like to get XFS and MOSIX into the same kernel. How's THAT for masochistic? Need to figure out how to get the XFS patch into 2.4.4, though...
---
FAT32 vs. XFS (Score:3)
Don't get me wrong, I'm not comparing FAT32 to XFS by a long shot! But FAT32 is a fs that not a lot of hackers care about enough to improve the performance under Linux. Personally, I've always found that FAT32 access under Linux has been abysmal compared to access to the same filesystem under Windows.
No ext3? (Score:4)
It claimed that it'll still have fsck run when you crash, but actually nothing was getting run... I'm pretty sure that it was just not repairing itself after crashes. I'm really surprised I never lost any data until, well, one day I lost the whole damn partition.
I'm using ReiserFS now and I'm really happy with performance/crash recovery. I have a lot of cat-related outages...
--
Re:I've always been pulling for XFS. (Score:3)
they had to invent a new size prefix to describe how large a filesystem XFS could accomodate ("exo-byte" = 1024 Gb)
This is a slight exaggeration. A kilobyte is 1024 bytes, a megabyte is 1024 kilobytes, a gigabyte is 1024 megabytes, a terabyte is 1024 gigabytes. Oh wow, it is not even an exaggeration, it is just wrong, I guess. Anyway, if we keep going up the table of SI prefixes in my CRC, we could have petabytes, which would be 1024 terabytes, and then exabytes = 1024 petabytes, and so on. There is no need to invent things; it is all taken care of, from 10^-24 (yocto-) to 10^24 (yotta-) (now that's a lotta stuff) ;)
SI Prefixes (Score:3)
This isn't specifically for bytes, but the list of SI prefixes is here [nist.gov]
The new prefixes for binary units (which nobody uses (the prefixes, not binary units)) are here [nist.gov]
Re:Why FAT32? (Score:3)
Fat32 was probably included because it's so well supported and easy to do.
Re:XFS deletion performance. (Score:1)
If you think about it, almost as much space gets allocated as gets deleted overall. In fact, the difference between total space allocated and total space freed is equal to the space in use.
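That's just conservation: cumulative allocations minus cumulative frees is exactly what's currently in use, so allocations always outnumber frees on any disk that isn't empty. A quick simulated check of the invariant:

```python
import random

random.seed(1)
capacity = 1000
allocated_total = freed_total = in_use = 0

# Simulate a disk's lifetime of random allocate/free operations.
# A block can only be freed after it was allocated.
for _ in range(10000):
    if in_use < capacity and (in_use == 0 or random.random() < 0.6):
        allocated_total += 1
        in_use += 1
    else:
        freed_total += 1
        in_use -= 1

# The invariant from the comment above holds at every step:
assert allocated_total - freed_total == in_use
assert allocated_total >= freed_total
print(allocated_total, freed_total, in_use)
```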
Re:Those "tests" aren't good benchmarks (Score:1)
http://www.uwsg.iu.edu/hypermail/linux/kernel/0105 .1/0334.html [iu.edu]
Re:ReiserFS and rm -rf (Score:1)
It was funny...all applications currently running (browser, etc.) were all still happily running and usable, but I could not execute ANY new commands at the shell prompt.
I decided to take that opportunity to upgrade to SuSE 7.0. I'm very glad I didn't wipe my wife's files...
Re:IBM's pthread work (Score:1)
Good stuff!
--
A non-machine translation...:-) (Score:1)
it...
--
These are not benchmarks, really... I have only done some simple tests, but some things have surprised me and I would like to give you a look at them. :-)
Well, this is a little more rigorous than that which happened on the mailing list the other day... the idea is to measure the
performance of various filesystems, which, nevertheless, are very important: XFS and ReiserFS have journaling, and XFS has
advanced characteristics like ACLs, extended attributes, etc. That is to say, this "comparison" takes into account a single aspect of
the filesystems, but there are many other important aspects to take into account when you choose one.
To begin, I have done these tests with RedHat 7.1 with the installer modified by SGI for directly installing XFS 1.0 for Linux, with
the kernel and all the tools to date. I have recompiled the kernel in order to have support for ReiserFS without DEBUG activated
(which makes it very slow).
Aside from that, I have to say that I have done all these tests on a hard drive quicker than that which I used for that comment on the
mailing list (it runs about 12.5 MB/s according to hdparm -t). All the tests were done on the same machine, with the same system,
the same kernel, and they were done on the same hard drive on the same partition. The only thing that has changed has been the
filesystem: my intention was basically to test XFS and compare it with ReiserFS, but I have added Ext 2 FS and Fat 32 as
examples (though some will say that I'm cruel to include Fat 32;-). I have used the 'time' command to measure the execution
time of the commands, and the times that show up in the tables are the average of the times obtained in three experiments. I recorded the
"real" time elapsed during execution, and you have to take into account that the machine was in single-user mode, so for all practical
purposes it was not doing anything more than executing my commands.
First test: writing, reading, and deletion of a relatively large file (256 MB). The commands used were these (omitting 'time' which
was used with all the commands):
* dd if=/dev/zero of=./prova bs=1M count=256
* dd if=./prova of=/dev/null bs=1M count=256
* rm -f ./prova
And the times were these (in seconds):
FS Writing Reading Deletion
ReiserFS 18.5 23.41 0.4
Ext 2 FS 20.3 21.38 0.57
XFS 16.32 19.42 0.26
FAT 32 43.65 27.98 1.59
Second test: this was based on the source code of kernel 2.4.4. I had downloaded the tar.gz and had uncompressed it in order to
work with only the tar (of more than 100 MB) without adding the implications of gzip. The commands used were these (again,
omitting 'time'):
* cp linux-2.4.4.tar prova.tar
* tar xf prova.tar
* rm -f prova.tar
* rm -rf linux
And the times were these (in seconds):
FS Copy Extraction rm prova.tar rm -rf linux
ReiserFS 38.48 58.44 0.45 10.09
Ext 2 FS 21.31 59.19 2.88 11.12
XFS 16.21 35.44 0.18 21.96
FAT 32 39.76 134.19 1.2 6.7
Conclusions: I was surprised by the speed of XFS because in the first test that I did (a single experiment) it didn't seem so quick.
For writing, reading, and deletion of big files (sequentially), it seems that this is the one that runs better. Although ReiserFS didn't do
badly, I was hoping that it would beat XFS. In this test, something else caught my attention: all but FAT32 lagged more in reading than in writing. Can anyone explain that? Something related to...

As far as the tests with the kernel source code, copying the tar file was fastest on XFS, followed by Ext2FS, then ReiserFS, and finally (what a surprise :-) FAT32. But this is looking at the averages. There is another curious thing (which I didn't put in the table): in this test, the variance using ReiserFS was very large, oscillating between 16.94 and 63.48 seconds. I cannot explain that. I repeated this test six times instead of three, and it didn't stabilize. Running it three times in a row (I had a script prepared for doing this, and I ran it twice), the first copy lagged some 20 seconds, the second some 60, and the third about 30. This happened to me twice. Poltergeist?
The extraction of the tar was by far slowest on FAT32, while XFS behaved much better, and ReiserFS and regular Ext2FS both lagged about the same. If in the previous test my attention was drawn to the enormous variance in the time of copying large files with ReiserFS, here it was drawn to the consistency of the results of XFS (identical in all three experiments). Not that this is a promotion of XFS, but this is the impression the tests have left me with. O:-)
Any comments are welcome
Whoa Nellie (Score:1)
Re:I've always been pulling for XFS. (Score:3)
Re:Does it bother anyone... (Score:2)
Little time or patience required, I can explain the delete behavior easily. (Not that I would label myself an FS hacker... distributed, maybe.)
Modern inode-based filesystems scatter both files and metadata across the disk, to reduce average seek time and fragmentation problems (free blocks are always nearby when using cylinder groups, unless the disk is very very full). So deletion can involve visiting many areas of the disk, and also traversing the allocation tree for each file and adding its blocks back to the free list (or vector).
MS-DOS FAT filesystems have most of the metadata tightly clustered in the two file allocation tables... only subdirectories containing filenames, some metadata, and pointers into the FAT are stored outside. (32 bytes each.) So for FAT you have to seek, read, and write once (usually) for each subdirectory, but for all the normal files you just make one pass on each FAT, which probably involves no seeking, since they fit entirely within a single cylinder on modern disks.
DOS just made this process seem excruciatingly slow because you were typically using it without caching and on floppies. FAT-based filesystems are a bit of a naive approach, and their average-case performance, especially under load, can be quite poor if you can't cache the entire FAT in memory. But they aren't all that awful.
lousy metadata performance with XFS? (Score:2)
Hmm... I don't know exactly why metadata operations should be slow on XFS, but my understanding is that XFS is primarily useful for:
Just my thoughts....
ntfs vs fat32 (Score:4)
Though I will grant that NTFS would have been hard or impossible to benchmark in this test, given the lack of robust drivers for it.
-Lx?
Why FAT32? (Score:2)
Re:ntfs vs fat32 (Score:2)
Slashdotted Already (Score:3)
;-)
-sid
Re:XFS deletion performance. (Score:2)
XFS deletion performance. (Score:5)
Here's my theory. Ext2 uses a bitmap to track free blocks, and I'm pretty sure ReiserFS does as well. Free block runs on XFS are managed by two B+trees, one keyed by address, one keyed by size. Thus, allocating space is very fast on XFS, and it is easy to keep things contiguous. However, inserting runs of blocks into both trees is a slower operation than simply clearing the appropriate bits. This would explain the difference between the file creation speed (extraction test) and file deletion speed (rm -rf test). If this is the case, I think it is quite a good tradeoff, given that space is allocated much more often.
Of course, IANAXE (I am not an XFS engineer), so this is just my theory. I'd appreciate it if someone more informed about XFS could tell me the reason for the performance delta.
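To illustrate the asymmetry that theory relies on, here's a toy comparison in Python. A plain sorted list stands in for the address-keyed B+tree (and the size-keyed tree is omitted entirely); none of this is XFS's actual code, just the rough shape of the work per freed extent:

```python
import bisect

def free_in_bitmap(bitmap, start, count):
    """ext2-style free: clear one bit per block, no structural work."""
    for b in range(start, start + count):
        bitmap[b] = 0

def free_in_extent_list(extents, start, count):
    """XFS-style free (toy): insert the run into an address-sorted
    structure and coalesce with adjacent free runs. More work per
    free than a bit clear, especially when deleting many small files."""
    i = bisect.bisect_left(extents, (start, count))
    extents.insert(i, (start, count))
    if i > 0 and extents[i - 1][0] + extents[i - 1][1] == start:
        prev = extents.pop(i - 1)       # merge with the run before it
        i -= 1
        extents[i] = (prev[0], prev[1] + count)
    s, c = extents[i]
    if i + 1 < len(extents) and s + c == extents[i + 1][0]:
        nxt = extents.pop(i + 1)        # merge with the run after it
        extents[i] = (s, c + nxt[1])

bitmap = [1] * 64             # every block in use
free_in_bitmap(bitmap, 10, 4)

extents = [(0, 10), (14, 6)]  # free runs covering blocks 0-9 and 14-19
free_in_extent_list(extents, 10, 4)
print(extents)                # freed run bridges the gap: [(0, 20)]
```

The payoff of the extra work is visible in the merged result: free space stays in large contiguous runs, which is what makes allocation fast and files contiguous.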
Re:Those "tests" aren't good benchmarks (Score:4)
My tests were 1) copy a several gigabyte file tree (/home) to a temp filesystem, 2) run du on the tree, 3) run Bonnie++, 4) build a kernel, and 5) rm -rf the whole tree.
My results were that ext2 and xfs had high read/write throughput, reiserfs-notail was slightly lower, and reiserfs without notail was a fair bit slower.
All metadata intensive operations (rm -rf, du, bonnie++ small files, etc) were blazing fast on ReiserFS, slow as molasses on xfs, and fast on ext2, but scaling poorly to high numbers of files.
xfs could do slightly more random seeks than ext2, which in turn beat out reiserfs by about the same amount.
reiserfs doesn't have an fsck -- at least the one I have is a noop.
I switched to reiserfs-notail, and was promptly annoyed by the lack of fsck, but otherwise it is running well. I ran a pair of make -j6's, some directory copies, and an updatedb, then hit the power, and it is running fine.
My personal preference would be to have ext3 in 2.4 along with Daniel Phillips' hash-tree directory patch stabilized--I think that would smoke reiserfs and still give me journaling.
Re:But reiserfs wastes 1/3 of disk space! (Score:4)
If you had, you'd have seen that Reiserfs is journaling (Well, actually log structured)
It reserves 32mb for the log... Ergo, 32mb of missing space. If I remember right, you can decrease this when formatting it.
exabyte? (Score:2)
1 megabyte = 1024 kilobytes
1 gigabyte = 1024 megabytes
1 terabyte = 1024 gigabytes
1 petabyte = 1024 terabytes
1 exabyte = 1024 petabytes
Data Powers of Ten [caltech.edu]
Re:ntfs vs fat32 (Score:2)
Re:ntfs vs fat32 (Score:1)
I'm just not sure, though, if there are any Linux BFS drivers that are past the compatibility stage of the NTFS drivers . .
Hard-drive access is expensive (Score:1)
Re:Why not test IBM's JFS? (Score:1)
rm -rf linux didn't delete just one file (Score:1)
Re:ntfs vs fat32 (Score:2)
The other problem being that if the Fat32 test was run under Linux as well, all it's showing is how good the Linux driver is, not the filesystem itself.
That's true of all the filesystems being benchmarked. There's no way to separate the performance of the filesystem from the performance of the driver, unless there are two drivers for the same filesystem on the same OS. Benchmarking it on another OS would change way too many things to get any meaningful results.
In fact, all of the filesystems except ReiserFS have drivers under other operating systems. (FAT32 is obvious. ext2, FreeBSD has it, I think there's an NT driver, etc. XFS was originally for IRIX.)
Re:I've always been pulling for XFS. (Score:1)
Their conclusion (courtesy of babelfish) (Score:1)
Re:ntfs vs fat32 (Score:4)
Re:No ext3? (Score:1)
cat near my computer. But I do get the daily
dodgy hardware random resets so I've switched
to ReiserFS.
BTW my cats name is Linus and he is smart and
he looks like a penguin
Offtopic but, what is exobyte? (Score:1)
Re:I've always been pulling for XFS. (Score:2)
HAR HAR
NAS
Don't matter - think big picture. (Score:1)
Valley-speak compresses pretty well too, like, you know, whatever
Cheerio,
Link.
Babelfish translation to english (Score:1)
Re:I've always been pulling for XFS. (Score:2)
Probably Olsen.
XFS has been used by sgi for their MIPS and Cray machines ever since 1984
You mean 1994. And I can assure you that using XFS in 1994 was a thrill a minute ;-).
Which eventually yields. . . (Score:3)
"Hmmmmmm,large it is. Strong is the file system in this one, hmmmmm."
KFG
Celebrate National Benchmark Week!!!! (Score:2)
GeForce3 [slashdot.org]
Kernel Benchmarks [slashdot.org]
Re:I've always been pulling for XFS. (Score:2)
1000 bytes = 1 kilobyte
1024 bytes = 1 kibibyte
1000000 bytes = 1 megabyte
1048576 bytes = 1 mebibyte
and so on - gibibytes, tebibytes, pebibytes, ebibytes, each larger than the previous one by a factor of 1024.
Even though the prefix names sound silly, I Am Not Making This Up.
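The gap between the decimal and binary prefixes grows with each step, which is exactly why drive marketing prefers the decimal ones. Quick arithmetic:

```python
# Decimal (SI) vs binary prefixes: the discrepancy grows per step.
names = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"]
for n, name in enumerate(names, start=1):
    print(f"{name}: binary is {1024**n / 1000**n:.3f}x decimal")

# Why a "40 GB" (decimal) drive shows up as ~37 "GB" (binary) in the OS:
print(40 * 1000**3 / 1024**3)  # ~37.25
```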
Does it bother anyone... (Score:3)
--
Re:I AM THE SYSADMIN OF BULA (Score:2)
1. The maxclients in php.ini doesn't work.
2. We want MySQL !!!
Right now, the author is doing more extensive tests. We will put them on a static page.
Hope the server is still alive when I arrive home.
Cheers, assassins ;-)
--ricardo
Re:No real world benchmarks (Score:2)
Sorry if it annoyed you. But don't complain to a guy who likes just to play around with Linux.
--ricardo
I AM THE SYSADMIN OF BULA (Score:3)
You should tell me before you try to slashdot us ;-) That way I could have had time to increase the PG backend.
Hope we can keep it running now... (poor PIII 500Mhz)
--ricardo
Whatever happened to Tux2? (Score:4)
Re:I've always been pulling for XFS. (Score:5)
Actually I believe 1024 EB is a lottabyte (*badoom tch!*) =P
e.
Well, I guess.. (Score:3)
Re:XFS deletion performance. (Score:2)
If this is the case, I think it is quite a good tradeoff, given that space is allocated much more often.
Only if you've got an infinite drive...
--
Re:XFS deletion performance. (Score:2)
You're forgetting temporary files: what about all those ccxxxxxx.[iso] files that gcc creates--and then deletes--for each source file you compile, or the test programs autoconf's configure script generates for every test it does?
I will agree that allocation has greater importance overall than deletion, though.
--
Re:Those "tests" aren't good benchmarks (Score:2)
Suddenly alarm rings, your syslog becomes all red and the security specialist comes storming through the door shouting "someone executed 'rm -rf /&' as root"...
At this point, would you prefer to have a filesystem that is *sooo* fast deleting that it's already gone through /bin and moved on to /home/, or would you like to have one that has barely just started? :)
At this point, I quit my job, and wander the wastes, warning about using "password" for the root password...
Seriously - this is what full backups are for. Over the lifetime of the system, fast deletes by every user may compensate for the lost time of doing a full restore, rather than catching the global rm halfway through. Plus, a fast delete often means that nothing is really being deleted, file system references are just being dropped - there may be some easy recovery tool for this old worst-case-scenario.
Re:I've always been pulling for XFS. (Score:5)
First, the maximum filesystem size that XFS can handle is 18 exabytes. Since exabyte is also the name of a brand of tape drive, it's more common to hear of people talking of 18 million terabytes.
(Aside: an interesting statistic found on this [caltech.edu] page says that as of 1995, 5 million terabytes was about enough data to store all words ever spoken by humans, ever. Cool.)
Also, XFS was never used on Cray systems. XFS made its first appearance (if I remember correctly) on IRIX 5.3 back in 1994. Other than being off by four orders of magnitude in your sizing and by ten years in your dates, I think you're exactly right. ;-)
Re:I've always been pulling for XFS. (Score:2)
Speed is one thing.. (Score:2)
Needless to say I've got a pair of 61GB IDE's screaming for XFS once the 2.4.4 patches are out. Can't wait.
--
Slashdotted! (Score:3)
No idea why they're including something as outdated as FAT...
Re:XFS deletion performance. (Score:2)
Re:Are the benchmarks on the Reiser site rigged? (Score:3)
Running something like that on ext2fs would (apart from the agony of fsck'ing 800GB of storage) be completely hopeless.
When it comes to setups with a few large files, though, the advantage isn't that great, and the numbers in the article makes reasonable sense.
Reiserfs should be your filesystem of choice if: a) you want to be able to put gazillions of files in a single directory when there is no logical hierarchy to the data (our tests indicate that Reiserfs handles shallow directory hierarchies with many files per directory faster than the opposite), and b) you want to be able to efficiently store a 100 byte file when grouping your data logically would give you file sizes of that magnitude, instead of grouping many pieces of data together.
We've been running Reiserfs for well over a year now, and it works great. It's important that you keep up to date on bugfixes, though, and that you're very careful about your recovery procedures - reiserfsck is really a last resort, and you should always ensure you copy out as much as possible of any data on a damaged volume, and preferably take a raw copy of the entire volume, before running it. That said, I wouldn't hesitate to recommend Reiserfs to anyone with the specific needs I mentioned above.
Re:XFS deletion performance. (Score:3)
It still holds for most cases: You'll copy a bunch of data to the drive over time, and some of it will be deleted, but most of the time a lot of the data you accumulate will be data that you'll keep around for the lifetime of the drive.
Which equals less deletion than allocation...
Perhaps not as significant as the original poster thinks, but his claim still holds true for most users.
More significant, I'd assume, is that a lot of deletion often is maintenance that won't take place during peak usage of the system, and that often may just be left running in the background, to complete whenever it completes.
Allocation is in many applications much more time critical, as allocation is mostly done by applications that the users of the system are much more likely to care about performance for.
FAT32 (Score:2)
a brief summary of the different file systems (Score:2)
JFS has been around for years under AIX. It's a well proven general purpose journalling filesystem.
ReiserFS is the best established of the Linux journalling filesystems. It has several fairly innovative features and is more efficient than ext2 in terms of space utilisation. People are using it as their primary filesystem now, although it's still in development.
EXT3 is (unsurprisingly) a development of EXT2. It lacks most of the pretty features of the other journalled filesystems, but has the significant advantage that you can turn EXT2 partitions into EXT3 (and vice versa) without any trouble at all.
Re:troll? (Score:2)