Ask Slashdot: Best File System For Web Hosting?
An anonymous reader writes "I'm hoping for a discussion about the best file system for a web hosting server. The server would handle mail, database, and web hosting. Running CPanel. Likely CentOS. I was thinking that most hosts use ext3, but with all of the reading/writing to log files, a constant flow of email in and out, not to mention all of the DB reads/writes, I'm wondering if there is a more effective FS. What do you fine folks think?"
ZFS (Score:5, Informative)
Or maybe XFS.
Re: (Score:2, Funny)
Re:reiserfs (Score:4, Funny)
I hear reiserfs is killer.
(too soon?)
Whatever... I really did love reiser3 back in the day, if only because rm -rf on large dirs was blazingly swift compared to ext2
Re: (Score:3, Informative)
ZFS is funky and all, but you don't need the extra features, and the additional CPU overhead is just wasteful. The only real things to care about are the lack of fsck on unclean reboot, and fast reads. XFS+LVM2+mdraid (although a proper RAID controller is preferable) is perfect.
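For anyone who hasn't built that stack before, the rough shape of it is something like this (device names, sizes and mount points are invented for the example; adjust to your hardware):

    # mirror two disks, then layer LVM on top so volumes can be grown later
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 200G -n www vg0
    # XFS on the logical volume, mounted without access-time updates
    mkfs.xfs /dev/vg0/www
    mount -o noatime /dev/vg0/www /var/www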
Re: (Score:3)
Agreed. XFS is solid and fast, with relatively low CPU overhead.
EXT4 from what I have read is very good, as well. Some debate its stability though.
Re: (Score:2)
at the console with a tissue and a box of chocolates
Cheap date!
Re:ZFS (Score:5, Funny)
Depending on the type of web content, XXXFS might be appropriate.
in that case.. (Score:2)
Why not btrfs and backups?
Re: (Score:3)
Why not btrfs and backups?
BTRFS is not stable! I just lost my /home and all its snapshots, two days ago.
"You should keep and test backups of your data, and be prepared to use them."
Yes I know about the latest tools. In the end I had to do a btrfs-restore.
https://btrfs.wiki.kernel.org/index.php/Restore [kernel.org]
Re: (Score:2)
Re: (Score:2)
In ext3, when you would delete a multi-gigabyte file, it would take up to a few minutes for it to happen. In ext4, that process is measured in fractions of a second.
Re:ZFS (Score:4, Informative)
Why hasn't anybody mentioned JFS?
Since the demise of ReiserFS, that's what I've been using everywhere. It's fast, really stable and has the lowest CPU usage of all. So, why not JFS?
ext3 (Score:5, Insightful)
If you have to ask, you should stick with ext3.
Re: (Score:3)
+1 to this.
Unless you have a business case where you know you need something different, stick to what's simple and what works.
ext4 is also a nice option over ext3. It uses extent-based allocation instead of bitmap block allocation, which improves metadata efficiency with no downside.
Re: (Score:2)
Why not ext4?
Re: (Score:2)
Why not zoidberg?
Re:ext3 (Score:4, Funny)
I don't quite trust ext4 for writes.
app: Hey, can you write this data out to
ext4: DONE!
app: Uhh, that wasn't long enough to actually write the data.
ext4: Sure it was, I'm super faGRRRRRRRRRRRRRst at writing too.
app: wait, did you just cache that write and report it written but then not actually write it to disk until 30 seconds later?
ext4: Yeah, what about it?
That being said, use ext3 and mount it with the noatime flag. If you're on a web server you don't want to be hammering it with writes to update the last access time. That's just silly.
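For the record, it's just an extra mount option; something like this in /etc/fstab (the UUID is a placeholder), or a remount to apply it without rebooting:

    # /etc/fstab -- noatime stops the access-time write on every read
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext3  defaults,noatime  0 1

    # apply to a running system
    mount -o remount,noatime /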
Re: (Score:3)
Re: (Score:2)
fsync() is waaaay too slow. You could have at least recommended fdatasync(), which is less slow. Or even better: opening files with the O_SYNC/O_DSYNC flags.
The experimental nature of the Linux IO subsystem, its unpredictability, is one of the reasons why some actually pick *BSD instead. OK, disk IO is slower than that of Linux, but at least one has sensible IO guarantees: data are written perhaps not right away, but without any great delay. (The only major problem of *BSD is the lack of drivers for the sto
Re: (Score:2)
I plan on waiting until at least late next year before I'd test btrfs for production. Let others be the pioneers in that, because ext4 handles our workload just fine.
Re: (Score:3)
Re: (Score:2)
The best server? (Score:3, Insightful)
The best file system would be one not running: mail, database, web hosting, and CPanel.
ext4 unless there's a good reason not to. (Score:3, Insightful)
The obvious argument for ext4, the current ext version, is that it's been around a long time and is very solid. I'd only use something else if I knew the performance of ext4 would be an issue.
Re:ext4 unless there's a good reason not to. (Score:5, Informative)
By my benchmarks ext4 was about 25% faster than ext3 for my typical database loads, largely due to extents. This is on my twin RAID 1 10krpm drives.
I still use ext3 for my /boot partitions, but other than that there doesn't seem to be much reason to stick to ext3 at all.
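If anyone wants to reproduce that kind of comparison on their own hardware, something like fio with a database-ish random read/write mix is a reasonable starting point (the parameters here are only illustrative, not what I used):

    # 70/30 random read/write in 8k blocks, bypassing the page cache
    fio --name=dbsim --directory=/mnt/test --size=2g --bs=8k \
        --rw=randrw --rwmixread=70 --direct=1 --ioengine=libaio \
        --iodepth=16 --numjobs=4 --runtime=120 --time_based --group_reporting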
Re: (Score:2)
I use ext2 for /boot and /tmp. On /boot for compatibility and because it is rarely written to; on /tmp because it is faster and /tmp doesn't need to be able to recover after a crash.
Re: (Score:2)
I store all my most important files in /tmp, you insensitive clod!
Re: (Score:2)
I use tmpfs for /tmp, but then it is a webserver with a rather large amount of database throughput.
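If anyone wants to try that, it's a one-liner in /etc/fstab (pick a size you can actually spare, since it comes out of RAM/swap):

    # RAM-backed /tmp, world-writable with the sticky bit
    tmpfs  /tmp  tmpfs  defaults,noatime,size=2g,mode=1777  0 0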
Re: (Score:3, Interesting)
Re: (Score:2)
Recommending ext3 over ext4 at this point is silly. It's like recommending Vista over Win7.
Re: (Score:3)
ext2, which is what ext4 is based on, beats XFS by a year. 19 years old. :)
ReiserFS Sure (Score:4, Funny)
It will kill your innocent files to save some space....
Re: (Score:3, Funny)
I heard it can murder your server's performance.....
Re:ReiserFS Sure (Score:4, Funny)
I heard it only kills the wifi. And then makes it disappear completely.
Re: (Score:2)
But it also has a feature that will tell you where the wifi is...
Re: (Score:2)
CPanel will be your problem (Score:5, Insightful)
Re: (Score:2)
Meditation is your best bet, since it involves not thinking.
Whatever is the best with the given OS (Score:4, Interesting)
WinFS (Score:3, Funny)
Turn off the last accessed time stamp (Score:2, Interesting)
Especially if you decide to use an SSD. Even if there's not a lot of data writing going on, the constant rewriting of directory entries to update the last-accessed time stamp would wear an SSD and slow a regular hard drive.
Re: (Score:2)
XFS for huge mailqueues, otherwise EXT3 or EXT4 (Score:2)
From memory (I've been out of that business for 6 months) CPanel stores mail as maildirs. If you have gazillions of small files (that's a lot of email) then XFS handles it a lot better than ext3 - I've never benchmarked XFS against ext4. Back in the day, it also dealt with quotas more efficiently than ext2/3, but I really doubt that is a problem nowadays.
If you aren't handling gazillions of files, I'd be tempted to stick to ext3 or ext4 - just because it's more common and well known, not because it is necessarily the most efficient. When your server goes down, you'll quickly find advice on how to restore ext3 filesystems because gazillions of people have done it before. You will find less info about xfs (although it may be higher quality), just because it isn't as common.
Re:XFS for huge mailqueues, otherwise EXT3 or EXT4 (Score:4, Interesting)
From memory (I've been out of that business for 6 months) CPanel stores mail as maildirs. If you have gazillions of small files (that's a lot of email) then XFS handles it a lot better than ext3 - I've never benchmarked XFS against ext4. Back in the day, it also dealt with quotas more efficiently than ext2/3, but I really doubt that is a problem nowadays.
If you aren't handling gazillions of files, I'd be tempted to stick to ext3 or ext4 - just because it's more common and well known, not because it is necessarily the most efficient. When your server goes down, you'll quickly find advice on how to restore ext3 filesystems because gazillions of people have done it before. You will find less info about xfs (although it may be higher quality), just because it isn't as common.
XFS is probably better for large maildirs, but ext3 gained much better performance on large directories starting in the late 2.6 kernels. It doesn't provide for an infinite # of files per directory, but it doesn't take a huge hit listing e.g. 4k files in a directory anymore.
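Most of that large-directory win comes from the dir_index (htree) feature. On an existing ext3 volume you can check for it and switch it on, roughly like this (the device name is an example, and the fsck step needs the filesystem unmounted):

    tune2fs -l /dev/sda3 | grep -i features   # look for dir_index in the list
    tune2fs -O dir_index /dev/sda3            # enable it if it's missing
    e2fsck -fD /dev/sda3                      # rebuild/optimize existing directories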
Re: (Score:2)
it doesn't take a huge hit listing e.g. 4k files in a directory anymore.
Umm, maildirs store each message in its own file. I clean up (archive) emails from each past year in a separate folder and still easily have 8k files in each... and that is not my busiest mailbox.
After a few thousand items of anything, the proper tool for the job is a database, not a file system. Though a file system can be described as a kind of database, and there are problems common to both, such as fragmentation, a specialized data store always beats a generic one. Personally, I like what Dovecot
Is it for work? Don't roll a custom solution (Score:5, Insightful)
You're not going to be there forever, and all using a non-standard filesystem is going to accomplish is to cause headaches down the road for whoever is unfortunate enough to follow you. Use whatever comes with the OS you've decided to run - that'll make it a lot more likely the server will be kept patched and up to date.
Trust me - I've been the person who's had to follow a guy that decided he was going to do the sort of thing you're considering. Not just with filesystems - kernels too. It was quite annoying to run across grsec kernels that were two years out of date on some of our servers, because apparently he got bored with having to constantly do manual updates on the servers and so just stopped doing it...
XFS (Score:2)
Stick with the default (Score:3)
Unless you want the special features of other file systems (say ZFS), the default (ext3 or ext4) should be fine. They are capable of handling high I/O loads.
If you want even more I/O performance, then use SSDs.
What is wrong with you? (Score:5, Insightful)
This isn't 1999. You have no reason to host your web server, email server, and database server on the same operating system.
You would be well advised to run your web server on one machine, your email server on another machine, and your database server on a third machine. In fact, this is pretty much mandatory. Many standards, such as PCI compliance, require that you separate these services.
Take advantage of the technology that has been created over the past 15 years and use a virtualized server environment. Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.
Keep the database server similarly clean. Keep the email server similarly clean. Six months from now, when the email server dies, and you have to take the server offline to recover things, or when you decide to test an upgrade, you will suddenly be glad that you can tinker with your email server as much as you want without harming your web server.
Re: (Score:2, Insightful)
After having worked for companies that do both, I honestly disagree. If you host your DBs and web servers on different machines, you wind up with a really heavy latency bottleneck which makes LAMP applications load even slower. It doesn't really make a difference in the "how many users can I fit into a machine" category. CPanel in particular is a very one-machine-centric piece of software; while you could link it to a remote database, it's really a better idea to put everything on one machine.
Re: (Score:2)
Thank you for the polite response. I did get a bit carried away in my post, so allow me to clarify.
The basic principle I'm approaching here is that you should design your environment for simplicity of maintenance. Keeping your machines separate makes maintenance easier, it makes disaster recovery easier, it makes documentation easier, it makes upgrades easier, and it makes downgrades easier. The gains just keep on going.
When I managed hundreds of separate machines - or even when I manage only three or fo
Re: (Score:3)
use a virtualized server environment
And ðere goes I/O down ðe drain.
Re: (Score:3)
Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.
Why is this required? Shouldn't we expect our operating systems to multitask?
Re:What is wrong with you? (Score:5, Insightful)
Re: (Score:2)
Why is this required? Shouldn't we expect our operating systems to multitask?
We should expect our servers to be secure. But they're buggy.
We should expect defense in depth to be unnecessary. But people screw up.
We should expect OS tunables to be variable on a per-process basis, but they're not (with Linux anyhow).
You are asking the wrong question (Score:2)
If you are concerned about performance and expect a constant email stream, you should host mail, database and web servers on separate computers. There is a reason any reputable host does it this way. Plus, increased load on one component doesn't affect the others.
I think file picking system should be the least of your worries.
Re: (Score:2)
* I think picking file system should be the least of your worries.
Re: (Score:2)
performance concerns? (Score:2)
Use the standard and focus on what matters (Score:4, Insightful)
Go old school (Score:4, Informative)
I think you're not a very well trained sysadmin.
There is no reason to not have various parts of the filesystem mounted from different disks or partitions on the same disk. If you do this, you can run part of the system on one filesystem, other parts on others as appropriate for their intended usage. This is commonly done on large servers for performance reasons, quite like the one you are asking about. It's also why SCSI ruled in the server world for so long since it made it easy to have multiple discs in a system.
So run most of your system on something stable, reliable and with good read performance, and put the portions that are going to take a read/write beating on a separate partition/disc with whichever filesystem has the better read or write performance for that workload. If you segregate your filesystem like this correctly, an added benefit is that you can mount security critical portions of the filesystem readonly, making it more difficult for an attacker.
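Something along these lines in /etc/fstab is the idea (devices, filesystems and options here are purely illustrative; a read-only /usr obviously needs a temporary rw remount for package updates):

    /dev/vg0/root  /         ext4  defaults                 0 1
    /dev/vg0/usr   /usr      ext4  defaults,ro              0 2
    /dev/vg0/var   /var      xfs   defaults,noatime         0 2
    /dev/vg0/log   /var/log  xfs   defaults,noatime         0 2
    /dev/vg0/tmp   /tmp      ext2  defaults,noexec,nosuid   0 2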
Red
Re: (Score:2)
Actually, there is a reason not to have different apps using different filesystems in partitions on one disk. If those apps just use subdirectories within one filesystem, that filesystem can do a pretty good job of linearizing I/O across them all, minimizing head motion (XFS is especially good at this). If those apps use separate partitions, you'll thrash the disk head mercilessly between them if more than one is busy. Your advice is good in the multiple-disk case, but terrible in the single-disk case, a
You should use XFS ... avoid ext3 at all costs (Score:5, Informative)
Contrary to the majority of the people replying to this post, I emphatically DO NOT recommend ext3. ext3 by default wants to fsck every 60 or 90 days; you can disable this, but if you forget to, in a hosting environment it can be pure hell if one of your servers reboots. Usually shared hosting web servers are not redundant, for cost reasons; if one of your shared hosting boxes reboots, you thus get to enjoy up to an hour of customers on the phone screaming at you while the fsck completes.
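For what it's worth, if you do stay on ext3/ext4, turning those scheduled checks off is one command per filesystem (whether that's wise is a separate argument):

    tune2fs -c 0 -i 0 /dev/sda1              # never force a fsck by mount count or by age
    tune2fs -l /dev/sda1 | grep -i check     # confirm the new settings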
XFS is a very good filesystem for hosting operations. It has superior performance to ext3, which really helps, as it means your XFS-running server can host more websites and respond to higher volumes of requests than an ext3-running equivalent. It also has a feature called Project Quotas, which allows you to define quotas not linked to a specific user or group account; this can be extremely useful for hosting environments, both for single-homed customers and for multi-homed systems where individual customer websites are not tied to UNIX user accounts. The oft-circulated myth that XFS is prone to data loss is just that; there was a bug in its very early Linux releases that was fixed ages ago, and now it's no worse than ext4 in this respect.
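A quick sketch of what project quotas look like in practice (directory names, project IDs and limits are all made up for the example):

    # mount with project quotas enabled
    mount -o prjquota /dev/vg0/www /srv/www

    # map a project id to a directory, and a name to the id
    echo "42:/srv/www/customer1" >> /etc/projects
    echo "customer1:42" >> /etc/projid

    # initialize the project and give it a 10GB hard limit
    xfs_quota -x -c 'project -s customer1' /srv/www
    xfs_quota -x -c 'limit -p bhard=10g customer1' /srv/www
    xfs_quota -x -c 'report -p' /srv/www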
Ext4 is also a good option, and a better option than ext3; it is faster and more modern than ext3 and is being more actively developed. Ext4 is also more widely used than XFS, and is less likely to get you into trouble in the unlikely event that you get bit by an unusual bug with either filesystem.
Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet. The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.
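The sort of thing that makes possible, once you trust it, is cheap instant clones (paths are examples):

    # snapshot a subvolume holding a golden VM image
    btrfs subvolume snapshot /srv/vms/base /srv/vms/customer1

    # or clone a single image file via a copy-on-write reflink
    cp --reflink=always /srv/vms/base.img /srv/vms/customer1.img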
This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it. I would advise you steer clear of ZFS on Linux.
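On a ZFS platform the equivalent is already routine (pool and dataset names are placeholders):

    # snapshot a golden image, then clone it for a new VPS
    zfs snapshot tank/vms/base@golden
    zfs clone tank/vms/base@golden tank/vms/customer1
    zfs list -t all -r tank/vms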
Finally, for clustered applications, i.e. if you want to buck the trend and implement a high availability system with multiple redundant webservers, the only Linux clustering filesystem I've found to be worth the trouble is Oracle's open source OCFS2 filesystem (avoid OCFS1; it's deprecated and non-POSIX compliant). OCFS2 lets you have multiple Linux boxes share the same filesystem; if one of them goes offline, the others still have access to it. You can easily implement a redundant iSCSI backend for it using mpio. It's somewhat easier to do this than to set up a high availability NFS cluster, without buying a proprietary filer such as a NetApp.
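Very roughly, the moving parts look like this; /etc/ocfs2/cluster.conf has to list every node, and the shared device here is meant to be a multipathed iSCSI LUN (all names invented):

    # on every node: bring the cluster stack online
    service o2cb online

    # on one node only: create the filesystem with slots for 4 nodes
    mkfs.ocfs2 -L webdata -N 4 /dev/mapper/mpatha

    # on every node: mount the same shared device
    mount -t ocfs2 /dev/mapper/mpatha /srv/www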
Reiserfs was at one time popular for mail servers, in particular for maildirs, due to its competence at handling large numbers of small files and small I/O transactions, but in the wake of Hans Reiser's murder conviction, it is no longer being actively developed and should be avoided. JFS likewise is a very good filesystem, on a par with ext4 in terms of featureset, but for various reasons the Linux version of it has failed to become popular, and you should avoid it on a hosting box for that reason (unless your box is running AIX).
Speaking of older proprietary UNIX systems: on these you should have no qualms about using the standard UFS, which is a tried and true filesystem analogous to ext2 in terms of functionality. This is the standard on OpenBSD. NetBSD features a variant with journaling called WAPBL, developed by the now defunct Wasabi Systems. DragonFlyBSD features an innovative clustering FS called HammerFS, which has received some favorable reviews, but I haven't seen anyone using that platform in hosting yet. The main headache with hosting is the extreme cruelty you will experience in response to downtime, even when that downtime is short, scheduled or inevitable. Thus, it pays to avoid using unconventional systems that customers will use as a vector for claiming incompetence.
Re: (Score:2)
While I agree with what you say, mostly, I've got contention with a couple key points.
Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet.
On the contrary, btrfs will not be a good option 'when it's officially declared stable'. It'll be a good option when it's vetted as stable without too much regressive or destructive behavior, in the wild. Until then, it's still immature and best suited for closed environments.
The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.
This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it.
I agree, but a word of caution... FreeBSD lacks the necessary stable storage controller support to make ZFS fully stable on FreeBSD on all but a hand
It cracks me up when... (Score:2)
... in the year 2012, people are seriously suggesting others use filesystems that can (and eventually will) lose data on an unclean shutdown. C'mon people, this isn't stone age anymore.
PostgreSQL (Score:2)
Go for PostgreSQL-backed services whenever feasible. For example, ðere is a quite competent IMAP server called Archiveopteryx, you can run Mediawiki on PostgreSQL, as well as Zope and whatnot.
Re: (Score:2)
Take a look at BetterLinux (Score:3)
I spent some time late last year and earlier this year working very closely with the developers of BetterLinux, and in the work I did, I did stress testing (on a limited scale) to see how the product performed. It has some OSS components and some closed-source components, but the I/O leveling they do is pretty amazing.
http://www.betterlinux.com/ [betterlinux.com]
Definitely FAT, but which one? (Score:3, Funny)
Re: (Score:2)
404ERR~1.HTM
Re: (Score:2)
Because "OR" uses more characters than "~1".
Re: (Score:2)
FAT12 does support long file names, you know...
Google Apps (Score:2, Insightful)
People still run their own email servers?
Re: (Score:2)
Yes, I have been running my own mail / web server since 1992. As soon as something is more reliable than that, I might consider switching to it. ;-)
My email archive is about 30GB last I checked. Fully backed up. Very fast to search.
Maildirs are dumb. Imap to mbox folders are the way to go. I roll them over at 200MB. With Thunderbird caching and a good Imap server indexing, it is faster than any available email service.
Of course Thunderbird is great with Gmail, AOL, and Outlook.com too.
I also have a question (Score:2)
I'm planning to race a Yugo kitted out with cast iron spoilers and wooden tires.
Which type of decals will make me go fastest?
On topic: the choice of filesystem will have far less impact than the choice of programming language, database, webserver application and how you use those. The choice to go with CPanel (or any *Panel) means the impact of the filesystem will be unnoticeable. Nothing wrong with those panels; they drive down human cost, but if you need the absolute best performance, panels won't let you g
ext4 is good enough, but I always liked JFS (Score:2)
I used JFS on all my machines from around 2007-2011, including laptops. I had many unclean shutdowns (especially on laptops) and JFS rarely had any problems, except that one time briefly in 2009 where I did actually lose a bunch of data, but then so did my ext4 reinstall a few weeks later (bad hardware).
JFS was much, much better than ext3. Especially in low-CPU situations/hardware.
I can't remember why I went back to ext4, I guess I wanted to see if it still sucked compared to JFS. With noatime I decided I c
tmpfs (Score:2)
Go with tmpfs. It has the highest performance of any of the "standard kernel" filesystems, and if you use it for your personal webserver/blogserver/mailserver/etc, it will never lose any valuable data if the server reboots unexpectedly.
--Joe
Re: (Score:3, Insightful)
Even with an SSD you still need a file system format for it to be usable.
I'm all for ZFS, very reliable over long periods of time.
Re:Just (Score:5, Insightful)
yeah - it's especially good for your log files, after all, SSD is just like a big RAM drive.....
You're going to be better off forgetting SSDs and going with lots more RAM in most cases; if you have enough RAM to cache all your static files, then you have the best solution. If you're running a dynamic site that generates stuff from a DB and that DB is continually written to, then generally putting your DB on an SSD is going to kill its performance just as quickly as if you had put /var/log on it.
RAID drives are the fastest: striping data across 2 drives basically doubles your access speed, so stripe across an array of 4! The disadvantage is that 1 drive failure kills all data - so mirror the lot. 8 drives in a stripe+mirror (mirror each pair, then put the stripe across the pairs - not the other way round) will give you fabulous performance without the worry that your SSD will start garbage collecting all the time when it starts to fill up.
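With Linux software RAID that whole arrangement is a single command; mdadm's raid10 with the near-2 layout is effectively the mirror-the-pairs-then-stripe setup described above (drive letters made up):

    # 8 drives, mirrored in pairs, striped across the pairs
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=8 /dev/sd[b-i]
    mkfs.xfs /dev/md0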
Re:Just (Score:5, Informative)
Don't know the budget, but 250GB of "RAM" for $500 looks like a good deal. And you just suggested an array of 4 drives to someone that wants the classic webserver with CPanel, all stuffed in one system; that would be like $3-4k just for the disks. SSD is the way to go in these cases, mainly because of the money you save. And the lifespan? I've replaced way more HDDs than SSDs in the 3 years since I started using them, they're deployed in about the same ratio right now, and the SSDs get way more I/O.
Re: (Score:3)
You're still going to want redundancy. At the very least 2 identical drives mirrored with software RAID.
If redundancy is important, 500GB/1TB "Enterprise" drives are cheap. 4 drives in RAID10 would give the best cost:redundancy:performance ratio. You can probably get 4 HDDs for the cost of the one $500 240GB SSD you mentioned.
RAID10 (Score:5, Insightful)
Yep, agreed... agonizing over the FS choice isn't going to provide many gains compared to spending time optimizing the physical disk configuration and partitioning.
FS performance is only going to really matter if you're going to have directories with thousands of nodes in them. But then hopefully you have better ways to prevent that from happening.
But you do want to spend a good deal of time benchmarking different RAID and partitioning setups, where you can see some gains in the 100-200% range rather than 5-10%, especially under concurrent loads. Spend some quality time with bonnie++ and making some pretty comparison graphs. Configure jmeter to run some load tests on different parts of your system, and then all together to see how well it deals with concurrent accesses. Figure out which processes you want to dedicate resources to, and which can be well-behaved and share with other processes. Set everything up in a way to make it easier to scale out to other servers when you're ready to grow.
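For reference, a typical bonnie++ run looks something like this; the size is in MB and should be at least twice your RAM so the page cache can't hide the disks (mount point and label are examples):

    bonnie++ -d /mnt/test -s 16384 -n 128 -m ext4-raid10 -u nobody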
The FS choice is probably the least interesting aspect of the system (until you start looking at clustered FSs, like OCFS2 or Lustre)
Re: (Score:2)
2? 4? fuq dat, use 12. use another 12 if you need redundancy. and scsi is still a better performer than sata...
Re: (Score:2)
Re: (Score:3, Insightful)
Re:Just (Score:5, Insightful)
Due to the amount of reads/writes & the lifespan of SSDs, they are some of the worst drives you can get for a high-availability web server.
Only if you're completely ignorant about the difference between consumer and enterprise SSDs. The official rated endurance of a 200GB Intel 710 with random 4K writes (the worst case scenario) with no over-provisioning is 1.0 TB. In order to wear this drive out in a high-load scenario, you could write 100GB of data in 4k chunks to this drive every day for nearly 30 years before you approached even the official endurance.
If you use a consumer SSD in a high-load enterprise scenario, you're going to get bit. If you use an enterprise SSD in a high-load enterprise scenario, you'll have no problems whatsoever with endurance, regardless of what people spreading FUD like you would have you believe.
Re: (Score:3)
Came here to say this. Unfortunately I have no mod points. Enterprise drives are more expensive, but if you need the performance, they are an excellent option.
Re: (Score:2)
Shoot, sorry, yes. I meant 1.0 PB.
Re:Just (Score:5, Insightful)
Intel rates the endurance of the 710 at 1.0 PB and the 330 at 60 TB, so yeah, there's a pretty big difference there.
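Back-of-the-envelope, at 100GB of writes per day: 1.0 PB of rated endurance is 1,000,000 GB / 100 GB per day = 10,000 days, or roughly 27 years; the 330's 60 TB is 60,000 GB / 100 GB per day = 600 days, well under 2 years.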
In Intel's case, specifically, the difference is between using MLC flash and MLC-HET flash. The difference is largely from binning, but it's the difference between 3k to 5k p/e cycles on typical MLC, and 90k p/e cycles on MLC-HET. SLC produces similar improvements. I could explain how they achieve this, but Anandtech and Tom's Hardware have both done pretty good write-ups explaining the difference.
It depends entirely on your workload. If you've got an enterprise workload where you don't do many writes, then a consumer drive will work just fine. And since most drives report their current wear levels, it's actually pretty safe to use a consumer drive if you monitor that.
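On most drives that's just a SMART attribute; the exact names vary by vendor, but on Intel drives you're looking for Media_Wearout_Indicator and Total_LBAs_Written:

    smartctl -A /dev/sda | grep -i -e wear -e written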
Anandtech gave one example, when they were short on capacity and were facing a delay in getting some new enterprise SSDs; they walked out to the store, bought a bunch of consumer Intel SSDs, and slapped those into their servers. They were facing a write-heavy workload, so they wouldn't have lasted long, but they only needed them for a few months and kept an eye on the media wear indicator values, so they were fine.
My point overall is that you can't look at SSDs the same way if you're a consumer versus an enterprise user, and if you're an enterprise user, you need to pick an SSD appropriate for your workload.
One thing people don't consider is upgrade cycles. Hanging on to an SSD for ten years doesn't really make sense, because it only takes a few years for them to be replaced by drives enormously cheaper, larger, and faster. They're improving by Moore's Law, unlike HDDs. I paid $700 for a 160GB Intel G1, and three years later, I paid $135 for a much faster 180GB Intel 330. If you're going to replace an SSD in three to five years, does it matter if the lifespan is 10 or 30?
Re:Just (Score:4, Insightful)
If only they had some kind of way of allowing drives to fail while still retaining data integrity. It's probably because I just dropped Acid, but I'd call the system RAID.
Re: (Score:2)
Re: (Score:3)
Has XFS gotten over its corruption problems when shut down dirty?
Back when I used it, I was always very careful to have *good* UPS support.
Re: (Score:2)
My MythTV box gets shut down a few times a year when we lose power at our house and so far there haven't been any problems. I'm not sure I'd trust it for valuable data, though.
Re:XFS (Score:5, Informative)
The biggest source for early XFS corruption issues was that at the time the filesystem was introduced, most drives on the market lied about write caching. XFS was the first Linux filesystem that depended on write barriers working properly. If something was declared written but not really on disk, filesystem corruption could easily result after a crash. But when XFS was released in 2001, all the cheap ATA disks in PC hardware lied about writes being complete, Linux didn't know how to work around that, and as such barriers were not reliable on them. SGI didn't realize how big of a problem this was because their own hardware, the IRIX systems XFS was developed for, used better quality drives where this didn't happen. But take that same filesystem and run it on random PC hardware of the era, and it usually didn't work.
ext4 will fail in the same way XFS used to, if you run it on old hardware. That bug was only fixed in kernel 2.6.32 [phoronix.com], with an associated performance loss on software like PostgreSQL that depends on write barriers for its own reliable operation too. Nowadays write barriers on Linux are handled by flushing the drive's cache out, all SATA drives support that cache flushing call, and the filesystems built on barriers work fine.
Many of the other obscure XFS bugs were flushed out when RedHat did QA for RHEL6. In fact, XFS is the only good way to support volumes over 16TB in size, as part of their Scalable File System [redhat.com] package, a fairly expensive add-on to RHEL6. All of the largest Linux installs I deal with are on XFS, period.
I wouldn't use XFS on a kernel before RHEL6 / Debian Squeeze though. I know the software side of the write barrier implementation, the cache flushing code, works in the 2.6.32 derived kernels they run. The bug I pointed to as fixed in 2.6.32 was specific to ext4, but there were lots of other fixes to that kernel in this area. I don't trust any of the earlier kernels for ext4 or xfs.
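If you're curious whether your own stack is sane, a couple of quick checks (ext4 shown; XFS has an equivalent nobarrier mount option that you should not be using on cheap drives):

    hdparm -W /dev/sda        # is the drive's volatile write cache enabled?
    mount | grep ' / '        # barriers are on by default; look for an explicit barrier=0/nobarrier
    # if someone turned them off, turn them back on:
    mount -o remount,barrier=1 /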
Re:XFS (Score:4, Informative)
Re: (Score:2)
Good to know. The last time I dealt with it was Red Hat 9, so that should give you an idea how long ago that was. It was for database servers, and we *did* have good UPSs.
Trying it on my workstation wasn't nearly so protected, and I paid the price on that.
Re: (Score:2)
Re: (Score:2)
I don't see any indication that a) this has any relation to the size of the directory
The cited bug [redhat.com] points out a problem with readdir(). This manifests itself as failures with other software, including dovecot [redhat.com] (where maildir is used) and bonnie++ [centos.org]. Some of those other bugs were reported and marked as duplicates [redhat.com], some weren't.
Ultimately it boiled down to a flaw in Linux NFS that was fixed [kernel.org] by Trond Myklebust and was still percolating through distributions [debian.org] over a year later.
Never in years has it given me one problem, but hey, that's just me.
Yeah, that's just you. Me? I've been watching hapless administrators discover NFS flaws since the 90's and I have lon
Re: (Score:2)
Re: (Score:2)
I have seen as much data corruption on HP-UX/ia64, AIX/POWER and Solaris/SPARC boxes as on cheaper Linux/x86 and Linux/x64 boxes with SUSE or RHEL. I'm pretty sure that in the long term Solaris/x64 would show the same results.
The cause of most corruption was a faulty power supply or a faulty memory module (and dumb admins who had ignored the ECC errors).
The only memorable software-induced corruption I can recall right now was on HP-UX/ia64. Some weird admins had exported all local file systems via NFS. And then,