
Ask Slashdot: Best File System For Web Hosting?

An anonymous reader writes "I'm hoping for a discussion about the best file system for a web hosting server. The server would handle mail, database, and web hosting, running CPanel, likely on CentOS. I was thinking that most hosts use ext3, but with all of the reading/writing to log files, a constant flow of email in and out, not to mention all of the DB reads/writes, I'm wondering if there is a more effective FS. What do you fine folks think?"
  • ZFS (Score:5, Informative)

    by Anonymous Coward on Thursday November 29, 2012 @07:31PM (#42136119)

    Or maybe XFS.

  • Go old school (Score:4, Informative)

    by RedLeg ( 22564 ) on Thursday November 29, 2012 @08:12PM (#42136629) Journal

    What do you fine folks think?"

    I think you're not a very well trained sysadmin.

    There is no reason not to mount various parts of the filesystem from different disks, or from different partitions on the same disk. If you do this, you can run part of the system on one filesystem and other parts on others, as appropriate for their intended usage. This is commonly done for performance reasons on large servers quite like the one you are asking about. It's also why SCSI ruled the server world for so long: it made it easy to put multiple disks in a system.

    So run most of your system on something stable and reliable with good read performance, and put the portions that are going to take a read/write beating on a separate partition or disk with a filesystem that has better read or write performance, whichever is needed. If you segregate your filesystem correctly like this, an added benefit is that you can mount security-critical portions read-only, making life harder for an attacker.
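
    A minimal /etc/fstab sketch of that kind of layout (the device names, mount points and filesystem choices are hypothetical, only there to illustrate the split):

        # mostly-read, security-critical portions kept read-only
        /dev/sda1   /               ext4   defaults            1 1
        /dev/sda2   /usr            ext4   defaults,ro         1 2
        # write-heavy portions on their own disks/filesystem
        /dev/sdb1   /var/log        xfs    defaults,noatime    0 2
        /dev/sdc1   /var/lib/mysql  xfs    defaults,noatime    0 2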

    Red

  • by Vekseid ( 1528215 ) on Thursday November 29, 2012 @08:12PM (#42136639) Homepage

    By my benchmarks, ext4 was about 25% faster than ext3 for my typical database loads, largely due to extents. This is on twin 10krpm drives in RAID 1.

    I still use ext3 for my /boot partitions, but other than that there doesn't seem to be much reason to stick to ext3 at all.
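
    If you want to reproduce that kind of comparison on your own hardware, a quick fio run against each filesystem is one way to do it. The job below is only illustrative (the size, block size and read/write mix are placeholders, not a faithful database simulation):

        # run once on an ext3 mount and once on an ext4 mount, then compare the reported IOPS/latency
        fio --name=dbload --directory=/mnt/test \
            --size=2G --rw=randrw --rwmixread=70 --bs=16k \
            --ioengine=libaio --direct=1 --iodepth=8 \
            --runtime=60 --time_based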

  • Re:ZFS (Score:3, Informative)

    by Anonymous Coward on Thursday November 29, 2012 @08:20PM (#42136715)
    "Maybe" XFS? XFS.

    ZFS is funky and all, but you don't need the extra features, and the additional CPU overhead is just wasteful. The only real things to care about are the lack of an fsck on unclean reboot and fast reads. XFS+LVM2+mdraid (although a proper RAID controller is preferable) is perfect.
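
    For reference, a rough sketch of assembling that stack by hand (device names and sizes are placeholders; a hardware RAID controller or your distro installer changes the details):

        # mirror two disks with mdraid
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # LVM on top so volumes can be resized later
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -n lv_www -L 200G vg0
        # XFS on the logical volume
        mkfs.xfs /dev/vg0/lv_www
        mount /dev/vg0/lv_www /var/www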
  • by Anonymous Coward on Thursday November 29, 2012 @08:23PM (#42136757)

    Contrary to the majority of the people replying to this post, I emphatically DO NOT recommend ext3. ext3 by default wants to fsck every 60 or 90 days; you can disable this, but if you forget to, it can be pure hell in a hosting environment when one of your servers reboots. Shared hosting web servers are usually not redundant, for cost reasons, so if one of your shared hosting boxes reboots you get to enjoy up to an hour of customers on the phone screaming at you while the fsck completes.
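
    If you do end up on ext3/ext4, the forced periodic checks can be switched off per filesystem with tune2fs (it's then wise to schedule occasional manual fscks in a maintenance window instead):

        # disable the mount-count-based and time-based forced checks
        tune2fs -c 0 -i 0 /dev/sda1
        # confirm the new settings
        tune2fs -l /dev/sda1 | grep -i 'mount count\|check interval'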

    XFS is a very good filesystem for hosting operations. It has superior performance to ext3, which really helps, as it means your XFS-running server can host more websites and respond to higher volumes of requests than an ext3-running equivalent. It also has a feature called project quotas, which lets you define quotas not tied to a specific user or group account; this can be extremely useful in hosting environments, both for single-homed customers and for multi-homed systems where individual customer websites are not tied to UNIX user accounts. The oft-circulated myth that XFS is prone to data loss is just that: there was a bug in its very early Linux releases that was fixed ages ago, and it's now no worse than ext4 in this respect.
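
    Roughly, project quotas are set up like this (the paths, project name and limit are invented for illustration; check xfs_quota(8) for the exact syntax on your kernel):

        # mount the hosting filesystem with project quotas enabled
        mount -o prjquota /dev/vg0/lv_www /var/www
        # map a directory tree to a project id, and the id to a name
        echo "42:/var/www/customer1" >> /etc/projects
        echo "customer1:42" >> /etc/projid
        # initialize the project and cap it at 10GB
        xfs_quota -x -c "project -s customer1" /var/www
        xfs_quota -x -c "limit -p bhard=10g customer1" /var/www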

    Ext4 is also a good option, and a better option than ext3; it is faster and more modern than ext3 and is being more actively developed. Ext4 is also more widely used than XFS, and is less likely to get you into trouble in the unlikely event that you get bit by an unusual bug with either filesystem.

    Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet. The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.
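
    On btrfs the cloning itself is a one-liner once each VM image lives in its own subvolume (the paths below are made up): the snapshot shares blocks with the original and only diverges as either side is written to.

        btrfs subvolume snapshot /srv/vms/template /srv/vms/customer1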

    This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it. I would advise you steer clear of ZFS on Linux.

    Finally, for clustered applications, i.e. if you want to buck the trend and implement a high-availability system with multiple redundant webservers, the only Linux clustering filesystem I've found to be worth the trouble is Oracle's open source OCFS2 filesystem (avoid OCFS1; it's deprecated and non-POSIX-compliant). OCFS2 lets you have multiple Linux boxes share the same filesystem; if one of them goes offline, the others still have access to it. You can easily implement a redundant iSCSI backend for it using MPIO. It's somewhat easier to do this than to set up a high-availability NFS cluster, without buying a proprietary filer such as a NetApp.
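
    A bare-bones sketch of the shared-disk side, assuming the o2cb cluster stack is already configured and running on every node (device and mount point names are placeholders):

        # format the shared iSCSI/multipath LUN once, from any node, with slots for 4 nodes
        mkfs.ocfs2 -L webshared -N 4 /dev/mapper/mpatha
        # then mount it on each node in the cluster
        mount -t ocfs2 /dev/mapper/mpatha /srv/www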

    Reiserfs was at one time popular for mail servers, in particular for maildirs, due to its competence at handling large numbers of small files and small I/O transactions, but in the wake of Hans Reiser's murder conviction, it is no longer being actively developed and should be avoided. JFS likewise is a very good filesystem, on a par with ext4 in terms of featureset, but for various reasons the Linux version of it has failed to become popular, and you should avoid it on a hosting box for that reason (unless your box is running AIX).

    Speaking of older proprietary UNIX systems: on these you should have no qualms about using the standard UFS, which is a tried and true filesystem analogous to ext2 in terms of functionality. This is the standard on OpenBSD. NetBSD features a variant with journaling called WAPBL, developed by the now-defunct Wasabi Systems. DragonFlyBSD features an innovative clustering FS called HammerFS, which has received some favorable reviews, but I haven't seen anyone using that platform in hosting yet. The main headache with hosting is the extreme cruelty you will experience in response to downtime, even when that downtime is short, scheduled or inevitable. Thus, it pays to avoid unconventional systems that customers will use as a vector for claiming incompetence.

  • Re:Just (Score:5, Informative)

    by M0j0_j0j0 ( 1250800 ) on Thursday November 29, 2012 @08:32PM (#42136863)

    Don't know the budget, but 250GB of "RAM" for $500 looks like a good deal. And you just suggested an array of 4 drives to someone who wants the classic web server with CPanel, all stuffed in one system; that would be like $3-4k just for the disks. SSD is the way to go in these cases, mainly because of the money you save. And the lifespan? I've replaced way more HDDs than SSDs in the 3 years since I started using them, they're deployed in about the same numbers, and the SSDs get way more I/O.

  • Re:XFS (Score:5, Informative)

    by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Thursday November 29, 2012 @09:47PM (#42137539) Homepage

    The biggest source of early XFS corruption issues was that, at the time the filesystem was introduced, most drives on the market lied about write caching. XFS was the first Linux filesystem that depended on write barriers working properly. If something was declared written but not really on disk, filesystem corruption could easily result after a crash. But when XFS was released in 2001, all the cheap ATA disks in PC hardware lied about writes being complete, Linux didn't know how to work around that, and as such barriers were not reliable on them. SGI didn't realize how big a problem this was because their own hardware, the IRIX systems XFS was developed for, used better quality drives where this didn't happen. But take that same filesystem, run it on random PC hardware of the era, and it usually didn't work out well.

    ext4 will fail in the same way XFS used to, if you run it on old hardware. That bug was only fixed in kernel 2.6.32 [phoronix.com], with an associated performance loss on software like PostgreSQL that depends on write barriers for its own reliable operation too. Nowadays write barriers on Linux are handled by flushing the drive's cache out, all SATA drives support that cache flushing call, and the filesystems built on barriers work fine.
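
    If you are stuck with older drives or kernels, you can at least check what the hardware and mount options are doing (illustrative commands only; device and mount point names are placeholders):

        # show whether the drive's volatile write cache is enabled
        hdparm -W /dev/sda
        # turn the write cache off if you can't trust barriers/flushes
        hdparm -W0 /dev/sda
        # make sure barriers are enabled for an ext4 mount (the default on modern kernels)
        mount -o remount,barrier=1 /var/lib/mysql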

    Many of the other obscure XFS bugs were flushed out when Red Hat did QA for RHEL6. In fact, XFS is the only good way to support volumes over 16TB in size there, as part of their Scalable File System [redhat.com] package, a fairly expensive add-on to RHEL6. All of the largest Linux installs I deal with are on XFS, period.

    I wouldn't use XFS on a kernel before RHEL6 / Debian Squeeze though. I know the software side of the write barrier implementation, the cache flushing code, works in the 2.6.32 derived kernels they run. The bug I pointed to as fixed in 2.6.32 was specific to ext4, but there were lots of other fixes to that kernel in this area. I don't trust any of the earlier kernels for ext4 or xfs.

  • Re:XFS (Score:4, Informative)

    by segedunum ( 883035 ) on Friday November 30, 2012 @05:23AM (#42139645)
    Red Hat spent a lot of time effectively telling everyone that they didn't support XFS. Eventually they had to throw in the towel, because it's the only Linux filesystem that genuinely works well once you start dealing in terabytes of data. It has also recently got better at handling lots of smaller files and metadata. It's an incredibly useful filesystem, and it's unfortunate that it still gets a lot of FUD thrown at it because of many people's misunderstanding of data loss issues from several years ago.
  • Re:ZFS (Score:4, Informative)

    by joaommp ( 685612 ) on Friday November 30, 2012 @08:42AM (#42140469) Homepage Journal

    Why hasn't anybody mentioned JFS?

    Since the demise of ReiserFS, that's what I've been using everywhere. It's fast, really stable and has the lowest CPU usage of all. So, why not JFS?
