Managing RAID on Linux
author | Derek Vadala
pages | 245
publisher | O'Reilly
rating | The best
reviewer | Robert Nagle (aka idiotprogrammer)
ISBN | 1565927303
summary | This book brings RAID to the masses
A person deciding to go with RAID faces a panoply of options and gotchas. Hardware or software? How many controllers? ATA or SCSI (or ataraid)? RAID 1 or RAID 5? Which file system or distribution? Kernel options? Mdadm or raidtools? /swap or /boot on RAID? Hybrid? Left or right symmetric? One poster pointed out that putting two ATA drives on the same controller could impact performance. Yikes! Didn't I do that? Upon discovering that O'Reilly had just published Managing RAID on Linux, and after looking at the sample chapter, I bought the book and let my blood pressure return to normal.
RAID is one of those subjects that is not really complex; it's just very hard to find all the information in one place. This is precisely the book to solve that problem. Author Derek Vadala, sysadmin and founder of Azurance.com, an open source/security consulting firm, has gathered a wealth of information, and even personal anecdotes, to walk you through the decision-making process of moving to RAID. He goes step-by-step through that process, educating us about hard drives, controllers, and bottlenecks along the way. This exhaustive book may be the first to bring RAID to the masses.
Although parts of the book (RAID types, file system types) may seem familiar to experienced Linux users, it is helpful nonetheless to have everything in a nifty little book. A section on file systems provides not only a rundown of the merits and drawbacks of each one, but also a guide to their utilities. I learned, for example, what "file tails" for Reiser are, and why using them causes performance to degrade after reaching 85% capacity. The book compares raidtools with mdadm, as well as lovely commands like nohup mdadm --monitor --mail=paranoidsysadmin@home.com (which, if you haven't guessed, causes the system to email you RAID status reports).
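For reference, a fuller monitoring invocation might look like the sketch below. The email address is the review's joke one, and the --scan and --delay options are my additions, not the book's; they tell mdadm to read the arrays to watch from /etc/mdadm.conf and to poll every 300 seconds.

```shell
# A sketch of running mdadm in monitor mode (long-option spelling).
# --scan reads the list of arrays from /etc/mdadm.conf;
# --daemonise backgrounds the monitor, so nohup isn't strictly needed.
mdadm --monitor --scan --mail=paranoidsysadmin@home.com --delay=300 --daemonise
```

Requires root and a running md array, so treat it as a command sketch rather than something to paste blindly.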
People who use software RAID may skip over the chapter on RAID utilities for the leading RAID controller cards. Still, there was one interesting tidbit: why, the author asks, do makers of controller cards put all their BIOS utilities on DOS floppies, which requires us to find a DOS boot disk? Seriously, how many of us carry around DOS boot disks nowadays? The book made me aware for the first time of FreeDOS, an open source solution that solves precisely that problem.
The Software RAID stuff was pretty thorough and clarified a lot of things. The book does an excellent job in helping to identify and eliminate bottlenecks and optimizing hard drive performance (using hdparm and various monitoring commands). The anecdotes and case studies definitely clarified which RAID solution is suited for which task.
I am less impressed by the book's sections on disaster recovery and troubleshooting. Although these subjects come up at several places in the software RAID chapter, the book could have discussed several failure scenarios or used a fault tree (such as the famous fault tree in Chapter 9 of the Samba book, a marvel for any tech writer to read). The book doesn't even discuss booting with software RAID until the last ten pages, and then gives it only a single paragraph (even though the author acknowledges it as "one of the most frequently asked questions on the linux-raid mailing list"). Call me old-fashioned, but isn't the ability to boot into your RAID system ... kinda important? As someone who just spent a significant amount of time troubleshooting RAID booting problems in Gentoo, I for one would have liked more insight into the grub/lilo question. Also, in the next paragraph of the last chapter, on page 228, the author casually mentions that "all /boot and / partitions must be on a RAID-1." Say what? Please pity the poor newbie who religiously follows the instructions in the book but fails to read until the end. I'm not sure what the author meant by this statement, but it requires a much more substantial explanation and needed to go into a much earlier chapter.
These complaints don't detract very much from this excellent book, a true O'Reilly classic and a model of clarity and helpfulness. This book provides enough knowledge to avoid the dread and uncertainty that comes with trying to tackle Linux RAID. With a book like this, a sysadmin can sleep a little easier.
Recommended Readings:
- Reliable Linux, by Iain Campbell, John Wiley & Sons, Dec 2001, ISBN 0471070408. Gives excellent information not only about RAID but on general Linux reliability issues.
- Software RAID in the Linux 2.4 Kernel by Daniel Robbins. (Part Two).
- Linux Journal article on software RAID by Joe Edwards, Audin Malmin and Ron Shaker. (Part Two).
- "How to do a gentoo install on software RAID" by Chris Atwood. Gentoo User Forum.
Robert Nagle (aka Idiotprogrammer) is a Texas technical writer, trainer and Linux aficionado. You can purchase Managing RAID on Linux from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
I know this book is about software RAID ... (Score:5, Informative)
Mind you, I'm thinking of RAID used in production rather than someone RAIDing two drives in their home machine.
Re:I know this book is about software RAID ... (Score:2)
Re:I know this book is about software RAID ... (Score:4, Informative)
Those cheap-o-RAIDs are essentially software RAIDs. Most if not all RAID functions are done by the drivers, not on the card itself.
Entry-level real hardware IDE RAID cards [3ware.com] cost approximately $500 - almost the same as a SCSI RAID. That's obviously offset by the cheaper disks, but still...
Re:I know this book is about software RAID ... (Score:2, Informative)
If you're looking to do any more than 4 channels, I'd take a serious look at the SATA cards and drives simply to reduce cabling hassles.
Re:I know this book is about software RAID ... (Score:2)
If you need a RAID solution that never fails and want to spend some money might as well go Sun A/Dx000 and build a couple redundant servers to setup load balancing and failover. The whole point of using IDE RAID is to save money and get massive amounts of relatively stable storage space.
Re:I know this book is about software RAID ... (Score:2)
The other thing you get is the ability to add drives on the fly.
I would agree that if I didn't have the $300.00 for a 3ware RAID controller then I would try software RAID again. My first experience with it (Red Hat 7.3) sucked big time. I couldn't get the thing to install and be stable. This brings up another point: your OS doesn't know that it is being RAIDed. In some cases that is nice.
Re:I know this book is about software RAID ... (Score:4, Informative)
You can get excellent performance for less than $100. Why pay more?
Re:I know this book is about software RAID ... (Score:2, Informative)
"Type df -k and you should see your hard disks as /dev/sdaX instead of /dev/hdaX. This is because the Promise Driver is actually a special type of Software Emulation RAID, not exactly Hardware RAID. (Promise RAID works through a BIOS Hack)."
By hardware RAID I mean a RAID system where none of the work is done by CPU (Adaptec, 3Ware). HighPoint and Promise ATA RAIDs are hardware-software hybrids and your CPU has to work.
Re:I know this book is about software RAID ... (Score:3, Interesting)
CPU usage isn't entirely the point. It doesn't take much CPU power to do RAID these days (that's why software RAID in Linux is a pretty good option). The problem is that it requires drivers and software control over RAID functionality (just what you want to avoid) when the RAID card should just be making the RAID array look like a single drive to the operating system. Notable examples include the HighPoint "RAID" controller found on some Abit motherboards.
Anyway, I think "Please get your facts straight before posting" is kind of a nasty response to somebody who is pointing out something that is well known to most people who have tried using these pieces of crap with their non-Windows operating systems. Try using Google if you want references, one way or another. They won't be hard to find if you search through driver development lists for Linux and *BSD.
Re:I know this book is about software RAID ... (Score:3, Informative)
Those cheap IDE RAID cards do most of the work in the driver, and don't give you much advantage over software RAID.
True hardware raid is a few hundred dollars, like the 3ware series of cards.
Re:I know this book is about software RAID ... (Score:2)
3ware (Score:3, Informative)
Up until now I've bought only SCSI drives because heavy compiles (which I do a lot) just choke IDE down. I now have a 4 x 60 GB RAID-1 and it just screams. With a one time investment in a proper IDE RAID card with escalator scheduling, tagged queueing and big cache I still save a lot of money by being able to buy large but cheap IDE disks.
Re:I know this book is about software RAID ... (Score:2)
check out the highpoint and dawicontrol offerings unless you need RAID5.
Re:I know this book is about software RAID ... (Score:2)
Re:I know this book is about software RAID ... (Score:3, Interesting)
That's only one consideration. It used to be that booting from, and installing Linux onto, software RAID was a huge hassle. Today almost every distribution supports installation to software RAID out of the box, so the 'ease of use' arguments for going hardware are all but gone.
Now here's the issue that always starts the tug of war: performance. Traditionally hardware RAID was simply better because it didn't hit the CPU. Today that doesn't make a difference, especially if you use SCSI. With ATA you might see the overhead of RAID a little more, but that's because ATA already has overhead to begin with. The CPU hit with SCSI is negligible, and I doubt it will be noticed in most cases, even in so-called "production". That's because the real bottleneck in most systems is I/O throughput, not CPU performance. Most systems, not all systems. Obviously if you are a good sysadmin you are evaluating these issues on a case-by-case basis.
Finally, I just want to say that it's a widely held opinion in the Linux RAID community that the kernel RAID (the md driver) outperforms all but the most high-end SCSI RAID controllers. I'm sure many will disagree, but that's been my experience, and I know that if you ask certain kernel developers who shall remain nameless they will tell you the same thing.
Run bonnie, you'll see.
Derek Vadala, lowly author.
Re:I know this book is about software RAID ... (Score:3, Insightful)
Another problem - perhaps less serious - is that hardware RAID controllers often require a reboot into their proprietary BIOS to do anything. This isn't very useful if you want to expand the RAID array without disrupting service. Some vendors offer utilities to modify the RAID configuration but I've never found all the functionality to be exposed within the utilities. Of course, if you are mucking about with disk arrays on production systems then you have bigger issues to deal with.
My favorite part of the review... (Score:3, Funny)
That word simply isn't used enough in the modern vernacular.
Okay, mod me down now...
Re:My favorite part of the review... (Score:2)
Eh?
david@debian:~$ dict panoply
Ah. Interesting...
Re: My favorite part of the review... (Score:2)
You might be thinking of the archaic use where it referred to a complete set, rather than just "alotta something", but that's no longer the standard use. Or maybe you just confused it with "panopeas", which, I agree, makes no sense in this context.
RAID and Firewire (Score:5, Interesting)
Is it possible to use FireWire and a service like Rendezvous to make an intelligent redundant system? It's a thought, at least. The FireWire drive I use with my Inspiron works nicely enough. Would FireWire be cheaper than RAID for servers, however?
Syr GameTab.com [gametab.com] - Game Reviews Database
Re:RAID and Firewire (Score:4, Interesting)
1. Rendezvous probably wouldn't come into play - it's really system-to-system.
2. The device to device communication could be especially useful when recovering a failed disk - no overhead on the controller. This, though, would require the devices themselves be better than mere drives, driving the cost up.
3. Unfortunately - without drives with actual FireWire interfaces (all externals use FW-IDE bridges, the Oxford 911 being the fastest at 50MB/s [fwdepot.com], 35MB/s sustained) the true potential of FireWire will remain untapped. Perhaps as we move to Serial-ATA and away from the standard parallel IDE, manufacturers will be prompted to offer FireWire drives as well.
Additional possibilities:
Think of a trimmed-down Xserve RAID [apple.com] with FireWire instead of Fibre Channel - it would be able to take advantage of the bandwidth of FireWire and still maintain (?) affordability for low-to-mid range businesses looking for large high-speed external storage.
All sorts of possibilities.
Re:RAID and Firewire (Score:2, Informative)
there are raid arrays with firewire interfaces, and software raid using firewire drives is quite possible. (osx makes it easy as pie)
here are some cool firewire raid products:
http://www.usbshop.com/firewireraid.html
http://www.sancube.com/
http://www.voyager.uk.com/products_master.asp?pro
the x-stream from sancube has two firewire busses for double the speed, or for sharing.
Re:RAID and Firewire (Score:2)
Attached to the firewire bus is a RAID controller, but just a controller -- no disks or cabinet. The controller is configured to read/write to N drives also connected to the firewire bus. Writes to the card would be buffered and parceled out to the individual drives as the bus becomes available.
The beauty would be you could connect generic firewire drives to the bus, and wouldn't need an expensive cabinet or dedicated drives. With enough buffer and courage to do cached writes, you could get good throughput since the real disk writes could wait for the bus to be free.
Great review... (Score:4, Funny)
panoply
n. pl. panoplies
Looks like number one is most appropriate, although I've never referred to my arrays as "splendid".
You may have a point there... (Score:2)
/boot / on RAID 1? (Score:5, Informative)
With raidtools, at least,
Hunk 'o fstab:
Similar hunk 'o raidtab
raiddev
    raid-level            1
    nr-raid-disks         2
    chunk-size            64k
    persistent-superblock 1
    #nr-spare-disks       0
    device
    raid-disk             0
    device
    raid-disk             1

raiddev
    raid-level            5
    nr-raid-disks         3
    chunk-size            64k
    persistent-superblock 1
    #nr-spare-disks       0
    device
    raid-disk             0
    device
    raid-disk             1
    device
    raid-disk             2
*Shrug* Wonder what the context of that quote was within the book?
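The device paths in the raidtab hunks above were evidently stripped somewhere along the way. For anyone trying to reproduce this, a complete two-disk RAID-1 stanza would look something like the following; the md device and partition names here are hypothetical, so substitute your own:

```
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    chunk-size            64k
    persistent-superblock 1
    device                /dev/hda1
    raid-disk             0
    device                /dev/hdc1
    raid-disk             1
```

Each physical partition gets a device line immediately followed by its raid-disk index; mkraid then reads this file to build /dev/md0.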
Not getting the point. (Score:2, Insightful)
Trying to make everyone be an expert before they can operate their machine is how operating systems die.
Re:Not getting the point. (Score:2)
Re:Not getting the point. (Score:2)
Anyone has the right to complain about anything, regardless of their contribution to it. Nobody will buy Linux twice if they find it hard to use. Linux won't be a viable competitor for Windows unless people adopt it for life.
multipath? (Score:2, Informative)
Does this book talk about the md driver's multipath personality? This is the most poorly documented part of the md driver. If you read the raidtab man page ("man raidtab") you will find _no_ mention of multipath whatsoever. Yet the md driver can do multipath (well, failover) if you set it up right.
It has limitations, though. You can't install to multipath devices or boot from them (lilo/grub and the various distributions' installers don't understand md multipath), and if an HBA fails in such a way that interrupts are not generated... commands just go out to lunch... then md won't notice anything is wrong, and so won't fail over. Also, it does nothing to check whether the failover path is actually working, so if that path fails you won't have any notice that redundancy is lost.
Well, multipath is not RAID, so maybe this book doesn't cover it, but any book on software RAID for Linux should probably cover all the features of the md driver. I will be interested to see this book.
Hardware IDE Alternatives / LVM (Score:5, Interesting)
The problem I've had with software RAID is reliability and expandability. It is a pain in the ass if you lose a drive in the array, and it is next to impossible to add a drive (other than a standby drive) to an existing RAID 5 setup.
Aah, opinions...
Fasttrak Sx4000 Linux RAID review (Score:5, Informative)
You actually feel good about the Linux drivers that Promise gives you with the SX4000? I bought this card, and I wished I stayed away from it.
I am using it with four 120GB IDE drives with 8MB cache. For starters, if you use anything but the sxcslapp program in Linux to configure the drives, your drives end up corrupt. All of 'em. And your BIOS will return corrupt information about them. This causes DOS not to boot (hard freeze), and Linux to produce keyboard smashings on boot. This is a known firmware problem, and I'll be damned if they have any flashes available, even though the card is four months old. I just checked before writing this review.
Once I figured out that all the work had to be done with sxcslapp in Linux, I started building my RAID5, albeit with caution. Things here went pretty well, except a) performance sucked about as bad as a single drive and b) the closed source drivers rebuild the raid array with no warning if a drive fails and is replaced, even if the file system is mounted. So, this means that if you have a drive that bombs and you replace it, anything you write to the raid array will be wiped out. I could have used some notification.
The Linux drivers are horrible. They are written in 'Engrish', and the documentation might as well have been written by someone who doesn't understand computers. "Select the remove drive from array option to remove a drive from array". This continues for all of the options in their menu-driven app.
I am also forced to use Red Hat 7.3 for this. Great. I now have a cluster of Debian 3 servers I administrate and one Red Hat server.
I would have returned the card if my reseller had given me my money back. It's about equally expensive to buy plain IDE add-on cards, or maybe a bit less, and software RAID in Linux seems to be firmly documented. I've used RAID1 in software on servers before, and it works nicely.
Pity the newbie (Score:3, Funny)
Please pity the poor newbie who religiously follows the instructions in the book but fails to read until the end.
On the other hand, pity the newbie who cracks a book open and starts setting a server up page-by-page.
Better title... (Score:5, Funny)
RTFM: RAID - The Fucking Manual.
I'd buy the book if it could explain this... (Score:4, Interesting)
Jan 26 04:15:02 hostname kernel: hdb: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jan 26 04:15:02 hostname kernel: hdb: dma_intr: error=0x84 { DriveStatusError BadCRC }
I've looked all over the place for the answer: Google, mailing list archives, Usenet, local Linux friends, etc., and haven't been able to find a definitive answer. It's like nobody really knows what that error message actually means.
Newsgroups suggested bad cables, so I replaced those (twice, once with brand new cables bought specifically for the purpose). Some info suggested the drive or the drive's controller was failing, so I replaced it. Other info pointed to my IDE controller, so I installed a new one dedicated only to the RAID pair. I saw info that said the raid tools were to blame, and to see if the errors go away when the mirror is broken. No dice. Other info I found suggested that it was the IDE drivers in the kernel and that the messages were nothing to worry about unless I was seeing data corruption. I'm not seeing corruption so I'm left with this option.
If the book can shed some light on the error message voodoo one sees with Linux's IDE driver, then I'll buy it. I'd pay double what they're asking, even.
-B
Re:I'd buy the book if it could explain this... (Score:3, Informative)
Re:I'd buy the book if it could explain this... (Score:2)
Well, data safety is the reason I have RAID. Trouble is, this is the replacement drive on /dev/hdb. The first had the same problem, which is why I was looking for controller/cable/driver issues.
I'm tempted to go buy a real RAID controller card and get away from software RAID. Problem is the Linux drivers are usually pretty strange. I like being able to upgrade my kernel, for instance.
-B
Re:I'd buy the book if it could explain this... (Score:2)
- have you tried putting the hard drive in another linux box and seeing if the same errors show up
- have you tried diddling with the dma/hdparm settings. I know some controllers have "issues" with dma
- Are there actual errors, trouble getting data, slow performance, etc.? These could be random warnings thrown out by the driver that may not mean anything is disastrously wrong (wild guess).
Re:I'd buy the book if it could explain this... (Score:2)
Turns out that the error isn't actually related to RAID at all. I mean, it is: the drives don't have errors when used outside a RAID setup, and put 'em back into a mirrored pair and I start getting errors. But the problem isn't really caused by Linux's software RAID, per se.
You are exactly correct about the DMA stuff, though. Someone else suggested it, and I found out [slashdot.org] that it was in fact DMA. I had DMA enabled on one drive and not the other. Take the drives out of the RAID pair, and they don't individually show errors. Put them together, errors. That's why I thought RAID was the culprit (and why the book might help).
Thanks for the suggestions, BTW. Very much appreciated.
-B
Re:I'd buy the book if it could explain this... (Score:2, Informative)
I'm tempted to go buy a real RAID controller card and get away from software RAID.
What do you think it'll buy you, honestly? I've got a half dozen software RAID1 systems out there, three of them being pounded mightily every day (10k-user ISP mail/radius servers) without so much as a squeak of complaint. Throughput is pretty decent as well:
(yes, I know it's not a thorough benchmark) -- so without taking the drive cache into account, I can hit about 30MB/sec sustained. If I had better drives I bet I could boost those numbers significantly, probably close to the 90MB/sec single-drive numbers I am seeing on my new server.
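The command line behind those numbers was lost in formatting; the usual quick-and-dirty read test (not a thorough benchmark, as the poster concedes) is something along these lines, with a hypothetical md device name:

```shell
# Time buffered reads from the array for a rough throughput figure;
# hdparm -t bypasses the filesystem but not the kernel's request path.
hdparm -t /dev/md0
```

Run it a few times on an otherwise idle box, since a single pass can vary quite a bit.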
Re:I'd buy the book if it could explain this... (Score:3, Informative)
Well, I had thought that my IDE controller was bad, the IDE drivers were wonky, the raidtools stuff was weird, whatever. I mean, I had two drives which both worked great when used by themselves. I put them in a RAID pair, and I got errors. Turns out I had DMA disabled on one of them, but I was looking at Linux software RAID as the culprit. I thought buying dedicated hardware would isolate any problems. It was a last-ditch, straw-grasping effort, to tell the truth.
I'm actually a fan of Linux's software RAID1. No "special" drivers, I can use any kernel I want, easy to set up, minimal performance impact, and fairly transparent to use. Now that I know why I was getting errors, and that it wasn't anything to do with software RAID, I'm fine with it.
-B
Re:I'd buy the book if it could explain this... (Score:2)
Check your backups...
Re:I'd buy the book if it could explain this... (Score:5, Informative)
hdparm -d 0
You might also have to turn off 32 bit mode:
hdparm -c 0
Of course, this will slow things down.
Be sure everything's jumpered correctly.
Also, of course, I'm not responsible if you fry your data!
Re:I'd buy the book if it could explain this... (Score:3, Informative)
You know what? The other drive in the RAID pair (/dev/hdd) had DMA off, while /dev/hdb had it turned on. I don't know why that was the case. Perhaps my late-night fiddling resulted in some sort of fat-fingering (wait... that sounded really bad). Anyway, I decided to do some tests by copying about 150MB of MP3s to my array while setting DMA on or off.
With DMA on/off (regardless of which drive has DMA on or off), I get the errors. With it set to off/off, I don't get errors, and the array is slower than a wounded prawn and a huge CPU hog (the copy takes around 50 seconds and the load avg hovers around 4.50). I don't care about slow since this is an NFS/Samba server and CAT5 is my bottleneck. The CPU load I do care about since the box does other things besides simply serve files. With DMA set to on for both drives, I also don't get the errors, which is very cool. The copy takes around 10 seconds and the load avg is about 0.70. All to be expected, since DMA gives quite a performance boost. But it's good to know I can turn it on.
Looks like my issue was whacked DMA settings, not the hardware going bad. So thanks for getting me to take another look! I probably ought to go buy the RAID book now...
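The fix described above boils down to making the DMA settings match on both halves of the mirror. A sketch, using the poster's device names (adjust to your own hardware; requires root):

```shell
# Check the current DMA state of each drive in the pair
hdparm -d /dev/hdb
hdparm -d /dev/hdd

# Enable DMA on both so the settings match
hdparm -d 1 /dev/hdb
hdparm -d 1 /dev/hdd
```

Note that hdparm settings don't survive a reboot, so a matching pair of `hdparm -d 1` lines usually ends up in an init script.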
-B
Re:I'd buy the book if it could explain this... (Score:2)
The workaround is either to turn the maximum UDMA level down on the hard drive itself (with the DOS utils provided by the manufacturer) or to turn it down in Linux using hdparm -X6n, where n =
6 -> UDMA2 -> ATA-33
8 -> UDMA4 -> ATA-66
9 -> UDMA5 -> ATA-100
Re:I'd buy the book if it could explain this... (Score:2)
I will certainly do that. This problem drove me nuts. Everyone said it was something different. Not unusual for a thing with lots of potential points of failure.
I also posted [monkeygumbo.com] an edited version of my original slashdot reply on my web site. Google should be by soon to pick it up. Every time I figure something weird like this out I put it up there. Get some decent referrers from google, too, so it's helping someone.
-B
Re:I'd buy the book if it could explain this... (Score:4, Informative)
This can be done with (as root):
wget http://www.linux-ide.org/smart/smartsuite-2.1.tar.gz
tar -xzvf smartsuite-2.1.tar.gz
cd smartsuite-2.1
make
make install
You might get some non-fatal type errors. The makefile doesn't always work for setting up the rc.d scripts.
Now run:
I'm assuming the bad disk is /dev/hda, but change it to suit your needs. If you get some errors, then SMART may not be enabled, so you'll need to run:
Anyway, when you run smartctl with -a, it will tell you all about hardware failures and whatnot. For more info on the codes it returns, go to this page: http://www.ariolic.com/activesmart/docs/smart-attribute-meaning.html
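The actual smartctl invocations in this comment were lost in formatting; judging from the flags mentioned, they were presumably along these lines for the smartsuite version of smartctl (the disk name is hypothetical):

```shell
# Enable SMART on the drive if it reports SMART as off,
# then dump all SMART attributes and error information
smartctl -e /dev/hda
smartctl -a /dev/hda
```

Both commands need root, and the exact option spelling differs in the later smartmontools fork of this tool.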
I hope this helps
Beware TPB
Re:I'd buy the book if it could explain this... (Score:2)
-B
Re:I'd buy the book if it could explain this... (Score:3, Funny)
I can't wait for a review of a book about Gentoo (1.4rc2) installation so that I don't have to camp out on irc.openprojects.net every time GCC segfaults on my Athlon MP :)
Re:I'd buy the book if it could explain this... (Score:2)
Dig those knee-jerk accusations, man.
I've got a Linux RAID setup, it's been giving me errors for a while. I read the book review, and was wondering if maybe the book had any info about those errors, since no online source I could find did. After all, the problems are most definitely related to the drives being in a RAID pair because they don't have problems otherwise. So I composed a wool-gathering post about wondering how much detail could fit in 245-odd pages, and whether or not the book was worth it.
Then I read what I was about to post and judged it to be completely useless, uninformative, and uninteresting. So I added a question as to whether or not anyone had actually read the book, and could they tell me if it had info about the errors I was seeing. That was basically useless as well, so I pasted in an actual error (in order to be specific and get away from some lame "uh, I have RAID and it has errors... can the book help?" question; it was also easier to copy-n-paste than explain what the error was), explained my situation, and said I'd buy the book if it could help me. Turns out it probably wouldn't be able to, which is exactly what I wanted to know.
Anyway, that was the rationale behind my post. Anything else you'd like me to explain to you?
-B
Re:I'd buy the book if it could explain this... (Score:2)
This does, however, prompt an interesting question. Maybe /. could have a section devoted to technical queries?
Re:I'd buy the book if it could explain this... (Score:2)
Crafty or not, my post really wasn't a thinly veiled tech support plea. I'd been up and down and back and had even mapped that road. I pretty much considered my problem unsolvable, since there appeared to be a million ways to solve it. I never figured I'd get any kind of answer here that I hadn't gotten elsewhere the few dozen times I'd looked. I'd googled, gone through newsgroups, asked on mailing lists, asked friends who do lots of RAID stuff at work, asked folks at a LUG, asked around at school, and -- back to my point -- looked through more than a couple of books. I got pretty much a different answer every time, like I said, and none of them worked. So you can see where I'd rate my chances of getting an answer on Slashdot as fairly slim. Besides, I could sneak in a tech support question much better than that. :-)
Looking back, I can see how you came to think I was trying to get cheap tech support. The trouble is that I tend to err on the side of giving too much information; it's easy to let something like that turn into a bug report. But my post was more of a "Yeah, well, if that tiny book is so great, does anyone know if it can explain this apparently unexplainable mystery? If so, I'm buyin' it..." I bought the book, BTW.
Anyway, the horse has been well beaten by now.
Thankfully, whoever moderated my comment as 'Funny' had the insight to interpret my response in the spirit in which it was intended.
Heh... I get you. I just don't like accusations, and saw your reply as such. Maybe I saw it as questioning my intent (another pet peeve). I dunno. It's really no big deal; it's just a Slashdot post after all.
This does, however, prompt an interesting question. Maybe /. could have a section devoted to technical queries?
That's a good idea, but it might be hard to set up. You'd have to vet the people that did the answering, maybe like Google [google.com] does it. The whole thing would hinge on the quality and speed of the answers. You might be able to have the people who ask pay a small fee (two bucks? 5? 10?) and then the people that answer get a small kickback, or a Slashdot subscription or discontinued Thinkgeek stuff or karma or something. They could make answering questions like tossing rings onto bottles at the fair: the more you get, the higher up the shelves you get to pick your prize from. Those that wanted to get way into it could, those that pitched in here and there would get a small bennie.
I think it'd work. There are probably lots of folks who see the Slashdot membership as more clueful than most (I'm taking the Fifth on that issue). In any case, it's normally good to have a lot of eyes look at a problem and there are lots of eyes here if nothing else.
-B
Re:Bullshit! (Score:2)
I bought the book [booksmatter.com] two hours ago, Einstein. And I'm betting that it doesn't have an answer to my original question; the book doesn't look large enough. So you're wrong. My post was not a blatant tech support request, no matter what you may believe. I think I explained myself well enough already, even though I didn't really have to.
I normally don't bother replying to ACs, but if you wanna slam me, at least have the balls to put your name behind your words.
-B
Re:Now go write the book... (Score:2)
Done.
-B
Newsgroups, FAQs, and on-line docs in general. (Score:4, Insightful)
BIOS utilities (Score:5, Interesting)
Well, given Dell's recent announcements, I suppose fewer and fewer of us will be doing so.
But really, the author's point is so moot that it's embarrassing: if it's my job to maintain a RAID array, and the utilities are on DOS floppies, of course I'm going to have access to a DOS boot disk. So what? Just how hard is it to carry such a thing around, and why is this a worthy thing to rail about in a book about RAID? If the author wastes too much time talking about stuff like this, the book can't be that useful. Arggh, I've wasted too much of my own time already.
Re:BIOS utilities (Score:2)
But really, the author's point is so moot that it's embarrassing: if it's my job to maintain a RAID array, and the utilities are on DOS floppies, of course I'm going to have access to a DOS boot disk. So what? Just how hard is it to carry such a thing around, and why is this a worthy thing to rail about in a book about RAID? If the author wastes too much time talking about stuff like this, the book can't be that useful. Arggh, I've wasted too much of my own time already.
It was an issue for me last week, when I was at a customer's site and needed one to flash a BIOS on an old Pentium. Google is your friend: "dos bootdisk" turns up bootdisk.com.
I had more problems trying to boot off my new Asus A7V333 with Promise's 'RAID LITE' crap. The drives are found, but individually... I ended up using ataraid, but that just doesn't seem right. (Note: I've had a Promise Ultra 66 RAID running for two years now; installing the driver 'just worked'. But not with this 'Lite' version.)
RAID on Linux. (Score:5, Funny)
It's not that hard.
- Power down the computer
- Remove cover
- Blow out all dust and insect husks
- Spray in RAID
- Put cover back on for 15 minutes.
- Remove cover again.
- Blow out insect husks.
Hardware RAID != Hardware RAID (Score:4, Informative)
At any rate, taking the view that hardware RAID is always the solution and software RAID is never the solution is just bad sysadministration.
Re:Hardware RAID != Hardware RAID (Score:2)
Re:Hardware RAID != Hardware RAID (Score:2)
Re:Hardware RAID != Hardware RAID (Score:2)
You should really stripe over mirrors, not mirror over stripes - more reliable.
So the real question is why pay $40 for 10 pages (Score:2)
And why would I buy this book, or any book on RAID, if I am going to use a hardware solution? If I go with hardware, I'll just make sure it has support and instructions for Linux and be done with it.
There are alternatives to HOWTOs (Score:3, Insightful)
So while there are good collections of information out there, there are also very good tools out there with which to accomplish useful tasks.
I think it's precisely because HOWTOs are rarely if ever needed with Windows that it still has an edge over Linux where the masses are concerned. So while it's nice that HOWTOs are out there, I think it's more important that good tools are out there that are easy and self-explanatory.
Wow (Score:2)
Enterprise Volume Management System (Score:2, Informative)
multiplatform (Score:2)
My limited experience with hardware RAID on Linux (Score:5, Interesting)
The lesson learned was, never have a production Linux system with (binary) drivers tied to a specific kernel or distro version.
That said, we have been very happy with the controllers, and since at least two disks have died without warning, the expense has easily been worth it. Our systems are used 24/7/365, so every minute of downtime annoys somebody. RAID really makes me sleep better: restoring a server from a slow tape streamer at some ungodly hour, while people nervously check in asking when we will be up again, is something I really want to avoid.
YMMV, but I think hardware RAID still has an edge over software RAID, mostly because I find it simpler to maintain in the long run.
If you are into LVM's, FS tools, and software RAID, go to:
http://evms.sourceforge.net/
and _drool_. Still future stuff as far as production servers go, but nevertheless.
Hardware raid (Score:2)
The nice thing about hardware raid is other than the driver for the scsi card, the OS thinks there is just one drive sitting there. No configuration on the OS side.
Also, the RAID is up and running before the OS even starts booting. If a disk dies, so what?
Please correct me if I'm wrong, but if you have software raid and the disk the os/boot/raidconfig files are on goes, you have a dead box.
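For what it's worth, the usual way around the dead-box scenario with Linux software RAID is to mirror the boot partition itself and install the boot loader on both disks, so either drive can bring the system up alone. A minimal sketch (the device names /dev/hda1 and /dev/hdc1 are assumptions; adapt to your hardware, and note this needs root):

```
# Mirror the boot partition across two disks (RAID-1)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mkfs.ext2 /dev/md0          # filesystem for /boot

# Install the boot loader on BOTH disks, so either one can boot on its own
lilo -b /dev/hda
lilo -b /dev/hdc
```

With the md superblock written on each half, the kernel can autodetect and assemble the array at boot even if one disk is gone.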
Re:Hardware raid (Score:2)
I suppose the BIOS could be a problem if you're using software RAID on IDE drives.
raidtab? MDADM's better, it can take care o'itself (Score:2, Interesting)
See the mdadm article at O'Reilly, mdadm [oreillynet.com].
And I'd recommend Enterprise Volume Management System [sourceforge.net] rather than LVM (Logical Volume Manager), simply because LVM seems to be being dropped as redundant (ironic, that) as EVMS gets more effective, and I don't want the conversion work from LVM to EVMS if I can just do EVMS right now.
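The "takes care of itself" point can be sketched like so (device names are assumptions; depending on your mdadm version you may still want an /etc/mdadm.conf, which mdadm can generate for you):

```shell
# Create a mirror; mdadm writes a persistent superblock describing
# the array onto each member disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# Later (say, after moving the disks to another box), reassemble
# without hand-maintaining a raidtab: scan members for superblocks
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan

mdadm --detail /dev/md0    # verify the array's state
```

Contrast that with raidtools, where /etc/raidtab has to be kept in sync with reality by hand.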
complete waste (Score:2, Interesting)
I purchased a used P2 system with a stable motherboard and two IBM SCSI drives on an Adaptec controller. I installed Debian GNU/Linux stable and upgraded to the latest stable. Then I set up softraid and opted for XFS in case of a power failure. I decided against a UPS because the machine is hooked up to the local power grid, which is very stable (the server lives in Berlin, Germany), and I wanted to save the cost.
Then I moved the root filesystem over to the RAID device. Up to that point everything was documented very well, except that I had heard reiserfs doesn't work with softraid, and I couldn't find that info on the net anymore. I would have chosen reiserfs instead if I had had a reliable source, such as this book, telling me it was OK.
The only thing I had real problems with was how to make the system boot off the RAID device. Here the HOWTOs and man pages contradicted each other.
I read this Slashdot article with some regret, because I thought the book could have saved me a lot of trouble. But the only section that gave me trouble also seems to confuse the author of the book. Now that is no help at all. So this book is a waste of time if you know how to use Google, which I had to learn painfully fast getting into Debian.
But since Debian is still by far the best system out there overall, I have no choice. Once you start to rely on seemingly simple things such as reliable, low-hassle updates of your system, you are hooked.
Re:Why bother with software RAID? (Score:3, Insightful)
The performance hit is not worth the return.
For you, it's not. For someone else, it might be.
There are any number of situations where it might be appropriate to exchange some performance for increased data security. Just because you can't imagine them, doesn't mean they don't exist.
Re:Why bother with software RAID? (Score:2, Insightful)
It sucks on your hardware. When you use fast SCSI disks and have fast CPU(s), software RAID is much faster than (very expensive) hardware RAID solutions. The chip on your hardware RAID card (usually an ARM) can't be faster than your CPU.
Regarding trust, you should trust (open source) software RAID more than proprietary firmware.
Re:Why bother with software RAID? (Score:2)
Re:Why bother with software RAID? (Score:2, Informative)
You are going to beat the hardware controller, because the chip running your software RAID (a 2 GHz P4 Xeon) is much faster than the chip on the hardware controller (a 100 MHz ARM). Your only limitation is I/O bandwidth; that's why you go with SCSI.
Server manufacturers sell hardware RAID as an expensive add-on, but they don't advertise any benchmarks showing a speed advantage, because there is none. Current controllers just aren't good enough; they can't keep up with the speed advances of CPUs.
Re:Why bother with software RAID? (Score:2)
Server manufacturers sell hardware RAID as an expensive add-on, but they don't advertise any benchmarks showing a speed advantage, because there is none. Current controllers just aren't good enough; they can't keep up with the speed advances of CPUs.
The main selling point for RAID on a server (at least for me) is ease of management. When you have something as basic as your drives, you'd rather not worry about driver incompatibility, problems with upgrades or anything like that. Just address the sucker as sda and be done with it.
Re:Why bother with software RAID? (Score:3, Insightful)
Re:Why bother with software RAID? (Score:4, Informative)
a 'rubbish' 500 MHz CPU: 500,000,000 ops/sec
a 5 ms access time SCSI HDD: 200 ops/sec
So what if the CPU on the RAID card is a pathetic 100 MHz job? It'll still be able to keep up with the data flow from the HDD, even when that data is being burst through.
How much cache RAM you've got on that RAID card is a better indicator of the performance improvement your hardware will see.
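The gap in those two figures is easy to check; a back-of-envelope sketch (using the parent's round numbers, not measurements):

```shell
# A 'rubbish' 500 MHz CPU does roughly 5*10^8 ops/sec;
# a 5 ms average access time disk manages ~200 random ops/sec.
cpu_ops=500000000
disk_ops=200

# Cycles the CPU has to spare between consecutive disk operations:
echo $(( cpu_ops / disk_ops ))    # prints 2500000
```

Two and a half million spare cycles per random seek is why even a slow controller CPU, or a loaded host CPU, rarely bottlenecks on parity math for seek-bound workloads.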
Re:Why bother with software RAID? (Score:2)
a 5 ms access time SCSI HDD: 200 ops/sec
care to back that up? Sure, if you send the drive out to get random sectors and disable reordering, that will happen, but it doesn't work like that. You frequently get contiguous chunks of a MB or more, which don't suffer from the 5ms access time. How did you think SCSI disks hit 40MB/s sustained, anyway?
Re:Why bother with software RAID? (Score:2, Insightful)
First of all, some of today's controllers (such as the HSGs or HP Smart Arrays) run on pretty good RISC chips. Moreover, they have good amounts of memory to use as read-ahead or write-back cache, which speeds up I/O without stealing memory from the OS.
As for the speed of the controller's processor compared to the main processor, just remember that by today's standards one SCSI channel can only move 160 MB/s, and even if we needed one processor cycle for each byte read or written (we don't), a 160 MHz processor would be enough to do the job.
Come to think of it, the processors embedded in today's RAID controllers usually have a 64-bit data bus, so every transaction moves 8 bytes. Since the worst case for performance is a RAID-5 write (which involves 4 I/O operations), the controller still only has to move an average of 2 bytes per processor cycle.
That's why RAID controllers don't come with fantastic processors: there's simply no need to.
We could also talk about availability, but that's another long issue; hardware RAID wins in almost all cases there (except for controller multiplexing). The best reason to consider software RAID, though, is cost.
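That arithmetic can be sketched in two lines (round numbers from the post, not measurements):

```shell
# Worst case, byte-per-cycle: a 160 MB/s SCSI channel vs. a 160 MHz chip
echo $(( 160000000 / 160000000 ))   # prints 1 (byte per cycle)

# But a 64-bit bus moves 8 bytes per transaction, and a RAID-5 write
# costs 4 I/O operations, so on average:
echo $(( 8 / 4 ))                   # prints 2 (bytes per cycle needed)
```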
I could be wrong, though
Re:Why bother with software RAID? (Score:2)
Regarding trust: why should I trust open source anything? Even though I am a programmer, am I expected to read through every line of code and understand how it works well enough so that... um... what? I can look for buffer overflows? So that if the software fails I can patch it myself? No, I think I'll trust the hardware manufacturer with millions in R&D and years of experience specifically aimed at RAID and drives. If my open sores RAID solution fails, will I have to wonder whether some 13-year-old Norwegian kid's mom is going to let him reply to newsgroup messages after being grounded for downloading porn via IRC again?
Re:Why bother with software RAID? (Score:3, Informative)
As far as IDE channels go, many motherboards these days have about 4 IDE channels (mine does, and it's not even new), and 4 IDE channels can make a good RAID. My Linux RAID 5 (software) is pretty transparent and read speeds are noticeably faster. This is even more true if you put in the EVMS patches from IBM and use the GUI tools to create and manage RAIDs without even editing a config file.
Hardware RAID is marginally better, and not always. For one thing, you are limited to the idea of RAID your board manufacturer believes in, and it's not always what you need. CPU power? On any machine faster than 1 GHz you never even notice; at 2 GHz software RAID is invisible. Yes, software RAID sucks on Windows (due to the stupidest fucking volume/RAID managing service I've ever used), but it's viable almost everywhere else.
Sometimes that extra few hundred dollars is an extra $20k (if you're doing lots of machines), and if you can deal with the CPU hit it's still more economical, as long as it's reliable. Solaris/Linux RAID is ready for prime time; W2K's is still trying to figure it out. (For Windows boxes, please get hardware and save yourself the headache... thanks!)
Re:Why bother with software RAID? (Score:3, Informative)
Isn't that just 4 IDE plugs, but really only 2 IDE channels? RAID embedded in your motherboard is usually of the Promise variety, and cheap hardware RAID isn't much better than software RAID. Tom's Hardware has an informative article on the difference between hardware and software RAID, and they report that this is the case. [tomshardware.com]
Re:Why bother with software RAID? (Score:2)
Re:Why bother with software RAID? (Score:2, Funny)
That's striping. Why am I even bothering posting this? Maybe if my class wasn't cancelled, you wouldn't have to read such a worthless post.
incripshin
Re:Why bother with software RAID? (Score:2)
Re:Why bother with software RAID? (Score:4, Informative)
So, you say it sucks, I say it's fine. You say toe-mott-oh, I say toe-mate-oh. Hardware RAID is more than just a few $; it costs hundred(s) more than software RAID controllers. I've had software controllers that outperformed the high-end SCSI drives of the time, and I can attest that CPU load was a non-issue. Performance was excellent, and it was the most inexpensive way to gain speed. It's ideal for home users who don't want to spend a fortune on limiting the swapfile chug.
So, please define "sucks". Enlighten us softRAID users on what the problem is. Or is the problem really that you've spent your fortune on some overpriced SCSI drives that get outperformed by a couple of ATA100s?
Re:Why bother with software RAID? (Score:2)
If you just wanted to match the benchmark numbers of SCSI drives using software RAID, you might have come close. But in a real-world application, where your CPUs have to do more than just run the software RAID array, you'll find you've hurt yourself hard.
I don't spend a fortune on SCSI drives; I feel $250 is not bad to pay for a 15,000 RPM, 8 MB cache, 36 GB Cheetah. $250 for the fastest single drive on the planet? Sounds like a bargain to me. Now, let me put two of those drives on a $200 RAID card from Adaptec and run that against your ATA100 7,200 RPM 2 MB cache IDEs running on a serial interface. OUCH! "Sucks"
Re:Why bother with software RAID? (Score:2)
If I were running a P200, then yeah, I could see some sense in your argument. But modern 1 GHz+ CPUs don't even dent under the minimal load of processing instructions for ATA RAID controllers. It's no worse than, say, software-controlled DirectSound mixing, or Winmodems.
Re:Why bother with software RAID? (Score:4, Insightful)
Hmm, I get rather good performance from my IDE software RAID-5. As far as I can tell, reading from the buffers pretty much maxes out the PCI bus and I also get good performance for actual platter reads. Here are some quick numbers:
(granted this is not an exhaustive benchmark)
Not spectacular, but certainly more than fast enough for my media server. Also probably better than I could do on a 68-pin Ultra Wide SCSI bus, even with multiple drives.
Re:Why bother with software RAID? (Score:2)
The read speed on a (non-degraded) RAID5 should be identical to the read speed on a RAID0. It's the writes where RAID5 punishes you.
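The write penalty comes from the parity update: a small RAID-5 write must read the old data block and the old parity, then write the new data and new parity, 4 I/Os in all, using the XOR identity new_parity = old_parity ^ old_data ^ new_data. A toy sketch with made-up byte values (the hex constants are arbitrary, just for illustration):

```shell
d0=0xA5                       # old data block on disk 0 (hypothetical)
d1=0x3C                       # data block on disk 1 (hypothetical)
p=$(( d0 ^ d1 ))              # parity block, as initially computed

new_d0=0xFF                   # a small write replaces d0
# Read old d0 and old p, compute new parity, write new_d0 and new_p: 4 I/Os
new_p=$(( p ^ d0 ^ new_d0 ))

# Sanity check: same result as recomputing parity from scratch
[ "$new_p" -eq $(( new_d0 ^ d1 )) ] && echo "parity update consistent"
```

Reads, by contrast, touch only the data disks (parity is skipped unless the array is degraded), which is why non-degraded RAID-5 reads look like RAID-0.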
Linux software RAID rocks. (Score:2, Informative)
And I've found the RAID 5 overhead is nominal, and very reliable.
Re:Why bother with software RAID? (Score:4, Informative)
(From the Software-RAID HOWTO)
4.7 The Persistent Superblock
Back in ``The Good Old Days'' (TM), the raidtools would read your /etc/raidtab file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab resided was mounted. This is unfortunate if you want to boot on a RAID.
Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.
The persistent superblocks solve these problems. When an array is initialized with the persistent-superblock option in the /etc/raidtab file, a special superblock is written in the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.
You should however still maintain a consistent /etc/raidtab file, since you may need this file for later reconstruction of the array.
The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the Autodetection section.
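For reference, the raidtab stanza the HOWTO is talking about looks roughly like this (a sketch; the devices and chunk size are assumptions, adapt to your setup):

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
```

With persistent-superblock set to 1, the same configuration is also written onto the member disks themselves, which is what makes boot-time autodetection possible.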
Re:Why bother with software RAID? (Score:3, Interesting)
2/ Problem is worse with hardware RAID, because if I lose the card, I'm fucked. I either have to have spares, or wait on a controller. Never mind what happens if the manufacturer goes out of business.
Re:Why bother with software RAID? (Score:2)
or, put more simply: two IDE drives striped on the same channel perform no better than two individual IDE drives joined as a single spanned volume.
Re:The average power user... (Score:3, Interesting)
Well, maybe for the average power user, but not the real power users. Pretty much every stock exchange, airline reservation system, and credit card switching system in the world uses mirroring and striping. Operating systems such as HP's NonStop Kernel (from Tandem) and IBM's Transaction Processing Facility (TPF) work this way and run these mission-critical systems.
Why? I/O throughput, plus redundancy for applications that can't afford to fail. The disks aren't expensive compared to the rest of the system, and they're even less expensive than the downtime.
These aren't Linux systems, but as Linux scales up there will be times when it necessarily borrows from mainframe-class systems.
Re:The average power user... (Score:5, Informative)
RAID 0 is pointless - gosh, I wish all the video editing studios out there knew this. They've been duped into believing 150 megs a second sustained has value. What morons.
Too bad cheap RAID5 cards don't exist. - Hmm, you mean like the Promise SX4000 [googlegear.com] that costs $150?
I'll call bullshit on that one (Score:2, Insightful)
Ever ripped 500 CDs to MP3 format?
Ever done it twice?
I have, and never will again if I can help it...go RAID 1 go!
Re:The average power user... (Score:2)
1) He already stated this fact. 40% I/O throughput increase. (actually quite a large variability, but it's a usable number)
2) Read the subject line! He said, "THE AVERAGE POWER USER..." Now I read that as meaning the home Linux geek/developer who likes messing with the guts of their system. Companies use RAID0 all the time, or more often RAID1+0. RAID5 is equally common, implemented in hardware. This is not what he's talking about. This is not the target of his comment.
Sigh. Sorry to rant, but every follow-up to this article has neglected this point.