Linux HW and SW RAID Benchmarked 226

An anonymous reader writes "A Norwegian site has written up an article with various RAID solutions benchmarked using both bonnie++ and dbench. The result shows a lot of surprises, especially when comparing low end sw RAID with high end hw RAID. The text is in Norwegian but the numerous graphs are self explanatory. It does look like a few kernel drivers need a little tweaking."
This discussion has been archived. No new comments can be posted.

  • by suso ( 153703 ) * on Sunday May 22, 2005 @12:47PM (#12605384) Journal
    Surprising? No. Reading the results I can see that software RAID is generally slower than hardware RAID, and that some of the SCSI drivers are not completely tweaked, probably because they can't get enough information from the manufacturer.
    • by Frumious Wombat ( 845680 ) on Sunday May 22, 2005 @01:21PM (#12605602)
      Drilling through the article with my utterly minimal Norwegian (Prairie Home Companion + German + exposure to Danish coworkers), I think I've distilled the following:

      Cache on the LSI RAID controller is 1/2 that of the Adaptec. Performance is comparable, though not equivalent.

      All of the controllers are 64-bit.

      Adaptec SCSI is good for both hardware RAID and software RAID.

      LSI has good hardware SCSI RAID only.

      Don't use current SATA controllers (RAID or otherwise) for best performance.

      Does anybody with access to a good collection of modern hardware care to re-run this test in a language that Babelfish understands?
      • "Drilling through the article with my utterly minimal Norwegian (Prairie Home Companion + German + exposure to Danish coworkers), I think I've distilled the following"

        I'm sure there was something in there about lutefisk being particularly delicious. Though I always thought it was an ancient Viking recipe for cleaning dried blood from weapons and armor...
    • Ok, this comment is uninformed, as I count myself among those unable to read the article. I would also consider myself a RAID amateur.

      I ran some benchmarks a while ago for my own server with four 15k SCSI drives software-RAID5'd on a dual-channel aic7xxx card against an Adaptec hardware RAID controller with write cache and 128 MB of RAM. Though the hardware did take load off of the CPU, read/write performance was much better with the software RAID setup, and since the machine was SMP, the RAID overhead wasn't noticeable.

      • by Anonymous Coward
        Yes.

        First off, most SATA controllers are NOT hardware RAID, although they advertise hardware RAID options.

        This is called BIOS RAID, and it essentially uses software drivers in a similar fashion to the way Winmodems (software modems) use software in their drivers to emulate hardware.

        Dedicated hardware RAID devices are much more expensive, up in the hundreds of dollars for the controller. These devices use an embedded-style CPU running at around 200-400 MHz that is specially designed for this kind of work.

        For Linux MD,
      • Try disabling the cache on the RAID card; it will make writing a lot of files faster. Cache is good if you are writing less than the cache size to the disk on a regular basis, like small text files, but if you are working with CAD files or large Photoshop files then the cache is really just a hindrance, as the card fills the cache and then starts writing to disk. Disabling the cache also saves you from needing a battery backup on the RAID controller. So try disabling the cache, run the test again, and compare the difference. I
    • No they are not!
      SiI 3114 did really well and it is cheap.

      Never mind all these other posts claiming that SCSI beats the crap out of everything else, it does not!

      SCSI is bloody expensive and only marginally faster in these benchmarks. Now, unless fast disk access is the only way to improve your system's performance, you are probably better off using the SiI 3114 and having many more of those.

      Now that does not cover issues like hotswap support, noise, MTBF, etc...

      But still it was an interesting read (albeit

      • I'm running two U160 disks from 1999 on this computer, as well as a Raptor on a SiI 3114. The U160 disks beat the crap out of the Raptor in real situations, such as handling around 10k different text files, or working with a 45MB image, or a large video stream.

        The only situations where the Raptor wins is in benchmark programs, and only if write-caching is enabled. Disable the write-cache, and the Raptor's performance sinks like a stone.
    • I'm not familiar with the benchmark software, but do they stress the CPU to any degree?

      In the Real World, CPUs with RAID storage usually don't just sit there spinning the platters. They're running high-volume SQL database servers and application servers. These things have a habit of hammering the CPU.

      When the CPU is otherwise occupied, you'd think "software RAID" would take a big hit. Was this situation tested in these benchmarks?
  • Norwegian (Score:5, Funny)

    by bcmm ( 768152 ) on Sunday May 22, 2005 @12:47PM (#12605385)
    Anyone know an internet translator that supports Norwegian? Or even a Norwegian? It would be nice to have a translation so we don't have to sit around making uninformed comments about what we can't understand...

    Oh, wait...
    • by Andreas(R) ( 448328 ) on Sunday May 22, 2005 @12:55PM (#12605439) Homepage

      Jeg skal ikke gå så langt som å si at man burde satse på verken SATA, billige kontrollere eller software-RAID.

      In english; I will not go as far as to recommend SATA, cheap controllers or software-RAID.

      Seriously, is this frontpage news on Slashdot? I'm a native speaker, and the article did not impress me much. In fact, there is nothing newsworthy about the article, and the author admits it in the conclusion. Not very insightful; the article is clearly written by an amateur. In fact, in my opinion, the only reason this was submitted to Slashdot is because hwb.no is a new site, which is desperately trying to get visitors.

      /cynical

    • by wfberg ( 24378 ) on Sunday May 22, 2005 @12:57PM (#12605448)
      Anyone know an internet translator that supports Norwegian? Or even a Norwegian? It would be nice to have a translation so we don't have to sit around making uninformed comments about what we can't understand...

      I think "Jeg vil rette en advarsel til alle dere som skal ut å handle kontrollere etter dette. Sjekk_nøye_om kontrolleren er støttet av kjernen!" ["I want to issue a warning to everyone who is going out to buy a controller after this. Check _carefully_ whether the controller is supported by the kernel!"] speaks for itself.
    • by biglig2 ( 89374 ) on Sunday May 22, 2005 @12:59PM (#12605469) Homepage Journal
      Let me help. Apparently, software RAID is slower than hardware RAID, and Linux SCSI drivers are of variable quality, and also setting a PC on fire degrades its disk performance.
    • by Man In Black ( 11263 ) <`ac.wahs' `ta' `or-ez'> on Sunday May 22, 2005 @01:01PM (#12605481) Homepage
      It would be nice to have a translation so we don't have to sit around making uninformed comments about what we can't understand...

      Somehow, I don't think a translation would keep them away.
    • C'mon, it's quite obvious:

      "Ten thousand XXXXXX slashdot editors
      ran through the weeds,
      chased by vun norvegian"

      :)

      hawk

  • Heh (Score:5, Funny)

    by FlyByPC ( 841016 ) on Sunday May 22, 2005 @12:48PM (#12605392) Homepage
    My bonnie++ was used by Norwegians, To see how fast my RAID could be, My bonnie++ was used by Norwegians, ...but was bonnie++ written in C?
  • Better Link (Score:5, Informative)

    by XanC ( 644172 ) on Sunday May 22, 2005 @12:48PM (#12605399)
    here [www.hwb.no]
  • Damn... (Score:5, Funny)

    by broody ( 171983 ) on Sunday May 22, 2005 @12:49PM (#12605402)
    Damn. I've been a geek too long. After all these years I now understand how my pointy-haired boss feels when attempting to read a technical article.
  • http://www.hwb.no.nyud.net:8090/artikkel/15307/5 [nyud.net]

    The page with the pretty pictures.

  • Translation (Score:3, Informative)

    by FlyByPC ( 841016 ) on Sunday May 22, 2005 @12:52PM (#12605425) Homepage
    http://www.translation-guide.com/free_online_translators.php?from=Norwegian&to=English [translation-guide.com]
    Not that it's really useful. It's a *little* more readable than the original. I think.
  • Time to troll (Score:5, Insightful)

    by Afrosheen ( 42464 ) on Sunday May 22, 2005 @12:54PM (#12605433)
    Ok, who the fuck allowed this submission to go through? A whole 2% of Slashdot readership will probably be able to read this, the rest of us are left in the dark. Are longer bars better, or worse? WTFOMGBBQ?!
  • norse--
    If only I had a "Learn Norse in 30 Days" book to advertise right now, I'd be rich
  • The caption: "Større er bedre" means "Bigger is better" (yeah right).

    The caption: "Mindre er bedre" means "Smaller is better" (even more yeah right)

    Whoever approved this article in "non-English" should be trampled to death by a mob of angry penguins.

  • Norwegian (Score:5, Funny)

    by magarity ( 164372 ) on Sunday May 22, 2005 @01:04PM (#12605505)
    From TFA: To innebygde gigabit-nettverkskort

    That is just the coolest; I am hereby recommending everyone refer to networking as 'nettverkskort'. It might be cold in Norway, but they have some awesome sounding linguistic constructions!

    PS - What the heck is nettverkskort, exactly? 'Networking'? 'Network Adapter'? Heck, I don't know what it is; I just know I like it.
    • PS - What the heck is nettverkskort, exactly?

      Network card.

    • Re:Norwegian (Score:3, Informative)

      by Novus ( 182265 )
      "nettverkskort" = "network card".
    • nettverkskort = network card
    • by EvilMonkeySlayer ( 826044 ) on Sunday May 22, 2005 @01:15PM (#12605563) Journal

      nettverkskort = A badly limping half eaten gorilla.

      Those crazy Norwegians!

    • Really. It looks really difficult when written, but spoken Norwegian is *almost* understandable to English speakers. Much like Geordie in fact.

      • Unless spoken in Norwegian, that is.
        But you are right about the word similarities. This is where you can see some linguistic remnants of the language spoken by the Saxons. The Saxons came from northern Germany/southern Denmark and migrated on a big scale to England.

        As a Dutch speaker you see the same when traveling through Scandinavia. The writing is rife with spelling errors, but you can quickly get the hang of it and read/guess newspaper headlines. That is, until you get to Finland, as Finnish has a complete dif
        • A funny anecdote:

          I was an exchange student in Finland when I got out of high school. I was struggling to learn Finnish, which is unrelated to almost every other European language. The part of Finland I lived in had a significant number of Swedish speakers, and my exchange-student buddies who were learning Swedish were practically fluent already.

          Every day on my way to school I passed an ATM. The sign stuck out from the wall above it. One day I glanced up at it and it made sense -- "Gold mint" -- 'money mint'? I

    • That's nothing. In Finland, one of the most high-tech countries on the planet, they still refer to magazines and newspapers as 'leaves' (lehtiä)!
  • This is not a very good review; they used kernel 2.6.8, and 2.6.11 has many fixes over previous releases with regard to the RAID and md (software RAID) drivers.

    Let's get a review that uses 2.6.11; then let's see where we are.
    • I don't know this to be true, but since the commercial vendors backported
      patches for the 2.4 series, I would suspect that they would backport patches
      for the 2.6 series. If that is indeed the case, then fixes in the 2.6.11
      kernel have probably been incorporated into the 2.6.8 kernels used by
      Red Hat, SUSE, etc.

      So, the question is: did they benchmark using the vanilla 2.6.8 kernel
      or a heavily patched 2.6.8 kernel from one of the commercial distros?
  • involves those little hex drivers, and of course there is always one nut left over....

    What are we talking about?
  • by Nichotin ( 794369 ) on Sunday May 22, 2005 @01:25PM (#12605621)
    Ok, here is a rough translation:

    I have wanted to test some real SATA controllers against SCSI controllers for some time now, to see how good SATA has become. I once thought that cheap controllers like the SiI 3114 were cheap crap that manufacturers put on their boards simply to provide SATA support, and that software RAID was a cheap but insufficient solution, since I have followed the principle that hardware does the job best. "A more expensive controller means more hardware" was my initial guess, but it seems that even the cheap controllers are worthy. Software RAID also performs very well. SATA is no longer a joke for disk systems that are supposed to perform well, and many myths have been dispelled by my test.

    I will not go as far as to say that you should place your bet on cheap controllers or software RAID. The reason is simple: an expensive controller has much more functionality, functionality that a cheap controller can only dream about. Hot-spare drives and hot-swap, just to mention some. I do not want to recommend SATA over SCSI just yet either. The lifespan of a SCSI drive is in most cases many times that of a vanilla SATA disk. When you choose a solution, it should last. If you have machines with a big fat controller and RAID 50, then SATA might be something for you. If you have a machine that needs redundancy on the internal drives, but where changing controllers, or even buying them in the first place, has been in the way, then software RAID might be the solution for you.

    I shall be careful about mocking the LSI controller, as I think there might be a problem with the way the test machine talks to it. I think the new Megaraid driver in the kernel might be the problem. Either it needs to mature, or it simply does not like 64-bit Linux. I have not tampered much with the default settings, but it runs superparanoid verification algorithms when it sends and receives data. I have not flashed the BIOS on any of the controllers.

    Adaptec's controllers do very well. Not everything was perfect with them, and the aacraid driver in the kernel was too old for both of the controllers. From their website, I found something that looked like source code (Adaptec seems to assume 100% RPM-based distros), and I could build my own module. After that, no problem. A small minus is that aacraid does not report how far the controller has gotten in building the array after you have set up a RAID. By looking at the SCSI BIOS after some hours, I got to verify that the array was built.

    I want to warn everyone who is going to buy a controller: carefully check that the controller is supported by the kernel! I use Google to check for references to the card on mailing lists, but that does not help much when you run Debian and all that exists is binary Red Hat drivers.

    Now, run to your console and test your disk system. This test only gives you indications of what to choose. I allow myself one final piece of advice: run the tests yourself.
    • This is one of the worst benchmark articles I have read in a long time.

      What is missing is a systematic analysis of DISK performance in respect to various dimensions:
      1) Pure disk read vs. pure disk write vs. mixed I/O
      2) Sequential I/O vs. random I/O vs. mixed
      3) Block size (512 bytes, 1K, ... 256K), or various requests combined
      4) Does it matter when you have multiple RAID volumes vs. only one? (This matters especially for SW RAID.)
      5) Disk/LUN size

      Also, the performance numbers should include:
      1) The average and m
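      A toy version of dimension 2 above, sequential vs. random reads at a fixed block size, can be sketched in Python. This is only an illustration of the access patterns (real benchmarks like bonnie++ use files much larger than RAM and account for the page cache, which this sketch does not; the file path and sizes here are made up for the example):

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024   # block size under test (dimension 3)
NBLOCKS = 256       # 16 MiB scratch file; real runs use files >> RAM

# Create a scratch file to read back.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(BLOCK * NBLOCKS))

def timed_read(offsets):
    """Read one BLOCK at each offset; return elapsed seconds."""
    with open(path, "rb") as f:
        t0 = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.perf_counter() - t0

seq = [i * BLOCK for i in range(NBLOCKS)]   # sequential access pattern
rnd = random.sample(seq, len(seq))          # same blocks, random order
print("sequential: %.4fs  random: %.4fs" % (timed_read(seq), timed_read(rnd)))
os.unlink(path)
```

      On a real (uncached) disk the random pattern pays a seek per block, which is exactly the cost a proper benchmark matrix would expose per RAID level.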
  • by Anonymous Coward on Sunday May 22, 2005 @01:38PM (#12605692)
    A few years back I was responsible for benchmarking potential RAID solutions for a major computer company. We investigated both software based and hardware based solutions.

    The conclusion we reached; software RAID gave greatly superior performance than the hardware RAID solutions available at the time, but the hardware RAID solutions had better feature sets and usability.

    The superiority in performance that the software raid solutions showed was due to a quirk in what was then state-of-the-art in RAID and systems design.

    Most RAID controllers at that time contained embedded Intel i960 processors running at around 100 MHz, and had caches that topped out in the 128 MB range. Meanwhile, systems contained 2-4 CPUs in the 1.2 GHz range, and 2-8 GB of memory. There was simply no way that the embedded processor and cache on the RAID card could manipulate the data as quickly as the primary system resources could, and the benchmarks showed it.

    The "exception" to this performance was when RAID-5 was used. Because RAID-5 requires computational resources above and beyond simply moving data back and forth in order to calculate parity, the host-based RAID solutions couldn't always keep up.

    It was the fact that RAID-5 required additional computational resources that led fairly directly to the "ROMB" (RAID on motherboard) solutions that some vendors offer today. The ROMB chip is often nothing more than an XOR engine to accelerate parity calculations.

    The major, major, shortcoming we found with software RAID solutions was that they did not work with our customer's software, if that software ran outside of an operating system that had drivers for the solution. With hardware RAID, the physical disks were completely abstracted away, and you could run in any possible environment and still be able to read/write from your RAID volumes.

    All of the above commentary about hardware vs. software performance is meant to apply to a specific point in time. I wouldn't try to extrapolate those results to current technology without rerunning the experiments today.
    • The "exception" to this performance was when RAID-5 was used. Because RAID-5 requires computational resources above and beyond simply moving data back and forth in order to calculate parity, the host-based RAID solutions couldn't always keep up.

      RAID5 isn't slow because of the "computation overhead"; it's slow because of the additional disk seeking. Even a paltry 300 MHz P2 has checksumming speeds near a gigabyte a second.

      Hardware RAID5 may have outperformed software RAID5 in your comparison, but unless y
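      The parity arithmetic being argued about here is just XOR: the parity chunk is the XOR of the data chunks, so any single lost chunk can be rebuilt by XORing the survivors. A minimal Python sketch (a toy illustration of the math only; real md interleaves parity across disks and works on large stripes):

```python
def xor_chunks(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "disks"
parity = xor_chunks(data)            # the parity "disk"

# Simulate losing disk 1 and rebuilding it from the survivors + parity:
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == data[1]
```

      This is why the XOR itself is cheap even on an old CPU; the expensive part of RAID-5 writes is the read-modify-write cycle (extra seeks) needed to keep the parity chunk current.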

  • I know that's tech heresy, but I think RAID isn't cost-effective. Spend the money you would spend on RAID improving your backup and restore solutions.

    Yes, 3 times out of 10 you can hot-swap a failed drive. The other 7 times, the controller itself goes, and 2 out of those 7 times it takes one or both drives with it.

    • The other 7 times, the controller itself goes

      That's why you have redundant controllers and a dual channel architecture.

    • In my experience it's more like 9 out of 10 times that you can swap (or even hot-swap) a failed drive.

      Out of 5 drive failures that I have experienced, only one brought the machine to a halt (I assume the failed drive did something strange to the IDE bus).
      In all other cases the RAID degraded gracefully and the machine could be shut down cleanly to swap in a new drive.

      It *should* be even better with server-grade SCA (hot-swap SCSI) drives, because since these are built for hot-swap they are even less likely to confus
    • I dunno, man (Score:3, Interesting)

      by lorcha ( 464930 )
      I have software RAID5 set up at home for my media server. Only had one disk failure, and the array dropped into degraded mode and I got an email alerting me that I had a disk failure. The next day I swapped in a new disk and rebuilt the array.

      I'm a happy RAID customer.

  • At my previous job I built a number of RAID systems, from hardware SCSI to software ATA to hardware SATA.

    I can't tell how many drives they are using (4?) or what RAID level, but their benchmark results just aren't correct. They should be able to get bonnie++ read benchmarks in the 200 MB/sec range. They're getting in the 8 to 60 MB/sec range. The single-character I/O benchmarks don't make sense either; they should be nearly the same, with CPU usage at 99%. For some reason their disks are running much much
  • by zokum ( 650994 ) on Sunday May 22, 2005 @02:54PM (#12606077) Homepage
    My own comments are inside []-brackets.

    Bolded text:
    What controllers should you look for for a new machine? Do you need one costing 2,000-4,000 NOK (300-500 USD) to maintain uptime and data integrity without losing speed? In this test we will look at some of the options and the results when building a good system.

    Published May 13.
    Most modern motherboards have some form of S-ATA, both desktop and server boards. One of the most common controllers is the Silicon Image 3114. This is a 4-port SATA controller with alleged RAID capabilities.

    Almost all cheap SATA controllers tout RAID capabilities. This is a half-truth. These controllers can, like just about every other controller, be used in a RAID array, but most of the work is done in the OS (in Windows, mostly in the driver). Rumor has it one should look for RAID 5 capabilities if one is looking for a true hardware RAID solution (as in transparent to the rest of the system). Whether this is correct is not known, since GNU/Linux has RAID 5 support in software. The road ahead should be short. [Idiomatic expression; doesn't make much sense in Norwegian either in this context.]

    [2 controllers' pics]

    In this test we will look at different controllers. The aforementioned SiI 3114 is one of those cheap ones with fake hardware RAID. How well does it do compared to a much more expensive SCSI setup? Is SATA so good that expensive SCSI setups are only useful in special cases?

    Thanks to Nextron for a machine, several controllers and other equipment. And also thanks to MPX for the loan of several Adaptec controllers.

    [Next page]

    [I will skip most of the redundant translating] Fire diskport -> four disk ports.
    [The comment about the 1.5 GiB memory is about finding a faulty chip.]
    enhet -> unit
    [long text]
    This pretty server has almost all one needs in its small cabinet. It comes with "speed couplings" [hard to translate] for SATA disks, so the test with the SCSI controllers is done with an external SCSI cabinet and a PSU. The barebone system can be delivered with SCSI if needed, or one can add this oneself.

    With its six angry [slightly different connotations in Norwegian] tiny little fans, I would not recommend being in the same room. Noisy like a small machine room [as in, say, a boat].
    [Next page] David. This chip has several Goliaths to fight.
    * SiI 3114
    On most controllers one sees this one or its little brother, the 3112, often used as an interface to the disks. A simple controller with no RAID capabilities in hardware.
    * MegaRAID 150-4
    This one has two 3112 chips for the SATA part, 64 MiB of ECC cache and an Intel processor. It is not low profile but has a nice space-saving design. Supports RAID 0, 1, 5 and 10.
    * MegaRAID 320-1
    Low-profile SCSI, internal and external connector. Has the GC08302 processor. Supports RAID 0, 1, 10, 5, 50.
    * Adaptec 2130SLP
    Low profile. Internal and external connector. Has a staggering 128 MiB of DDR cache. Supports RAID 0, 1, 10, 5, 50 and JBOD.
    * Adaptec 2410SA
    Low-profile SATA with two 3112 chips for SATA support; comes with 64 MiB of cache. Supports RAID 0, 1, 5, 10 and JBOD.

    [Rant about "true" RAID and level 0 and JBOD with link to a guide.]

    The different controllers have support for various functions. LSI controllers tout their "on the fly" changes to the array, changing of RAID type without losing data, and similar. Adaptec focuses on SNMP and a lot of the same as LSI. What one needs is up to the reader. The four "external" controllers come with various cables, manuals and CDs.

    [Next page] During the test we used 50 GB partitions. The SATA disks were almost 3.5 times as big as the SCSI ones, and in this test the file system etc. could change the results due to the different physical sizes. It's not really possible to compare them directly, since the disks are quite different; we're looking for patterns in how the configs behave, not only whether SATA can compare with SCSI.

    For the test we used bonnie++ and dbench. [links]

    Bonnie++ was us
  • 2.4.27 still provides better md performance than 2.6.9, says Neil; not sure if this got fixed in .11.

    http://cgi.cse.unsw.edu.au/~neilb/ [unsw.edu.au]
  • I use md so I can span PCI buses with multiple controllers to get better performance than a single HW raid card. Also, when my controller goes south I don't have to get the same controller. If I was really desperate I could use the onboard. I can upgrade my controller without backing up and restoring the array. I could get a SATA-II controller and slowly move my drives to SATA-II. I feel like I get more control with mdadm, too. At least I can inspect, alter or wipe the drive metadata while I'm up in the

    • With a decent (133 MHz, 64-bit) PCI bus providing 600-800 MB/sec bandwidth, it's going to take a good number of fast drives in RAID 0 before you even come close to filling one bus. With any other RAID level, you won't even have to worry about filling that bus.

      steve
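      The bus figure above is easy to sanity-check: a 64-bit bus at 133 MHz moves 8 bytes per clock, so the theoretical peak is a bit over 1 GB/s, with the 600-800 MB/s the parent quotes being realistic after protocol overhead. A quick back-of-the-envelope calculation:

```python
# Theoretical peak of a 64-bit, 133 MHz PCI-X bus.
clock_hz = 133_000_000
bus_width_bytes = 8  # 64 bits per transfer

peak_bytes_per_sec = clock_hz * bus_width_bytes
peak_mb_per_sec = peak_bytes_per_sec / 1_000_000

# 1064.0 MB/s theoretical; arbitration and protocol overhead
# bring sustained throughput down to roughly 600-800 MB/s.
print(peak_mb_per_sec)
```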
