
Tweaking Solid State Drive Performance On Linux 33

Posted by timothy
from the like-a-movie-with-no-moving-parts dept.
perlow writes "While Solid State Drives are expensive and shouldn't be used exclusively for primary storage, they perform exceedingly well for things like MySQL databases, provided you tweak your kernel, BIOS, and filesystems accordingly. Here are a few tips to get excellent performance out of your new $500-$900 investment on a Linux system."
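The summary doesn't quote the article's actual tweaks, so here is a minimal sketch of the kind of tuning it describes. The mount options and sysfs path below are common SSD-tuning conventions, not the article's own recommendations, and the helper names are mine.

```python
# Illustrative sketch only: the article's specific tweaks aren't quoted in
# the summary, so these options and paths are common conventions, not its
# recommendations.

def ssd_fstab_line(device, mountpoint, fstype="ext2"):
    """Build an /etc/fstab entry with SSD-friendly options.

    noatime/nodiratime skip the metadata write that normally accompanies
    every read, which matters on flash where write cycles are the scarce
    resource.
    """
    opts = "noatime,nodiratime"
    return f"{device}  {mountpoint}  {fstype}  {opts}  0 2"

def scheduler_path(disk="sda"):
    """sysfs file that selects the I/O scheduler for a disk; 'noop' is
    often suggested for SSDs since there are no seeks worth reordering."""
    return f"/sys/block/{disk}/queue/scheduler"

print(ssd_fstab_line("/dev/sda1", "/var/lib/mysql"))
print(scheduler_path())
```

Actually applying these requires root (editing /etc/fstab, echoing into the sysfs file); the sketch only builds the strings.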
This discussion has been archived. No new comments can be posted.


  • EeePC (Score:2, Interesting)

    by lukrop (1302325)
    I'm interested in how useful this could be for my EeePC, since it only uses SSDs. Let's see.
    • by lukrop (1302325)
      Ahhh, solid state drives... :O Damn, I should go to bed... But I like those tiny drives for backing up data. Thanks for the guide anyway.
  • SSD (Score:3, Interesting)

    by jrwr00 (1035020) <jrwr00@@@gmail...com> on Saturday July 26, 2008 @03:28PM (#24350167) Homepage

    SSDs are nice and all, but a major MySQL DB would eat a flash disk alive with all those writes. Even if the drive supports 10 billion writes, a big MySQL DB could eat that in a year. You will still need a shit ton of RAM cache for the main DB so all the minor writes don't kill the disk, and then there's the risk of data loss on power failure: all writes stored in memory will be lost. HDs are slow, yes, but SSD is just not there yet.

    • Depends on the DB. For instance, an SSD makes sense if you are running a DB that has lots of reads but comparatively few writes, such as an online store (where you only update your catalog when you have new prices/stuff to sell, but tons of people look at it). But for those types of DBs you would probably be better off using lots of cache....
    • Does MySQL do journaling? (I'll guess only BDB and InnoDB do).
    • Does MySQL allow one to store the indices in a separate file/drive from the rest of the database entries? That would speed things up significantly without needing a large SSD drive.

    • by jgoemat (565882)

      Do the math. Most flash media are good for at least 100,000 writes per block. They also use wear-leveling algorithms so that wear averages out to about the same across all blocks. Even if you try to write to a certain block over and over, the algorithm takes over and moves data that is rarely rewritten into the worn locations, so every block ends up being written to about equally.
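      The leveling idea above can be sketched in a few lines. This is my own toy illustration, not any real controller's algorithm, and it omits the relocation of static data the comment mentions: it just sends every write to the least-worn physical block.

```python
# Toy wear-leveling sketch (illustration only, not a real controller's
# algorithm): every logical write is directed to the physical block with
# the fewest erases, so wear spreads evenly even if the host hammers a
# single logical address. Relocation of static data is omitted.

class ToyWearLeveler:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks
        self.mapping = {}  # logical block -> current physical block

    def write(self, logical_block):
        # Pick the least-worn physical block for this write.
        phys = min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)
        self.erase_counts[phys] += 1
        self.mapping[logical_block] = phys
        return phys

fl = ToyWearLeveler(4)
for _ in range(100):
    fl.write(0)          # hammer one logical block 100 times
print(fl.erase_counts)   # wear is spread evenly: [25, 25, 25, 25]
```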

      With that in mind, let's take the 64 GB model for $899 as an example. Let's say you have a huge workload and are writing at the max [crucial.com] of 35MB/sec. At 100
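      The comment is cut off, but the arithmetic it starts can be carried through under its own stated assumptions (64 GB drive, 100,000 erase cycles per block, perfect wear leveling, continuous writing at the 35 MB/s maximum):

```python
# Carrying the cut-off endurance math through. Assumptions are the
# comment's own: 64 GB capacity, 100,000 cycles per block, ideal wear
# leveling, nonstop writing at the 35 MB/s maximum.

capacity_mb = 64 * 1024                   # 65,536 MB
cycles = 100_000
total_writable_mb = capacity_mb * cycles  # ~6.55e9 MB before wear-out
seconds = total_writable_mb / 35          # at 35 MB/s sustained
years = seconds / (365 * 24 * 3600)
print(round(years, 1))                    # -> 5.9
```

So even the worst case — writing flat-out around the clock — takes roughly six years to exhaust the rated cycles, which supports the "do the math" point.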

  • to be exact
  • by karnal (22275) on Saturday July 26, 2008 @03:39PM (#24350265)

    I've just purchased a 4 GB Transcend 300x card (UDMA5) and an Addonics IDE-to-CF adapter. This article comes along at a great time, since scouring the net for information on "flash linux" usually turns up articles about USB installations, which are about 2x (or more, depending on the drive used) slower than this solution.

    The only thing that this PC runs is minor browsing when looking up a car repair, or Amarok (with a MySQL backend.) Regarding the mysql backend, it doesn't seem like there will be enough action on the disk to even begin to worry about the flash failing.

    In addition, the Transcend I purchased (like most high end == costly cards) is an SLC device, which I've read is faster and can sustain more writes over the lifetime of the drive. I see this as worthwhile in a system such as this, as I want it to be an "install and forget" type of system. Using a spinning platter in variable humidity and temperature conditions just doesn't seem to make sense - especially when most of the media used would be on a server in my basement.

    I'm also looking into network booting; however the PC that I've slated for use in the garage would probably require a bootable floppy/cd since it's old enough to not be bootable from an installed NIC. Maybe I just need to add a newer NIC - but I've not spent a lot of time researching that.

    If anyone has other articles I could enhance my knowledge with - and possibly doing something similar to my setup - I'd love to read them.

    • Did the same to my main house server more than a year ago. My regular disk drives sleep, while the CF handles the bulk of the regular load. I have noticed that the system is MUCH cooler and quieter. In fact, for my Shuttle shoebox system, I am about to take out the 80G HDD and put in a 32G CF and figure that it will not only be faster, but also cheaper on electricity and quieter (the fan HUMS when I am using it; that HDD adds heat).
  • Wait, what? (Score:5, Insightful)

    by Briareos (21163) * on Saturday July 26, 2008 @03:42PM (#24350279)

    He recommends disabling journalling and using RAID instead?

    So exactly how will a RAID make sure the filesystem metadata is still intact when I yank out the power cable for fun and no profit, as opposed to using a filesystem with a journal?

    Sheesh... that's just begging for an accident to happen.

    np: Yello - You Gotta Say Yes To Another Excess (Orb Goes The Weasel Mix) (Auntie Aubrey's Excursions Beyond The Call Of Duty (Disc 2))

  • Partitioning (Score:5, Insightful)

    by mickwd (196449) on Saturday July 26, 2008 @03:43PM (#24350289)

    I'm surprised I've heard very little about using Unix/Linux partitioning to get the best out of SSDs.

    Seems to me that the best use of an SSD on a normal system is to buy a smallish one (say 16GB) and use it for the read-mainly partitions: say /usr, /opt, maybe /lib.

    It would be good to get users' "dot" files in there too. Maybe create a /homedot on the SSD and symlink /home/myname/.example to /homedot/myname/.example.

    Even if this doesn't make your applications run much faster, the faster read and seek times are going to make the machine boot faster, load applications faster (especially including the desktop environments, if user directories like .kde and .gnome are on SSD) and compile code faster (with /usr/include, etc on SSD).
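    The /homedot scheme above can be sketched as a small helper. The paths follow the commenter's example; the function itself is my illustration, and a real setup would just use `mv` and `ln -s`.

```python
# Sketch of the /homedot idea: move a dot file onto the SSD-backed tree
# and leave a symlink behind in $HOME. The helper is illustrative; the
# path layout follows the comment's /homedot example.
import os
import shutil
import tempfile

def move_dotfile_to_ssd(home, homedot, user, name):
    """Move home/user/name to homedot/user/name and symlink it back."""
    src = os.path.join(home, user, name)
    dst_dir = os.path.join(homedot, user)
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, name)
    shutil.move(src, dst)
    os.symlink(dst, src)  # /home/myname/.example -> /homedot/myname/.example
    return src, dst

# Demo in a throwaway directory so nothing real is touched.
base = tempfile.mkdtemp()
home, homedot = os.path.join(base, "home"), os.path.join(base, "homedot")
os.makedirs(os.path.join(home, "myname"))
with open(os.path.join(home, "myname", ".example"), "w") as f:
    f.write("settings")
src, dst = move_dotfile_to_ssd(home, homedot, "myname", ".example")
print(os.path.islink(src))  # True: reads now follow the link onto the SSD
```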

    • by Xtravar (725372)

      Not to mention, partitioning is a good idea for any disk. It prevents fragmentation and data loss (if one partition gets corrupted). Also, it makes reinstalling an OS easier and booting faster. Well, if you partition correctly...

      Partitioning: One reason why I don't use Windows anymore. Linux handles it much better.

      • Re: (Score:3, Informative)

        by imroy (755)

        ...partitioning is a good idea for any disk. It prevents fragmentation...

        How do you figure that? Modern Unix filesystems actively avoid fragmentation by being careful about allocating blocks to files. With more free space, a filesystem has more options as to where to place a new file. It's better to have one big filesystem with 10G free, than four or five with only about 2-3G free each.

        • by Xtravar (725372)

          Not only fragmentation of files, but fragmentation of related files.

          Let's say I'm installing Firefox to /usr/lib.

          I don't want Firefox's files spread out across my entire physical disk and interspersed with /tmp, /var, /etc, because then Firefox will load slower as the hard drive must spin more to load related files.

          Yes, I know that isn't a concern with SSDs, but it is a concern with standard hard disks.

          No file system is smart enough to read your mind and know how your data is going to be used. Only you can o

      • Partitioning is more trouble than it's worth. I've found myself with full partitions on too many occasions over the years.

        It's not always easy to be smart about it. One can't always anticipate their future needs.

        The only partitioning that makes any sense at all is that which is used in Linux with LVM (at least as far as a home user is concerned). If I'm running Linux and not using LVM I have just one partition mounted on "/".

        Just the other day I decided to merge the two partitions I have on one of my drive

        • by Henkc (991475)
          Couldn't agree more about partitions. I've run out of space on /usr, /opt, /var, etc. several times over the last decade on various systems. There is just no foolproof way to predict the future. I also don't use LVM - it's just another layer of indirection, adding complication if your FS becomes corrupted. I've had too many problems with LVM, and no, not from power outages or resets.
      • Re: (Score:2, Insightful)

        by Siffy (929793)
        Fragmentation is of little concern with SSDs. The time needed to read out-of-order bits on rotating media is much higher, as we know, but reading bits out of order in memory or on an SSD takes almost the same time as reading the bit beside them, due to the low seek times. Also, partitioning schemes that try to keep the most often accessed files at the beginning of the media lose their edge for the same reason. An SSD has the same performance characteristics from the beginning to the end of the drive, from 1% to 99% us
    • by DarkOx (621550)

      I don't think that would work very well. Symlinks are stored in most file systems as "special" files. You would have to seek and read the link first. Since dot files are usually pretty small, i.e. they would fit in one block or a few consecutive blocks, any time at all spent accessing the SSD will be a loss, because you will have done all the expensive part of the operation on the traditional disk. Now it's possible some filesystems that keep more data about a link in their internal structures would end up having

  • s but the article then advocates some dangerous behavior. Write-back caching only works if you don't think you will have a power failure, and as we saw on /. previously, that can be a real disaster. He also advocates forgoing journaling in favor of RAID, but again that can be dangerous if your machine somehow gets into a weird state. Not sure I would trust mission-critical data without journaling or write-through unless that data was backed up somewhere else.
    • by Cato (8296)

      If you have a power failure during a write to an SSD, you are very dependent on the FTL (Flash Translation Layer) between the FS and the device: if it does its job properly it can recover from this, detecting blocks that are invalid because they were partially written. If not, the whole device can be unrecoverable... This is one reason why using an SSD in a laptop (i.e. with battery) or a server with UPS is a good idea.

      Having looked at the very long lifetimes of most flash devices (see http://www.storages [storagesearch.com]
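      The partial-write detection described above can be illustrated with a per-block checksum. This is my toy sketch, not how any real FTL works (real controllers are far more involved): the point is only that an interrupted write leaves a block whose stored checksum no longer matches its contents, so it can be spotted and discarded on recovery.

```python
# Toy illustration of detecting a partially written block via a stored
# checksum. Sketch only; real FTLs are far more involved than this.
import zlib

def pack_block(payload: bytes) -> bytes:
    """Prefix the payload with its CRC32, as a stand-in for per-block
    metadata an FTL might keep."""
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def block_is_valid(raw: bytes) -> bool:
    """On recovery: a block torn by power loss fails the checksum."""
    stored = int.from_bytes(raw[:4], "big")
    return stored == zlib.crc32(raw[4:])

good = pack_block(b"committed data")
torn = good[:-3] + b"???"  # simulate a write interrupted partway through
print(block_is_valid(good), block_is_valid(torn))  # True False
```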

  • by damg (906401) on Saturday July 26, 2008 @04:26PM (#24350693)
    Here are some SSD benchmarks for MySQL [bigdbahead.com], with the conclusion that "with the relatively low cost of the technology, you could net 10X+ performance increase on your database servers for under $2000."
  • Buy a drive intelligently. Maybe they should check out these drives. http://www.newegg.com/Product/Product.aspx?Item=N82E16820227344 [newegg.com] http://www.hothardware.com/News/OCZ_Core_Series_SSD_Vs_VelociRaptor_Sneak_Peek/ [hothardware.com] The 32GB version, which would be suitable for a Linux boot drive, is going to have a $140-160 price point, I believe. I'm considering a 64GB for my Vista laptop to reduce the heat it generates.

