Optimizing Linux Systems For Solid State Disks

tytso writes "I've recently started exploring ways of configuring Solid State Disks (SSDs) so they work most efficiently in Linux. In particular, Intel's new 80GB X25-M, which has fallen to a street price of around $400 and thus within my toy budget. It turns out that the Linux storage stack isn't set up well to align partitions and filesystems for use with SSDs, RAID systems, and 4k-sector disks. There is also some interesting configuration and tuning we need to do to avoid potential fragmentation problems with the current generation of Intel SSDs. I've figured out ways of addressing some of these issues, but it's clear that more work is needed to make this easy for mere mortals to efficiently use next-generation storage devices with Linux."
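For those who want to experiment along with the article, the kind of alignment being discussed looks roughly like the sketch below; /dev/sdb is an example device, and the exact filesystem parameters depend on the drive's erase-block size.

    # Start the first partition at 1 MiB so it lines up with both 4 KiB
    # sectors and typical SSD erase-block sizes (example device, destructive).
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary ext4 1MiB 100%
    # Use explicit 4 KiB filesystem blocks; erase-block-aware stride/stripe
    # hints can also be passed to mke2fs -E once the geometry is known.
    mkfs.ext4 -b 4096 /dev/sdb1
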
  • by wjh31 ( 1372867 ) on Saturday February 21, 2009 @11:27AM (#26940921) Homepage
    I think the bigger challenge will be in getting mere mortals to have a $400 toy budget to afford the SSD
  • by KibibyteBrain ( 1455987 ) on Saturday February 21, 2009 @11:36AM (#26940973)
    Well, they will obviously come down in price eventually. The real issue won't be affordability so much as value: do most consumers really want what averages out to a slightly faster drive, or would they rather have an order of magnitude or two more storage? There have always been fast-drive solutions in the past; they have never been very popular, and they quickly become obsolete. Eventually some sort of SSD will take over the market, but I don't believe this sort of compromised-experience business model will sell them, unless cloud storage and internet everywhere become mainstream fast.
  • by von_rick ( 944421 ) on Saturday February 21, 2009 @11:44AM (#26941031) Homepage

    From economics, let's turn our attention to optimizing this toy of ours. The thing with SSDs is that they don't have a read/write head to worry about. This means that no matter where the data is stored on the device, all we need to do is specify the fetch location and the logic circuits select that block to extract the data from the desired location. From what I've heard, SSDs also use an algorithm to assign different blocks for storing data so that the memory cells in any single location aren't overused.

  • by jensend ( 71114 ) on Saturday February 21, 2009 @12:07PM (#26941187)

    SSDs keep gaining more sophisticated controllers that do more and more to make the SSD seem like an ordinary hard drive, but at the end of the day the differences are great enough that they can't all be plastered over that way (the fragmentation and long-term-use problems the linked story describes are a good example). I know that, at present, making these things run on a regular hard drive interface and tolerate being used with a regular FS is important for Windows compatibility (this could and should be fixed), but it seems like a lot of cost could be avoided and a lot of performance gained by having a more direct flash interface and using flash-specific filesystems like UBIFS, YAFFS2, or LogFS. I have to wonder why vendors aren't pursuing that path.
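    A rough sketch of what that more direct path looks like today on hardware that exposes raw flash, assuming the chip shows up as an MTD device (/dev/mtd0) rather than hiding behind a SATA controller; device and volume names are examples:

        ubiattach /dev/ubi_ctrl -m 0            # attach the UBI layer to MTD device 0
        ubimkvol /dev/ubi0 -N data -m           # create a volume using all available space
        mkdir -p /mnt/flash
        mount -t ubifs ubi0:data /mnt/flash     # UBIFS sits on UBI, which handles wear leveling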

  • by Antique Geekmeister ( 740220 ) on Saturday February 21, 2009 @12:08PM (#26941193)
    Such tools already exist. Even the venerable "dd if=/dev/zero of=/dev/sda" is extremely effective at wiping a drive well beyond the ability of any but the most well-equipped recovery services, and it's a lot faster than the "overwrite with zeroes, then ones, then 101010..., then 010101..., then random data" approach used by some people with too much time on their hands and too much paranoia for casual data.
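    A slightly friendlier variant of the same command, for anyone following along (the device name is an example; double-check it, since this is destructive):

        # Overwrite the entire device with zeroes; a 1 MiB block size keeps
        # the writes large enough to run at full drive speed.
        dd if=/dev/zero of=/dev/sdX bs=1M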
  • by NekoXP ( 67564 ) on Saturday February 21, 2009 @12:29PM (#26941339) Homepage

    Yeah, hard disk manufacturers.

    Since they moved to large disks, which require LBA, they've been fudging the CHS values returned by the drive to keep the maximum size visible to legacy operating systems. Since when did a disk have 63 heads? Never. It doesn't even make sense anymore when most hard disks are single-platter (and therefore have one or two heads) and SSDs don't have heads at all.

    What they need to do is define a new command structure for accurately reporting the best layout for the device: on an SSD this would report the erase block size, on a hard disk how many sectors are in a cylinder, without fucking around with some legacy value designed in the 1980s.
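    Newer kernels do export the drive's real I/O topology, which is about as close as Linux currently gets to what the parent is asking for; assuming a device at /dev/sda:

        cat /sys/block/sda/queue/logical_block_size   # what the interface reports
        cat /sys/block/sda/queue/physical_block_size  # the real underlying sector size
        cat /sys/block/sda/queue/minimum_io_size      # smallest I/O without a penalty
        cat /sys/block/sda/queue/optimal_io_size      # preferred I/O size; 0 if not reported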

  • by spineboy ( 22918 ) on Saturday February 21, 2009 @01:54PM (#26942001) Journal

    Why not functionally group files to decrease or eliminate fragmentation? Or maybe this is already done.
    For example, I have a large collection of MP3 files. They essentially do not change: I don't edit them, and I rarely erase them. The file system could look at the type of file (mp3 vs. doc) and place it accordingly. It could also look at when a file was last changed and place it in a certain area. Older, unchanged files would go into a tightly packed area that stays optimized and unfragmented.

  • by Anonymous Coward on Saturday February 21, 2009 @02:16PM (#26942159)

    Well maybe you should check who the story submitter is.
    If he doesn't "have the time to optimize it", we're in deep trouble :-)

  • by Britz ( 170620 ) on Saturday February 21, 2009 @03:19PM (#26942647)

    I purchased an X300 ThinkPad for the company this week and took a close look at it. I thought expensive business notebooks came without crapware, and I was sure the X300 would be optimized. But it had defrag runs scheduled! I always thought defragging was a no-no for SSDs. Now I'm not sure anymore. I uninstalled it first, but who knows?

  • by ggendel ( 1061214 ) on Saturday February 21, 2009 @04:41PM (#26943347)

    "Although the technology it is used in is repugnant, NTFS has always been the One True Filesystem."

    I thought ZFS was.

    And ZFS has native support for using an SSD as L2ARC: http://www.c0t0d0s0.org/media/presentations/ssd.pdf I have nothing but praise for ZFS: simple to manage, reliable, fast. With the native CIFS server instead of the user-space Samba, I've seen order-of-magnitude performance improvements from Windows machines doing networked file access. Gary
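    For reference, hooking an SSD in as L2ARC is a one-liner; the pool and device names here are examples:

        # Add the SSD as a level-2 ARC read cache for the pool "tank".
        zpool add tank cache c1t2d0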

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday February 21, 2009 @05:09PM (#26943631) Homepage Journal

    The modern hot-shit high-speed CF cards have wear leveling and do UDMA transfers; get a CF-to-ATA adapter, not CF-to-USB, and they will outperform most hard disks.

  • by DJRumpy ( 1345787 ) on Saturday February 21, 2009 @06:15PM (#26944161)
    The TFA would disagree with you: it states that write performance does indeed drop, sometimes by half or more, due to the wear-leveling and write-combining techniques used. You're talking about read access times, whereas we're talking about write/erase times.
  • by gillbates ( 106458 ) on Saturday February 21, 2009 @07:16PM (#26944587) Homepage Journal

    "Why is the Linux block subsystem still stuck in the 20MB hard-disk era like this?"

    As someone who has had to tune hard drive performance at the kernel level, I can say with some authority that the Linux block subsystem is not at all stuck in the 20MB hard-disk era. Everything is logical blocks these days, and it's the filesystem driver and the I/O schedulers that determine the write sequences (the example below shows the userspace knob for the latter). The block layer is largely "dumb" in this regard and treats every block device as nothing more than a large array of blocks. A properly designed wear-leveling filesystem has no dependencies on the underlying hardware with one exception: block size. But seeing as every Linux filesystem since ext2 has had the option of being created with different block sizes, I doubt this is, or ever will be, an issue.

    The only real issue with wear-leveling filesystems is that they don't work well with conventional hard disks, largely due to the fact that with flash, the block access time is pretty much constant no matter where on the drive it is located. Hence, there's no need to schedule based on C/H/S values. Because of this disparity, there won't be ONE TRUE FILESYSTEM in Linux. This might actually be a good thing, if you've ever been privy to the debates over Reiserfs and Ext3...

    The hardware SSD wear-levelling algorithms used by Intel, et al... are nothing special. Yes, they probably do offer higher performance than a general purpose filesystem, but performance is not their reason for existence. They exist largely because the overwhelming majority of consumer devices still use FAT32, which would destroy an SSD without wear-leveling very quickly. Think of how many flash chips are used in cameras, cellphones, thumb drives, etc... Intel had to do this just to access the non-Linux market.
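    The userspace side of that split, as an illustration (assuming an SSD at /dev/sdb): the elevator can be switched per device without the block layer knowing or caring what the hardware is.

        cat /sys/block/sdb/queue/scheduler          # current choices, e.g. noop deadline [cfq]
        echo noop > /sys/block/sdb/queue/scheduler  # seek-avoidance reordering buys nothing on flash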
