Optimizing Linux Systems For Solid State Disks
tytso writes "I've recently started exploring ways of configuring Solid State Disks (SSDs) so they work most efficiently in Linux. In particular, Intel's new 80GB X25-M, which has fallen to a street price of around $400 and thus within my toy budget. It turns out that the Linux Storage Stack isn't set up well to align partitions and filesystems for use with SSDs, RAID systems, and 4k-sector disks. There is also some interesting configuration and tuning we need to do to avoid potential fragmentation problems with the current generation of Intel SSDs. I've figured out ways of addressing some of these issues, but it's clear that more work is needed to make it easy for mere mortals to use next-generation storage devices efficiently with Linux."
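For readers who want to check whether their existing partitions are even 4k-aligned before worrying about erase blocks, a quick look at the partition table in sectors is enough (the device name below is just an example):

    # list partition start/end in 512-byte sectors instead of cylinders
    fdisk -l -u /dev/sda
    # a start sector divisible by 8 means the partition is 4KiB-aligned;
    # for a 128KiB erase block you'd want it divisible by 256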
Mere mortals need more toy budget (Score:5, Insightful)
Re: (Score:2, Insightful)
Re: (Score:2)
Sure. There are *lots* of considerations beyond speed to want SSDs.
First is battery life. Batteries suck. Laptops pulling 5 or 6 watts total make that suck more bearable. SSDs are part of that.
There's also noise. Hard drives have gotten much quieter. But in a dead-silent conference room, I want dead-silence.
Even form factor is an issue. A 2.5" drive bay is a notable chunk of a small notebook, and 1.8" drives are, generally, quite slow. SSDs can be worked into the design.
Re: (Score:3, Informative)
Sure. There are *lots* of considerations beyond speed to want SSDs
And SSD drives are also shock-resistant.
Re: (Score:2)
Re: (Score:2, Interesting)
As other components become less noisy, the acoustic noise of the "solid state" electronics becomes audible. It isn't necessarily faulty electronics, just electronics designed with no consideration for vibrations due to electromagnetic fields changing at audible frequencies. These fields subtly move components, and this movement causes the acoustic noise. Most often it is a power supply or regulation unit that causes high-pitched noises. Old tube TV sets often emit noise at the line frequency of the TV signal (ca. 15.6k
Re: (Score:2)
I can't hear over 9114Hz, you insensitive clod!
Re: (Score:3, Funny)
Surely, if you can't hear over 9kHz, that makes you the insensitive one?
Re: (Score:2)
I don't mean to frighten you, but perhaps you should have your ears checked next time you get a physical. If you've spent considerable time around heavy machinery or loud music, you may have lost the ability to hear high-pitched sounds. As this happens gradually, it generally isn't noticed.
Really, get it checked out and (when applicable) change your habits regarding to ex
Re: (Score:2)
I was born 2 weeks prem, consequently I can't hear over 9114Hz. Which I didn't find out until I was working in a music studio and the other people started shouting at me to turn that feedback off. "What feedback?" was all I could say and turned the amps off.
And that was the end of that chapter.
Re: (Score:2)
You weren't running the karaoke night at a pub I was in the other night, were you?
Re: (Score:2)
Another example: I have a tiny NSLU2 network appliance that I use as a music server. In the out-of-the-box configuration, it runs Linux from a ROM, but you can add an external drive via a USB cable and boot Linux off of that. It doesn't have SATA, so that wasn't an option.
I'm not sure why this guy paid $400 for an 80 GB SSD. I just upgraded my music server to a 64 GB SSD, and it only cost $100. Maybe the one he got is a fancier, faster d
Re: (Score:2)
Price/GB for SSDs seems to be largely proportional to the number of write operations per second the SSD can handle. Once a handful of manufacturers solve that particular puzzle, I expect prices will drop significantly.
Re:Mere mortals need more toy budget (Score:2)
I've been wrestling this idea around as a sound studio solution, and it seems that an external storage unit makes the most sense, with a DRAM card for the currently working files. Almost affordable, anyway.
Re: (Score:2)
You can buy a 32GB SSD for less than $100 [oempcworld.com] today. Is that within the budget of mere mortals?
Re: (Score:2)
Re:SSD's should have no problem with fragmentation (Score:4, Insightful)
From economics, let's turn our attention to optimizing this toy of ours. The thing with SSDs is that they don't have a read/write head to worry about. This means that no matter where the data is stored in the device, all we need to do is specify the fetch location and the logic circuits select that block to extract the data from the desired location. From what I've heard, SSDs have an algorithm that assigns different blocks to store the data so that the memory cells in a single location aren't overused.
Re: (Score:2)
This means that no matter where the data is stored in the device, all we need to do is specify the fetch location and the logic circuits select that block to extract the data from the desired location.
Which is why you don't need head-optimized I/O schedulers like Anticipatory, which waits a couple of ms after every read to see if there's more from that area, thus saving on seek times.
SSDs must be optimized differently. For instance, they can't write arbitrarily small pieces of data, only whole blocks. Thus, if you want to optimize, you'd better make sure to write whole blocks at a time where possible, and not have small files cross block boundaries if they don't have to.
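For what it's worth, the scheduler is switchable per device through sysfs; a minimal sketch follows (sda is just an example, and whether noop or deadline actually helps a given SSD is something to benchmark rather than assume):

    # show the available schedulers; the active one is in [brackets]
    cat /sys/block/sda/queue/scheduler
    # switch to noop, which drops the seek-oriented reordering and idling
    echo noop > /sys/block/sda/queue/scheduler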
Re: (Score:2)
Yes, but for SSDs the blocks are larger - a problem when essentially all software is optimized for smaller blocks.
Re:SSD's should have no problem with fragmentation (Score:5, Interesting)
I don't think this is going to be a significant problem when compared to normal seek time problems.
Let's say we have 100k of data to read. 512-byte blocks would require 200 reads. 4k blocks would require 25 reads.
For rotating discs: If the data is contiguous, we have to hope that all the blocks are on the same track. If they are, then there is 1 (potentially very costly) seek to get to the track with all the blocks on it. The cost of the seek is dependent on the track it's going to, the track it's on, and whether or not the drive is sleeping or spun down. Otherwise we also get to do another very short seek, which is going to add a bit of time to get to the next adjacent track. Worst case scenario all 200 blocks are on different tracks, scattered randomly on the platter, requiring 200 seeks. Ouch ouch ouch.
For SSDs: What is important is the number of cells we have to read. Cells will be 4k in size. All seek times are essentially zero. Best case scenario, all data is contiguous and the start block is at the start of a cell; read time boils down to how fast the flash can read 25 cells. Worst case scenario is where the data is 100% fragmented, such that every one of the 200 512-byte blocks resides in a different cell, requiring 200 cell reads (an 8-fold increase in time required). There will also be overhead in copying out the 512-byte data from each buffer and assembling things, but this time is negligible for this comparison.
While the 8x time increase (order N) looks significant, it's important to compare the probabilities involved, and just how bad things get. The most important difference between how these two drives react is the space between fragments. The "worst case" for the SSD, 100% fragmentation, is highly unlikely. I don't even want to think about what a spinning disc would do if asked to perform a head seek for 100% of the blocks in, say, a 1MB file. The read head would probably sing like a tuning fork at the very least. Comparing 2000 cell reads to 2000 seeks, the SSD will win handily every single time, even if the tracks on the disc are close.
If the spacing between fragments is anything near normal, say 30-100k, then there will be some seeking going on with the disc, and there will be some wasted cell reads with the SSD, but comparing one extra cell read with one extra head seek, again the SSD wins hands down. The advantage of the SSD actually shrinks as fragmentation goes down, because most fragments are going to cause a head seek, each of which will significantly widen the time gap. Also, a spinning disc will read in the blocks much faster than the cells on an SSD.
I realize the OP was describing the possibility of "not so much bang for the buck as you are expecting" due to fragmentation, and I know the above is more about comparing the two than about what happens to the SSD, but if you consider the effects of fragmentation on a spinning disc, and then weigh how that impact compares with an SSD, it's easy to see that fragmentation that sent you running for the defrag tool yesterday may not even be noticeable with an SSD. So I'd call this a "non-issue".
What I'm waiting for is for them to invest the same dev time in read speeds as in write speeds. SSDs don't appear to be doing any interleaved reads - they're doing it for writes because those are so slow. Though at this point I wonder if read speeds are just plain running into a bus speed limit with the SSDs?
Another file strategy - file segregation by f(x) (Score:5, Insightful)
Why not functionally group files to decrease or eliminate fragmentation? Or maybe this is already done.
For example - I have a large collection of MP3 files. They essentially do not change: I don't edit them, and rarely erase them. The file system could look at the type of file (mp3 vs. doc) and place it accordingly. It could also look at when the file last changed and place it in a certain area. Older, unchanged files are placed in a tightly packed file area that is optimized and not fragmented.
Organizing by partition (Score:4, Informative)
Why not functionally group files to decrease or eliminate fragmentation? Or maybe this is already done.
In a Linux system, this is easily done, but few people bother.
Most of the write activity in Linux is in /tmp, and also in /var (for example, log files live in /var/log). User files go in /home.
So, you can use different partitions, each with its own file system, for /, /tmp, /home, and /var.
The major problem with this is that, if you guess wrong about how big a partition should be, it's a pain to resize things. So my usual thing is just to put /tmp on its own partition, and have a separate partition for / and for /home.
The /tmp partition and swap partition are put at the beginning of the disc, in hopes that seek penalties might be a little lower there. Then / has a generous amount of space, and /home has everything left over.
When a *NIX system runs out of disk space in /tmp, Very Bad Things happen. Far too much software was written in C by people who didn't bother to check error codes; things like disk writes don't fail often, but when /tmp is 100% full, every write fails. A system may act oddly when /tmp is full, without actually crashing or giving you a warning. So, the moral of the story is: disk is cheap, so if you give /tmp its own partition, make it pretty big; I usually use 4 GB now. However, if you run out of disk space in /var, it is not quite as serious. Your system logs stop logging. And, many databases are in /var so you may not be able to insert into your database anymore.
The main Ubuntu installer is fast, because it wipes out the / partition and puts in all new stuff. So, if you have separate partitions for / and /home, life is good: you just let the installer wipe /, and your /home is safely untouched. It's annoying when you have /home as just a subdirectory on / and you want to run the installer. But, by default, the Ubuntu installer will make one big partition for everything; if you want to organize by partitions, you will need to set things up by hand.
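If anyone wants a concrete sketch of that layout, an /etc/fstab along these lines is roughly what it looks like (the device names and the ext3-everywhere choice are just illustrative; noatime is optional but cheap):

    # separate partitions for /, /tmp and /home (illustrative layout)
    /dev/sda1  /      ext3  defaults,noatime  0  1
    /dev/sda2  /tmp   ext3  defaults,noatime  0  2
    /dev/sda3  /home  ext3  defaults,noatime  0  2
    /dev/sda5  none   swap  sw                0  0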
steveha
Re:Another file strategy - file segregation by f(x (Score:4, Interesting)
Re: (Score:2)
Good analysis. The statistics I've read indicate that SSD's don't perform all that much better than hard drives in real-world scenarios. I think this is part of the reason for that performance. On the other hand, they do use less energy, which is a clear positive for a laptop.
Re: (Score:2)
On the other hand, they do use less energy, which is a clear positive for a laptop.
And thus they are cooler. A clear positive for any system, but especially a laptop.
They are also silent and don't vibrate.
They are also, from what I understand, more reliable.
I'm seriously considering flash drives for my desktop PC... they just need one more capacity jump and I think they'll be worth it. $400 for 128MB is a touch small.. but I'll go for it at $400 for 256MB. On my main PC I'm only using 236GB of my 500GB driv
Re: (Score:2)
If you'll pay $400 for 256*MB*, I think you've got a little too much money and should give me some....
Re: (Score:2)
tytso (Score:3, Informative)
"tytso" is Theodore T'so.
He and Remy Card wrote ext2. He and Stephen Tweedie wrote ext3. He and Mingming Cao wrote ext4.
He maintains the filesystem repair tool (e2fsck) and resizing tool for those filesystems.
He also created the world's first /dev/random device, maintained the tsx-11.mit.edu Linux archive site for many years, and wrote a chunk of Kerberos. He's been the technical chairman for many Linux-related conferences. He pretty much runs the kernel summit.
He's certainly not a kid. I think he's about
Is it only linux? (Score:4, Interesting)
Re: (Score:3, Informative)
unfortunately the default 255 heads and 63 sectors is hard coded in many places in the kernel, in the SCSI stack, and in various partitioning programs; so fixing this will require changes in many places.
Looks like someone broke the SPOT rule.
As for other OSes:
Vista has already started working around this problem, since it uses a default partitioning geometry of 240 heads and 63 sectors/track. This results in a cylinder boundary which is divisible by 8, and so the partitions (with the exception of the first, which is still misaligned unless you play some additional tricks) are 4k aligned.
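One way to get a similar effect on Linux today (the 224/56 values below are just an illustration that happens to work out, not a recommendation from the article) is to override the fake geometry fdisk uses, so that cylinder boundaries land on erase-block boundaries:

    # pretend the disk has 224 heads and 56 sectors/track:
    # 224 * 56 = 12544 sectors = 6272 KiB per cylinder,
    # an exact multiple of a 128 KiB erase block
    fdisk -H 224 -S 56 /dev/sdb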
Re:Is it only linux? (Score:5, Insightful)
Yeah, hard disk manufacturers.
Since they moved to large disks which require LBA, they've been fudging the CHS values returned by the drive to get the maximum size available to legacy operating systems. Since when did a disk have 255 heads? Never. It doesn't even make sense anymore when most hard disks are single platter (and therefore have 1 or 2 heads) and SSDs don't have heads at all.
What they need to do is define a new command structure for accurately reporting the best layout for the disk - on an SSD this would report the erase block size or so; on a hard disk, how many sectors are in a cylinder - without fucking around with some legacy value designed in the 1980s.
Re: (Score:2)
A bigger problem is our reluctance to move off 512-byte sectors. Who needs that fine a granularity of LBA?
That's two sectors per kilobyte, dating back to the floppy disk. And we still use this quantum on TB hard disks.
Re: (Score:2, Informative)
CHS disappeared ages ago. The maximum device supported was ~8 Gbyte (1023 cylinders * 255 heads * 63 sectors * 512 bytes)
Re: (Score:3, Informative)
Of course it goes beyond just Linux. Microsoft is aware of the problem and working on improving its SSD performance (they already did some things in Vista as the article states, and Windows 7 has more in store; google around to find a few slides from WinHEC on the topic).
The problem with Windows w.r.t. optimizing for SSDs is that it LOVES to do lots and lots of tiny writes all the time, even when the system is idle (and moreso when it is not). Try moving the "prefetch" folder to a different drive. Try movin
Re: (Score:2)
Sorry, but you are glossing over something here -- it's not the "megabytes per minute" thing that bothers me, it's the "many small writes" thing. Even the very best wear leveling algorithm can't do much about that, unless they use a write cache (which most SSDs do not; I do not know the exact procedure of the Intel offering (which is ahead of its competitors at the moment), but I would be somewhat surprised if the chip waited overly long to commit). A one-byte write will, in the worst case, cause an entire 128
Re:Is it only linux? (Score:5, Informative)
Sun's new 7000 series storage arrays use them, and that series runs OpenSolaris. So I guess Solaris has at least some SSD optimisations... http://www.infostor.com/article_display.content.global.en-us.articles.infostor.top-news.sun_s-ssd_arrays_hit.1.html [infostor.com]
Re: (Score:2)
There is no major OS that makes anything remotely like appropriate use of persistent RAM. SSD is one application of persistent RAM, but it's a terrible one, which ignores most of the benefits of persistent RAM. I want to treat flash as hierarchical memory, not as disk. I want the OS to support me not with inconsequential filesystem optimizations, but by implementing cache-on-write with an asynchronous write-back queue for mapped flash memory. I want to map allocated regions of a terabyte flash array
Re: (Score:2)
Ironically I was just going out to buy a small one (Score:4, Informative)
If I mount /home on a separate drive, (good to do when upgrading) the rest of the Linux file system fits nicely on a small SSD.
Re: (Score:2)
I would move /tmp to either a RAM disk or a hard drive. There is no point in having tmp files using up the lifespan of your SSD, especially after you just moved /home to extend its life. Also, you could move some of the stuff in /var to a hard drive or ramdisk. Good candidates might be /var/tmp and /var/log. Alternatively, you could just move the entire /var hierarchy to a hard d
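A sketch of what that looks like in /etc/fstab (sdb1 standing in for whatever spinning disk partition the logs move to is an assumption on my part):

    # keep scratch and log writes off the SSD (illustrative entries)
    tmpfs      /tmp      tmpfs  defaults,noatime  0  0
    /dev/sdb1  /var/log  ext3   defaults,noatime  0  2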
Re: (Score:2)
Good point, I will have to think about that...
Well, I fired up Ubuntu with the new configuration and I wasn't disappointed - WOW!
Booting is lightning quick - I am still doing a lot of downloads so I haven't had a chance to run some real performance tests, but from what I have seen so far the results are impressive.
Toy budget (Score:2)
Most of us can't afford to worry about this, but does the Fusion-io suffer from this issue?
No. Not Now. Not Ever. I'm Coming For All of You! (Score:5, Funny)
> Vista has already started working around this problem, since it uses a default partitioning geometry of 240 heads and 63 sectors/track. This results in a cylinder boundary which is divisible by 8, and so the partitions (with the exception of the first, which is still misaligned unless you play some additional tricks) are 4k aligned. So this is one place where Vista is ahead of Linux….
Although the technology it is used in is repugnant, NTFS has always been the One True Filesystem. It descended from DIGITAL's ODS2 (On Disk Structure 2) which traces back to the original Five Models (PDP 1, 8, 10, 11 and 12). You see, ODS was written by passionate people with degrees and rich personal lives in Massachusetts who sang and danced before the fall of humanity to the indignant Gates series who assimilated their young wherever possible and worked them into early graves during his epic battle with the Steves before the UNIX enemy remerged after a 25 year sleep and nuked the United States, draining all of its technological secrets to the other side of the world. Gates, realizing what he's done, now travels the universe seeking to rebuild his legacy by purifying humanity while the Steve series attempts to rebuild itself. Some of the original Five are still around, left to logon to Slashdot and witness what's left of the shadow of humanity still in the game as they struggle blindly around in epic circles indulging new and different ways to steal music, art and technology to make up for their lack of creativity long ago bred out of them by the Gates series.
Re: (Score:2)
Re: (Score:2, Insightful)
Although the technology it is used in is repugnant, NTFS has always been the One True Filesystem.
I thought ZFS was.
And ZFS has native support for SSDs as L2ARC. http://www.c0t0d0s0.org/media/presentations/ssd.pdf [c0t0d0s0.org] I have nothing but praise for ZFS. Simple to manage, reliable, fast. With native CIFS instead of the user-space Samba file server, I've seen orders of magnitude better performance from Windows machines doing networked file access. Gary
Why pretend these are ordinary disks? (Score:5, Insightful)
SSDs gradually gain more and more sophisticated controllers which do more and more to try to make the SSD seem like an ordinary hard drive, but at the end of the day the differences are great enough that they can't all be plastered over that way (the fragmentation/long term use problems the story linked to are a good example). I know that (at present- this could and should be fixed) making these things run on a regular hard drive interface and tolerate being used with a regular FS is important for Windows compatibility, but it seems like a lot of cost could be avoided and a lot of performance gained by having a more direct flash interface and using flash-specific filesystems like UBIFS, YAFFS2, or LogFS. I have to wonder why vendors aren't pursuing that path.
Re:Why pretend these are ordinary disks? (Score:5, Interesting)
Because Intel and the rest want to keep their wear-leveling algorithm and proprietary controller as much of a secret as possible so they can try to keep on top of the SSD market.
Moving wear-levelling into the filesystem - especially an open source one - effectively also defeats the ability to change the low-level operation of the drive when it comes to each flash chip - and of course, having a filesystem and a special MTD driver for *every single SSD drive manufactured* when they change flash chips or tweak the controller, could get unwieldy.
Backing them behind SATA is a wonderful idea, but this reliance on CHS values I think is what's killing it. Why is the Linux block subsystem still stuck in the 20MB hard-disk era like this?
Re: (Score:2)
> and of course, having a filesystem and a special MTD driver for
> *every single SSD drive manufactured* when they change flash
> chips or tweak the controller, could get unwieldy.
Large numbers of flash chips can be supported by the MTD CFI drivers:
http://en.wikipedia.org/wiki/Common_Flash_Memory_Interface [wikipedia.org]
Something similar could be done for SSDs too, except they've chosen HDD standards as they are a better fit.
Mike
Re: (Score:2)
Same reason it doesn't reasonably support hierarchical persistent RAM: everybody who wants to do it is too busy with other work.
Re: (Score:3, Insightful)
Why is the Linux block subsystem still stuck in the 20MB hard-disk era like this?
As one who had to tune the performance of hard drives at the kernel level, I can say with some authority that the Linux block subsystem is not at all stuck in the 20MB hard-disk era. In fact, everything is logical blocks these days, and it's the filesystem driver and IO schedulers which determine the write sequences. The block layer is largely "dumb" in this regard, and treats every block device as nothing more than a la
Take a look at Maemo . . . (Score:2)
. . . which runs on the Nokia N800/N810 "Internet Tablets" (www.maemo.org). They might have done some tweaking, since this is Linux running on SSDs.
Re: (Score:3, Interesting)
Don't forget android.
Re: (Score:2)
Maemo and several other embedded systems have been using flash-based disk storage for years. The problem is that an SSD isn't presented as a flash storage device; it's a hard-drive interface wrapped around a flash device.
Since Linux can't see the flash chips themselves, it can't properly manage the flash sitting behind the hard-drive interface.
repeated re-write issues? (Score:2)
Re:repeated re-write issues? (Score:5, Informative)
It will outlast a standard hard drive by orders of magnitude so it's completely not an issue.
With wear leveling and the technology now supporting millions of writes it just doesn't matter. Here's a random data sheet: http://mtron.net/Upload_Data/Spec/ASIC/MOBI/PATA/MSD-PATA3035_rev0.3.pdf [mtron.net]
"Write endurance: >140 years @ 50GB write/day at 32GB SSD"
Basically the device will fail before it runs out of write cycles. You can overwrite the entire device twice a day and it will last longer than your lifetime. Of course it will fail due to other issues before then anyway.
Can there be a mention of SSDs without this out-dated garbage being brought up?
Re:repeated re-write issues? (Score:5, Informative)
1. The large block size (120k-200k?) means that even if you write 20 bytes, the disk physically writes a lot more. For logfiles and databases (quite common on desktops too - think of index DBs and SQLite in Firefox storing the search history...) where tiny amounts of data are modified, this can add up rapidly. Something writes to the disk once every second? That's 16.5GB / day, even if you're only changing a single byte over and over.
2. Even if the memory cells do not die, due to the large block size, fragmentation will occur (most of the cells will have only a small amount of space used in them). There have been a few articles showing that even devices with advanced wear leveling technology like Intel's exhibit a large performance drop (less than half the read/write performance of a new drive of the same kind) after a few months of normal usage.
3. According to Tomshardware [tomshardware.com], unnamed OEMs told them that all the SSD drives they tested under simulated server workloads got toasted after a few months of testing. Now, I wouldn't necessarily consider this accurate or true, but I sure as hell would not use SSDs in a serious environment until this is proven false.
Re: (Score:3, Informative)
Having had a few Compact Flash disks wear out in the recent past, I'm not exactly anxious to replace my server disks with SSDs.
Re: (Score:2)
Re: (Score:2)
What is different about SSD's? (Score:2)
From what I can scrape together quickly off the Internet - IANASE (I am not a software engineer) - the biggest difference seems to be the lack of a need for error checking, disk defrag, etc. Since a normal spinning hdd does not actually delete a file but just removes the markers, the filesystem treats all areas the same and does the same things to both real and non-real data to keep the disk state sane. On an SSD all of this leads to a lot of unneeded disk usage and premature degradation of the dri
Re: (Score:2)
Flash devices have the inherent weakness that if you write to the same place on the disk say 10000 times, that part of the disk will stop working.
It's kind of like a corrupt sector (piece of the disk) on your regular hard-drive, but instead of the failure being caused by drive defects or head crashes, it's based on a write count.
Why is this a big deal? Say I have a file called foose.txt. I decide that my neat program will open the file, increment a number, then close the file again. It sounds pretty simple, b
Re:What is different about SSD's? (Score:5, Informative)
Because of this, I imagine that the author would like Linux devs to better support SSD's by getting non-flash file systems to support SSD better than they are today.
Heh. The author is a Linux dev; I'm the ext4 maintainer, and if you read my actual blog posting, you'll see that I gave some practical things that can be done to support SSDs today just by better tuning the parameters given to tools like fdisk, pvcreate, mke2fs, etc., and I talked about some of the things I'm thinking about to make ext4 support SSDs better than it does today.....
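To give a flavour of the kind of parameter tuning being referred to, here is a rough sketch for a drive with a 128k erase block and a 4k filesystem block size (the numbers and device name are illustrative, not a recommendation for any particular SSD):

    # align the LVM data area to the erase block (newer LVM2 versions)
    pvcreate --dataalignment 128k /dev/sdb1
    # tell the filesystem about the geometry: 128 KiB / 4 KiB = 32 blocks
    mke2fs -t ext4 -b 4096 -E stride=32,stripe-width=32 /dev/sdb1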
Don't SSD's have a pre-set number of writes? (Score:2, Funny)
Does it really matter if they spread these writes around on the hard drive when the number of writes the drive is capable of doing is still the same in the end?
To drastically oversimplify, let's say that each block can be written to twice. Does it really matter if they used up the first blocks on the drive and just spread towards the end of the drive partition with general usage rather than jumping a
Re: (Score:3, Informative)
Take a disk with 100 cells, each good for roughly ten writes. Spreading the load so every cell is written to nine times: 100 * 9 = 900 writes and you still have a completely working disk.
Pounding those 900 writes onto the first cells in turn: you now have 90 defective cells. In fact, since you still have to rewrite the data to working cells, you have lost your data, as there aren't enough working cells left.
Comment removed (Score:4, Informative)
Re: (Score:3, Informative)
Flash using MLC cells has 10,000 write cycles; flash using SLC cells has 100,000 write cycles, and is much faster from a write perspective. The key is write amplification; if you have a flash device with a 128k erase block size, in the worst case, assuming the dumbest possible SSD controller, each 4k singleton write might require erasing and rewriting a 128k erase block. In that case, you would have a write amplification factor of 32. Intel claims that with their advanced LBA redirection table tec
Re: (Score:2)
Re: (Score:2, Insightful)
Thinkpad X300 came with defrag tools (Score:3, Insightful)
I purchased an X300 Thinkpad for the company this week and took a close look at it. I thought expensive business notebooks came without crapware, and I was sure the X300 would be optimized. But it had defrag runs scheduled! I always thought defrag was a no-no for SSDs. Now I am not sure anymore. I uninstalled it first. But who knows?
Raid SSD (Score:2)
I just recently put two 128GB SSD disks in a RAID 0 set. I set up a RAM drive for use as /tmp and have /var going to another partition on a standard SATA hard drive. I changed fstab to mount the drives noatime so it doesn't record file access times. I also made some other tweaks, pointing any programs or services that write logs or use a temporary cache somewhere to use /tmp. It's a software RAID, so I'm using /dev/mapper/-- as the device and I'm not exactly sure how to use the scheduler, although I hav
SSD sucks battery life. No no, not a troll. (Score:2)
Task for task, an SSD saves power, possibly more than would be lost by any higher CPU speed steps, but in something like a looping benchmark more work is done in the same time, therefore more power is drawn.
This phenomenon had Tom's hardw
Re: (Score:2)
Re: (Score:2)
It's not the volume of supply which is causing the high prices.
They are inherently expensive to make with today's methods.
Re: (Score:2)
I've considered getting a large capacity CF card (16 GB or 32 GB) to use as a solid state drive for my laptop. The CF + adapter combination is a lot cheaper than these new SSD. So why should I get a SSD vs. a CF card?
Re:Still too expensive... (Score:5, Informative)
> So why should I get a SSD vs. a CF card?
10 times better performance and wear-leveling worth a crap.
Re: (Score:2, Informative)
Your CF card is going to use the USB interface which maxes out at about 40Mbps as opposed to using an internal SSD's SATAII interface which maxes at 300Mbps. Not quite an order of magnitude, but close.
On the other hand, if you're going to use an external SSD connected to the USB port, then you wouldn't see any difference between the 2 in terms of speed. Lifespan might be longer w/ the SSD due to better wear leveling, but in either case you're probably going to lose o
Re:Still too expensive... (Score:5, Informative)
A real SSD has several advantages over using CF cards, but not for the reasons you state.
With a simple plug adapter, CF cards can be connected to an IDE interface, so speeds won't be limited by interface speed. The most recent revision of the CF spec adds support for IDE Ultra DMA 133 (133 MB/s)
A couple of additional points, just because I love nitpicking:
- A USB 2.0 mass storage device has a practical maximum speed of around 25 MB/s, not 40 Mb/s.
- The so-called SATA II interface (that name is actually incorrect and is not sanctioned by the standardization body) has a maximum speed of 300 MB/s, not Mb/s.
Re: (Score:3, Informative)
Why is this informative? CF with an adapter is NOT USB.
From my experience, using an adapter puts it on the native interface - notably, with CF, it's easiest to put the device into a machine that has a native IDE (not SATA) interface. CF is pin compatible with IDE.
Now, in the current offering of SLC/MLC "drives" you can actually get better read/write since they "raid" for lack of a better term the internal chips. I'm using a transcend ATA-4 CF device that gets around 30MB/sec read/write in a machine in my
Re: (Score:2)
Yeah, I just wanted to stress the fact that it's not USB more than anything; I haven't personally tested the CF-to-SATA bridges. Do they work well?
Re: (Score:3, Informative)
Your CF card is going to use the USB interface
This is Informative?
CF cards are actually IDE devices. The adapters that plug CF into your IDE bus are just passive wiring.. no protocol adapter needed.
It's trivial to replace a laptop drive with a modern high-density CF card, and sometimes a great thing to do.
The highest-performance CF cards today use UDMA for even higher bandwidth.
HighSpeed USB can't reasonably get over 25MB/sec from the cards using a USB-CF adapter, but you can do better by using its native bus.
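If you want to check what a given card-plus-adapter combination actually delivers, a crude but easy comparison is the following (sda here is just whatever device node the card shows up as):

    # buffered sequential reads from the media, averaged over a few seconds
    hdparm -t /dev/sda
    # cached reads, as a sanity check on the bus rather than the media
    hdparm -T /dev/sda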
Re: (Score:3, Informative)
Your CF card is going to use the USB interface which maxes out at about 40Mbps as opposed to using an internal SSD's SATAII interface which maxes at 300Mbps. Not quite an order of magnitude, but close.
There are three factual errors in that statement.
1. CF cards can be connected directly to the ATA port via a simple passive connector-adapter and therefore have a theoretical maximum transfer speed of 133MB/s, which translates to a bit over 1000Mbps. There are even adapters with room for both a master and a slave CF card, in the same shape, size and connector position as a 2.5" ATA drive, specifically made for using CF cards in laptops.
2. USB is 480Mbps.
3. SATA is 3000Mbps
The big speed-difference between SSD and CF is
Re: (Score:3, Interesting)
If it's an older laptop or the mechanical hard disk died, go for it. Addonics makes SATA CF adapters, so you are not restricted to IDE CF adapters.
Re: (Score:3, Informative)
CF works passably in WORM-like scenarios, where you basically use it in read-only mode and update it rarely and in big chunks. For random R/W access, CF lacks wear leveling to give it a tolerable life expectancy... Thus you commonly see it used in embedded devices such as routers and dumbterms where you may update the firmware or OS every few months; You don't see it used much in real, live writable FSs.
It also tends to have rather poor performance, with reads i
Re: (Score:3, Insightful)
The modern hot-shit high-speed CF cards have wear leveling and do UDMA transfers, you get a CF to ATA adapter, not CF to USB, and they will outperform most hard disks.
Re: (Score:2)
No doubt. But, I really think that within 5 years you're going to see most laptops using only an SSD.
Re: (Score:3, Informative)
However, for
many of us who require better-than-average data security, the matter of SSD's read/write behaviour makes the devices extremely vulnerable to analyses and discovery of data the owner/author of which believes to be inaccessible to others: 'secure wiping', or lack thereof, is the issue.
Obviously you should be encrypting your sensitive data.
Also, it should be no problem to write a bootable cd/usb that does a complete wipe. Just write over the whole disk, erase, repeat. No wear leveling will get around that.
Re:Agreed .. But equally important is ... (Score:4, Insightful)
Re: (Score:2)
Yes, dd, especially with random data, is pretty much as secure as any commercial product. But they all fail to touch the hidden blocks the drive has remapped because of potential failure.
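For the overwrite-everything approach, the shape of it is something like this (sdb is just an example device, and as noted it still won't reach sectors the drive has quietly remapped or reserved for wear leveling):

    # overwrite the whole device with pseudo-random data, then zeros
    dd if=/dev/urandom of=/dev/sdb bs=1M
    dd if=/dev/zero    of=/dev/sdb bs=1M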
Re: (Score:2)
Then nuke the disk from orbit. It's the only way to be sure.
Re: (Score:2, Informative)
Also, it should be no problem to write a bootable cd/usb that does a complete wipe. Just write over the whole disk, erase, repeat. No wear leveling will get around that.
At least for OCZ drives, the user capacity is several gigs lower than the raw capacity, like 120GB vs 128GB. I don't know about your data, but pretty much anything can be left in those 8GB. The only real solution is to not let sensitive data touch the disk unencrypted.
Re:Agreed .. But equally important is ... (Score:4, Informative)
Unfortunately flash SSDs usually have some percentage of sectors you cannot directly access; these are used for wear leveling and bad sector remapping. So when you dd from /dev/zero, it is quite possible that some part of the original data is left intact. And there can be quite a lot of those sectors: I recall reading about one SSD drive that had 32GiB of flash in it but only 32GB available to the user, so about 2250MiB was used for wear leveling and bad sectors (it helps to get better yields if you can tolerate several bad 512KiB cells).
Re: (Score:2)
Agreed. And not just SSDs. Regular HDs remap sectors if they think they're failing. But usually they do so without you noticing a failure, which means that an almost perfectly readable copy of that sector has simply been remapped. No amount of overwriting will ever hit that sector because the drive is sure it's doing you a favor.
The info is still there, just a few debug commands away.
Re: (Score:2)
" 'secure wiping', or lack thereof, is the issue. "
The desire to wipe with software instead of the trivial amount of effort to physically smash and/or incinerate the media is the issue.
Compared to important data, media costs are trivial. Wipe media by destroying it thoroughly and you won't have to wonder about forensic recovery. Drive shredders and the like are spiffy, but a few dollars worth of common hand tools can destroy any drive.
destruction is fun too (Score:2)
So many choices!
belt sander
nitric acid
cutting torch
charcoal and a blower
chip wired into an AC wall socket
thermite
repeated use as a model rocket blast deflector
drill press
Re: (Score:3, Interesting)
This could be fun. Here are some more suggestions:
- Welder - The little chips don't last long against a good arc welder.
- 600 VAC - Why stop at a wall outlet?
- Tesla Coil - 200 kV is better than 600 VAC
- Lightning Rod - Why stop at 200 kV?
- Oxy-acetylene Torch - higher temperatures
- Plasma Cutter - even higher temperatures
- NdYAG Laser - Etch your name into the remains of the flash chip.
- Chew Toy for Dog - Don't underestimate some of those canines, although USB keys might not be g
Re: (Score:3, Funny)
Re: (Score:2)
L2ARC is interesting for servers, but on a desktop or laptop you can just put all your data on flash.
Re: (Score:2)
I'm familiar with the L2ARC idea. I think time will tell whether or not adding an extra layer of cache between the memory and commodity SATA hard drive really makes sense or not. For laptop use where we care about the power and shock resistance attributes of SSD's, it makes sense to pay a price premium for SSD's. However, it's not clear that SSD's will indeed become cheap enough, and even if they do, historically the cache hierarchy has 3 orders of magnitude between main memory and disks, and over the
Re: (Score:2)
Seems to me that Sun's ZFS filesystem is ready to use SSD storage. The copy-on-write strategy would seem to avoid the hot spots, as ZFS picks new blocks from the free pool rather than rewriting the same block.
Actually, given the X25-M's lack of TRIM support, using a log-structured filesystem, a write-anywhere filesystem, or a copy-on-write type system is actually a really bad use of the X25-M, since the X25-M will think the entire disk is in use. The X25-M is actually implemented to optimize for filesy
Re:1gb /boot? lvm? wtf... (Score:5, Interesting)
I use 1GB for /boot because I'm a kernel developer and I end up experimenting with a large number of kernels (yes, on my laptop --- I travel way too much, and a lot of my development time happens while I'm on an airplane). In addition, SystemTap requires compiling kernels with debuginfo enabled, which makes the resulting kernels gargantuan --- it's actually not that uncommon for me to fill my /boot partition and need to garbage collect old kernels. So yes, I really do need 1GB for /boot.
As far as LVM, of course I use more than a single volume; separate LV's get used for test filesystems (I'm a filesystem developer, remember), but more importantly, the most important reason to use LVM is because it allows you to take snapshots of your live filesystem and then run e2fsck on the snapshot volume --- if the e2fsck is clean you can then drop the snapshot volume, and run "tune2fs -C 0 -T now /dev/XXX" on the file system. This eliminates boot-time fsck's, while still allowing me to make sure the file system is consistent. And because I'm running e2fsck on the snapshot, I can be reading e-mail or browsing the web while the e2fsck is running in the background. LVM is definitely worth the overhead (which isn't that much, in any case).
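For anyone who wants to try that snapshot-and-fsck trick, the rough shape is below; the volume group and LV names are made up, and the snapshot only needs enough space to absorb the writes that happen while the check runs:

    # snapshot the live filesystem's logical volume
    lvcreate -s -L 1G -n rootsnap /dev/myvg/root
    # check the snapshot while the real filesystem stays mounted
    e2fsck -fy /dev/myvg/rootsnap
    # if it came back clean, drop the snapshot and reset the fsck counters
    lvremove -f /dev/myvg/rootsnap
    tune2fs -C 0 -T now /dev/myvg/root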