The New Linux Speed Trick
Brainsur quotes a story saying "Linux kernel 2.6 introduces improved I/O scheduling that can increase speed -- 'sometimes by 1,000 percent or more, [more] often by 2x' -- for standard desktop workloads, and by as much as 15 percent on many database workloads, according to Andrew Morton of Open Source Development Labs. This increased speed is accomplished by minimizing disk head movement during concurrent reads."
I've noticed it... (Score:5, Interesting)
Re:I've noticed it... (Score:3, Insightful)
From the sound of it you're talking about perceived speed for a desktop user, as opposed to measured server throughput. If this is the case, I imagine the biggest speed increase comes from the fact that (I believe) 2.6 offers far lower latency in the kernel, allowing it to
Slackware! (Score:3, Insightful)
However, I wouldn't even try that on RedHat or Mandrake without having the .config file and a list of distribution specific patches.
This was on a Celeron 1GHz laptop, and honestly, I couldn't tell the difference in speed beyond any custom compile. Custom meaning unnecessary device drivers are removed, and the ones that I need are compiled in (as opposed t
Cache? (Score:4, Interesting)
You're misunderstanding something... (Score:5, Informative)
Reads cause processes to block while waiting for the data (and can thus stall processes for long stretches if not scheduled appropriately), whereas writes are typically fire-and-forget. This last bit means that you can usually just queue them up, return control to the user program, and perform the actual write at some more convenient time, i.e. later. Since reads (by the same process) are usually also heavily interdependent, it is a win to schedule them early from that point of view as well.
That's my understanding of it.
Re:Cache? (Score:2)
If you ever think about how inefficient it would be for the system to go read /bin/ls from disk every time you typed the ls command, you can see why caching is a damn good idea.
Doing read-ahead, write-behind and maintaining coherency isn't easy, from what little
Re:Cache? (Score:5, Informative)
Sure, and both Linux 2.4 and 2.6 do caching and read-ahead (reading more data than requested, hoping that the application will request the data in the future).
The I/O scheduler, however, lies beneath the cache layer. When it's decided that data must be read from or written to disk, the request is placed in a queue. The scheduler may reorder the queue in order to minimize head movement.
Also, 2.6 has the anticipatory I/O scheduler: after a read, the scheduler simply pauses for a (very) short period, on the assumption that the application will request more data from the same general area of the disk. Even when other requests are in the I/O queue, requests for the area where the disk's heads are hovering get priority.
While this increases latency (the time it takes for a request to be processed) a bit, throughput (the amount of data transferred in a time period) also increases.
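To make the reordering idea concrete, here's a toy sketch (nothing like the real kernel code; the sector numbers and head position are made up): sort the pending requests by sector and sweep outward from the head position before wrapping around.

/* Toy elevator: sort a pending request queue by sector and service it
 * in one sweep from the current head position, then wrap around.
 * Purely illustrative -- the kernel's schedulers are far more involved. */
#include <stdio.h>
#include <stdlib.h>

struct request {
    long sector;                /* starting sector of the request */
};

static int by_sector(const void *a, const void *b)
{
    const struct request *ra = a, *rb = b;
    return (ra->sector > rb->sector) - (ra->sector < rb->sector);
}

int main(void)
{
    struct request q[] = { {9500}, {120}, {4000}, {7800}, {300} };
    const int n = sizeof q / sizeof q[0];
    long head = 5000;           /* pretend the head currently sits here */
    int i;

    qsort(q, n, sizeof q[0], by_sector);

    printf("service order:");
    /* First sweep: everything at or beyond the head position... */
    for (i = 0; i < n; i++)
        if (q[i].sector >= head)
            printf(" %ld", q[i].sector);
    /* ...then wrap around and pick up the rest on the next sweep. */
    for (i = 0; i < n; i++)
        if (q[i].sector < head)
            printf(" %ld", q[i].sector);
    printf("\n");
    return 0;
}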
It did take a fair amount of experimenting and tuning to make the I/O scheduler work as well as it does now. However, there may still be some corner cases where the new scheduler is much slower than the old one.
SCSI (Score:4, Interesting)
Re:SCSI (Score:2, Insightful)
Re:SCSI (Score:5, Informative)
Expensive, yes. Aging, no. Ten years ago people said SCSI was the future. Now everyone runs it, they just don't know it.
IDE in its original form has never been able to keep up with a 10k RPM (or higher) disk.
I think what the parent post is alluding to is Tagged Command Queueing. Tagged Command Queueing allows you to group blocks together and tell the drive to write them in a given priority order. That sort of thing is used to guarantee journaling and such. Interestingly, the lack of this mechanism is why many IDE drives torch journalled fs's when they lose power during a write--they do buffering, but without any sort of priority. You can imagine I was pretty torqued the first time I had to fsck an ext3 (or rebuild-tree on reiserfs) after a power failure.
The reason that the kernel helps even with the above technology is that the drive queue is easily filled. Even when you have a multimegabyte drive cache and a fast drive, large amounts of data spread over the disk can take a while to write out.
This scheduler is able to take into account Linux's entire internal disk cache (sometimes gigs of data in RAM) and schedule that before it hits the drives.
Re:SCSI (Score:3, Insightful)
If I'm writing a user-land program which memory maps a large file, modifies it in memory - then uses msync() to write to disk - what can be safely assumed?
Re:SCSI (Score:3, Insightful)
As for order of pending writes, I don't think you get to have a say on any particular writes, but you can sync after writing to commit everything so far.
See a
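A minimal sketch of that pattern ("data.bin" is just a placeholder name; the file must already exist and be non-empty):

/* Minimal mmap + msync sketch: map a file, touch it, flush it to disk. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) {
        perror("open/fstat");
        return 1;
    }

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    p[0] ^= 0xff;                       /* modify the mapping in memory */

    /* MS_SYNC blocks until the dirty pages have been written out; when it
       returns, this mapping's changes have been handed to the disk (what
       the drive's own write cache then does is another story). */
    if (msync(p, st.st_size, MS_SYNC) < 0)
        perror("msync");

    munmap(p, st.st_size);
    close(fd);
    return 0;
}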
Re:SCSI (Score:3, Insightful)
Not sure what happens if you try this on a network file system, whether it forces the hosting computer to flush to disk, or if it only forces the local computer to flush to the host.
Depends on the server - you can request it, but the server isn't obligated to comply.
Re:SCSI (Score:5, Insightful)
ATAPI is SCSI-over-IDE however.
I wrote the IDE/ATA drivers for the Amiga. The Amiga SCSI drivers accepted "SCSIDirect" commands from applications. Internally, all IO commands were converted to SCSIDirect commands for execution. To implement ATA, I added a SCSIDirect->ATA translator (which wasn't that hard - about 3 weeks from start to a working, booting system), and I implemented just about every SCSI command that was even semi-reasonable (all of CCS, I think, plus quite a bit more).
Doing it this way made implementing support for ATAPI CDROMs (something I did as a contract after Commodore folded) Very Easy.
Re:SCSI (Score:3, Insightful)
Trying to do all the reordering in the OS (as suggested in several posts here) seems like a good idea, but ignores some issues:
Re:SCSI (Score:3, Informative)
Re:SCSI (Score:2, Informative)
I run WebGUI [plainblack.com] on this machine, which receives some 3 and a quarter million hits per month. Nothing to raise eyebrows at; but check it: on this machine the average load (as reported by uptime) is some 0.80. My personal (P3) machine, running a BBS, mail, BitTorrent, and web service, maintains a constant 1.3+.
I've gauged the importance of SCSI drives in the equation via a (sadly) messy, but soon to be SourceForg
Re:SCSI (Score:3, Insightful)
Re:SCSI (Score:3, Funny)
It's obviously been a long time since you used Windows.
Re:SCSI (Score:5, Informative)
Yeah, I think so. IIRC it's called tagged command queueing - the drive can have multiple requests pending, and instead of handling them first-come, first-served, they're fulfilled in order of estimated latency to that point.
I believe Western Digital's recent Raptor IDE drives have the same feature.
The benefit of this seems contingent upon having multiple requests pending, which AFAIK is hard on Linux as there's no non-blocking file IO. To me, this reads like a workaround for that.
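(Strictly speaking, POSIX AIO does exist -- aio_read() and friends, which glibc implements with helper threads, and 2.6 also has in-kernel AIO for some cases -- so a single process can keep several reads in flight. A rough sketch, with the file names being arbitrary examples:)

/* Issue two overlapping reads with POSIX AIO, then wait for both.
 * Build with: gcc -o aiodemo aiodemo.c -lrt */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf1[4096], buf2[4096];
    int fd1 = open("/etc/hosts", O_RDONLY);
    int fd2 = open("/etc/services", O_RDONLY);
    struct aiocb a1, a2;
    const struct aiocb *list[2] = { &a1, &a2 };

    if (fd1 < 0 || fd2 < 0) {
        perror("open");
        return 1;
    }

    memset(&a1, 0, sizeof a1);
    memset(&a2, 0, sizeof a2);
    a1.aio_fildes = fd1; a1.aio_buf = buf1; a1.aio_nbytes = sizeof buf1;
    a2.aio_fildes = fd2; a2.aio_buf = buf2; a2.aio_nbytes = sizeof buf2;

    aio_read(&a1);                      /* queue first read */
    aio_read(&a2);                      /* second read queued before the first completes */

    while (aio_error(&a1) == EINPROGRESS || aio_error(&a2) == EINPROGRESS)
        aio_suspend(list, 2, NULL);     /* sleep until something completes */

    printf("read %ld and %ld bytes\n",
           (long) aio_return(&a1), (long) aio_return(&a2));
    close(fd1);
    close(fd2);
    return 0;
}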
Re:SCSI (Score:3, Interesting)
This is different: the scheduler isn't trying to minimise head movement for a list of pending read requests (which is what the elevator algorithm does); it's gathering statistics about the IO behaviour of each process and trying to guess in advance, without being asked, what each process will ask for next.
If it guesses right, the data will already be in cache when the process does a read() and the request will succeed instantly.
Re:SCSI (Score:3, Interesting)
If you have a 30 story building, you can either put in dumb elevators which fill 3/4ths of the building to meet demand, or you can put in a much smaller number of elevators using techniques like express elevators and using software which keeps track of usage patterns and puts elevators on floors before somebody even hits the button...
Re:SCSI (Score:3, Interesting)
Instead of pushing an up or down button and waiting for an elevator, you had a number pad with an LCD display. You punched in the number of the floor you wanted to go to and the LCD would display the letter of the elevator you were assigned to. There were no floor controls inside the elevator. The system
1,000 percent? (Score:2, Insightful)
Linux Devices has an article on the 2.6 network features here http://linuxdevices.com/articles/AT7885999771.html [linuxdevices.com]
Re:1,000 percent? (Score:4, Insightful)
I suppose that since database data is generally grouped together and read in big chunks, there's less room for improvement.
Re:1,000 percent? (Score:5, Informative)
15% = 1.15x
100% = 2x
200% = 3x
300% = 4x
900% = 10x
1000% = 11x
a% improvement = (a+100)/100 times as fast
Re:1,000 percent? (Score:5, Insightful)
Just as 50% is half, but 50% improvement is three halves as good.
Re:1,000 percent? (Score:3, Funny)
100% = 1/2 the time.
200% = 1/2 of 1/2 the time, which is 1/4 the time.
300% = 1/8 the time.
1000% = 1/1024 the time.
Which is a 1023/1024 improvement, or only 0.999x, so disk access is in fact slightly slower!
Yes, I'm really bad at maths.
Rik
fragmented information (Score:2)
It sounds like what you're saying is that non-database data on a disk is fragmented, and that is why the head has to move all over the place.
Not too unreasonable. (Score:2)
Say a block is read at one end of the platter, then 10 subsequent reads happen in close proximity at the other end, followed by an 11th read back at the beginning. A predictive seeker could re-prioritize the 11th seek to be right after the first. That would cut
Re:1,000 percent? (Score:5, Informative)
Cool (Score:4, Informative)
"The anticipatory scheduling is so named because it anticipates processes doing several dependent reads. In theory, this should minimize the disk head movement. Without anticipation, the heads may have to seek back and forth under several loads, and there is a small delay before the head returns for a seek to see if the process requests another read. "
"The deadline scheduler has two additional scheduling queues that were not available to the 2.4 IO scheduler. The two new queues are a FIFO read queue and a FIFO write queue. This new multi-queue method allows for greater interactivity by giving the read requests a better deadline than write requests, thus ensuring that applications rarely will be delayed by read requests."
Nice, but this is making things more complex. I admit I'll just keep all the kernel settings wherever Mandrake sets them. Will other people play about and specialise their systems for the tasks they do?
Re:Cool (Score:2, Insightful)
It's early, but did read/write heads suddenly develop intelligence while I was napping?
A.
Re:Cool (Score:2)
Re:Cool (Score:2)
Perhaps that is why the default setting is the one indicated for desktop users.
And yes, if I were using a Linux box for specific server tasks then I would tweak the settings to get a bit more performance out of it.
Re:Cool (Score:2, Insightful)
I think this would work to minimize the impact of a slow access drive in a heavily multitasking system too.
Re:Cool (Score:5, Informative)
Sorry for biting on the troll but I felt like explaining it.
Re:Cool (Score:3, Funny)
"offtopic to my bias"
"troll to my bias"
etc ;)
as the only way you get modded accurately is if you're in the same camp as the moderator. I'm clearly not.
Re:Cool (Score:3, Insightful)
In general, kernel hackers don't write pretty GUIs or design highly usable interfaces, and HCI experts don't optimise low-level scheduling algorithms. These are orthogonal parts of Linux, and your belief that improving the kernel's low-level efficiency somehow makes
Why not combine those two methods? (Score:5, Interesting)
Re:Why not combine those two methods? (Score:4, Insightful)
what would you have expected kernel 2.8 to bring you?
</joke>
Basically, I think this is like the Windows system settings: you either privilegiate front-end services (the GUI) or back-end services (Apache, etc.), but you cannot do both, because some would be optimized for reactivity and the others to handle the workload... like a Ferrari and a truck... they don't work or excel in the same way.
Re:Why not combine those two methods? (Score:4, Informative)
Nice of you to point out the mistake like an ass, though. (Yes, just like I'm doing.)
-Rob, a Canadian in Finland
[ot] (Score:4, Funny)
Anyway, you found out that I indeed am not a native English speaker, hence the neologistications.
Re:Why not combine those two methods? (Score:5, Informative)
Amiga Disks (Score:5, Interesting)
When I had an Amiga (around '91ish), even though it was fully multitasking, I learnt to never open any app while another was loading. If you did, you could hear the disk head moving back and forth between two sectors on disk every half second or so, slowing both app launches to a crawl. Waiting until one had loaded and then launching the second was many times faster.
I've always wondered why there wasn't something in the OS to force this behaviour, i.e. making sure that app 2's disk access is queued until app 1 has finished. Isn't this one of the reasons Windows takes ages to boot (many processes all competing for the one disk resource)?
Re:Amiga Disks (Score:3, Informative)
Which version of Windows are you referring to? At the risk of sounding like a fanboy here, I must say that the OS load times for XP are quite fast compared to previous versions and to most vanilla Linux distributions I've tried in the past (Mandrake 9.x, Red Hat 8/9). Whether or not this is down to resolving two processes arguing over access to the disk, I don't know. Does
Re:Amiga Disks (Score:3, Interesting)
IMHO the default Windows config is kinda like a Red Hat install with everything, and then some.
Start in safe mode and watch all the crap that tries to load - a ton of it is not needed.
If you tighten your install by removing a lot of the extra services and spyware, and apply a few performance tweaks, you'll see a major speed increase overall.
I use XP, but I don't like it much anymore...It's s
Re:Windows boots slowly??? (Score:3, Interesting)
Re:Amiga Disks (Score:2)
Re:Amiga Disks and CD-ROMs (Score:2, Informative)
Of course, this raises the point that aligning the data on a game CD or DVD for a console is a science in itself. PC game development is easy in comparison! (plonk ever
Re:Amiga Disks (Score:4, Informative)
Being multi-user complicates things even further. Sure, if you are a single user on a desktop machine and you double-click on two programs in rapid succession, queuing them for loading one after the other may be the right thing to do. But what if those programs are actually being loaded by two different users? Can we completely lock out one user just because they started loading their program slightly later? Again, what if user A runs emacs, and a fraction of a second later, user B runs ls? Under your system, B effectively has to wait as long as it would take to load emacs, plus as long as it would take to load ls.
You can't even realistically separate the queues by user. In many situations, a single unix user may be running on behalf of many physical users (AKA human beings
I'm not saying that any of these problems are intractable (Linux is now doing a pretty fine job), just that they aren't even remotely as trivial as queuing loads one after another.
Oh BTW, thanks for bringing back happy Amiga memories. Them were the days!
Re:Amiga Disks (Score:3, Informative)
AFAIK, the reason Windows used to take ages to boot was that drivers and services were started sequentially and no optimization was ever done for the boot process. Windows XP, OTOH, had a goal of less than 30 seconds for a cold
Re:Amiga Disks (Score:3, Interesting)
Re:Amiga Disks (Score:3, Insightful)
I've always wondered why there wasn't something in the OS to force this behaviour, i.e. making sure that app 2's disk access is queued until app 1 has finished. Isn't this one of the reasons Windows takes ages to boot (many processes all competing for the one disk resource)?
You run into the problem that you don't know when app 1 has finished loading in order to start loading app 2. Why? Because a loading app looks no different from a running application. You could possibly get around this by having
Re:Amiga Disks (Score:4, Interesting)
I've found the opposite (Score:3, Interesting)
Re:I've found the opposite (Score:2)
I've also seen a problem where the Web browser (either Mozilla or Firefox) pegs the CPU. Bad JavaScript in a Web page somewhere? I never saw this on my old 2.4.23 kernel.
Re:I've found the opposite (Score:2)
Yes. I found that. Then I realised I had the hard disk in PIO only mode. A recompile with DMA support and it's smooth as silk.
Re:I've found the opposite (Score:5, Informative)
Schedule for Interactivity (Score:2, Insightful)
Surprise: the Mac has the same responsiveness problem now, thanks to its Unix (Mach) kernel, while the previous Mac OS 9 crashed regularly and couldn't multitask, but had a much snappier user experience. Apple has been ad
Stolen from SCO (Score:3, Funny)
But how? (Score:2, Insightful)
I was always under the impression that modern hard drive designs hide the physical disk bits and pieces from the PC. So how can software predict where the heads are?
Re:But how? (Score:2, Informative)
Re:But how? (Score:3, Informative)
Anyway, simple sorting on LBA address will typically reduce head seeks to a large extent, resulting in most of the potential benefit. It i
Disk Transfer QoS (Score:4, Interesting)
Is anyone in the Linux world considering this?
This is probably more applicable to the enterprise market, but surely any scheme of informing the scheduler about the expected disk transfer characteristics has to improve performance.
On the other hand, it might be just Sun trying to re-invent uses of buzz words to sell their products.
Re:Disk Transfer QoS (Score:4, Informative)
Two words: IRIX, XFS.
IRIX had some sort of "quality of service applied to disk accesses", as you wrote, thanks to XFS. The filesystem allows defining zones that have a "minimum throughput" configured. I can't say more about it because I only know of it secondhand O:-)
XFS has been available for Linux since 2.6.0 and 2.4.24, IIRC, and I think this feature is also available in the latest kernels. Though it's still experimental, IIRC.
Benchmark (Score:5, Informative)
The benchmark was made before 2.6.0, but I still think it shows the big difference from the 2.4 IO scheduler.
Quote:
Executive summary: the anticipatory scheduler is wiping the others off the map, and 2.4 is a disaster.
Retro is still cool? (Score:4, Insightful)
All the wonderful stuff like disk seek optimisation and interleaved memory (even the MMU came to the modern computer about 15 years after everyone else had it) were technologies that made systems stand out from each other.
Because of the speed of things these days, lots of that tech has been largely ignored, until now, when we're starting to hit hard performance barriers again. Now we have to invent the technology of the '70s all over again. It's nice to see all this stuff coming back, though.
The Renaissance (Score:4, Interesting)
One programmer likened the '70s-'80s to the Dark Ages. There were cabals and secret voodoo that people sat on and didn't share, and you ended up with ignorant masses who thought "this is as good as it gets". Hopefully this renaissance sticks, because it doesn't matter how good or cool your technology is if you bury it for 20 years without another person knowing.
CFQ (Score:4, Informative)
With anticipatory or deadline, I'm experiencing awful skips with artsd under KDE 3.2 every time there is a heavy disk access, but it's [almost] completely gone with cfq.
To use it, compile a -mm kernel and add the 'elevator=cfq' to the kernel boot parameters through Lilo or Grub.
See this lwn article [lwn.net] for more info.
Real benefits... (Score:4, Insightful)
Let me start by claiming that optimizing desktop performance is all about optimizing I/O patterns (contrary to what all Gentoo users think :P). My KDE startup is about three times as fast when everything is in the disk cache, so it is clear where the bottleneck is. (Just try logging in to KDE after boot, then log out and log in again.) A concentrated effort of
There has been a lot of discussion about this on the kde-optimize list (with Andrew Morton participating), so maybe we can hope that KDE 3.3 will offer some improvements.
As an aside: yes, we all hate the Windows registry, but I think we should admit that for boot-time optimization it is the right thing to do (having everything in one file that is laid out in one contiguous block on the disk).
Speed-ups (Score:3, Insightful)
Alternatively, have multiple read heads on a single arm; three would be a good number. The idea here would be that you could pre-seek either side of the disk before finishing a read with the currently active head.
Re:Speed-ups (Score:3, Informative)
You don't mean m
I think I've heard of this (Score:2, Funny)
Doesn't this involve a green marker, and tracing along the edge of the hard drive? Faster and less distortion?
Preemptive and Defragged? (Score:2)
Also, it sounds like, if Linux had a defrag utility, it could lay data out on the disk the way it would be accessed. If the OS watched how the data is being accessed, it could then re-arrange the data dynamically. Example: you access File A, which accesses File B and File C; the OS would recognize this and re-arrange the data in that
Re:Preemptive and Defragged? (Score:3, Interesting)
Second, Linux doesn't need a defrag utility. Linux filesystems (ext2 and ext3) allocate files properly, using clustering and inodes. The need to defrag comes from the bad design of FAT, which works great on an 8088 processor with tiny files on a 1MB drive, but is terribly inefficient on anything past a 386.
Of course, there does exist a 'defrag' utility
Databases and reliable commits (Score:4, Interesting)
This messing with the I/O queue may make things interesting for the journalling process which is kind of vital to integrity. File placement could become even more important for this (and also the placing of journal/log files).
The rest seems to just effectively be a modified elevator (wait a bit before moving).
Idea... (Score:2)
Re:Idea... (Score:3, Interesting)
Would it not be possible to write a very basic adaptive network that "learns" what the best values for these parameters are for each individual machine, ba
Kernel comparison on a SMP system (Score:4, Informative)
2.6.x faster in other ways, too (Score:4, Informative)
I don't know if it's due to SpeedStep support being in the kernel or what, but when I was running 2.4.x with the pre-emptible kernel patches, switching from wall power to battery power meant massive slowdowns, as though I had switched from a PIII-1GHz to a 100MHz Pentium classic. Simple commands like "ps" would take seconds to complete and screen redraws were visible. The whole system would feel like sludge. In spite of this fact, battery life was relatively poor. The combined effect (much slowed system, very short battery life) meant that it was difficult to get anything at all done on battery power.
Now with 2.6.x, when I switch to battery power, there is no perceptible slowdown whatsoever when compared to wall power, and battery life is much improved. Downside: suspending 2.6.x kills USB-uhci, so I've had to compile it as a module and hack up my suspend/resume scripts to reload it each time. But for the speed increase, it's well worth the trouble.
NPTL is a key component of the new speed (Score:3, Interesting)
NPTL brings an eight-fold improvement over its predecessor. Tests conducted by its authors have shown that Linux, with this new threading, can start and stop 100,000 threads simultaneously in about two seconds. This task took 15 minutes on the old threading model.
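Just to make that concrete, here's a toy version of that sort of test (not the authors' actual benchmark; the thread count is deliberately smaller, and it starts and stops one thread at a time):

/* Toy thread start/stop micro-benchmark -- the same flavour of workload
 * on a much smaller scale.
 * Build with: gcc -O2 -o threadbench threadbench.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define NTHREADS 10000          /* the real test used 100,000 */

static void *worker(void *arg)
{
    return arg;                 /* do nothing and exit immediately */
}

int main(void)
{
    pthread_t tid;
    struct timeval start, end;
    double secs;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < NTHREADS; i++) {
        if (pthread_create(&tid, NULL, worker, NULL) != 0) {
            fprintf(stderr, "pthread_create failed at %d\n", i);
            return 1;
        }
        pthread_join(tid, NULL);    /* start and stop one thread at a time */
    }
    gettimeofday(&end, NULL);

    secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d threads started and stopped in %.2f seconds\n", NTHREADS, secs);
    return 0;
}

Running the same binary under LinuxThreads and under NPTL is the quickest way to see the difference for yourself.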
research background for anticipatory scheduling (Score:4, Informative)
another I/O speed trick: mount with noatime (Score:3, Informative)
If you don't care about last access times on your files, then you should consider mounting your filesystems with the noatime mount flag as in this /etc/fstab line:
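  /dev/hda2   /home   ext3   defaults,noatime   1   2

(The device and mount point here are just placeholders; the defaults,noatime part is what matters.)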
Reading a file under noatime means that the kernel does not need to go back and update the last access time field of that file's inode. Sure, multiple reads over a span of a few seconds will only cause the in-core inode to be modified, but eventually that modified inode must be flushed out to disk. Why cause an extra write to the disk for a feature that you might not care about?
For example: think about those cron jobs / progs that scan the file tree (tmpwatch, updatedb, etc.). Unless you mount with the noatime option, your kernel must at least update the last access time fields of every directory's inode! Think about those /etc files that are frequently read (hosts, hosts.allow, DIR_COLORS, resolv.conf, etc.) or the dynamic shared libs (libc.so.6, ld-linux.so.2, libdl.so.2, etc.) that are frequently used by progs. Why waste write-ops updating their last access time fields?
Yes, the last access time field has some uses. However, the cost of updating those last access timestamps, IMHO, is seldom worth the extra disk ops.
There are other advantages to using the noatime mount option ... however, to wind up this posting I'll just say that I always mount my ext3 filesystems with the noatime mount flag. I recommend that you consider looking into this option if you don't use it already.
How can I switch between them? (Score:3, Interesting)
This should be a mount option, not a boot option.
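Worth noting: later 2.6 kernels (I forget exactly which release the knob landed in) let you switch the elevator per device at runtime through sysfs rather than only at boot, along the lines of:

  echo cfq > /sys/block/hda/queue/scheduler

where hda is whichever drive you care about. Still not a mount option, but closer.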
Re:Linux Speed (Or Lack Thereof) (Score:5, Funny)
And this would help my computer how?
Re:Anti-MS Patent (Score:3, Interesting)
The end-(Windows)-user benefits from it.
That's the price of freedom.
And any additions MS makes to the code must be made public.
So then everybody benefits.
Re:Anti-MS Patent (Score:4, Interesting)
So if MS makes any improvements in their own implementation of the concept, the code would not be made public; MS benefits and not everyone else.
To elaborate (and in some ways I believe this is what SCO are arguing): let's say I see an open source application that does something neat. It probably won't be patented, because the author expects anyone who modifies it to contribute the changes back. But let's say I don't, because I'm a greedy commercial corporation, and I effectively copy the IDEAS behind the application. My code may look quite similar to theirs, but I certainly have not infringed on the GPL (or have I - I'm no lawyer!).
Now suppose this neat application had an "open source patent", meaning anyone infringing on the patent would not be liable for millions, but would instead be forced to open up the source code of their particular implementation.
Oooh.... evil Microsoft must be thwarted! (Score:2, Insightful)
It's kind of sad that the free software advocates sometimes get so carried away by their pathological hatred for Microsoft and corporations that they don't see that they're about to become "the enemy" themselves.
Free is free. If you start to restrict the use and availability of your code by requiring the release of any modifications to the public, it's not free code anymore - no matter what RMS
Re:Anti-MS Patent (Score:4, Informative)
Re:Anti-MS Patent (Score:2)
Re:Anti-MS Patent (Score:5, Interesting)
Not to burst your bubble, but the NT scheduler already implements predictive disk I/O concepts.
Nice that Linux is finally catching up though...
Oh, come on... (Score:5, Interesting)
The NT scheduler has been O(1) like, eh, forever.
Our kernel produces far superior performance due to providing hooks for the COM layer
Yeah, whatever. There is no COM anywhere near the NT kernel, and the latest and greatest from Microsoft, the .NET framework, isn't even based on COM anymore
Nice troll...
Re:Oh, come on... (Score:3, Insightful)
This article is just another example of such a case. The anticipatory scheduler algorithm was not published until 2001. With Linux you get these sorts of benefits integrated in reasonable time; with Microsoft you have to wait between W2K an
Re:Our take on it from inside MSFT (Score:3, Funny)
I believe you, you must really work at Microsoft.
Re:Our take on it from inside MSFT (Score:3)
Zealotry is all fine and dandy, but delusional zealotry just lands people in jail.
You need help, buddy.
Re:I'm an end user (Score:2)
If you have an old computer, and are unwilling to upgrade its hardware, try "downgrading" your software to older versions.
I can browse the web fairly decently on my 10-year-old 486, but it's running Windows 3.11
Re:what's old is new again (Score:5, Informative)
The anticipatory scheduler tries to anticipate future requests (who would have guessed that?), and is relatively new [acm.org]
Re:_New_ Kernel? (Score:3, Interesting)
And if you look above to this [slashdot.org] post, you can all see a good number of decent explanations of what a 1000% increase actually means (11x).
Re:How can I set the boot parameters? (Score:3, Informative)
The anticipatory scheduler is the default for the vanilla 2.6 kernel.