Linux 3.0 Will Be Faster Than 2.6.39
sfcrazy writes "While we were thinking that the announcement of 3.x branch was nothing more than Linus' mood swing, it seems there is more to it. Linus wrote on the Linux Kernel Mailing List, '3.0 will still be noticeably faster than 2.6.39 due to the other changes made (ie the read-ahead), so yes, the regression itself is fixed.'"
Re:linux 3.0 (Score:4, Informative)
They're currently on 3.0 RC4. [kernel.org] So I imagine that what will and won't be in the release has pretty much solidified by this point.
prefetch() (Score:5, Informative)
According to the LWN article about removing prefetch [lwn.net], the Linux 3.0 kernel will have a bunch of prefetch() calls removed.
Apparently they were supposed to give the CPU a hint to prefetch the next item in linked lists, but the hardware does a superior job of it without the hints. Especially since, in the majority of cases, the next item was NULL, so the hint prefetched nothing useful.
A very small speedup, to be sure, but it's not like there are many low-hanging huge wins left.
Re:Faster? (Score:2, Informative)
Re:It should have compelling features (Score:1, Informative)
Wapner.
Re:Faster? (Score:5, Informative)
Yep, I was there (almost) - some of my first programs were written in IBM 1130 Assembler. :) The machine had 16K x 16-bit words of core and a 1 MB single-platter disk with a one-second mean access time. I managed to thrash the poor beast once by nesting too many macros. The key fact is that the control program (pretty close to what we now call a kernel) was on the outside of the disk, the macro-assembler was in the middle, and the user programs were on the inside (or vice versa - I don't recall now). With only 16K of memory, everything the machine did had to be overlaid - except for about 200 bytes or so (I don't recall the number) of code that stayed resident during a job, which basically just knew how to get the next piece off the disk. That could now be called a very primitive kernel.
I suppose this could be considered equivalent in some ways to a bootstrap loader, except it continued bootstrapping the various pieces in throughout the process of running a job. Every piece of code had to be loaded over the previous piece in order to run, and each time what we would now call the machine state had to be written to disk. So for each macro call, the machine had to swap bits of kernel, assembler, and user code in and out, moving from the inside, to the outside, to the middle, to the outside, to the inside, etc., rinse & repeat. With a one-second access time, the 15-minute maximum run time was exceeded before the assembler even finished assembling my 10 or 15 punched cards into machine code. It was a very compact program, but I never did get to run it in its full macro-bedecked glory. I had to turn the program into 100 or so cards of non-macrofied assembler.
I also (much later) had the fun of entering entire programs into an early microcomputer by flipping front panel switches, pushing the 'step' button, flipping panel switches, etc. - make one mistake, push 'reset' and start over. Seymour Cray, when he was still at Control Data Corporation (CDC), was famous for being able to enter the entire 6000-word control program into the early CDC machines from memory using the front panel switches.
So I would say that until we started getting into time-sharing and similar complexities, the idea of a kernel wasn't really relevant - there was little or nothing resident in the computer's memory. I think I could safely say that is primarily what a kernel does in a modern multitasking system - it provides the environment in which tasks can move safely and efficiently through the system. The operating system then includes all those non-kernel tasks, such as accounting, access control, logging, and the many utilities required to provide everything from I/O to temperature control.
Just to put a stamp on this, Wikipedia on Kernels [wikipedia.org]:
In computing, the kernel is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components).[1] Usually as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels execute all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system.[2] A range of possibilities exists between these two extremes.