Linux 2.6.17 Released

diegocgteleline.es writes "After almost three months, Linux 2.6.17 has been released. The changes include support for Sun Niagara CPUs, a new I/O mechanism called 'splice' which can greatly improve performance for some applications, a scheduler domain optimized for multicore machines, a driver for the widely used Broadcom 43xx wifi chip (Apple's AirPort Extreme and the like), iptables support for the H.323 protocol, CCID2 support for DCCP, a softmac layer for the wireless stack, block queue I/O tracing, and many other changes listed in the changelog."
  • linux (Score:2, Funny)

    by Anonymous Coward
    But does it run linux ???
    • Re:linux (Score:3, Interesting)

      by moro_666 ( 414422 )
      Will Ubuntu have it soon?

      They have included much of the stuff in their version, 2.6.15.23,
      but of course that doesn't have everything. The Broadcom driver that came
      with Ubuntu (same sources, maybe an earlier version) has some sort of issue
      with my BCM4318 :( , so it just doesn't work; people claim it has something
      to do with the soft interrupt stuff.

      ps. Broadcom, next time make the interrupts stiff.
      • Re:linux (Score:3, Informative)

        by ElleyKitten ( 715519 )
        The Broadcom driver that came with Ubuntu (same sources, maybe an earlier version) has some sort of issue with my BCM4318 :( , so it just doesn't work,
        Try this. [ubuntu.com]
  • Really helped (Score:4, Interesting)

    by drsmack1 ( 698392 ) * on Sunday June 18, 2006 @11:54PM (#15559927)
    I now have this installed on my dual-core AMD box and the difference is noticeable. X is noticeably faster, as is my video editing stuff. Good work, guys!
  • what (Score:2, Funny)

    by Anonymous Coward
    is this for computers?
  • by SynapseLapse ( 644398 ) on Monday June 19, 2006 @12:06AM (#15559953)
    I'm still pretty new to the Linux scene (so far I've done FreeBSD, Ubuntu and Fedora Core 4 installs), but I do have a question.
    Why are the network drivers part of the kernel? It seems like this would make it more difficult to adopt newer hardware types. Also, since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?
    • by shird ( 566377 ) on Monday June 19, 2006 @12:10AM (#15559963) Homepage Journal
      Modules... Only the modules (read: 'drivers') that are needed are loaded. It needs to be in the kernel because it accesses the hardware (the net card) at a fairly low level.
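
      For the curious, a minimal sketch of what a loadable module looks like at the source level (hypothetical names, not a real driver); the same source can either be built as a module and loaded on demand, or compiled straight into the kernel:

      #include <linux/init.h>
      #include <linux/kernel.h>
      #include <linux/module.h>

      MODULE_LICENSE("GPL");
      MODULE_DESCRIPTION("Skeleton showing the loadable-module boilerplate");

      /* Runs when the module is loaded (e.g. via modprobe); a real NIC
         driver would register itself with the networking core here. */
      static int __init skeleton_init(void)
      {
          printk(KERN_INFO "skeleton: loaded\n");
          return 0;
      }

      /* Runs on unload; a real driver would unregister itself here. */
      static void __exit skeleton_exit(void)
      {
          printk(KERN_INFO "skeleton: unloaded\n");
      }

      module_init(skeleton_init);
      module_exit(skeleton_exit);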
    • First, I'll get a nitpick out of the way: FreeBSD is not Linux.

      Second, usually you don't want to compile every driver into the kernel, so you wouldn't get that clutter. Best case scenario, you compile in only the specific driver you'll need. Worst case, you compile them as modules and load them at runtime.
      • Best case scenario, you compile in only the specific driver you'll need. Worst case, you compile them as modules and load them at runtime.

        I think you've got that mixed up. Unless you enjoy recompiling the kernel every time the hardware changes.
        • by Anonymous Coward
          Okay, so how frequently do you feel the need to swap out NICs, sound cards, video adapters, or anything else?

          In the grandparent's instance, his hardware may not change for months or years on end because, well, he doesn't want to shut his computers or servers down to experiment with random hardware... Because of this, it might make sense for him to compile the drivers directly into the kernel for a tiny boost in performance and memory utilization... That would make sense for embedded computers, too obviousl
        • I change the kernel more often than the hardware, to test the latest features, and compile all my drivers (network, filesystems, etc.) in, to make sure that when the new kernel boots I don't need to bother with depmod / modules_install or anything like that. I even have enough memory not to notice a little bit taken up by a driver or two I'm not using. I like whatever I need to just be there, so whatever I'm doing with hard drives etc. doesn't matter. No more "crap, I can't access this filesystem without module
    • by caseih ( 160668 ) on Monday June 19, 2006 @12:35AM (#15560020)
      Even under Windows, drivers load into the kernel and normally become part of the kernel proper. Things under Linux are similar, but differ from Windows in very important ways. First of all, the overriding philosophy behind the Linux kernel is the GPL license (well, a modified version of GPLv2) for the source code. This states that the source code for the kernel and parts of the kernel should always be available. Also, for philosophical reasons associated with the licensing issue, Linus has said that he does not care as much about a stable binary driver API (ABI) in the kernel. Since the source code to the kernel is always available, if you want decent drivers, they should be placed in the kernel source tree, since drivers really ought to be free and open. Unfortunately this means that a binary kernel driver built for one version of the kernel may not work on another. This is done partially to encourage open source drivers and partially to prevent the kernel developers from being tied to design decisions that later prove unwise. But it does pose a problem for folks who want to ship their own third-party drivers in a proprietary fashion. To overcome this, NVidia writes a small open source driver that implements its own stable ABI, which its proprietary, closed-source video driver talks to.

      Many have argued that Linus needs to stabilize the kernel driver ABI. On the other hand, not doing so, and instead encouraging drivers to be open source and in the kernel source tree, brings us a large amount of stability that Windows just cannot achieve. Most Windows stability problems are not caused by the kernel, which is as stable as Linux, but by third-party device drivers. Anyway, it is a trade-off, and a hotly contested one. Personally, everything I currently use has open source drivers that come with my kernel bundle (Fedora Core). They are loaded on demand, so they don't cause memory bloat. If I were to compile my own kernel, I could choose not to build many of the drivers, reducing the disk bloat too.

      One of the biggest things for me in this kernel release is the Broadcom wireless driver. Kudos to the team that clean-room reverse-engineered it.
      • by drsmithy ( 35869 ) <drsmithy@NoSPAm.gmail.com> on Monday June 19, 2006 @01:03AM (#15560093)
        Many have argued that Linus needs to stabilize the kernel driver ABI. On the other hand, not doing so, and instead encouraging drivers to be open source and in the kernel source tree, brings us a large amount of stability that Windows just cannot achieve.

        It's worth pointing out that pretty much every remotely mainstream OS *except* Linux manages to work (and work well) with a stable kernel ABI. Including ones considered at least as stable as Linux - if not more so - even by Linux zealots, like FreeBSD and Solaris.

        • by ookaze ( 227977 ) on Monday June 19, 2006 @06:06AM (#15560633) Homepage
          It's worth pointing out that pretty much every remotely mainstream OS *except* Linux manages to work (and work well) with a stable kernel ABI. Including ones considered at least as stable as Linux - if not more so - even by Linux zealots, like FreeBSD and Solaris.

          The FreeBSD example then just proves that a stable ABI won't bring more drivers to Linux, thus destroying the GP argument that Linus needs to stabilize the kernel driver ABI.
        • by FireFury03 ( 653718 ) <slashdot.nexusuk@org> on Monday June 19, 2006 @08:39AM (#15560889) Homepage
          It's worth pointing out that pretty much every remotely mainstream OS *except* Linux manages to work (and work well) with a stable kernel ABI.

          Also worth pointing out that much of the stability trouble in Windows is caused by shoddy drivers - FOSS drivers are traditionally more stable than closed drivers (not least because when bugs are found, people with a vested interest in fixing them will often do so rather than waiting for the manufacturer to get their finger out).

          Whilst a stable ABI may result in more drivers being made available, I fear it could lead to a lot of "Windows quality" drivers. And if closed drivers are officially legitimised, many companies will refuse to release open drivers since there is very little in it for them. At the moment, many of the open drivers are there because the vendor believes that releasing a binary driver is legally dubious at best - legitimise binary drivers and this motivation goes away.

          Anyone who's dealt with bugs in the nVidia drivers will know of the problems of closed development - I've reported bugs that have taken years for nVidia to fix which I would've been happy to try and fix myself if only the code was open.
    • 2 options:

      1:
      Compile everything you need for your machine to run into the kernel... no more, no less... then you're good to go. No clutter, no loading at runtime... nothing.

      2:
      You have no idea what you actually need past the boot (and root) FS, CPU, and hard drives. Compile everything else as a module (driver) to be loaded when you need it, and voila: no bloat in the kernel, but a few dozen MBs taken up on the HD.

      In the grand scheme of things, a few extra modules for network cards will cause you no tro
      • Most enterprise-class servers that have N network cards in them don't have N different _types_ of cards - they've got N cards of the same type, or maybe N-1 of the same and one of a different type (e.g. a GigE and N-1 100Mbps cards.) So you only need one or two different drivers.

        Laptops these days usually have two types of network interfaces - one wired and one wireless. Occasionally you'll have different types of wireless cards to plug in, e.g. an 802.11a vs. .11g or something.

    • Microkernel anyone? (Score:4, Informative)

      by argoff ( 142580 ) on Monday June 19, 2006 @12:46AM (#15560051)
      Why are the network drivers part of the kernel? It seems like this would make it more difficult to adopt newer hardware types. Also, since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?

      This is the essence of the microkernel debate. http://en.wikipedia.org/wiki/Microkernel/ [wikipedia.org] The truth is that the microkernel model probably is a better design, but at the time the Linux kernel was starting out, its implementation simply wasn't practical. It didn't help that the people who thought they knew how to build a better kernel decided to try and intellectually brow-beat Linus into doing it instead of implementing it themselves and putting it under the GPL. This led to a lot of bitterness and resentment between the two camps. The HURD http://en.wikipedia.org/wiki/Hurd [wikipedia.org] project is a GPL microkernel project, but it simply wasn't managed as well as Linus managed Linux.

      I think over time things eventually will move to a microkernel model, even though there are other ways to emulate some of their security and flexibility benefits - like Xen http://en.wikipedia.org/wiki/Xen [wikipedia.org]

      • by ArbitraryConstant ( 763964 ) on Monday June 19, 2006 @01:24AM (#15560152) Homepage
        Why are the network drivers part of the kernel? It seems like this would make it more difficult to adopt newer hardware types. Also, since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?
        This is the essence of the Microkernel debate.
        It has nothing whatsoever to do with the microkernel debate. The two issues are completely orthogonal to each other. The microkernel proponents claim the driver should be a process running outside the kernel, which has nothing to do with the number of drivers you have sitting around, or how difficult it is to adopt newer hardware types.

        Either way, you've got a ton of drivers sitting around that you'll never use. They don't clog up the kernel, since the kernel image rarely contains many drivers. Instead, most Linux distros use modules that get loaded as needed. On a microkernel, they would be driver binaries that would get run as needed. They clog things up to exactly the same extent; they sit around on the hard drive doing nothing.

        Either way, it's hard to add new drivers to old kernels. This is not a result of the fact that drivers are in the kernel, but of the fact that Linus refuses to use a stable driver API. This would preclude driver compatibility between versions just as effectively on a microkernel as it does now.

        As I said, the two issues are unrelated.
        • by argoff ( 142580 )

          Either way, you've got a ton of drivers sitting around that you'll never use. They don't clog up the kernel, since the kernel image rarely contains many drivers. Instead, most Linux distros use modules that get loaded as needed. On a microkernel, they would be driver binaries that would get run as needed. They clog things up to exactly the same extent; they sit around on the hard drive doing nothing.

          The point is not a resource usage point, but a flexibility point. If you want to add a new driver or even

      • "The truth is that the Microkernel model probably is a better design"

        Yeah right... the truth is probably something like an opinion.
    • So you've learned what RTFM means by now? :-) Ok, it's been a while since I've read up on kernel structure either... but you _should_ do so. Linux is rather famously [coyotos.org] not a microkernel architecture [oreilly.com] that lets you partition off little pieces into user space - it's a big honkin' kernel plus loadable modules that let you add even more things. There are hardware-dependent and hardware-independent parts of the kernel. Device drivers are inherently hardware-dependent, and sharing address space with the kernel ma
    • Hold on there. First off, nearly all drivers can be loaded dynamically or compiled statically into the kernel. Offhand, I can not think of a general purpose set-up where they are compiled in EXCEPT for the initial loading (as well as boot-from-CD setups). That means that most running kernels load their drivers dynamically. Now, which ones are loaded? Those that are needed. Sometimes a few extra, but rarely.

      Now, as to the network, it is divided into a number of sections (hardware drivers vs. logical h
    • since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?

      No. If you compile your kernel the way it should be done, i.e. editing the config before compiling and removing un-needed driver support and other things you do not need, you won't have to worry about compiling or loading modules you will not need.

      On the other hand, there are generally two ways to go about this. When compiling modules and vmlinuz, you can let the kernel decide what to l
  • Go Linux! (Score:5, Insightful)

    by Umbral Blot ( 737704 ) on Monday June 19, 2006 @12:07AM (#15559954) Homepage
    It's good to know that even in this day and age of faster and faster computers there are still people who care about speed and efficiency instead of simply waiting for hardware to solve their problems for them. I do have one tiny complaint though, and it is that some of the performance gains are only possible by using new system calls. This is bad for three reasons:
    1- More work for developers, some of whom may never learn about these faster calls.
    2- Old applications can't benefit
    3- Applications that wish to be backwards compatible can't benefit
    Obviously though it is necessary to write new functions on occasion; for example, when the new function would be worse than the old function under some circumstances. It may be that all the new functionality is of this type, but I don't have enough information to know for sure.
    • Re:Go Linux! (Score:5, Informative)

      by freralqqvba ( 854326 ) on Monday June 19, 2006 @12:12AM (#15559970) Homepage
      sendfile(2) is now implemented as a call to splice(), so programs that use the old syscall will benefit as well, and without modification.
      • Why not just call it sendfile then? If it can take more parameters, well that is what overloading is for.
        • Re:Go Linux! (Score:4, Informative)

          by ip_fired ( 730445 ) on Monday June 19, 2006 @12:42AM (#15560037) Homepage
          The kernel is written in C, and so are those system calls. I don't believe you can overload a C function.
          • Re:Go Linux! (Score:3, Interesting)

            by Umbral Blot ( 737704 )
            No you can't, although that fact had slipped my mind. How sad for C. The real question here then is: why isn't there a better language than C for creating OSs in? A real macro system and overloading would probably be nice for kernel dev.s everywhere.
            • Re:Go Linux! (Score:3, Interesting)

              by ip_fired ( 730445 )
              Because C is fast, and there is so much already written in C, that it would be a pain to move over. There are a lot of hacks in the linux kernel (they actually use GOTO statements! *gasp*). I can tell you it was a real eye-opener for me when I started looking at things there. I'm sure the reason nobody has moved over to a better language though is because of the massive amounts of work that would have to be redone.
              • That and (Score:5, Informative)

                by Sycraft-fu ( 314770 ) on Monday June 19, 2006 @03:29AM (#15560407)
                For kernel operations, you want everything to be pretty efficient. You want it as fast as possible and you don't want a lot of extra code hanging around. Unfortunately, the higher level a language you use, the more inefficiency there is. For most programs it doesn't matter. They are either not the sort of thing that needs speed (like a word processor) or something where you can optimize the small part of the code that takes most of the time (like a game). However the kernel is a little different. Essentially everything in there is time critical.

                C is the best compromise. While assembly might give you the theoretical best code, it would be a giant mess to attempt and totally unmaintainable. It might actually end up slower and larger for it. C is pretty good because it's easy enough to generate decent code in, but it isn't much higher up the abstraction chain, so it compiles quite efficiently.

                You have to remember that object orientation and such are all human creations. Processors don't think in objects; for that matter they don't really even think in functions. They think in memory locations, and jumps to those locations. Doing OO code means a whole messy layer the compiler has to go through to translate that into something the processor actually understands.
                • Re:That and (Score:3, Insightful)

                  by k98sven ( 324383 )
                  Unfortunately, the higher level a language you use, the more inefficiency there is.

                  That's complete nonsense. What do you base that on? A higher level language is only as inefficient as the compiler and/or libraries used. Which is just as true for a low-level language.

                  C is the best compromise. While assembly might give you the theoretical best code

                  As someone who's actually spent some years coding assembler, I'll tell you this: hand-coded assembler is rarely ever better. And with the developments in process
              • by Dan Ost ( 415913 ) on Monday June 19, 2006 @09:52AM (#15561167)
                I agree that using GOTO is a bad idea when another control structure is adequate, but, at least in C, there are times when using GOTO is the most natural and, unequivocally, the best choice.

                Off the top of my head, I can think of two situations where using a GOTO is the best solution:

                1. Breaking out of nested loops. In C, the break command can only break out of a single loop level. If you need to break out of 2 or more loops, you can play an ugly game of setting and checking state flags at each level of looping, or you can simply create a label at the exit point and use GOTO to get there. (Sometimes you can wrap your loops in a function call, but that's often the ugliest solution.)

                2. Shared cleanup code. In a function with multiple exit points, instead of doing cleanup at each exit point, it is often clearer to set your return value and then GOTO a label that handles all cleanup before returning.

                Be cautious when using GOTO, but don't be afraid of it. Learn to recognize when GOTO is appropriate and when it should be avoided.
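
                A quick sketch of both patterns in plain C (made-up function names, nothing from the kernel tree):

                #include <stdio.h>
                #include <stdlib.h>

                /* 1. Breaking out of nested loops with a single goto. */
                static int find_value(int grid[10][10], int target)
                {
                    int i, j;

                    for (i = 0; i < 10; i++)
                        for (j = 0; j < 10; j++)
                            if (grid[i][j] == target)
                                goto found;   /* leaves both loops at once */
                    return -1;
                found:
                    return i * 10 + j;
                }

                /* 2. Shared cleanup code for a function with several exit points. */
                static int process_file(const char *path)
                {
                    int ret = -1;
                    char *buf = NULL;
                    FILE *f = fopen(path, "r");

                    if (!f)
                        goto out;
                    buf = malloc(4096);
                    if (!buf)
                        goto out_close;
                    if (fread(buf, 1, 4096, f) == 0)
                        goto out_free;

                    ret = 0;            /* success */
                out_free:
                    free(buf);
                out_close:
                    fclose(f);
                out:
                    return ret;
                }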
            • There are plenty, and I suppose you could write your part of the kernel in whatever language you want as long as you weren't worried about it being part of the official distribution. But even if they suddenly started allowing other languages in the kernel, and AFAIK they don't, you'd still have to write your interfaces in straight C. It's the only language that basically every other language has bindings for.

            • by jrockway ( 229604 ) * <jon-nospam@jrock.us> on Monday June 19, 2006 @04:46AM (#15560526) Homepage Journal
              > A real macro system and overloading would probably be nice for kernel dev.s everywhere.

              Like LISP? That's what they used to use, but C was chosen for UNIX, and UNIX caught on big time, so C is the language now. I think it's about time to write an OS (kernel + tools) in LISP, so we can return to the good-old-days of Lisp machines.
            • Re:Go Linux! (Score:3, Informative)

              by SpinyNorman ( 33776 )
              Overloading is just syntactic sugar - it doesn't give you any new functionality.

              There's no functional difference between using an overloaded name f(a), f(x, y), f(p, q, r) or three separate ones f_a(a), f_xy(x, y), f_pqr(p, q, r).

              If you want default arguments C has them, and if you want polymorphism then C has it too (function pointers).
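
              To make that concrete, a small sketch (made-up names) showing separate names plus function-pointer polymorphism in plain C:

              #include <stdio.h>

              /* Separate names instead of an overloaded f(). */
              static void f_a(int a)               { printf("a=%d\n", a); }
              static void f_xy(int x, int y)       { printf("x=%d y=%d\n", x, y); }

              /* Run-time polymorphism via a function pointer. */
              struct handler {
                  void (*handle)(int);
              };

              static void handle_loud(int a)       { printf("A=%d!\n", a); }

              int main(void)
              {
                  struct handler h = { .handle = f_a };

                  f_a(1);
                  f_xy(2, 3);

                  h.handle(4);            /* dispatches to f_a */
                  h.handle = handle_loud;
                  h.handle(5);            /* dispatches to handle_loud */
                  return 0;
              }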
          • Re:Go Linux! (Score:4, Informative)

            by waveclaw ( 43274 ) on Monday June 19, 2006 @04:32AM (#15560500) Homepage Journal
            The kernel is written in C, and so are those system calls. I don't believe you can overload a C function.


            There is no overloading going on here. Overloading is to create a new function with the same name, but taking different parameters.

            Ahem. The original function, sendfile(2), was rewritten to call splice() instead of doing something else.

            Everybody that wrote code that used the old function now has to deal with splice() running instead of the old function's logic.

            Just to hammer it home:
            Old - app -> sendfile(2) -> some logic -> return to app
            New - app -> sendfile(2) -> splice() -> splice's logic -> return to sendfile(2) -> return to app

            With the Linux kernel, as this exemplifies, you can improve the original code and get everyone (well, everyone too lazy to revert the changes) to use it. In this case you have a fixed API (sendfile(2), which is well known and published), so you don't just want to tell everybody to recompile with calls to splice().

            See the difference? Feel the difference.

            The kernel is GPL and thus the actual source code used to compile the binary kernel you use is available to you. With a closed source kernel you might be able to purchase an SDK with linkable binaries and some (probably undocumented) header files. Programmers in this situation need things like function overloading and class inheritance just to do anything. One way of looking at the history of languages like C++ is as a technical solution to the ethical problem of closed source programming. Those languages focus on extending from the outside. With OSS you can usually replace, fix and improve on the inside. BSD and GNU differ on the point of GNU wanting everyone to share the source to those fixes if they share the resulting binaries. But I digress.

            And I can't wait to see if this breaks something.
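
            For reference, here is roughly what a sendfile(2) caller looks like (a sketch, assuming an already-connected socket); the point is that this code does not change at all when the kernel starts routing sendfile() through splice() internally:

            #include <sys/sendfile.h>
            #include <sys/stat.h>
            #include <fcntl.h>
            #include <unistd.h>

            /* Push an entire file out over an already-connected socket. */
            static int send_whole_file(int sock_fd, const char *path)
            {
                struct stat st;
                off_t offset = 0;
                int fd = open(path, O_RDONLY);

                if (fd < 0)
                    return -1;
                if (fstat(fd, &st) < 0) {
                    close(fd);
                    return -1;
                }

                while (offset < st.st_size) {
                    /* The kernel moves the data itself; the file contents
                       never pass through this process's address space. */
                    ssize_t n = sendfile(sock_fd, fd, &offset, st.st_size - offset);
                    if (n <= 0)
                        break;
                }

                close(fd);
                return offset == st.st_size ? 0 : -1;
            }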
      • So, will we see even more ugly hacks^W^Wapplications that won't build anywhere else than on Linux?
        • In this case, the new system call was to avoid ugly hacks like FreeBSD's Copy On Write system (which is transparent, but even harder to use because of that and frequently unhelpful in adding performance).

          Linus's take on the whole thing is that if you want portability, you should just use the read(), write(), etc. system calls, since they perform pretty well anyway. If you absolutely must do something platform-specific for every ounce of performance, you should have a clean API to do it with.
    • There are two ways the new functionality can be used:
      1. by explicitly invoking the new calls directly in your program;
      2. indirectly, when the implementation of old calls changes to make use of the new functionality internally.
      Thus, even old programs are likely to benefit to some extent because the efficiency of the system generally will improve.
    • Yeah, but some are beginning to complain about stability and lack of code maintenance for parts of the kernel, due to the attention being given to new features but not old - bug fixes are less exciting and less likely to be addressed willingly.
    • by EmbeddedJanitor ( 597831 ) on Monday June 19, 2006 @12:36AM (#15560026)
      While the "who cares about software efficiency, the hw is getting faster" attitude might be OK for desktop PCs, it does not apply to handheld/mobile devices (which make up a huge, and ever growing, % of all Linux devices). Being able to use a slower CPU (or use a fast one very efficiently) makes for reduced power consumption == smaller devices == longer battery life. Nobody wants a cell phone with a 2 pound battery that only runs for 1 day.
    • Re:Go Linux! (Score:3, Insightful)

      by TCM ( 130219 )

      It's good to know that even in this day and age of faster and faster computers there are still people who care about speed and efficiency instead of simply waiting for hardware to solve their problems for them.

      Another way of saying this: It sucks to know that even in this day and age of faster and faster computers there are still people who cut corners and use specific hacks to gain speed instead of simply building clean and well-designed systems and let the hardware do the work.

      Just saying..

      • I really hadn't thought about it that way. I guess that without thinking I buy into the hacker ethic where speed and efficiency are valued above all else. However objectively you are probably right, and instead of celebrating speed I should be celebrating "today Linux reduced its code base by X-thousand lines and reduced its API by Y-hundred lines". Unfortunately no one ever seems to brag like that.
      • It's a rather clean and well-designed interface. It was proposed back around the Linux 2.0 era (what, 5 years ago) but Linux wasn't ready for it yet. Over the years the idea has settled down. This isn't some off-the-wall quick hack that was dreamed up last week.
      • Re:Go Linux! (Score:3, Insightful)

        by pclminion ( 145572 )

        Another way of saying this: It sucks to know that even in this day and age of faster and faster computers there are still people who cut corners and use specific hacks to gain speed instead of simply building clean and well-designed systems and let the hardware do the work.

        Why do you assume that all optimizations are hacks? Lifting an invariant calculation out of a loop can potentially make things MUCH faster, yet is hardly a "hack." Or how about strength-reducing "2 * x" into "x + x," is that a hack? S
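
        For example, the kind of transformation being talked about looks like this (a made-up snippet, nothing to do with the kernel source):

        /* Naive: y * scale is recomputed every iteration, and the
           destination index is a multiply (2 * i) each time. */
        void blend(int *dst, const int *src, int n, int y, int scale)
        {
            for (int i = 0; i < n; i++)
                dst[2 * i] = src[i] + y * scale;
        }

        /* Optimized: the invariant is hoisted out of the loop, and the
           multiply is strength-reduced to a running addition. */
        void blend_opt(int *dst, const int *src, int n, int y, int scale)
        {
            const int offset = y * scale;   /* hoisted invariant */
            int j = 0;

            for (int i = 0; i < n; i++) {
                dst[j] = src[i] + offset;
                j += 2;                     /* replaces 2 * i */
            }
        }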

    • Re:Go Linux! (Score:5, Informative)

      by pavon ( 30274 ) on Monday June 19, 2006 @01:34AM (#15560180)
      Obviously though it is necessary to write new functions on occasion; for example, when the new function would be worse than the old function under some circumstances.

      That is exactly why it was done. More information about it can be found at KernelTrap: here [kerneltrap.org], and here [kerneltrap.org]. It was also previously on Slashdot [slashdot.org], although you would be best to skip that - it has more misinformation than the other kind.

      In short, all the known ways of implementing zero-copy within the existing APIs cause the most common usage cases of those APIs to be slower than they are now. Therefore, it made more sense to export this new API for the applications where speed is critical.

      In the first KernelTrap article, Linus also explains why splice is different from sendfile, contrary to the posts here claiming they are essentially the same.

    • Re:Go Linux! (Score:3, Informative)

      by iabervon ( 1971 )
      In this case, the new syscall is because a specific situation can be optimized compared to using the existing functions, but the more efficient function only works at all for certain special (but important) cases. In this case, the optimization is that copying data from outside of the program to outside of the program is more efficient if the data doesn't have to go through the program; obviously, this can't be used for the common case where the program is trying to use the data. The case that it helps a lo
  • by TrueKonrads ( 580974 ) on Monday June 19, 2006 @12:17AM (#15559983)
    Changelog [mirrordot.org]
  • by doti ( 966971 ) on Monday June 19, 2006 @12:28AM (#15560005) Homepage
    Some stuff I found interesting on the human-friendly changelog [kernelnewbies.org].

    Block queue IO tracing support (blktrace). This allows users to see any traffic happening on a block device queue. In other words, you can get very detailed statistics of what your disks are doing. User space support tools are available at: git://brick.kernel.dk/data/git/blktrace.git

    New /proc file /proc/self/mountstats, where mounted file systems can export information (configuration options, performance counters, and so on)

    Introduce the splice(), tee() and vmsplice() system calls, a new I/O method.
    The idea behind splice is the availability of an in-kernel buffer that the user has control over, where splice() moves data to/from the buffer from/to an arbitrary file descriptor, while tee() copies the data in one buffer to another, i.e. it "duplicates" it. The in-kernel buffer, however, is implemented as a set of reference-counted pointers which the kernel copies around without actually copying the data. So while tee() "duplicates" the in-kernel buffer, in practice it doesn't copy the data but increments the reference pointers, avoiding extra copies of the data. In the same way, splice() can move data from one end to another, but instead of bringing the data from the source into the process's memory and sending it back to the destination, it just moves it, avoiding the extra copy. This new scheme can be used anywhere a process needs to send something from one end to another but doesn't need to touch or even look at the data, just forward it. Avoiding extra copies of data means you don't waste time copying data around (a huge performance improvement). For example, you could forward data that comes from an MPEG-4 hardware encoder, tee() it to duplicate the stream, write one of the streams to disk, and send the other one to a socket for a real-time network broadcast - all without actually physically copying it around in memory.
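
    A rough sketch of how an application might use the new calls, assuming a libc that already exposes splice() (on older libcs you would need your own syscall(2) wrappers); it forwards data from one descriptor to another through a pipe without the data ever entering the process's address space:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Move up to 'len' bytes from in_fd to out_fd: in_fd -> pipe -> out_fd.
       At least one side of each splice() must be a pipe, hence the
       intermediate pipe acting as the in-kernel buffer described above. */
    static ssize_t forward(int in_fd, int out_fd, size_t len)
    {
        int p[2];
        ssize_t total = 0;

        if (pipe(p) < 0)
            return -1;

        while (len > 0) {
            /* Pull pages from in_fd into the kernel pipe buffer... */
            ssize_t in = splice(in_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
            if (in <= 0)
                break;

            /* ...then push those same pages on to out_fd. */
            while (in > 0) {
                ssize_t out = splice(p[0], NULL, out_fd, NULL, in, SPLICE_F_MOVE);
                if (out <= 0)
                    goto done;
                in -= out;
                len -= out;
                total += out;
            }
        }
    done:
        close(p[0]);
        close(p[1]);
        return total;
    }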
  • Where is 2.7? (Score:5, Insightful)

    by Anonymous Coward on Monday June 19, 2006 @12:36AM (#15560025)
    A hell of a lot of this stuff seems to me to be the sort of code that should be going into the 2.7 stream, not 2.6. The earliest days of Linux had revisions X.Y.Z. If Y was even, it was a "stable" branch, and could generally be considered safe for production work. If Y was odd, it was a "development" branch, and could break things badly.

    This was a major boon for Linux: if you needed the bleeding edge, you could get it, whilst acknowledging the risks in doing so. If you needed something stable, again, you could get it. Now? It seems that the supposedly stable kernel is right out there on the bleeding edge ...
    • It was decided to hold off on doing a development branch. The thought was that developers would help stabilize the 2.6 work. In some areas, this approach has helped. In others, it has gotten worse.
      • IIRC, the idea was to do away with the idea of a stable kernel, do all the development in one place and let the distro maintainers deal with stability issues. You're not really supposed to be building your own kernel anymore.
    • Re:Where is 2.7? (Score:5, Informative)

      by x2A ( 858210 ) on Monday June 19, 2006 @02:21AM (#15560278)
      The stable/development branches might be a nice idea in theory, but in practice it doesn't work. Distros would ship, for example, a "stable" 2.4.xx kernel, except it wouldn't actually be that. They would spot nice features in the 2.5 kernel that they wanted to offer their users, and so back-port them... and any other nice patches floating around the net while they were at it. The result was that the kernels that shipped with distros were so heavily modified that stability (from one machine to another) went right out of the window. You couldn't go to kernel.org and download an updated kernel, as without all the patches it wouldn't work. So you had to stick to the distro's kernels.

      So instead, the 2.6 goal is to have development/stable parts of the cycle, rather than separate branches. Roughly: patches that could break things get submitted at the beginning of the cycle, and -pre1/-pre2 tarballs are released. If you want bleeding edge, you go here. Release candidates are released, where developers get a chance to fix bugs etc. in the code. Then, any code that's still [known to be] buggy gets dropped for the final release (e.g. 2.6.17). The developer can keep working on it and try to add it again during subsequent cycles. When it works, it can be included in a final release.

      During this cycle, security and other urgent bug fixes take place in the ultra-stable branch, with versions such as 2.6.16.1, 2.6.16.2.

      (This is the rough idea I believe; there could be some slight inaccuracies in how it actually takes place, as I haven't followed it 100%, but this should be close enough to get the right idea.)

    • Re:Where is 2.7? (Score:5, Informative)

      by iabervon ( 1971 ) on Monday June 19, 2006 @02:25AM (#15560288) Homepage Journal
      That was the theory. But in practice, if Y was even, the kernel was obsolete, while if Y was odd, the kernel was broken. Except, of course, 2.even.0, which was actually stable, but broke compatibility with the previous kernel that worked. And occasionally, 2.even was kept up-to-date because nobody could use 2.odd for development, because it didn't work at all. You could tell that the old model didn't actually work, because no distribution shipped any kernel that used that model; they all shipped 2.even with an arbitrary set of patches (generally hundreds) from 2.odd and elsewhere. With the new model, distros are shipping kernels with only a few patches, and those patches are getting merged upstream.

      The stable kernels aren't remotely on the bleeding edge; they contain only features which have been tested over the past three months, after being filtered out of the bleeding-edge development as being things that have already stabilized and stand a good chance of being proven in three months. It's effectively very similar, except the development series isn't left known-broken and the stabilization process happens on a quick schedule, with stuff that isn't ready pushed off to the next cycle rather than delaying the current cycle. Also, the version numbers change by less (development gets -mm, -rc, or -git; stable series change the third digit by one instead of the second by two; and bugfix releases change the fourth digit instead of the third).
    • Re:Where is 2.7? (Score:5, Interesting)

      by 10Ghz ( 453478 ) on Monday June 19, 2006 @03:00AM (#15560356)
      This was a major boon for Linux


      Or bane. The "old way" meant that the vanilla kernel (the kernel offered by kernel.org) was stable. But new features took a LONG time to appear in the vanilla kernel, while users and distros still wanted those advanced features that were not part of the kernel (yet). What happened was that distros offered their own vendor kernels that were VERY different from the vanilla kernel. Distros then spent their time and energy fixing their own vendor kernels, instead of the vanilla kernel.

      The new system changes things so that new features are added to the vanilla kernel, which means that the difference between vanilla and vendor kernels is not that big. The distributors can focus on stabilizing the kernel, instead of adding new features to it. And porting those fixes back to vanilla is a lot easier than porting changes was in the old system. This means that if you want a REALLY stable kernel, you should use the vendor kernel.

      In short: the new system means that things progress a lot faster for everyone, with new features appearing in the kernel. And we can still have the stability we want if we use the tested and patched vendor kernels.
  • Woohoo, Broadcom 4300 drivers! I hope they work. ...I wish this had been brought to my attention before 1 A.M.
    • Woohoo, Broadcom 4300 drivers! I hope they work. ...I wish this had been brought to my attention before 1 A.M.

      I'm somewhat shocked that nobody else has pointed out the new Broadcom 43xx/Airport Extreme support. That's the one thing that grabbed my attention in the whole paragraph. Not having support for Apple's built-in wireless hardware has been a showstopper for a lot of people to even consider trying out Linux on a Mac, especially the portables. This driver will open up several million possible new computers for Linux to be installed on, since at this point the wireless hardware was about the last incompatible piece of hardware on the Mac side. This is a very big deal for anyone with Mac hardware or anyone planning to buy a Mac, and for all the geeks who are already running Linux on their Mac.

      Very cool.

    • Haven't tried the release of 2.6.17 yet, but rcX versions required extracting the firmware for your Broadcom card from a binary such as bcmwl5.sys (Windows driver). The tool bcm43xx-fwcutter [berlios.de] does this.

      I'm not an Ubuntu guy, but this reference [ubuntuforums.org] might be useful to anybody trying to make the new Broadcom Wifi driver work in Linux. Very easy steps, and most non-Ubuntu users should find it easy to adapt for their specific distros.
  • Sounds good (Score:5, Funny)

    by goodenoughnickname ( 874664 ) on Monday June 19, 2006 @01:08AM (#15560109)
    Sounds good -- how much does it cost?

    Sincerely,
    The New Guy
  • by Animats ( 122034 ) on Monday June 19, 2006 @01:17AM (#15560139) Homepage

    The "splice" system call seems to be an answer to one of Microsoft's bad ideas - serving web pages from the kernel. At one point, Microsoft was claiming that an "enterprise operating system" had to be able to do that. So now Linux has a comparable "zero copy" facility.

    "Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your system is mostly doing very dumb I/O bound stuff. In environments where web pages have to be ground through some engine like PHP before they go out, it won't help much.

    The usual effect of adding "zero copy" to something is that the performance goes up a little, the complexity goes up a lot, and the number of crashes increases.

    • not like that (Score:5, Informative)

      by r00t ( 33219 ) on Monday June 19, 2006 @01:23AM (#15560151) Journal
      This is really just a way for app code to manipulate data without needing to have it copied or memory-mapped.

      Linus refused the FreeBSD-style zero-copy because it is often a lose on SMP and with modern hardware. Page table and TLB updates have huge costs on modern hardware.

      If you do like the Microsoft way, use Red Hat's kernel. The in-kernel server works very well.

      • Re:not like that (Score:3, Informative)

        by master_p ( 608214 )
        "This is really just a way for app code to manipulate data without needing to have it copied or memory-mapped."

        I think you are wrong. Splice'd data are not processed by userland at all: they are piped from one file to the other at kernel level by page copying.
        • Re:not like that (Score:3, Informative)

          by cnettel ( 836611 )
          Your parent is right. The user mode code can control what happens to data, without ever mapping it to its own memory. You are right in that it's not processed, but that's not what the original post said.
    • "Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your system is mostly doing very dumb I/O bound stuff. In environments where web pages have to be ground through some engine like PHP before they go out, it won't help much.

      On the contrary, there are many cases in a dynamic serving system where you can determine that, after some point, the rest of the operation merely involves copying data from a file or buffer out to the network. Or, similarly, that a large

    • The "splice" system call seems to be an answer to one of Microsoft's bad ideas - serving web pages from the kernel

      What is this nonsense? The khttpd in-kernel web server was implemented on Linux first, then copied by MS.
      IIRC it isn't even in the 2.6 kernels anymore.

      So now Linux has a comparable "zero copy" facility

      Linux already had a zero-copy facility; splice is just a new, improved one.
      What are you talking about?

      "Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your
  • After reading the article, I just downloaded 2.6.17 from kernel.org (an AMD64 desktop and a laptop with a Broadcom BCM4301 wifi card). I see neither a Broadcom driver in the kernel "Wireless LAN drivers" section, nor any scheduler other than:
    • Anticipatory
    • Deadline
    • Default I/O scheduler: Anticipatory, Deadline, CFQ, or No-Op

    Am I missing something here? Are the mentioned changes part of a release candidate (unstable is at RC-2), or am I missing something?

    • by tomstdenis ( 446163 ) <tomstdenis.gmail@com> on Monday June 19, 2006 @02:16AM (#15560266) Homepage
      You listed I/O schedulers. I think the multi-core bit refers to the PROCESS scheduler - two different things. Linux already has specific support for Intel's HTT bullshit and understands NUMA. Understanding multi-core is a good move up.

      If you have a 2P dual-core setup, the best performance for two independent tasks comes from spreading them across both chips, especially in the AMD camp - that way each task gets a full memory bus to itself. The trick is to notice when two tasks share memory with each other and schedule them on one chip, especially on the Intel side of things with their massive shared L2 cache.
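
      If you want to see the placement effect for yourself, you can pin a task to a chosen core with sched_setaffinity(2) and compare shared-chip vs. separate-chip throughput; a rough sketch (the new multicore domains are meant to make this kind of placement automatic):

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>

      /* Restrict the calling process to a single CPU. */
      static int pin_to_cpu(int cpu)
      {
          cpu_set_t mask;

          CPU_ZERO(&mask);
          CPU_SET(cpu, &mask);

          if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
              perror("sched_setaffinity");
              return -1;
          }
          return 0;
      }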

      Tom
  • OK, so I use Linux on a PC desktop but I don't actually run it on my Mac. That said, I'm glad support for AirPort Extreme has finally come. It has long been impractical to run Linux on Mac laptops without an extra USB wireless dongle, due to the lack of a driver for that Broadcom chipset. This update is a huge deal for anyone interested in running Linux on a Mac made in the last few years.
  • DCCP (Score:3, Interesting)

    by caluml ( 551744 ) <slashdot&spamgoeshere,calum,org> on Monday June 19, 2006 @06:53AM (#15560684) Homepage
    DCCP - a kind of UDPv2
    Any applications out there using it yet?
  • S-ATA hotplug (Score:3, Interesting)

    by Dolda2000 ( 759023 ) <fredrik&dolda2000,com> on Monday June 19, 2006 @07:57AM (#15560788) Homepage
    Looking through the ChangeLog, I still see no S-ATA hotplug. I've been waiting more or less since the day S-ATA support was introduced in Linux to be able to add new drives without rebooting, and I just cannot understand how such a thing can take so long. I mean, I'm sure that the kernel developers have priorities and stuff, but I would think that adding S-ATA hotplug ought to be simple and important enough not to take more than a year to even get started...? I don't mean it as a complaint, I just find it really weird. Is it just much harder than I think, or is there no particular reason for it not having been done?
