Operating Systems Linux

Linux Kernel 2.6.31 Released 374

diegocgteleline.es writes "The Linux kernel v2.6.31 has been released. Besides the desktop improvements and USB 3.0 support mentioned some days ago, there is an equivalent of FUSE for character devices that can be used for proxying OSS sound through ALSA, new tools for using hardware performance counters, readahead improvements, ATI Radeon KMS, Intel's Wireless Multicomm 3200 support, gcov support, a memory checker and a memory leak detector, a reimplementation of inotify and dnotify on top of a new filesystem notification infrastructure, btrfs improvements, support for IEEE 802.15.4, IPv4 over Firewire, new drivers and small improvements. The full list of changes can be found here."
This discussion has been archived. No new comments can be posted.

  • Linux audio (Score:3, Insightful)

    by Anonymous Coward on Thursday September 10, 2009 @08:49AM (#29377211)

    there is an equivalent of FUSE for character devices that can be used for proxying OSS sound through ALSA

    That quote shows how much of a train wreck Linux audio is.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Thursday September 10, 2009 @09:14AM (#29377471)
      Comment removed based on user account deletion
      • Re: (Score:3, Insightful)

        Yes, because userspace sound daemons were invented by ALSA. We didn't have these with OSS, not at all....

        • Re:Linux audio (Score:4, Informative)

          by TheRaven64 ( 641858 ) on Thursday September 10, 2009 @10:28AM (#29378383) Journal
You don't need them with OSS on FreeBSD and Solaris (for example), or on Linux with the out-of-tree OSS 4 implementation, because they supported sound mixing, so the kernel actually does what it is meant to do: let the userspace apps ignore the differences between hardware implementations. The mixing is done in hardware if the hardware supports it, or in software if it doesn't.
          • Re:Linux audio (Score:4, Insightful)

            by diegocgteleline.es ( 653730 ) on Thursday September 10, 2009 @11:50AM (#29379453)

            You don't need them with OSS on FreeBSD and Solaris (for example), or on Linux with the out-of-tree OSS 4 implementation

            You don't need them in ALSA either, because dmix is implemented in the ALSA library, not as a userspace daemon.

It's amazing the incredible amount of FUD that has been spread about these topics...

        • Re: (Score:3, Interesting)

          by Carewolf ( 581105 )

They were only needed on Linux OSS because Linus refused to do audio mixing in the kernel. This meant the resource sharing and hardware abstraction the kernel _should_ be doing was delegated to user-space.

      • Re:Linux audio (Score:5, Informative)

        by bcmm ( 768152 ) on Thursday September 10, 2009 @09:41AM (#29377793)

        amen. OSS, alsa, pulseaudio, for christsake just give me sound that works without having a million handler processes.

        So just use ALSA!

The situation on Linux is that there used to be OSS, and now there is ALSA. ALSA works fine, for pretty much everybody. There are a few legacy apps which use OSS because no one is updating them, and obviously, it would be nice if they would play nice. Pulseaudio is a bit strange, but nothing requires its use, and IMHO there is no real reason for it to be used unless you want to do somewhat strange things (that you generally can't do on any other type of OS). Don't use pulseaudio if you don't want to; if your distro forces it on you, use a sane one.

This scary graph [adobe.com] and related ideas tend to get mentioned in connection with this, but the graph conflates libraries, sound servers, and drivers to some extent. One could draw a similar graph for Windows, featuring programs using the QuickTime library, the WMP library, MME, DirectSound, WASAPI and various other APIs and libraries (and I haven't even gone into the changes to the audio driver model). WMP would have plenty of in-arrows from applications using its libraries, and plenty of out-arrows because it supports more than one API. And don't forget that there are still legacy applications which need to be the only app playing audio, just like on Linux.

Here is why I can't be bothered to learn enough about the driver layer to give examples: "UAA is intended to be a complete replacement for developing WDM Audio Drivers; however, in some cases it may be necessary for an otherwise UAA-compliant audio device to expose capabilities that cannot be done through UAA. Windows will continue to fully support audio drivers that use the PortCls and AVStream drivers." [wikipedia.org]

        Audio technology has evolved, lots. Having backward compatibility requires that things get slightly complex. Everybody is doing this. I think Linux is doing it rather well, although certain distros make some odd choices.

        OSS was okay.

        OSS made it impossible to play more than one stream at once on a lot of hardware.

        • Re:Linux audio (Score:5, Interesting)

          by walshy007 ( 906710 ) on Thursday September 10, 2009 @10:01AM (#29378033)

          OSS made it impossible to play more than one stream at once on a lot of hardware.

With a standard configuration, ALSA does too: you have to load the dmix plugin in your config to act as a software mixer on cards that don't do hardware mixing (most onboard chips).

This is where the userspace daemons enter it all; most of them just started out as another layer that does software mixing, but every man and his dog came up with their own invention.

As for just using ALSA, that's great if you don't mind missing certain functionality; some of the sound daemons do add some nice features (JACK is the only one I've found worth using, though). It could be argued that the driver layer shouldn't have to deal with some of that advanced functionality, which is another reason these daemons were made.
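The dmix setup mentioned above is only a few lines of ALSA configuration. A hypothetical ~/.asoundrc sketch; the card address, IPC key, and rate here are placeholders to adjust for real hardware:

```
# Route the default PCM through dmix so several apps can share a card
# with no hardware mixing. "hw:0,0" (first card), the ipc_key, and the
# 48 kHz rate are assumptions, not universal values.
pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024          # any unique integer; shared by all dmix users
    slave {
        pcm "hw:0,0"
        rate 48000
    }
}
```

With a config along these lines, every application that opens the "default" PCM gets mixed in software before the single stream reaches the card.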

        • Re: (Score:3, Interesting)

          by Timmmm ( 636430 )

          OSS made it impossible to play more than one stream at once on a lot of hardware.

          That was OSS 3. OSS 4 apparently allows you to do this on all hardware and is apparently much nicer than ALSA. It's also open source again. I read a good article about this situation a while ago but can't find it now.

          • Re:Linux audio (Score:5, Insightful)

            by TheRaven64 ( 641858 ) on Thursday September 10, 2009 @10:35AM (#29378451) Journal
OSS 3 did on FreeBSD too. It's not a technical limitation of the hardware or the interfaces; it's a symptom of the NIH mentality on Linux. FreeBSD has supported software sound mixing with OSS since around 2000. If you want to play sound, just open /dev/dsp and write data there (on FreeBSD 4 and earlier you had a different device node for each virtual channel, so you needed to tell xmms, aRts and so on to each use a different one; with 5 and later the kernel does this for you). The problem with Linux sound is that, when 4Front decided to make the next OSS release proprietary, the Linux developers decided to deprecate it in the kernel, rather than just maintain the open source fork. The FreeBSD folks kept developing their version and adding features, maintaining parity with the proprietary version. Now that OSS4 is open source, it has been merged into OpenSolaris and FreeBSD has pulled in the relevant features. ALSA looks both dated and nonportable, but the Linux devs have invested a lot in it, so they don't want to throw it away.
        • Re: (Score:3, Interesting)

          ALSA works fine, for pretty much everybody.

Recently I upgraded my motherboard to a new Gigabyte model which had on-board HDMI and other audio for HDCP viewing. Needless to say, the standard ALSA packages for Ubuntu failed rather miserably to work. After several days of fighting with connectors, config files, reboots, re-installations and silent refusal to work, I only managed to get sound working by compiling ALSA from source. Of course, this now means that ALSA must be recompiled every time I upgrade the kernel.

      • Re: (Score:3, Informative)

        by walshy007 ( 906710 )

        OSS was okay

It really wasn't, depending on your needs of course. If OSS were good enough, ALSA would not have been invented.

Pretty much all cards are handled by ALSA in the kernel back end of things; that part is pretty standardised. The whole problem is the sound server or userspace daemon that handles mixing and other bits. PulseAudio was a band-aid with horrible latency, and only professional apps tend to support JACK. aRts and esd at least seem to have died out when the most popular KDE and GNOME distros both went to PulseAudio.

Please define "works". After that, imagine that some people have other needs and what the definition of "works" is in their case. Think about multiseat setups, where one user's X session plays audio on the front speakers and another user's X session plays on the rear speakers, with separate master volume controls. Think about using one microphone to record to several different applications at the same time. Think about logging in to a remote computer and having the audio of the applications you start play on your local machine.

      • It still is. I use it on archlinux after alsa farked up unexpectedly and it works fine.

        Well, actually, not quite, the mixer is broken. But I like to pretend it isn't.

      • OSS is the unix way (Score:3, Informative)

        by jabjoe ( 1042100 )
        Here we go again. I see the normal old OSS arguments that only apply to the old OSS Linux has.
If ALSA is so great, why did it never get copied outside of Linux?

Anyone else prefer having proper file interfaces for things, like Unix should do?
If I want to write sound I write to /dev/dsp1; if I want to read sound I read from /dev/dsp1.
If I want to write sound out to a second sound card, I write to /dev/dsp2. A nice, simple device addressing system.
Now I use ALSA, because the Linux support is all geared that way
    • Re: (Score:3, Insightful)

      by Per Wigren ( 5315 )

      I agree but the situation is getting better and better. Pretty much every distribution has standardized on Pulseaudio and while it caused lots of problems in the beginning, and it still causes some problems on certain setups (especially with legacy, badly coded applications/games/emulators), it is a good API and it IS the future of Linux desktop audio, whether you like it or not. When this transitional period we are currently in is over, everyone will be much better off.

      • it IS the future of Linux desktop audio, whether you like it or not

As someone who does pro audio production (and has to reboot into OS X to do most of it properly), that sounds like a threat to me. We've waited long enough; can we please just get back to OSS? There is no good reason not to at this point.

        • Re:Linux audio (Score:5, Insightful)

          by Per Wigren ( 5315 ) on Thursday September 10, 2009 @09:53AM (#29377925) Homepage

          PA is for desktop audio. For pro audio production you'll run JACK and have PA output its audio to JACK instead of directly to ALSA. That way your pro audio apps will get their super low latency and all of the apps that can get away with 50ms latency will play through PA to JACK. You get the best of both worlds.
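The PA-to-JACK routing described above is a couple of lines of PulseAudio configuration. A hypothetical excerpt from /etc/pulse/default.pa, using the stock PulseAudio JACK modules; the jack_out sink name is assumed to be the module's default:

```
# Bridge PulseAudio into a running JACK server so desktop apps and
# low-latency JACK clients share the same card.
load-module module-jack-sink
load-module module-jack-source
set-default-sink jack_out
```

With this in place, desktop audio is just another JACK client, while pro audio apps keep talking to JACK directly.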

      • Re:Linux audio (Score:4, Insightful)

        by someone1234 ( 830754 ) on Thursday September 10, 2009 @09:44AM (#29377817)

        For some time Alsa was the "new tech". Now PulseAudio. By the time it stabilizes, there will be something else.

      • Re: (Score:3, Interesting)

        by walshy007 ( 906710 )

        it is a good API and it IS the future of Linux desktop audio,

The future of Linux audio it may be, but "good" is questionable: no person using their Linux machine for synths or MIDI would touch PulseAudio with a ten foot pole. JACK is far superior with a lot less latency, but only applications designed for pro audio use tend to utilize it.

        When this transitional period we are currently in is over, everyone will be much better off.

The latency incurred by PulseAudio is horrendous: for YouTube or movies that's fine, for gaming it's questionable, for music production it's nasty. These days completely removing PulseAudio and getting it all going again is quite an ordeal.

        • Re: (Score:3, Insightful)

          by Per Wigren ( 5315 )

          As I wrote above: For pro audio production you'll run JACK and have PA output its audio to JACK instead of directly to ALSA. That way your pro audio apps will get their super low latency and all of the apps that can get away with 30-50ms latency will play through PA to JACK, at the same time even.

With the very latest versions of Pulseaudio combined with a realtime kernel (soon to be merged into the mainline kernel), Pulseaudio won't give you much latency at all. It also uses MUCH less CPU than JACK, so it's a better fit for everyday desktop use.

        • Re:Linux audio (Score:4, Interesting)

          by bluefoxlucid ( 723572 ) on Thursday September 10, 2009 @11:34AM (#29379227) Homepage Journal

          What IS the PA latency, and the Jack latency? Jack seems idiotic; just use the sound card directly. Seriously, consider this: JACK -> ALSA, you can go directly to ALSA anyway (I haven't had a sound server for years). Do the mixing in your application and output it to ALSA. If real-time performance is an issue, don't run multiple apps at once. Record separate tracks versus a (monitored) metronome(!) when doing music, and then merge them with Audacity or GLAME.

          "Professional" audio amateurs seem to all be n00bs, using their recording device (computer) for playback whereas real "professionals" use monitors, metronomes, visual cues, and master tracks. You monitor your metronome, monitor yourself, monitor the playback track, whatever; and record a separate track. Then later you digitally merge those together. QED. Whatever stupidity relies entirely on your computer being able to low-latency its way out of a paper bag for you to get any work done is a huge engineering error.

      • by Hatta ( 162192 ) * on Thursday September 10, 2009 @10:06AM (#29378089) Journal

        it is a good API and it IS the future of Linux desktop audio,

        It may be a good API, but it's not a good implementation. But yeah, I can agree that glitchy, high latency audio is the future of Linux desktop audio.

        • Re: (Score:3, Funny)

          by Per Wigren ( 5315 )

          You know what? Maybe in the future Jack will implement the Pulseaudio API and be able to function as a drop-in replacement to Pulseaudio. It's not THAT unfeasible. Also, the PA implementation is getting better and the latest versions don't have that high latency if run on a -rt kernel with realtime privileges. A bit buggy under certain conditions, yes, but that will be fixed in the future.

          • Re:Linux audio (Score:4, Insightful)

            by Desler ( 1608317 ) on Thursday September 10, 2009 @12:51PM (#29380153)

            A bit buggy under certain conditions, yes, but that will be fixed in the future.

            Except this is the exact excuse we get countless times when audio, video, etc don't work in Linux. Just give us more time! We swear it'll work in the future! Then you wait 6 months and all that previous work is scrapped and something new is built. Then we're told again: Just give us more time! We swear it'll work in the future! Lather, rinse, repeat.

      • Re:Linux audio (Score:5, Insightful)

        by impaledsunset ( 1337701 ) on Thursday September 10, 2009 @10:14AM (#29378171)

"Pretty much every distribution has standardized on Pulseaudio" is the very definition of regression. That is what you said was getting better and better? I installed Debian unstable on my laptop, with a KDE desktop, and it also installed and enabled this trainwreck called "PulseAudio", whose only purpose seems to be disabling the audio of an already working system. Sound has worked for me in Linux since forever; I never had any problems with it until PulseAudio came around.

During the early days I had been using a sound card with hardware mixing. Back then even Windows wasn't coping well with several streams on a card supporting only one, so what OSS offered was good enough for me, and on par with other operating systems. Then came ALSA, which offered dmix and dsnoop to do it in software. Now, dsnoop has never worked for me, but I don't know any other operating system that supports such a feature, so I guess I don't have much ground to complain.

        Then PulseAudio came around, and that is the first time when I had any problem with sound on Linux. Sound started to be skippy, jumpy, choppy, and not working in some applications. Why would anyone think that PulseAudio would be a good idea? Now, don't get me wrong, I like PulseAudio, I even use it for some tasks. Namely playing music from my laptop on the soundcard of my desktop. But thanks to the brilliant idea that PulseAudio should be used everywhere I couldn't really do that anymore, because I had to eradicate PulseAudio to have sound again, so I couldn't use it for *my* needs. Fuck me.

Disclaimer: I'm not sure I'm chronologically correct above; sound might have been in a better state in Windows than in Linux during the OSS days. I just mean that I was already used to being able to play only *one* sound at a time when I first came to use Linux, so it seemed like a pretty normal thing to me.

Yes, this transitional period is pretty harsh to those of us who want to run some older software that uses bad coding practices. :( Thankfully, it will only get better as applications are fixed and backwards compatibility interfaces, like CUSE+ossp mentioned in the article summary, get better.

    • Yo dawg (Score:3, Funny)

      We heard you like your audio to work, so we put a sound API in your sound API, so you can have silence whilst you listen!

    • Re: (Score:3, Funny)

      by sarhjinian ( 94086 )

      It's no worse than video is, really. The four-second PulseAudio lag* matches nicely with the lack-of-vsync-based tearing in X.**

      Actually, I take that back: video is worse. At least with PulseAudio I can see how it's eventually supposed to work if it didn't crash periodically. The clusterf_ck that is video playback doesn't look like it'll get fixed anytime soon, what with the six-party fight between all the various components.

You can really tell that the bills for Linux's development are being paid by server vendors.

    • Re:Linux audio (Score:4, Insightful)

      by jw867 ( 97358 ) on Thursday September 10, 2009 @09:50AM (#29377893)

      What bothers me here is that I read "Oh, change this, do that turn this knob and sound will work for you." Then it works until there's a new kernel update (I use Ubuntu) and it breaks again. Or it just stops working after too many applications use it.

Then you read how fabulous and wonderful PulseAudio is, but it just plain does not work. By working, it should work every time, all the time, without knob turning. It's embarrassing that in this area, Windows 95 is superior to Linux in almost every respect.

      All this effort is put into chrome polishing the kernel for faster SMP with 64 CPU systems and the dang box can't even play music without having some sort of brain failure.

    • by amn108 ( 1231606 )

Why mod this insightful? Just because ALSA proxies OSS does not mean it HAS to. It is your choice, and choice is part of the Linux philosophy. ALSA works fine with its own hardware drivers, without OSS involved at all, which is how it is usually used. You are complaining that somebody gave you an option to use a soundcard with an OSS-only mixer with ALSA applications. Where is the logic in that? It is like complaining that PulseAudio should be removed and buried because you don't use it, even though many find it convenient.

  • IPv4 over Firewire? (Score:4, Interesting)

    by 0100010001010011 ( 652467 ) on Thursday September 10, 2009 @08:52AM (#29377245)

    I guess I really wasn't into linux until the last 3-4 years, but hasn't OS X done this since the start? And I think my XP machine at work tries to use Firewire as a network adapter.

What took so long? Honest question.

    • Re: (Score:3, Insightful)

      by FinchWorld ( 845331 )
      It never took off? I've only ever used firewire for networking once, and that was for the sheer novelty of seeing if it could be done.
      • by muckracer ( 1204794 ) on Thursday September 10, 2009 @09:14AM (#29377479)

        Networking over USB would be awesome. Link 2 PC's with USB cable and voila! Hell, even being able to mount an internal drive that way on the other machine would be cool. Anything like that in the works (haven't checked)?

        • by Lemming Mark ( 849014 ) on Thursday September 10, 2009 @09:44AM (#29377819) Homepage

How was the parent modded troll? It's completely valid!

It's a good idea. There have been networking-over-USB devices (by which I mean plugging both machines' USB ports into the device, not "merely" a USB ethernet adaptor). The problem with doing this with USB, rather than Firewire, is that USB has a really strong concept of "host" and "device". The cables are made to only plug into certain combinations of endpoints because, sadly, only certain combinations of endpoints can possibly work. You can't plug the host controller of one PC into another, since they're only expecting to talk to devices, not another controller. This is in contrast to Firewire, which is peer-to-peer and (in principle) lets anything talk to anything over it.

          The unfortunate consequence is that you don't just get to do networking over a nice, cheap cable as you do with Firewire. You actually need a little device box in between so that both hosts can believe they're talking to a peripheral, not another host. This approach, on its own, wouldn't let you plug in "remote" devices either so you'd have to set some other protocol up (plenty of existing options here) to talk to devices at the other end. You have to be a bit careful because most devices would barf horribly if there are multiple users - uncontrolled shared access to a disk device is a good way to lose all your data, for instance.

          Although it's fun to do IP over Firewire, I'm not familiar with exactly how it's implemented. What intrigues me is the prospect of running increasingly sophisticated high-performance protocols over Firewire. As I understand it you can basically get remote DMA access to the "other end's" memory. This obviously has severe security implications but it could be quite nice in a mutually-trusting cluster. There are various protocols (e.g. used by Infiniband) for having communications over remote DMA. I wonder if anyone could put together an "infiniband lite" that just ran over Firewire. It'd be cool, though I don't know if it would be particularly useful ;-) (plus it would lack the user accessible networking Infiniband has)

          • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Thursday September 10, 2009 @11:23AM (#29379085)

            [...] only certain combinations of endpoints can possibly work. You can't plug the host controller of one PC into another, since they're only expecting to talk to devices, not another controller.

Not so with Linux. You can enable the "USB gadget" driver in the kernel. Now if you have a device connector in your system, it can act like any other device. That is how Linux on small devices connects to its host via USB. And actually, the way they communicate is plain and simple TCP/IP. :)

            • Re: (Score:3, Informative)

              Yes, that's true, I should have mentioned that. But you still can't plug a normal host controller into another, regardless of the software stack you're running. You need special hardware that most PCs won't have that implements the device end of the channel. I think that it's something of a shame that PCs don't include this hardware but I imagine it wouldn't be that useful, given they all include ethernet ports these days.

    • by uhmmmm ( 512629 ) <uhmmmm@gmCOUGARail.com minus cat> on Thursday September 10, 2009 @09:14AM (#29377473) Homepage

      I haven't looked at the official changelog or the code yet, but I'm just as confused as you about that item. Moreso perhaps, as I have used IPv4 over firewire with two linux machines before. That was probably 5 years ago or so.

      • Re: (Score:2, Informative)

        by Motormouz ( 648619 )
They've added IPv4 to the new firewire stack. You know, the one they put in the kernel some time ago to replace the old one, which caused some headaches for firewire users.
    • by Anonymous Coward on Thursday September 10, 2009 @09:23AM (#29377563)

      From the changelog

* The new firewire driver stack is no longer considered experimental, and distributors are encouraged to migrate from the ieee1394 stack to the firewire stack
* Added IP networking to the new FireWire driver stack

It does add up. It's just been added to the new stack; the old stack already had it.

  • 70% drivers! (Score:3, Interesting)

    by millwall ( 622730 ) on Thursday September 10, 2009 @09:05AM (#29377363)

Lots and lots of driver work. Over 70% of all of the 2.6.30 to 2.6.31 patch is under drivers/, and there's another 6%+ in firmware/ and sound/. That's not entirely unusual, but it does seem to be growing. My rough rule of thumb used to be "50% drivers, 50% everything else", but that's clearly not true any more (and hasn't been for a while - we've been 60%+ since after 2.6.27).

I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds of enhancements are being pushed back.

    • Re:70% drivers! (Score:5, Insightful)

      by von_rick ( 944421 ) on Thursday September 10, 2009 @09:13AM (#29377455) Homepage
      Enhancements should come as a part of the OS, not the kernel. The main function of a kernel is to get along with all the hardware devices on the system. Drivers should be given a high priority.
      • Re: (Score:3, Insightful)

        by MBGMorden ( 803437 )

        I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.

        • Re:70% drivers! (Score:4, Informative)

          by schon ( 31600 ) on Thursday September 10, 2009 @09:54AM (#29377947)

          I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.

          $ find /lib/modules/2.6.27.7-smp/kernel/drivers/ -type f|wc -l
          1499

          Looks like you're in luck!

          • Re:70% drivers! (Score:4, Insightful)

            by MBGMorden ( 803437 ) on Thursday September 10, 2009 @10:03AM (#29378061)

            Great - now if I compile these lovely drivers will they work on my buddy's (or more importantly, a user's) system running kernel 2.6.1? 2.6.22? 2.6.31? 2.4.5?

            Dividing the source and binary out into separate files doesn't make it modular. The infrastructure to move the binaries around needs to be in place so that a driver can be loaded with little regard as to kernel version.

            • by gnud ( 934243 )
That's a valid point of view, but it directly contradicts the OP's argument for less focus on drivers and more on other enhancements.
            • Re:70% drivers! (Score:5, Insightful)

              by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Thursday September 10, 2009 @11:14AM (#29378965) Homepage
              Keeping binary compatibility limits infrastructure improvements that can be made. You're limited in what you can do to the kernel because of drivers that are sitting out there that expect binary compatibility. If we have drivers that can be loaded with little regard to the kernel version, we get into the quagmire that is Windows, where devices over 7 years old have a low chance of working. My scanner hasn't worked in Windows since XP SP1. It still works perfectly in the brand-spankingest new Linux distros.
            • Re: (Score:3, Informative)

              by JohnFluxx ( 413620 )

              Ah so now you're arguing that they should freeze the interface, and prevent any more improvements.

              Read http://www.mjmwired.net/kernel/Documentation/stable_api_nonsense.txt [mjmwired.net]

        • Re:70% drivers! (Score:5, Insightful)

          by Kjella ( 173770 ) on Thursday September 10, 2009 @10:07AM (#29378091) Homepage

          I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.

I don't think anyone ever argued that drivers should not be modular; in fact, that's why there are kernel modules. I'm guessing you're talking about one of two general flamewars:

          1) Monolithic kernel or microkernel
          2) Stable ABI for drivers

The first is about making the kernel into a big message-passing daemon, which it turns out has a performance penalty and ultimately doesn't have big enough benefits: a kernel panic and a major subsystem hang/crash are both ugly, and if the hardware is left in a borked state, restarting the driver might not really help.

The other is a stable ABI, which has been suggested about 234,533,458 times to date. My only real comment is that, seeing how crappy many Windows drivers are, do you honestly want those vendors making blobs for a 1% operating system, which will get about as much priority, support and bugfixes? Drivers based on specs or donated source almost always suck less.

          • Re: (Score:3, Interesting)

            by TheRaven64 ( 641858 )

            The first is about making the kernel into a big message-passing daemon, which it turns out has a performance penalty

            More accurately, it has a performance penalty on uniprocessor systems. On SMP systems, using a system such as the one used by Xen with lockless ring buffers in shared memory, it provides a performance gain.

            The other is a stable ABI, which has been suggested about 234,533,458 times to date. My only real comment to that is that seeing how crappy many Windows drivers are, do you honestly want them making blobs for a 1% operating system which will get about as much priority, support and bugfixes?

            A stable ABI is less important than a stable API (which Linux doesn't have either), which reduces the overhead of maintaining device drivers a lot and makes regressions much harder to introduce accidentally.

        • by Timmmm ( 636430 )

          I totally agree, but it seems the linux devs don't want to have to maintain a stable driver ABI. I think that's a pretty silly position to take; hopefully they'll change their minds at some point.

    • Enhancements like what?
    • Re:70% drivers! (Score:4, Insightful)

      by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Thursday September 10, 2009 @09:16AM (#29377495) Homepage Journal

      I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds enhancements are being pushed back.

      I would assume that the people writing drivers and the people doing core stuff are not the same people, so there's no "pushed back". Ideally you'd have driver writers employed by all the various hardware manufacturers, while core stuff likely only interests a much smaller group of companies that live higher in the stack (probably just system and support vendors).

    • Re: (Score:3, Insightful)

      by jellomizer ( 103300 )

      Well, driver problems are the real problem with Linux. They always have been. When push comes to shove, comparing Linux with other OSes, even the Linux zealots admit that it is a driver problem. Most kernel features will not directly affect us the way driver issues do. Once Linux fixes its driver problems, then it should focus on getting more features in... However, in the meantime the kernel should be improved at what the kernel is supposed to do: manage hardware and interface with software. And drivers help with the

      • Re:70% drivers! (Score:5, Insightful)

        by coolsnowmen ( 695297 ) on Thursday September 10, 2009 @12:59PM (#29380225)

        Insightful?! You couldn't be more clueless.

        How do you propose to fix the driver problem? The only way that gets fixed is when every hardware manufacturer writes their own drivers. That would only happen if Linux attained something like 10% market share.

        In recent history (the last year or two) the majority (50.1-60%) of all commits to the kernel have been drivers or driver updates.

        Also, you forget that there isn't some company that dictates what work gets done on the kernel. There are many developers who work on the areas they want to work in. Are you telling me that Linux should reject a filesystem like btrfs because some no-name piece of hardware isn't working yet?

        Most kernel features will not directly affect us the way driver issues do.

        Wrong again. My new hardware, which I bought off Newegg last week, works fine in Linux (yes, I did a quick Google search to make sure nobody was bitching about something major not working, but anyone who uses Linux knows to do that). Because it works, any feature such as a filesystem, a scheduler improvement, or better desktop memory management in low-memory situations will improve my experience much more than adding a driver that I won't ever need or use.

      • Re:70% drivers! (Score:5, Informative)

        by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday September 10, 2009 @01:56PM (#29380779)

        There may have been a driver problem years ago, but today the problem is pretty much limited to graphics. And DVB, but the competition is doing just as badly there. Overall, Linux driver support is more complete than at least that of Windows Vista.

    • Re:70% drivers! (Score:4, Informative)

      by walshy007 ( 906710 ) on Thursday September 10, 2009 @09:21AM (#29377541)

      Not really; the driver people aren't the same people as those who would be researching new and exciting ways to do what we already do. For quite a long time now, driver development has made up the majority of Linux kernel development.

      Of course, every now and then they make something new like mac80211, but all that really achieves is more efficient code re-use and testing, which is always good but is still just driver development.

      All the simple things an operating system kernel has to do haven't changed over the last ten or so years; just the hardware has. Operating system theory was pretty much perfected back in the '60s.

    • Re:70% drivers! (Score:5, Insightful)

      by MrHanky ( 141717 ) on Thursday September 10, 2009 @09:22AM (#29377547) Homepage Journal

      What evidence have you got that suggests driver development means other development is pushed back? Do you think the EXT4 developers take time off to write device drivers?

      Lots of driver development means Linux has lots of driver developers. That probably suggests that hardware manufacturers actually try to get their stuff supported.

    • Re:70% drivers! (Score:4, Informative)

      by Anonymous Coward on Thursday September 10, 2009 @09:28AM (#29377611)

      The fact that, with a modern Linux distro, I can plug in pretty much any hardware at least a year old and have it just work no questions asked is a pretty damn spiffy feature.

    • "I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds enhancements are being pushed back"

      I thought the main bugaboo was the lack of hardware support in Linux. What other features and enhancements are being neglected?
  • by Lemming Mark ( 849014 ) on Thursday September 10, 2009 @09:52AM (#29377923) Homepage

    Before we all moved on to worrying about PulseAudio it was traditional for us to complain about legacy apps using OSS, the difficulties associated with wrapping them, the nastiness associated with OSS emulation being implemented in the kernel, etc. Those apps won't have gone away.

    Previous attempts to emulate OSS using ALSA have included the aoss tool, which I believe did some mildly ungodly tricks to intercept calls that would usually go to the OSS APIs. It didn't always work for me, since it depends on what the (often weird and proprietary) app is doing to access the OSS API in the first place. PulseAudio has to provide a similar tool to help you redirect legacy OSS apps to talk to PulseAudio instead. It's all Made Of Ick.

    CUSE (character devices in userspace) allows a userspace program to provide a character device node in /dev and implement it using custom code, rather than relying on an in-kernel driver. When apps open the device node they'll *really* be talking to the userspace daemon implementing the device emulation, rather than to an in-kernel driver (though, of course, the kernel will be involved in relaying the communications through the device interface). This is very similar to what FUSE does for filesystems. The neat thing here is that weird tricks to catch OSS accesses by applications are not needed - the OSS device can simply be "faked" by the real sound daemon. Because it's implemented at device level, it doesn't matter what nasty hacks the OSS application is doing to access the soundcard - you'll *always* be able to grab its sound output from the fake device and do the right thing. No more running legacy apps with an OSS-related wrapper - and no more having the wrapper fail to work!

    The end result should be that sound Just Works, even for awkward proprietary apps. CUSE will not automagically fix this on its own, though - we need to wait for the sound daemons like PulseAudio to catch up and implement the emulation. This might also allow OSS emulation to be removed from the kernel, which AFAIK also supports some variant of OSS-on-ALSA.
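    For reference, a CUSE daemon ends up looking something like this. It's a sketch against libfuse's cuse_lowlevel API: the device name and the canned read-back string are made up for illustration, and you'd need libfuse installed plus access to /dev/cuse to actually run it. An OSS proxy would register DEVNAME=dsp the same way and forward PCM writes to the sound daemon instead of answering reads with a string:

```c
/* Minimal CUSE daemon: registers a fake /dev/hello character device
 * whose reads return a fixed string. Build against libfuse, e.g.
 * gcc hello_cuse.c $(pkg-config --cflags --libs fuse). */
#define FUSE_USE_VERSION 29
#include <fuse/cuse_lowlevel.h>
#include <string.h>

static const char msg[] = "hello from userspace\n";

static void hello_open(fuse_req_t req, struct fuse_file_info *fi)
{
    fuse_reply_open(req, fi);            /* accept every open() */
}

static void hello_read(fuse_req_t req, size_t size, off_t off,
                       struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = sizeof(msg) - 1;
    if ((size_t)off >= len)
        fuse_reply_buf(req, NULL, 0);    /* EOF */
    else
        fuse_reply_buf(req, msg + off,
                       size < len - off ? size : len - off);
}

static const struct cuse_lowlevel_ops hello_ops = {
    .open = hello_open,
    .read = hello_read,
};

int main(int argc, char **argv)
{
    /* DEVNAME tells the kernel which /dev node to create. */
    const char *dev_info_argv[] = { "DEVNAME=hello" };
    struct cuse_info ci = {
        .dev_info_argc = 1,
        .dev_info_argv = dev_info_argv,
    };
    return cuse_lowlevel_main(argc, argv, &ci, &hello_ops, NULL);
}
```

Once it's running, `cat /dev/hello` talks to this process - no kernel driver, no interception tricks, just a device node.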

    • by raddan ( 519638 ) *
      So... from what I gather from your post, you can't just graft an OSS API on top of ALSA (or PulseAudio for that matter) because ALSA and PulseAudio run in userspace? Why are we putting sound daemons in userspace anyway?
      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )

        Putting sound mixing in userspace has advantages and disadvantages. The first advantage is that it moves code out of the kernel, which is usually a good idea because bugs in the kernel can crash the entire system. The other advantage is that it can use the normal scheduler. This is less important now, but sound mixing used to be very processor-intensive compared with the total system load and if it's in the kernel it can't be easily preempted by userspace tasks. The big disadvantage is that, rather than

  • Yay! (Score:5, Funny)

    by British ( 51765 ) <british1500@gmail.com> on Thursday September 10, 2009 @09:54AM (#29377945) Homepage Journal

    Improved acronym support!
    Numbers higher than the last version!
    greater infusion processor link array warp drive systems!

  • How long until the APIs provided by CUSE are used to implement an arbitrary-character-devices-over-network protocol? That would be pretty cool and useful. Should be doable, from what I understand of how it works.

    The description of the in-kernel changes on LWN's article on the subject (http://lwn.net/Articles/308445/) made it sound like the infrastructure could also be used for stuff like network filesystems whose /dev contains *remote* character devices (currently NFS device nodes are always serviced by l

  • LinuxPPS made it into the kernel: http://wiki.enneenne.com/index.php/LinuxPPS_support [enneenne.com]
    The LinuxPPS project is an implementation of the Pulse Per Second (PPS) API for GNU/Linux version 2.6.
  • native xfi support (Score:4, Informative)

    by hyperion2010 ( 1587241 ) on Thursday September 10, 2009 @11:52AM (#29379479)

    Finally, ALSA adds in-kernel support for the Creative X-Fi after 4 years. Fuck Creative.
