
Linux Kernel 2.6.31 Released

diegocgteleline.es writes "The Linux kernel v2.6.31 has been released. Besides the desktop improvements and USB 3.0 support mentioned some days ago, there is an equivalent of FUSE for character devices that can be used for proxying OSS sound through ALSA, new tools for using hardware performance counters, readahead improvements, ATI Radeon KMS, Intel's Wireless Multicomm 3200 support, gcov support, a memory checker and a memory leak detector, a reimplementation of inotify and dnotify on top of a new filesystem notification infrastructure, btrfs improvements, support for IEEE 802.15.4, IPv4 over Firewire, new drivers and small improvements. The full list of changes can be found here."
This discussion has been archived. No new comments can be posted.

  • IPv4 over Firewire? (Score:4, Interesting)

    by 0100010001010011 ( 652467 ) on Thursday September 10, 2009 @08:52AM (#29377245)

    I guess I really wasn't into linux until the last 3-4 years, but hasn't OS X done this since the start? And I think my XP machine at work tries to use Firewire as a network adapter.

    What took so long? Honest question.

  • 70% drivers! (Score:3, Interesting)

    by millwall ( 622730 ) on Thursday September 10, 2009 @09:05AM (#29377363)

    Lots and lots of driver work. Over 70% of all of the 2.6.30 to 2.6.31 patch is under drivers/, and there's another 6%+ in firmware/ and sound/. That's not entirely unusual, but it does seem to be growing. My rough rule of thumb used to be "50% drivers, 50% everything else", but that's clearly not true any more (and hasn't been for a while - we've been 60%+ since after 2.6.27).

    I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds of enhancements are being pushed back.

  • by muckracer ( 1204794 ) on Thursday September 10, 2009 @09:14AM (#29377479)

    Networking over USB would be awesome. Link two PCs with a USB cable and voilà! Hell, even being able to mount an internal drive that way on the other machine would be cool. Anything like that in the works (haven't checked)?

  • by Lemming Mark ( 849014 ) on Thursday September 10, 2009 @09:44AM (#29377819) Homepage

    How was the parent modded troll? It's completely valid!

    It's a good idea. There have been networking-over-USB devices (by which I mean plugging both machines' USB ports into the device, not "merely" a USB ethernet adaptor). The problem with doing this with USB, rather than Firewire, is that USB has a really strong concept of "host" and "device". The cables are made to only plug into certain combinations of endpoints because, sadly, only certain combinations of endpoints can possibly work. You can't plug the host controller of one PC into another, since they're only expecting to talk to devices, not another controller. This is in contrast to Firewire, which is peer-to-peer and (in principle) anything could talk to anything over it.

    The unfortunate consequence is that you don't just get to do networking over a nice, cheap cable as you do with Firewire. You actually need a little device box in between so that both hosts can believe they're talking to a peripheral, not another host. This approach, on its own, wouldn't let you plug in "remote" devices either so you'd have to set some other protocol up (plenty of existing options here) to talk to devices at the other end. You have to be a bit careful because most devices would barf horribly if there are multiple users - uncontrolled shared access to a disk device is a good way to lose all your data, for instance.

    Although it's fun to do IP over Firewire, I'm not familiar with exactly how it's implemented. What intrigues me is the prospect of running increasingly sophisticated high-performance protocols over Firewire. As I understand it you can basically get remote DMA access to the "other end's" memory. This obviously has severe security implications but it could be quite nice in a mutually-trusting cluster. There are various protocols (e.g. used by Infiniband) for having communications over remote DMA. I wonder if anyone could put together an "infiniband lite" that just ran over Firewire. It'd be cool, though I don't know if it would be particularly useful ;-) (plus it would lack the user accessible networking Infiniband has)

  • Re:Linux audio (Score:3, Interesting)

    by walshy007 ( 906710 ) on Thursday September 10, 2009 @09:52AM (#29377921)

    it is a good API and it IS the future of Linux desktop audio,

    The future of Linux audio it may be, but "good" is questionable: no one using their Linux machine for synths or MIDI would touch PulseAudio with a ten-foot pole. JACK is far superior, with much lower latency, but only applications designed for pro audio use tend to utilize it.

    When this transitional period we are currently in is over, everyone will be much better off.

    The latency incurred by PulseAudio is horrendous: for YouTube or movies that's fine, for gaming it's questionable, for music production it's nasty. These days, completely removing PulseAudio and getting everything going again is quite an effort.

    I can't imagine everyone being much better off - only those who want sound and don't care about latency or, from the music people's perspective, functionality (JACK can do a lot of things Pulse can't do).

  • Support for what? (Score:5, Interesting)

    by saleenS281 ( 859657 ) on Thursday September 10, 2009 @09:55AM (#29377969) Homepage
    Support for what? A quick search of Newegg tells me I can't buy a motherboard, add-on card, or peripheral that supports USB 3.0 today. What exactly was Windows 7 going to support? An unreleased chipset?

    From your own article:
    Jeff Ravencraft of Intel said that he expects the final specification to be announced in San Jose, Calif., on November 17.

    Wait, so I'm supposed to be upset that Microsoft didn't ship experimental drivers for an unratified standard in their new OS?
  • Re:Linux audio (Score:1, Interesting)

    by Anonymous Coward on Thursday September 10, 2009 @09:57AM (#29377989)

    This scary graph [adobe.com] and related ideas tend to get mentioned in connection with this, but the graph conflates libraries, sound servers, and drivers to some extent. One could draw a similar graph for Windows, featuring programs using the QuickTime library, the WMP library, MME, DirectSound, WASAPI and various other APIs and libraries (and I haven't even gone into the changes to the audio driver model). WMP would have plenty of in-arrows from applications using its libraries, and plenty of out-arrows because it supports more than one API. And don't forget that there are still legacy applications which need to be the only app playing audio, just like on Linux.

    Except Windows apps usually don't break just because there is a new flavor of the month in audio. You can't argue that this is a good thing. The only reason it is OK on Linux is that most apps have source available. As usual, Windows makes the less pure choice (in terms of engineering), while Linux does the "right thing" by abandoning the technology.

    The fact that it is 2009 and there are still audio issues on Linux is telling, however.

  • Re:Linux audio (Score:5, Interesting)

    by walshy007 ( 906710 ) on Thursday September 10, 2009 @10:01AM (#29378033)

    OSS made it impossible to play more than one stream at once on a lot of hardware.

    With a standard configuration, ALSA does too: you have to load the dmix plugin in your config to act as a software mixer on cards that don't do hardware mixing (most onboard chips).
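
    For reference, a minimal sketch of the dmix setup described above, e.g. in ~/.asoundrc (the ipc_key value and the "hw:0,0" device name are assumptions for a typical single-card system; adjust to taste):

    ```
    # Route the default PCM through dmix so multiple streams can play at once
    pcm.!default {
        type plug
        slave.pcm "dmixed"
    }

    pcm.dmixed {
        type dmix
        ipc_key 1024          # any system-unique key for the shared mixer
        slave {
            pcm "hw:0,0"      # first device on the first card
            rate 48000
        }
    }
    ```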

    This is where the userspace daemons enter it all: most of them just started out as another layer that does software mixing, but every man and his dog came up with his own invention.

    As for just using ALSA, that's great if you don't mind losing certain functionality; some of the sound daemons do add some nice features (JACK is the only one I've found worth using, though). It could be argued that the driver layer shouldn't have to deal with some of that advanced functionality, which is another reason why these daemons were made.

  • Re:Linux audio (Score:3, Interesting)

    by Timmmm ( 636430 ) on Thursday September 10, 2009 @10:22AM (#29378277)

    OSS made it impossible to play more than one stream at once on a lot of hardware.

    That was OSS 3. OSS 4 apparently allows you to do this on all hardware and is said to be much nicer than ALSA. It's also open source again. I read a good article about this situation a while ago but can't find it now.

  • Re:Linux audio (Score:3, Interesting)

    by ObsessiveMathsFreak ( 773371 ) <obsessivemathsfreak.eircom@net> on Thursday September 10, 2009 @10:27AM (#29378373) Homepage Journal

    ALSA works fine, for pretty much everybody.

    I recently upgraded my motherboard to a new Gigabyte model with onboard HDMI and other audio for HDCP viewing. Needless to say, the standard ALSA packages for Ubuntu failed rather miserably to work. After several days of fighting with connectors, config files, reboots, re-installations and silent refusals to work, I only managed to get sound working by compiling ALSA from source. Of course, this now means that ALSA must be recompiled every time I upgrade the kernel. And I honestly can't hear any difference between the older OSS drivers and the ALSA ones.

    Having to compile from source constitutes a major failure of any general purpose FOSS software.

    OK, I'm appreciative of the fact that hardware manufacturers are a major problem in this area, as they have utterly refused to release either drivers or specs. However, the same concerns applied to monitors, network cards and graphics cards only a few years ago, yet those problems are largely behind us. There is one remaining important question here: would these problems still exist if we had stuck with OSS? If the answer is no, then the move to ALSA has been a dreadful mistake.

  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday September 10, 2009 @10:30AM (#29378397) Journal
    For tape-based systems, or for situations where a chain of components are expecting a DV stream to be arriving on schedule, you really can't beat firewire. And, for that reason, nearly any PC being used for DV editing will have firewire onboard. Nicer motherboards have it standard, PCI/PCIe expansion cards are cheap if yours doesn't.

    However, the trend in camera tech, at least at the consumer level, is making that increasingly irrelevant. Flash and HDD based camcorders are gradually devouring DV camcorders in the lower-end market. Pretty much all the HDD or flash based cameras (at least the ones that cost less than the computer they are connected to) just show up as USB mass storage devices, with one or more video files on them. Drag and drop and go. Unlike DV, where the transfer requires that X megabits per second make it from point A to point B, on time, or you'll get glitches, mass storage just requires that all the bits get from point A to point B before the user gets bored. USB still isn't quite as good as firewire at doing that; but the difference in performance is small, and the difference in price/convenience is large.

    Once you get away from the real time streaming requirements of DV, to which firewire is well suited, transferring video is just a special case of connecting an external hard drive. Firewire is better there; but only modestly, which isn't really good enough to survive on the price sensitive end of things.
  • Re:Support for what? (Score:4, Interesting)

    by jbeaupre ( 752124 ) on Thursday September 10, 2009 @10:37AM (#29378481)
    Don't be upset with MS, be happy for Linux. In fact, with support built in, Linux will frequently be used to develop and test the hardware. So some of the early USB 3 products will be de facto optimized to run with Linux.
  • Re:Linux audio (Score:3, Interesting)

    by Carewolf ( 581105 ) on Thursday September 10, 2009 @10:37AM (#29378487) Homepage

    Sound daemons were only needed on Linux because Linus refused to do audio mixing in the kernel. This meant the resource sharing and hardware abstraction the kernel _should_ be doing was delegated to user space.

  • Re:70% drivers! (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Thursday September 10, 2009 @10:48AM (#29378589) Journal

    The first is about making the kernel into a big message-passing daemon, which it turns out has a performance penalty

    More accurately, it has a performance penalty on uniprocessor systems. On SMP systems, using a system such as the one used by Xen with lockless ring buffers in shared memory, it provides a performance gain.
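
    For the curious, the lockless shared-memory ring idea can be sketched as follows. This only illustrates the index arithmetic (SpscRing is a made-up name, not Xen's actual interface); a real implementation would be written in C with atomic stores and memory barriers between writing a slot and publishing the index.

    ```python
    # Sketch of a single-producer/single-consumer ring buffer of the kind
    # used for Xen-style shared-memory communication. Each index is written
    # by exactly one side, which is what makes the lockless scheme possible.

    class SpscRing:
        def __init__(self, size):
            # power-of-two size lets "index & mask" replace a modulo
            assert size > 0 and (size & (size - 1)) == 0
            self.buf = [None] * size
            self.mask = size - 1
            self.head = 0   # advanced only by the consumer
            self.tail = 0   # advanced only by the producer

        def push(self, item):
            """Producer side: returns False when the ring is full."""
            if self.tail - self.head == len(self.buf):
                return False
            self.buf[self.tail & self.mask] = item
            self.tail += 1  # publish only after the slot is written
            return True

        def pop(self):
            """Consumer side: returns None when the ring is empty."""
            if self.head == self.tail:
                return None
            item = self.buf[self.head & self.mask]
            self.head += 1
            return item
    ```
    
    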

    The other is a stable ABI, which has been suggested about 234,533,458 times to date. My only real comment to that is that seeing how crappy many Windows drivers are, do you honestly want them making blobs for a 1% operating system which will get about as much priority, support and bugfixes?

    A stable ABI is less important than a stable API (which Linux doesn't have either), which reduces the overhead of maintaining device drivers a lot and makes regressions much harder to introduce accidentally.

  • by gabebear ( 251933 ) on Thursday September 10, 2009 @10:56AM (#29378697) Homepage Journal
    USB3 is going to be an expensive upgrade. The only controller chips currently available are $15 each http://en.wikipedia.org/wiki/Universal_Serial_Bus#Availability [wikipedia.org] The thicker, more expensive cables required to make USB3 work are also a problem. The USB 1.1 -> USB 2.0 transition technically wanted you to have higher-quality cables, but you didn't really need them.

    eSATA seems like a much better solution.

    I hope Apple starts putting eSATA/USB combined ports on their laptops soon.

  • by TheRaven64 ( 641858 ) on Thursday September 10, 2009 @11:10AM (#29378893) Journal

    Putting sound mixing in userspace has advantages and disadvantages. The first advantage is that it moves code out of the kernel, which is usually a good idea because bugs in the kernel can crash the entire system. The other advantage is that it can use the normal scheduler. This is less important now, but sound mixing used to be very processor-intensive compared with the total system load and if it's in the kernel it can't be easily preempted by userspace tasks. The big disadvantage is that, rather than copying the sound from userspace to kernelspace and then copying the mixed stream to the device, you need to copy it from userspace to userspace (if it's done via a pipe, you need to copy it from userspace to kernelspace then from kernelspace to userspace) and then copy the mixed stream from userspace to the kernel then from the kernel to the device. This adds latency and CPU load from all of the copying.

    On a modern system, the amount of latency and CPU load added for userspace mixing is generally not large enough to matter. The other advantage of doing it in the kernel is largely cosmetic. It's the kernel's job to hide hardware differences from userspace software. If you implement mixing in the kernel, it is much easier to hide whether it's hardware or software mixing from the userspace software and remove the software mixer from the stack entirely when it's not needed. The new stuff in Linux removes this.

    Note that the distinction between kernelspace and userspace isn't always clear-cut here. Some kernels run their device drivers, or parts of their device drivers, at a lower privilege level so, although they share the kernel's address space, they don't have access to all of it. On IA32, the obvious place for sound mixing is in ring 2, with the userspace code in ring 3, the driver in ring 1 and the kernel in ring 0. Unfortunately, this design would be completely non-portable so no one has bothered implementing it (except, possibly, OS/2).

  • Re:Linux audio (Score:4, Interesting)

    by bluefoxlucid ( 723572 ) on Thursday September 10, 2009 @11:34AM (#29379227) Homepage Journal

    What IS the PA latency, and the Jack latency? Jack seems idiotic; just use the sound card directly. Seriously, consider this: JACK -> ALSA, you can go directly to ALSA anyway (I haven't had a sound server for years). Do the mixing in your application and output it to ALSA. If real-time performance is an issue, don't run multiple apps at once. Record separate tracks versus a (monitored) metronome(!) when doing music, and then merge them with Audacity or GLAME.

    "Professional" audio amateurs seem to all be n00bs, using their recording device (computer) for playback whereas real "professionals" use monitors, metronomes, visual cues, and master tracks. You monitor your metronome, monitor yourself, monitor the playback track, whatever; and record a separate track. Then later you digitally merge those together. QED. Whatever stupidity relies entirely on your computer being able to low-latency its way out of a paper bag for you to get any work done is a huge engineering error.
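
    The "do the mixing in your application" approach is, at bottom, sample-wise addition clamped to the sample format's range. A minimal sketch assuming signed 16-bit PCM samples held in plain Python lists (mix16 is a hypothetical helper, not part of ALSA's API):

    ```python
    # Mixing two PCM streams is sample-wise addition with clipping.
    INT16_MIN, INT16_MAX = -32768, 32767

    def mix16(a, b):
        """Mix two equal-length 16-bit sample streams, clipping on overload."""
        out = []
        for sa, sb in zip(a, b):
            s = sa + sb
            # clamp rather than wrap, so overload distorts instead of crackling
            out.append(max(INT16_MIN, min(INT16_MAX, s)))
        return out

    print(mix16([1000, 30000], [500, 10000]))  # -> [1500, 32767]
    ```
    
    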

  • Re:Linux audio (Score:3, Interesting)

    by setagllib ( 753300 ) on Thursday September 10, 2009 @11:49AM (#29379441)

    The real fix would be to make PulseAudio use OpenAL optionally, so that cards that have accelerated mixing can be made to use it. I don't see the point though - not only are modern CPUs more than powerful enough to do it in userspace, they can't possibly have per-card defects while doing it.

    Now that we do have PulseAudio it's best to trim as much fat and necrotic code from the kernel as possible. If the remaining realtime issues can be resolved, for which there is much experimental literature, it'll be perfect.

  • by Lemming Mark ( 849014 ) on Thursday September 10, 2009 @12:19PM (#29379831) Homepage

    It's not going to replace things like Jack (and OSS4 if that's available to you) but I don't think it's trying to.

    It's trying to replace those weird LD_PRELOAD wrappers you have to use to make OSS-only apps speak to ALSA / PulseAudio. CUSE should be used to remove the need for LD_PRELOAD wrappers, making it more robust and simpler to use legacy OSS apps that can't use ALSA or PulseAudio directly. Regardless of what you think about replacing OSS, the current situation is pretty much pessimal: tricking applications into talking to the sound daemon, except sometimes it doesn't work.

    If you're in the situation - right or wrong - of having a sound daemon, there should at least be a well-supported way of routing the sound from all apps into it. FUSE is slow partly because of more context switches and data copying. The context switches and (potentially) data copying are already required when a userspace daemon provides OSS emulation, so I can't see that CUSE will make this situation worse.

    AFAICS, using CUSE to provide OSS emulation is pretty much unambiguously better for those already using a userspace sound server. For those who aren't, it doesn't affect them any more than the previous wrapper-based techniques did.

  • Re:Linux audio (Score:3, Interesting)

    by Hatta ( 162192 ) * on Thursday September 10, 2009 @03:38PM (#29381991) Journal

    Yet they are curiously silent when commercial entities release derivative works under their own license.

  • Re:Linux audio (Score:3, Interesting)

    by TheBig1 ( 966884 ) on Thursday September 10, 2009 @06:23PM (#29383783) Homepage
    Exactly - I was having latency issues with some drum-kit software I wrote, and found the problem to be PulseAudio. It was adding a couple hundred milliseconds of latency, which is completely unacceptable. After removing PulseAudio from the system, everything was much better.

    I like the features of PulseAudio, especially per-application volume control, but they are not worth 200 ms of latency.

    Cheers
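
    As a rough sanity check on figures like these: the latency contributed by buffering is just buffered frames divided by sample rate. The frame counts below are illustrative, not measurements of PulseAudio itself.

    ```python
    # Buffering latency: a buffer of N frames at sample rate R adds
    # N / R seconds before the last written sample is heard.

    def buffer_latency_ms(frames, rate_hz):
        """Latency in milliseconds contributed by `frames` buffered samples."""
        return 1000.0 * frames / rate_hz

    # ~200 ms corresponds to roughly 8820 buffered frames at 44.1 kHz:
    print(buffer_latency_ms(8820, 44100))   # -> 200.0
    # while a JACK-style 128-frame period at 48 kHz is under 3 ms:
    print(buffer_latency_ms(128, 48000))    # -> ~2.67
    ```
    
    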
