Linux 2.6.34 Released

diegocg writes "Linux 2.6.34 has been released. This version adds two new filesystems: the distributed filesystem Ceph and LogFS, a filesystem for flash devices. Other features are a driver for almost-native KVM network performance, the VMware balloon driver, the 'kprobes jump' optimization for dynamic probes, new perf features (the 'perf lock' tool, cross-platform analysis support), several Btrfs improvements, RCU lockdep, Generalized TTL Security Mechanism (RFC 5082) and private VLAN proxy arp (RFC 3069) support, asynchronous suspend/resume, several new drivers and many other small improvements. See the full changelog here."
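
For readers who want to check whether their own kernel was built with the new filesystems, the installed kernel config is enough. A minimal sketch in Python, assuming the distro installs its config as /boot/config-<release> (an assumption; some distros expose /proc/config.gz instead):

    import os
    import platform

    # Check the running kernel version and whether the new 2.6.34
    # filesystems (Ceph, LogFS) were built. The config path is distro-dependent.
    release = platform.release()                 # e.g. "2.6.34"
    print("Running kernel: %s" % release)

    config_path = "/boot/config-%s" % release    # assumption: typical distro location
    if os.path.exists(config_path):
        with open(config_path) as f:
            config = f.read()
        for option in ("CONFIG_CEPH_FS", "CONFIG_LOGFS"):
            state = "enabled" if "%s=" % option in config else "not enabled"
            print("%s is %s" % (option, state))
    else:
        print("No kernel config found at %s" % config_path)
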
  • by deathcow ( 455995 ) * on Monday May 17, 2010 @04:19AM (#32234844)

    > a filesystem for flash devices

    Here we go again: unless we stop supporting flash, Apple refuses to distribute dual-boot Linux-enabled iPads.

  • KVM (Score:5, Interesting)

    by VTI9600 ( 1143169 ) on Monday May 17, 2010 @04:22AM (#32234862)

    Other features are a driver for almost-native KVM network performance

    KVM is fantastic virtualization technology, yet Xen gets all the hype these days. Why? Paravirtualization is pretty cool stuff, but seriously, what CPUs are made without some type of hardware-assisted virtualization support?

    • The CPU in my Acer Aspire 5930 is one example: http://ark.intel.com/Product.aspx?id=36750 [intel.com]

    • Re:KVM (Score:5, Informative)

      by IBBoard ( 1128019 ) on Monday May 17, 2010 @04:51AM (#32234938) Homepage

      Erm, quite a lot? Intel use it as one of their distinguishing factors between upper and lower tier chips (albeit one that they put in data sheets but don't make overly obvious).

      • Re:KVM (Score:5, Informative)

        by VTI9600 ( 1143169 ) on Monday May 17, 2010 @05:19AM (#32235042)

        I'm sure you are disappointed that your 200 MHz Pentium Pro doesn't support VT-x, but the rest of the world owns (or will soon purchase) processors that do. To see what I mean, just go to newegg.com. 63 out of the 76 (83%) desktop-class [newegg.com] processors they sell have virtualization technology built in. 78 out of the 80 (98%) server-class [newegg.com] processors they sell (the ones that really matter) support it.

        And, if you still don't believe me, check out this page [wikipedia.org] on Wikipedia for a list of the Intel processors that support VT-x. Among the crapload of processors listed, you'll notice that 100% of their newest i3, i5 and i7 processors have virtualization support.
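
        Rather than digging through spec sheets, you can also just ask the CPU you already own: /proc/cpuinfo lists a "vmx" flag for Intel VT-x and "svm" for AMD-V. A minimal sketch:

          # Look for the hardware virtualization CPU flags in /proc/cpuinfo.
          flags = set()
          with open("/proc/cpuinfo") as f:
              for line in f:
                  if line.startswith("flags"):
                      flags.update(line.split(":", 1)[1].split())

          if "vmx" in flags:
              print("Intel VT-x reported by the CPU")
          elif "svm" in flags:
              print("AMD-V reported by the CPU")
          else:
              print("No virtualization flag found (missing, or hidden by the firmware)")
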

        • Re: (Score:3, Informative)

          by Calinous ( 985536 )

          If you want an inexpensive chip, you should carefully check Intel's support for virtualization - for example, some E7400 and E7500 parts had it and some didn't. Same for the E5400 and E5300 (some have it, some don't).

        • Re:KVM (Score:5, Informative)

          by icebraining ( 1313345 ) on Monday May 17, 2010 @06:04AM (#32235262) Homepage

          Yeah, but then you have dick moves like my CPU supporting VT (AMD Neo) but being disabled by the BIOS with no option to enable it. Thanks HP!

          • by alen ( 225700 )

            My Athlon 64 3200 had VT. No need to enable it in the BIOS like with Intel either; it just works.

            • I get the impression that the parent's issue is with the motherboard, not the processor. HP have a habit of putting their own bespoke boards in their desktop machines, more often than not with a custom BIOS. Unless you have an HP computer (or Dell, Acer, $vendor machine) your situation will more than likely differ from theirs.
          • Re:KVM (Score:4, Informative)

            by oatworm ( 969674 ) on Monday May 17, 2010 @11:44AM (#32239292) Homepage
            We have a bunch of HPs in the office that I thought had this problem too. However, it turns out HP hides the VT-x enable flag under the Security Options in the BIOS (I can only imagine how that makes any sense, but whatever), at least on their desktop machines. Could be worth a look.
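
            And once the flag is enabled, a quick sanity check that KVM is really usable is to look for the vendor module and /dev/kvm (if the firmware still has VT disabled, kvm_intel/kvm_amd typically refuses to load). A minimal sketch:

              import os

              # Is a KVM vendor module loaded, and does the /dev/kvm node exist?
              with open("/proc/modules") as f:
                  modules = f.read()

              loaded = [name for name in ("kvm_intel", "kvm_amd") if name + " " in modules]
              print("KVM vendor module loaded: %s" % (", ".join(loaded) if loaded else "none"))
              print("/dev/kvm present: %s" % os.path.exists("/dev/kvm"))
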
    • Re:KVM (Score:5, Informative)

      by 1s44c ( 552956 ) on Monday May 17, 2010 @05:08AM (#32235006)

      Other features are a driver for almost-native KVM network performance

      KVM is fantastic virtualization technology, yet Xen gets all the hype these days. Why? Paravirtualization is pretty cool stuff, but seriously, what CPUs are made without some type of hardware-assisted virtualization support?

      Xen doesn't get all the hype. From what I've seen everyone is ditching xen and redhat is leading the way. Not that I mean to imply that xen deserves to get ditched, it's great too.

      • Re: (Score:3, Interesting)

        by arth1 ( 260657 )

        I think Red Hat's dropping of Xen for KVM is as much politics as anything else. In the eyes of business, Xen = Citrix, and if you're going for Xen, why not go for Citrix?

        Personally, I'm very pleased with Xen except for the qemu IO performance. Setting the host's block device schedulers to noop (for linux guests) or deadline (for Windows guests) helps, but high host IO load still makes it very hard to do advertised features like instant failover using an NFS-hosted container.
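
        For reference, the scheduler tweak mentioned above is just a sysfs write. A minimal sketch, with "sdb" as a placeholder for whatever device backs the guest images (needs root):

          # Read and set the I/O scheduler of a block device via sysfs.
          DEVICE = "sdb"                               # hypothetical device name
          path = "/sys/block/%s/queue/scheduler" % DEVICE

          with open(path) as f:
              print("Current: %s" % f.read().strip())  # e.g. "noop [deadline] cfq"

          with open(path, "w") as f:
              f.write("noop")                          # or "deadline" for Windows guests

          with open(path) as f:
              print("Now: %s" % f.read().strip())
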

        • Personally, I'm very pleased with Xen except for the qemu IO performance. Setting the host's block device schedulers to noop (for linux guests) or deadline (for Windows guests) helps, but high host IO load still makes it very hard to do advertised features like instant failover using an NFS-hosted container.

          The solution to this would be to use PV drivers for HVM DomUs. This effectively closes the gap in performance between paravirtualized DomUs and fully virtualized DomUs. Commercial XenServer provides them, and people routinely build them. They are a bit of a pain to install on vanilla Xen DomUs, since they are not signed and require a boot argument (/GPLPV) to be added to Windows, but they work as advertised.

    • I would like to return your implicit question: what does KVM have that Xen has not? Why is everyone suddenly switching to it?

    • by shallot ( 172865 )

      Other features are a driver for almost-native KVM network performance

      KVM is fantastic virtualization technology, yet Xen gets all the hype these days. Why? Paravirtualization is pretty cool stuff, but seriously, what CPUs are made without some type of hardware-assisted virtualization support?

      Er, it's KVM that gets all the hype these days because it's still got some novelty. Xen just has the users, because it's simply more mature.

    • by jon3k ( 691256 )
      Full-featured management tools. KVM is woefully behind the curve. Until you make it approachable to administrators used to the Xen management tools and the VMware Virtual Infrastructure Client, you won't see it catch on like other hypervisors have. Sad but true.
    • Atoms. Old ones. The Opteron 285s in my dual-socket server at home, for example.
    • KVM is fantastic virtualization technology, yet Xen gets all the hype these days. Why?

      Well... in part because the open source world had embraced and promoted it for a long time. Corporations buying up open source projects and using them as a base platform for their commercial products is a problem. Or is it not? I'm not sure I understood exactly what happened with Xen and open source.

  • by mrpacmanjel ( 38218 ) on Monday May 17, 2010 @04:24AM (#32234866)

    Is the RT2500-based chipset working reliably now?

    The developers switched to a new driver model because it's "better".

    If "better" means a once-working wifi chipset becomes grossly unstable, the previous drivers are considered "legacy" and hence will not compile on kernels later than 2.6.29, and the current drivers are as stable as a "one-legged man playing football".

    A few years later and 2.6.34 is released - is it working yet?

    Considering the RT2500 chipset is present in many wifi products, the current state of "stability" is woefully inadequate.

    (and don't get me started on the f***ed-up i845 drivers for xorg! They worked fine under previous kernels & xorg; one update later to both and graphics performance is royally screwed, with many crashes)

    Apart from that - happy Linux user for over 10 years!

    • by dbIII ( 701233 ) on Monday May 17, 2010 @04:41AM (#32234920)

      Is the RT2500-based chipset working reliably now?

      Here's how the dismal state of support for that chipset was explained to me.
      The answer is probably that mine has worked for years and yours hasn't. The really annoying thing is that a lot of slightly different chips have come out under that name, and even under MS Windows, if you don't use the driver that came with yours you are stuffed; a driver for another undocumented variant won't help.
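
      If you are not sure which variant (and which driver) a given box actually has, sysfs will tell you. A rough sketch for PCI adapters; USB dongles expose slightly different attributes:

        import os

        # Print each network interface's PCI vendor/device ID and bound driver.
        def read_attr(devdir, name):
            attr = os.path.join(devdir, name)
            return open(attr).read().strip() if os.path.exists(attr) else "?"

        for iface in sorted(os.listdir("/sys/class/net")):
            devdir = "/sys/class/net/%s/device" % iface
            if not os.path.isdir(devdir):
                continue                  # skip purely virtual interfaces (lo, tun, ...)
            driver_link = os.path.join(devdir, "driver")
            driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "?"
            print("%s: vendor=%s device=%s driver=%s"
                  % (iface, read_attr(devdir, "vendor"), read_attr(devdir, "device"), driver))
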

    • by Chrisq ( 894406 )
      Though the RT2500 support has been slow in coming it has been stable for over a year now.
    • by HateBreeder ( 656491 ) on Monday May 17, 2010 @04:59AM (#32234982)

      The problem is that the RT2500 chipset is a proprietary, closed-source design that's "maintained" by a Taiwanese manufacturer who doesn't care about its users at all and only wants to sell as much cheap hardware as possible.

      Why would you get quality, polished drivers that are updated to support newer paradigms in newer kernels if the manufacturer isn't cooperating?

      I think it's magic that these drivers work at all.

      Next time, buy better kit from a reputable manufacturer that cares about Linux support.

      • by mrpacmanjel ( 38218 ) on Monday May 17, 2010 @05:37AM (#32235136)

        The problem for me is that the "legacy" drivers were rock solid and I never thought about it until kernel 2.6.30 & greater were released.

        My wifi was ultra-reliable under the "legacy" drivers.

        Since the newer drivers were released I have had nothing but problems.

        What changed between old and new drivers?

        • Would you use Windows 98 drivers with Windows 7?

          They most likely would not work.

          You would download newer versions of the drivers... right? You wouldn't complain to Microsoft that your Win98 drivers aren't working anymore, would you?

          So why do you think the same situation should just magically work in Linux?

          • Re: (Score:2, Interesting)

            by AvitarX ( 172628 )

            Because as a Linux user I have become quite accustomed to things "just working".

            Unfortunately, wireless destroys that notion, causing irritation (though my thinkpad T500 shows me how it is when the stuff does work, and it's great).

            • Re: (Score:3, Informative)

              by mrpacmanjel ( 38218 )

              Generally Linux works very well.

              For me the two biggest problems seems to be wifi & graphics cards.

              ATI decided my r300-based card was legacy and discontinued it via the closed-source drivers. I'm screwed (thankfully the open source drivers are OK, but nowhere near as fast).

              RT2500 - I could download the source of the serialmonkey drivers and compile them. Great it works fine and did that with every distro upgrade.

              Then these drivers were abandoned and all focus is now on the in-kernel version and stability

              • Re: (Score:3, Interesting)

                >ATI decided my r300-based card was legacy and discontinued it via the closed-source drivers. I'm screwed (thankfully the open source drivers are OK, but nowhere near as fast).

                ATI/AMD abandoned those cards not only because they were old, but also specifically because the OSS drivers for them were complete, if not that fast. They wouldn't have dropped them otherwise.

        • What changed between old and new drivers?

          What part of "open source" is confusing you?

      • by stsp ( 979375 ) on Monday May 17, 2010 @07:34AM (#32235708) Homepage

        The problem is that the RT2500 chipset is a proprietary, closed-source design that's "maintained" by a Taiwanese manufacturer who doesn't care about its users at all and only wants to sell as much cheap hardware as possible.

        Well, actually, Ralink has for a long time been providing documentation to open source developers writing drivers for their devices, without requiring an NDA.

      • by Blakey Rat ( 99501 ) on Monday May 17, 2010 @09:00AM (#32236482)

        Except they did work, and worked better, in the last version. The kernel maintainers swapped out the working version for a flaky version, and have now made enough changes that the working version won't work even if you compile it in manually.

        Did it occur to you to actually read the post you were replying to? This was all in there, not behind a link or anything.

        • My point was that the kernel should move ahead and have the driver developers keep up - not the other way around.

            • Well, those driver developers are obviously doing a bang-up job if the quality of the driver goes down over time. Kudos, anonymous heroes! Continue to make our software worse! Soon you will have banished the scourge of usable, stable software forever!

              • That's the other point... they're not anonymous. The manufacturer keeps the specs closed, the chip is undocumented... you rely on the manufacturer's software engineers to write your drivers... and the manufacturer doesn't care about new kernel versions. They made it work once with the 'current' thing when the chip came out; from then on you're on your own.

              Do you bother reading anything before you post?

  • Excellent (Score:5, Funny)

    by Anonymous Coward on Monday May 17, 2010 @04:31AM (#32234884)

    With releases like these, it's no wonder M$ is getting worried. Been running this kernel a while now on our production servers (even from before it was tagged release, I like running bleeding edge in order to get the most performance from my company's hardware investment) and save for a few data corruption issues, it's been rock stable! I'll have to play with the new KVM support later on one of the servers with the fewest customers on it (a couple of hundred), looks nice!

    Sadly... it looks like my company is looking at going with Windoze for a few important servers because of a few outages. I know it was because of faulty hardware, because I had just compiled a custom kernel for those servers with just the right flags needed (I want to get the most performance!) but this must have triggered a hardware bug, because the kernel worked fine on my work laptop. Sigh...

    Anyway, keep up the good work!

    • It looks like my company is looking at going with Windoze for a few important servers because of a few outages. I know it was because of faulty hardware, because I had just compiled a custom kernel for those servers with just the right flags needed (I want to get the most performance!) but this must have triggered a hardware bug, because the kernel worked fine on my work laptop...

      Help Wanted, Male.

    • Re:Excellent (Score:4, Insightful)

      by 1s44c ( 552956 ) on Monday May 17, 2010 @05:16AM (#32235034)

      You don't put bleeding edge or custom kernels on production servers without seriously heavy testing. You would not run production stuff on a Windows beta release, would you? It's the same thing.

      Stick to proper releases of good distributions and customize as little as possible. You will get a system many times more stable than anything MS has ever come up with.

      • by solanum ( 80810 )

        Whooooossssshhhhhh!!!! That was the grandparent post going over your head.

        Now why was the parent marked insightful exactly?

        • Re:Excellent (Score:4, Informative)

          by L4t3r4lu5 ( 1216702 ) on Monday May 17, 2010 @08:14AM (#32236060)
          Because regardless of what the grandparent said, the above post is insightful. It's also interesting to those who know nothing of Linux but do know of Windows servers falling over because of a mandatory patch, for example. For essential systems, a working, stable configuration does indeed make more sense than a cutting-edge, potentially buggy one.

          I hate to trot out Ubuntu as an example, but why do you think they have Long Term Support releases? High-availability production servers are not expected to run Ubuntu Server 9.10; it has a lot of patches which may break features that worked in previous versions (just look at the list of dependencies removed when you upgrade). I would expect a significant number of those servers to be running 8.04 LTS, and to potentially upgrade to 10.04.1 when it becomes available (the LTS version of LL still being relatively new and untested).
      • I think the OP was being sarcastic... And I hope you were too, with your last sentence.
    • Re: (Score:3, Insightful)

      by shish ( 588640 )
      Why do all the replies to this comment seem to take it seriously? :-|
      • Re: (Score:3, Informative)

        Because I've heard the exact same thing from people who actually believe it and have done it at their job. It is a comment made by a young, inexperienced person (I can't call them an administrator) who doesn't have the experience to understand the problems with doing this.

        • Re: (Score:3, Insightful)

          by Kjella ( 173770 )

          Because I've heard the exact same thing from people who actually believe it and have done it at their job. It is a comment made by a young, inexperienced person (I can't call them an administrator) who doesn't have the experience to understand the problems with doing this.

          What, can't be. According to Slashdot all Linux administrators are born as black-belt Linux experts and Windows administrators are all people who got lucky bumbling through their MCSE exam. Usually in comparisons where five incompetent Windows administrators could be replaced with one competent Linux administrator, even though you could probably replace five incompetents with one competent one in general.

          • Re:Excellent (Score:4, Insightful)

            by 1s44c ( 552956 ) on Monday May 17, 2010 @07:01AM (#32235538)

              What, can't be. According to Slashdot all Linux administrators are born as black-belt Linux experts and Windows administrators are all people who got lucky bumbling through their MCSE exam.

              No one is born a black belt at anything. You have to work at it. There are inexperienced Linux admins just like there are inexperienced Windows admins. The ones who can't or don't want to learn end up on Windows eventually.

            • Re:Excellent (Score:4, Insightful)

              by Blakey Rat ( 99501 ) on Monday May 17, 2010 @09:08AM (#32236570)

              Wow, two wooshes in one thread.

              Also:

                The ones who can't or don't want to learn end up on Windows eventually.

              That's the dumbest fucking thing I have ever read in my life.

              You're seriously delusional if you think that Windows servers are inferior to Linux servers in any way. (Well, "any" way is an overstatement, but any practical way.) If you're doing communication, Exchange is great. If you're doing file sharing/single sign-on, Active Directory is also great. IIS is as good as, or better than, Apache in all benchmarks, and has more features. Decent support for technologies like OLAP pretty much only exists on Windows at the moment.

              Nobody's going to argue that Windows is cheaper than Linux. But the argument that it's worse is harder to make, unless of course you know fuck-all about Windows and just repeat FUD on Slashdot all day.

              • Re: (Score:3, Informative)

                by Simetrical ( 1047518 )

                You're seriously delusional if you think that Windows servers are inferior to Linux servers in any way. (Well, "any" way is an overstatement, but any practical way.)

                You're seriously delusional if you think that Windows servers are as good as Linux servers in every practical way. Linux has advantages beyond cost. For some large organizations, like Google, being able to tinker with the software is essential. For other organizations, some particular feature of Linux might be essential: like DRBD, or support for some application, or good software RAID (much better than Windows from what I've heard), or (soon) btrfs, or performance on their particular workload.

                If Wind

      • Re: (Score:3, Informative)

        by arth1 ( 260657 )

        Why do all the replies to this comment seem to take it seriously? :-|

        Because (a) it is Monday morning, and (b) Sturgeon's law applies to /. posters too.
        And, unfortunately, (c) there are idiots like that out there. But they generally don't change their posting prefs to AC when bragging about their latest folly...

        • by 1s44c ( 552956 )

          (c) there are idiots like that out there. But they generally don't change their posting prefs to AC when bragging about their latest folly...

          The real idiots never figured out how to log in.

      • by 1s44c ( 552956 )

        Why do all the replies to this comment seem to take it seriously? :-|

        Because some of us have had to clean up the mess left by people like him. The world is full of people who do really stupid things whilst thinking they are doing a good job.

    • Re: (Score:2, Informative)

      Huh? You are running unreleased kernels, you _admit_ that you have "data corruption issues" and you claim "rock stable"?

      What idiot runs beta kernels on production servers? I'm glad you aren't working for me, because I'd fire your ass for doing this.

      Production servers are NOT the place to run beta kernels.

      And you are complaining because your company is going with Windows "because of a few outages"? How do you know that it wasn't a kernel bug triggered by that hardware configuration? Your laptop has di

      • Re: (Score:3, Insightful)

        by Blakey Rat ( 99501 )

        Is this a new "Humorless Monday" holiday I haven't yet heard about? Or do kernel people really take themselves this seriously?

        Christ. You're like the fifth reply who didn't get that the parent was an EXTREMELY OBVIOUS JOKE. Laugh, stupid.

  • GPU switching (Score:3, Insightful)

    by TheLink ( 130905 ) on Monday May 17, 2010 @04:41AM (#32234914) Journal
    > Some laptops have two GPUs: a low-power, less capable GPU and a power-hungry, more powerful GPU. Users should be able to switch to one or another at runtime. In this version, Linux adds support for this feature. You need to restart X, though.

    How do you restart X without affecting all your GUI apps? If you can't restart X without bringing down your GUI apps, I don't see the point for the target audience.

    For some people, "only having to restart X" will only save a bit of time over rebooting the whole laptop, reconfiguring the BIOS, etc.
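
    For the curious, the kernel side of this is exposed through the vga_switcheroo debugfs file. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and you are root; the userspace/X side is the part that is still missing:

      # Query and drive the vga_switcheroo interface.
      SWITCH = "/sys/kernel/debug/vgaswitcheroo/switch"

      with open(SWITCH) as f:
          print(f.read())          # lists the GPUs and shows which one is active/powered

      with open(SWITCH, "w") as f:
          f.write("OFF")           # power down the inactive GPU; "IGD"/"DIS" select
                                   # the integrated/discrete GPU instead
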
    • Re:GPU switching (Score:4, Informative)

      by TeknoHog ( 164938 ) on Monday May 17, 2010 @04:51AM (#32234942) Homepage Journal

      How do you restart X without affecting all your GUI apps? If you can't restart X without bringing down your GUI apps, I don't see the point for the target audience.

      If you are using something like GNOME or KDE, it can probably save your GUI session. Individual applications will have to deal with their contents, but many of them already do that. At least Firefox and OpenOffice can restore their sessions after being terminated.

      • Re: (Score:2, Informative)

        If you are using something like GNOME or KDE, it can probably save your GUI session. Individual applications will have to deal with their contents, but many of them already do that. At least Firefox and OpenOffice can restore their sessions after being terminated.

        In KDE, System Settings -> Advanced -> Session Manager -> On Login, Restore Manually Saved Session. After that, you can save your session state from the logout menu or, alternatively, using a shellscript that loops every 30s or so and

        • Details details (Score:3, Interesting)

          by TheLink ( 130905 )
          When I last used KDE years ago, that didn't work so well. For instance, my ssh connections weren't restored. And even for KDE apps the session saving thing didn't save everything.

          You may ask "why should SSH connections be restored?" and I'll reply: why should my apps and connections go down in the first place just because X goes down?

          Heck, even in Windows, when I kill "explorer.exe" my apps still keep running. I know it's not the same thing, but who cares when all the fanatics keep saying Linux is so stable,
    • Re: (Score:2, Informative)

      by skynexus ( 778600 )

      If you can't restart X without bringing down your GUI apps, I don't see the point for the target audience.

      For some people, "only having to restart X" will only save a bit of time over rebooting the whole laptop, reconfiguring the BIOS, etc.

      Not all laptops have a BIOS configuration that allows you to choose the GPU (ASUS UL series for instance). On mine, I had to change the SATA operation mode to have the second GPU work, but this in turn meant a severe performance degradation on my SSD. Without that (deficient) improvisation, I would not have been able to use the second GPU at all!

      Besides, logging out of your desktop and then logging in again is surely better than what you suggest?

      • by TheLink ( 130905 )

        > Besides, logging out of your desktop and then logging in again is surely better than what you suggest?

        Where did I suggest people do that? To me such a kernel feature is useless till the rest of the "Linux Desktop" bunch work together and produce something like this:

        http://www.anandtech.com/show/3709/gfxcardstatus-brings-2010-macbook-pro-gpu-switching [anandtech.com]

        Yes the kernel bunch probably have to do this feature first, but for decades the X server going down has caused X applications to lose unsaved data and bas

    • Re:GPU switching (Score:4, Insightful)

      by Kjella ( 173770 ) on Monday May 17, 2010 @05:27AM (#32235088) Homepage

      Good question, but wrong project. The kernel is only responsible for initializing, suspending, resuming and lately modesetting of the hardware and it seems that is possible now. There probably needs to be some userspace code to pull information from one GPU and load it into the other but that's for the xorg server to do. They're probably working on it but it won't be in a Linux (the kernel) release announcement.

      • by ettlz ( 639203 )

        The kernel is only responsible for initializing, suspending, resuming and lately modesetting of the hardware and it seems that is possible now.

        Plus managing GPU memory allocation. But yes, this is probably something to be added to XRandR, or some other protocol extension. (What would happen to normal, non-X virtual consoles, though? This might require some more stuff in the kernel.)

    • by ledow ( 319597 )

      I'm not saying this is the solution, nor how it should be done, but you could conceivably run a "remote" X-Windows session on a virtual buffer on the laptop - connecting to it with another X-Windows client on the same machine - and then, when you "switch" GPUs, the restart of X-Windows will only affect the "client" viewing the real X session but be transparent to the user, because they'll reconnect to their original session.

      It's not a huge stretch of the imagination that the virtual buffer can pass off nece

    • I agree that it's not as useful as it could be; it would be better if it could do this on the fly easily.

      But it could theoretically (I've never tried it) be done using Xmove [wikipedia.org], which "allows the movement of X Window System applications between different displays and the persistence of X applications across X server restarts".

      xmove lets the client disconnect from its current X server, and connect to a new one, at any time. The transition is completely transparent to the client. xmove works by acting as a proxy between the clien

    • There's experimental support for 'hotswitching' called 'PRIME' (for obvious reasons :) ).

      See here: http://airlied.livejournal.com/71734.html [livejournal.com]

  • I put an openSUSE Build Service version of the .34 RC kernel on my new desktop because it fully supported the new Core i5 I'd just installed. The downside was that there weren't any pre-built nVidia drivers because it wasn't a final kernel yet. Hopefully nVidia will start building the drivers in their repo so that I can move to a repo for my drivers :)

  • by Advocadus Diaboli ( 323784 ) on Monday May 17, 2010 @05:19AM (#32235044)

    It's notable that this kernel already includes device IDs for next year's Intel hardware. This is something completely new, since Intel so far had a much more closed policy and wouldn't disclose device IDs prior to a chipset's release.

    Now there is a really good chance that driver code will make it into the distribution kernels before the new hardware is released for mass production. So the chances that brand-new hardware will work without any flaws in 2011 are higher than ever before.

    Thanks to Intel for this change in their policy. This was a small step for Intel (since everybody "knows" that they will release new chips every year) but a giant leap for providing Linux hardware compatibility right out of the box.
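
    One way to see whether an installed kernel already claims a given PCI ID is to grep its modules.alias. A sketch; the device ID below is only an illustration, substitute the one lspci -nn reports for your hardware:

      import platform

      # Find modules whose PCI alias matches a vendor/device ID.
      VENDOR, DEVICE = 0x8086, 0x0046              # Intel vendor ID + example device ID
      pattern = "pci:v%08Xd%08X" % (VENDOR, DEVICE)

      alias_path = "/lib/modules/%s/modules.alias" % platform.release()
      with open(alias_path) as f:
          modules = set(line.split()[-1] for line in f
                        if line.startswith("alias pci:") and pattern in line)

      print("Modules claiming %s: %s" % (pattern, ", ".join(sorted(modules)) or "none"))
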

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday May 17, 2010 @07:11AM (#32235588) Journal
    I'm always amused by at least one strange juxtaposition of the big-serious-enterprise-server stuff that the corporate devs are most interested in and the oddball hobby projects that can get included as well, so long as they follow the kernel process.

    In this case, I think it was all the "multi-petabyte scalable filesystem, esoteric btrfs improvements, kernel virtualization networking stuff, gamecon: add rumble support for N64 pads" that did it.
    • Maybe a stupid question, but why does rumble support for N64 pads need anything in the kernel? Doesn't Linux have any kind of abstraction layer for game controllers?

      • I'm not sure exactly. The trend for modern oddball devices seems to be doing them in userspace with a libUSB driver. That isn't really a huge issue with game controllers, though, because virtually all of the modern ones are either USB HID or Bluetooth HID, possibly with funny connectors (or, in the case of the Xbox 360, some proprietary wireless protocol; but the PC dongle is, I believe, USB HID). For such modern devices, Linux does effectively have abstraction layers, either USB HID, or libUSB + userspace co
      • by Enleth ( 947766 )

        That went into evdev, I guess, which is an abstraction layer for all input devices. Most probably just a 0 changed to a 1 in some device ID table to tell evdev to relay rumble events to the pad (which might not be the default in order to, e.g., prevent dumb pads from locking up). Just a guess, though.
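
        If you want to see which of your own input devices advertise rumble through evdev, here is a small sketch using the third-party python-evdev package (an assumption, not part of the kernel); it needs read access to /dev/input/event*:

          from evdev import InputDevice, list_devices, ecodes

          # List input devices that expose force-feedback (rumble) capabilities.
          for path in list_devices():
              dev = InputDevice(path)
              caps = dev.capabilities()
              if ecodes.EV_FF in caps:
                  print("%s (%s): FF effect codes %s" % (path, dev.name, caps[ecodes.EV_FF]))
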

  • does it run Linux?

    Oh wait!

    • by DeBaas ( 470886 )

      does it run Linux?

      Yes it does, and if your system supports it, it can even run Windows....

  • So version 2.6 is now at fix level 34. Will there ever be another minor or even major version?

    • From what I understand, the next major change to the kernel (in terms of HUGE changes) would be 2.7. Although people change their minds and Linus might say to put it at 3.0.

  • And three of my four wifi cards still probably don't work :-(
