
Should Linux Have a Binary Kernel Driver Layer?

Posted by Zonk
from the abstraction dept.
zerojoker writes "The discussion is not new, but it was heated up by a blog entry from Greg Kroah-Hartman: three OSDL Japan members, namely Fujitsu, NEC and Hitachi, are pushing for a stable kernel driver layer/API, so that driver developers wouldn't need to put their drivers into the main kernel tree. GKH has several points against such an idea." What do you think?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday November 08, 2005 @02:47PM (#13981316)
    Linux, with the stability of Windows prior to the advent of certified drivers.
    • by KiloByte (825081) on Tuesday November 08, 2005 @03:12PM (#13981615)
      Driver certification means only that Microsoft received the driver and agreed to confirm that it comes from one of its business partners, and not from a suspicious open-source hacker. You don't even have a guarantee that the driver is free from rootkits.

      The company isn't really interested in fixing any issues with the drivers -- if you have problems with a bug, you have already paid them all the money they are going to get for that particular piece of hardware.

      An example: the !@#$%^&* nVidia proprietary X driver. On some older cards, it will cause kernel oopses, around once every few hours under my usage patterns. Is there anything we can do to fix the problem? I'm not a master kernel hacker, but I do have some rudimentary skills there -- I would have at least some chance of making a fix myself; if I didn't succeed, reporting a data-loss bug would get it fixed by someone with more knowledge in no time.
      On my current desktop, switching to text mode the first time X is run after a boot puts the console into 80x25 mode without even doing a TIOCSWINSZ. Somehow, if I kill the X server, reset the video mode and then start X again, everything is fine until the next boot/resume. What can I do to fix this annoyance? Begging nVidia does nyet work.
      • by larkost (79011)
        On the driver certification part: it does only certify that the driver came from a Microsoft business partner... but the characterization of why is a bit off. It is there to certify that the driver is not from an unknown third party. It is done for the same reasons that most Linux ISOs are distributed with an MD5 hash next to them (which most people ignore anyway): it allows you to be sure of where the code is coming from, and that it is not coming from a nefarious third party.

        I do wish that linux and MacOS X has
      • by Markus Registrada (642224) on Tuesday November 08, 2005 @04:36PM (#13982606)
        We already have a stable binary driver interface. It's called NDIS. Just about everything that's wrong with it (and ndiswrapper) would also be wrong with a Linux-only binary spec, and other things besides. That said, anyone complaining about NDIS drivers not working on non-ia32 hosts need only attach an ia32 emulator to NDIS. Again, if that doesn't sound very nice, any binary driver interface is likely to be almost equally, er, not-nice.

        This is an inflammatory issue, but a stable interface doesn't necessarily open the kernel up to proprietary drivers. It's a matter of licensing. Any third party could introduce a GPLed abstraction layer. There are big practical advantages to being able to take a GPLed driver's object file and plug it into any old kernel. In that case there would be no really fundamental reliability or debuggability problems. The remaining problem would be the increasing mismatch between the abstraction presented to the driver and the abstraction supported by the kernel as it develops.

        It would be good to separate the discussion into two, one for inflammatory license- and/or ideology-related culs-de-sac, and another technical, to address legitimate needs for stability in drivers that are not (yet) in the kernel tree.

  • Good idea! (Score:5, Funny)

    by Anonymous Coward on Tuesday November 08, 2005 @02:48PM (#13981336)
    Let's put it in Kernel 2.7.
    Oh wait... OOPS!
  • by scsirob (246572) on Tuesday November 08, 2005 @02:48PM (#13981338)
    Having a kernel API for drivers allows developers to stay away from the mainstream kernel. This will enhance the stability of the kernel in general and also allow hardware vendors to support Linux with less effort.
    • by IAmTheDave (746256) <basenamedave-sd.yahoo@com> on Tuesday November 08, 2005 @02:56PM (#13981430) Homepage Journal
      Having a kernel API for drivers allows developers to stay away from the mainstream kernel. This will enhance the stability of the kernel in general and also allow hardware vendors to support Linux with less effort.

      I am not a linux contributor, but I would think you'd kinda want to guard access to the kernel kinda closely. I mean, sure, anyone can fork it or grab a copy to putz around with, but contributing back into the kernel - that's gotta be just about as stable as a piece of code can be.

      Despite some loss in efficiency, I've always been an advocate of abstracted access. For many of the pieces of software we write at my job, we add a logical API so that we don't have to open the main code branch every time we want to add a feature.

      Driver developers are hardly the same as kernel developers. Keeping the two logically separated makes sense -- not to mention that driver developers are hardly the only ones who would benefit from this API.

      • by Bloater (12932)
        > I am not a linux contributor, but I would think you'd kinda want to guard access to the kernel kinda closely. I mean, sure, anyone can fork it or grab a copy to putz around with, but contributing back into the kernel - that's gotta be just about as stable as a piece of code can be.

        A stable binary driver API that doesn't mean putting the driver into the kernel is already there. It is called the syscall interface. If you have a stable module ABI as proposed (over and over again, about once a month for th
      • by BoldAndBusted (679561) on Tuesday November 08, 2005 @03:22PM (#13981734) Homepage

        I am not a Kernel Developer, but I know some. ;) I guess my open question is how this would benefit kernel users. Yes, I see it would reduce the workload of kernel devs. Yes, I see it would allow driver developers to skip the kernel code vetting process. But the kernel code vetting process is a strong benefit of using Linux from a user perspective, as I know the code is well tested by an army of users and developers.

        Once you push driver development out of the kernel, yet give access to kernel internals in this way, you introduce a level of uncertainty as far as stability and robustness are concerned. One must question why these big companies are pushing for this, but most human kernel devs are not.

    • by kmmatthews (779425) <krism@mailsnare.net> on Tuesday November 08, 2005 @02:58PM (#13981455) Homepage Journal
      No, because we'd (at best!) get the same crap drivers that Windows gets. Childish crap spewed out by the lowest bidder that destabilizes the entire system. Not to mention that no open source coder can _fix_ the damn thing then.

      Look at nVidia's drivers on Linux! Always well behind other drivers, and filled with bugs, because we have to wait for nVidia to get off their asses and fix the damn thing.

      What happens in 10 years when you're trying to use that binary driver to recover data from an ancient device? If it was an open source driver, you could fix it to work with your system; binary, you're going to have a lot more work to do.

      • by HotNeedleOfInquiry (598897) on Tuesday November 08, 2005 @03:10PM (#13981593)
        Suppose a company is developing an embedded Linux app for its own hardware. All of a sudden, all of the communication with the board-specific hardware is done through binary drivers, resulting in an effectively closed system.

        No more hacking WRT54G's for you, chump.
      • by MobyDisk (75490) on Tuesday November 08, 2005 @03:48PM (#13982055) Homepage
        Having a stable API for writing drivers is a completely separate issue from having binary-only drivers. The article summary makes them seem like one thing. Having a stable API would seriously improve the quality of the drivers because developers would not have to constantly modify drivers just to keep them working properly.

        Many drivers suffer quality issues or are just abandoned because drivers must be changed to work with each new kernel. That is time that could be spent stabilizing them, improving them, or writing new drivers. It is probably the single biggest hurdle to getting a broad range of driver support for Linux.

        For some drivers we already have an API. For example, sound cards can use ALSA instead of coding directly against the kernel. Driver quality and quantity improved significantly because of ALSA. The same goes for SANE. The only problem there is that SCSI and USB support has been lagging.
      • That is simply false. The NVIDIA drivers are the *only* decent 3D drivers available for Linux at the moment. No other drivers even come close in terms of features, performance and stability with 3D applications.

        Sure, it takes NVIDIA some time to catch up to the latest kernel changes, but they do a damn decent job considering the kernel is a moving target. If that were not the case, their drivers, which are already the best available, would be even better.

        Many hardware manufacturers are not in a posit

      • by IamTheRealMike (537420) <mike@plan99.net> on Tuesday November 08, 2005 @04:16PM (#13982348) Homepage
        Look at nVidia's drivers on Linux! Always well behind other drivers, and filled with bugs, because we have to wait for nVidia to get off their asses and fix the damn thing.

        Uh, yes. Because everybody knows that being open source is a magical recipe for never having any bugs.

        Writing video drivers is hard, dude, and nVidia employs several experienced people who do it full time. I've seen them discuss technical issues in public before and have no doubt in my mind that they are experts.

        Given how many open source drivers are buggy as hell (i810 audio, hello) there must be another explanation. Here's one try.

        I hypothesise that what determines the quality of a driver is the number and quality of the developers working on it. Closed source drivers sometimes suffer because the quality and/or quantity of the developers writing them isn't good enough. Being open source means that theoretically the quantity and quality of developers is unbounded. However, note that this is theory -- being open doesn't actually imply that you will suddenly get legions of experts in video hardware writing your drivers for you. It merely makes it possible.

        To be honest, I have serious doubts that we'd have such good drivers if nVidia GPLd them tomorrow and simultaneously fired all of their Linux developers. Being closed doesn't mean you can't have good people working on it, and being open doesn't mean you will. It simply alters the bounds of the possible.

    • by Anonymous Coward
      Wrong, wrong, wrong wrong wrong.

      Drivers run as kernel-level code.

      Closed source code linked into the kernel gets kernel-level privileges. Alone, that is a scary thing.

      Worse, binary drivers mean support for only the platforms that vendors choose to support. Users of less popular architectures miss out.
    • Did you bother to read either the FA or any of the articles to which it linked? At least read GKH's take on all this binary driver nonsense [linux.no]. If his insightful comments on the issue do not change your mind, fine.

      GKH raises good points about how a stable binary driver interface will open the floodgates to both security problems and to update/maintenance problems. As it stands right now, Linux kernel developers can quickly respond to threats because they are able to fix all instances of a given problem, in

    • Already done (Score:3, Informative)

      by jd (1658)
      There's the Connector layer, sysfs and fuse. The userspace-in-kernel patches also help. These allow binary-only drivers in userspace with significant kernel access.

      For graphics, GGI and KGI would allow direct binary-only drivers to be written that applications can use, again without modifying the kernel.

      Not sure whether you could make any use of the ABI/IBCS work for drivers, but they certainly allow "foreign" binaries to run under Linux, without anything foreign being put in the kernel itself.

      In other word

  • No Thanks! (Score:5, Interesting)

    by Shads (4567) <[shadus] [at] [shadus.org]> on Tuesday November 08, 2005 @02:48PM (#13981339) Homepage Journal
    No thanks, this is just a great way to promote closed source inside the linux kernel and to make debugging problems totally impossible.
    • Re:No Thanks! (Score:3, Insightful)

      by isometrick (817436)
      I would say yes to a formal driver API, but I also think source for the drivers should still be released. The two concepts (formal API, open source) aren't mutually exclusive, but I guess being able to use closed source drivers is, in reality, why these companies are pushing for a formal API.
      • Re:No Thanks! (Score:4, Insightful)

        by pe1rxq (141710) on Tuesday November 08, 2005 @03:12PM (#13981610) Homepage Journal
        The idea is not just a formal API, but a formal API that doesn't change for a long time (VERY VERY long, if it is to be useful at all).
        The problem is that you will be stuck with the mistakes for the same time.
        You can't improve the kernel anymore because you have to keep supporting those mistakes.
        You would have to add huge glue layers to your kernel to keep emulating those mistakes.
        It is a bad idea. It only has advantages for driver writers who want to stay far away from the kernel and keep using their crap for a long time. The kernel and the kernel writers are not going to get better from it (they only get extra work supporting lazy driver writers), so why should they support it?

        Linux development works very simply: do it yourself or convince someone else to do it.
        You can convince other people either by motivating them with proper reasoning (and /. is not the place to do it) or by throwing money at them.
        Nobody has thrown enough money yet to get the kernel developers to do it. (And very likely nobody ever will.)

        Jeroen
    • by Speare (84249) on Tuesday November 08, 2005 @02:56PM (#13981432) Homepage Journal
      Come on, can't we see through this Cathedral-driven charade? There's no rational thought behind Intelligent Drivers. It's all just a dogmatic rehash of the same old Closed Source thinking forced upon our Open Source kernel laboratories! I say, send these Intelligent Drivers ideologues back to Kansas where they came from!
      • by b1t r0t (216468) on Tuesday November 08, 2005 @03:01PM (#13981495)
        Come on, can't we see through this Cathedral-driven charade? There's no rational thought behind Intelligent Drivers. It's all just a dogmatic rehash of the same old Closed Source thinking forced upon our Open Source kernel laboratories! I say, send these Intelligent Drivers ideologues back to Kansas where they came from!

        What, you want to support Flying Spaghetti Code Drivers or something?

    • Re:No Thanks! (Score:5, Insightful)

      by IamTheRealMike (537420) <mike@plan99.net> on Tuesday November 08, 2005 @03:29PM (#13981814) Homepage
      Three points.

      • For many companies, there are no benefits to going open source, and only downsides. The pragmatist recognises this economic reality; the zealot (and Greg KH is a notorious one) doesn't want to hear it. For instance, let's say nVidia GPLd their driver and got it accepted into the main tree. This gives them a competitive disadvantage because ATI or other companies can now look at how their drivers work with much less effort. It doesn't save them any work, because the community would still expect them to maintain the drivers and add functionality to them (and indeed, for cards that are new to the market or in development, they'd be the only ones who could). And there's no guarantee the drivers would be accepted anyway - simply using the wrong coding style is enough to cause problems in the kernel project, let alone doing the sort of things the nVidia drivers do (for instance they used to use a custom TLS scheme).

        Because of this, I'm 100% not convinced making binary driver developers lives harder changes anything. Are large businesses (the type who make hardware that's difficult to reverse engineer) likely to say, hey, gosh, you know this Greg KH guy kind of doesn't like closed drivers, maybe we should open them up to please him? Nope. They'll just work around the difficulties or not provide drivers at all.

      • It's not "impossible" to debug binary software. Please. This is a crappy excuse kernel developers regularly pull out of their asses to confuse people who haven't done much software development. What they actually mean is, "I don't really want to" (and if unloading the driver makes the crash disappear, that's a pretty good way to shift the problem onto the driver manufacturers anyway).

        I've been a Wine developer for years and have spent many hours doing this impossible thing of which you speak, and your average copy of MS Word or Steam is a LOT larger than your average driver. Yes, it's hard. No, it's not impossible. I've heard various excuses as to why kernel development is just different!! to userland software development, and don't buy it. Yes, having to reboot when a crash occurs is a royal pain in the ass, but so is not being able to get a backtrace because the game you're investigating treats any attempt at attaching a debugger as an attempt to hax0r its copy protection. Different space, different challenges. It's still possible.

      • A stable binary module interface would help open source developers too (even if it only existed for a single 2.X series at a time), as new software which relies upon kernel modules or drivers would become much easier to distribute and install for non-technical end users. Take for instance ZeroInstall [sf.net]. The main developer, Thomas Leonard, has since abandoned the original (imho quite elegant) approach of using a networked VFS because it required users to install a kernel driver, a task which proved hard to impossible for many non-developers. So the whole thing was rewritten to do things a different way and avoid the kernel, despite that coming with some disadvantages. We've also looked at using a kernel module to fix various speed issues for Wine in the past, but the difficulty of getting even minor patches accepted into the kernel, let alone major new subsystems, nixed that. If there had been a stable module interface we could have skipped all that and gone straight to improving things for end users without having to worry about what kind of indenting or list structure we should be using.
  • Bad (Score:3, Insightful)

    by Anonymous Coward on Tuesday November 08, 2005 @02:49PM (#13981342)
    Microsoft could license code into the drivers, or otherwise maneuver the driver makers into licensing their IP from MS; the drivers could form a layer between Linux and the hardware, and Microsoft could then pull the rug out from under Linux.

    Don't go there. Staying away protects Linux from getting tripped up, and devalues any hardware that doesn't support Linux.

    Don't underestimate the importance of driver support for Linux; you practically can't make any server component without a good solid Linux driver.
  • Amen! (Score:4, Insightful)

    by dusanv (256645) on Tuesday November 08, 2005 @02:49PM (#13981343)
    I have been waiting for this for a long time. The lack of a stable interface is hampering adoption of Linux. Not all manufacturers are willing to open-source drivers or, for that matter, to continuously change them as the APIs change. This is very welcome, but unfortunately I think they'll fail. There is just too much politics surrounding Linux these days.
    • Re:Amen! (Score:5, Informative)

      by Krach42 (227798) on Tuesday November 08, 2005 @02:57PM (#13981442) Homepage Journal
      This is very welcome but unfortunately, I think they'll fail. There is just too much politics surrounding Linux these days.

      It is not welcome. Linux is about Open Source, and allowing people to link-in binary closed drivers goes against this.

      Too much politics surrounding Linux? Where have you been? It has been the policy of the Linux kernel for a long time that it would never stabilize a binary driver interface, in order to discourage people from keeping their drivers closed source.

      The idea behind Linux is that an operating system should be Open and Free (as in speech), and that nothing should hinder this. Binary drivers are exactly this sort of hindrance.

      You may be upset that you don't have drivers for product XY because that company doesn't want to play along, but if you're trying to change the way the world does software, you can't go "ok, just because we *really* want your drivers, we're going to bend the rules for you."
      • It is not welcome. Linux is about Open Source, and allowing people to link-in binary closed drivers goes against this.

        Bypassing the dogma of the above, there are numerous pragmatic reasons why this would be better for linux, even if you don't include support for binary third-party drivers.

        • Developing drivers against a stable API is much simpler and more time efficient than developing against a moving target.
        • It is much easier for a part-time open source developer to support one API than a moving target
        • It would make life easier for part-time developers of experimental drivers. Right now, if your driver isn't in the kernel, there is no guarantee it will even build against the next stable sub-version of the current kernel. It is a huge pain in the ass.
        • It would make life easier for users. If I download a driver from sourceforge for my webcam, I don't have to download a new version and rebuild when a new kernel is released.

        Sure, some of these are extreme cases. You can usually get away with just re-compiling the driver, and occasionally, you can even use the binary from the existing version.

        The point is you should *always* be able to do this within the same major kernel version. There is no technical reason, aside from the politics of never wanting to allow binary drivers, not to have a stable driver API.

        Imagine if the Mozilla plugin API changed with every new version of Firefox. And look at all the complaints when a new Firefox version doesn't work with all the old extensions. It is exactly the same.

      • by Sycraft-fu (314770) on Tuesday November 08, 2005 @03:17PM (#13981672)
        If you are going to take the strategy of "we will do things to attempt to force you to do things our way," don't be surprised if the response of companies is "fine, then we will ignore you."

        The simple fact of the matter is that most companies are not willing to go open source, for software or drivers. You can argue that's a bad thing, but it is the reality of the situation. So, if open source is out in their book, either because of contractual obligations or mentality or whatever, they are left with two choices:

        1) Do Linux drivers, and update them every time the interface changes, which can be as often as every minor kernel revision.

        2) Ignore Linux, and let the community write the drivers if they want.

        The problem is that Linux is a bit player. It is larger than the other bit players, but it is still tiny -- less than 10%. Given that the continuous rewrites can get expensive, the choice for many will simply be not to write the driver.

        So if you are OK with that, then great, but don't get mad at companies when they won't play by your rules. Are they being unaccommodating? Sure, but so are you.

        In the end, it comes down to needing to make a decision of what you want Linux to be. If you want Linux to try and become the next big thing in OSes and start to really make an entrance in the home market, standardisation is needed. Standard APIs, standard UIs, inter-version consistencies, etc. In essence, it needs to become more like OS-X. Now if you are ok with Linux being more of a geek/server OS then that's not necessary, but you can't demand the world change around you.
    • by Sturm (914) on Tuesday November 08, 2005 @03:13PM (#13981622) Journal
      Linux was, is and hopefully will always be "open". I don't want closed drivers in the kernel (even via an API layer) any more than I want a Sony rootkit masquerading as DRM.
      It isn't about "politics". It's about policy and philosophy.
      If the hardware doesn't work with Linux, don't buy the hardware/pester the vendor for an open driver, or don't run Linux.
  • by rRaminrodt (250095) on Tuesday November 08, 2005 @02:49PM (#13981351) Homepage Journal
    http://www.kroah.com/log/2005/11/07/#osdl_gkai2 [kroah.com]

    Some misunderstandings were made. But of course, if they posted this link, there'd be no point to posting TFA or the arguments that will almost certainly follow. :-)
  • by s20451 (410424) on Tuesday November 08, 2005 @02:50PM (#13981357) Journal
    I gave up Linux mostly because I was tired of getting punished for having new hardware, which is often unsupported. Especially on laptops.

    If you don't force the manufacturers to include their driver source in the kernel, you might get them to release actual drivers for their new hardware.
  • No (Score:5, Insightful)

    by kmmatthews (779425) <krism@mailsnare.net> on Tuesday November 08, 2005 @02:51PM (#13981366) Homepage Journal
    It's a bad idea because what happens when the driver ABI changes? You have to wait umpteen months for the company to get off their asses and fix it -- like nVidia.

    It also precludes anyone else from fixing bugs in the broken, half-assed crap most corporates spit out these days.

    • Re:No (Score:3, Interesting)

      by mad.frog (525085)
      It's a bad idea because what happens when the driver ABI changes?

      Then you go and fire the developer(s) who called it a "stable" ABI.

      By definition, a "stable" ABI should change very rarely, and provide backwards compatibility.

      Without this, it ain't stable.
    • Re:No (Score:3, Insightful)

      by j1m+5n0w (749199)

      It's a bad idea because what happens when the driver ABI changes? You have to wait umpteen months for the company to get off thier asses and fix it - like nVidia.

      It seems unlikely at this point that Nvidia will ever open-source their video card drivers. (They might not even be able to legally - it's not uncommon for commercial software to contain code from third parties.) Assuming that this is the case, a stable ABI would make Nvidia's task much easier and would probably result in higher quality Linux d

  • by gonar (78767) <sparkalicious@verizon.n3.14et minus pi> on Tuesday November 08, 2005 @02:51PM (#13981372) Homepage
    one of the main problems for getting device manufacturers to support linux is the fact that they either have to release a new version of their driver every time the linux kernel changes some esoteric internal API, or be badmouthed for not having good linux support.

    Would it really hurt so much to guarantee a stable DKI? It doesn't have to freeze the whole kernel -- just a subset of functions that would be guaranteed to work as they do now in perpetuity.

    Backwards compatibility is just as important to driver writers as it is to app writers.

    It doesn't even have to be binary backwards compatible; source-level compatibility would be sufficient for most.
    • But what's the problem with hardware manufacturers? Their profits are in the HARDWARE, not the software.
    • by Kjella (173770)
      one of the main problems for getting device manufacturers to support linux is the fact that they either have to release a new version of their driver every time the linux kernel changes some esoteric internal API, or be badmouthed for not having good linux support.

      Show me one example of a piece of hardware where the specifications are available (not under some absurd NDA), that has no or poor Linux support. The kernel/driver developers seem more than willing to write the driver and keep up with the "esoteri
    • by Skapare (16644)

      The real problem is this need to have unique drivers for every instance of hardware. Part of that problem is that manufacturers keep wanting to put their code in the host system for various reasons. One common reason is to use the host CPU for work they are too cheap to do in the device they are selling. Another that has come up is their desire to access the host system itself (and anything in the kernel has a huge amount of access).

      This practice is bad because it compromises the integrity, re

  • by A nonymous Coward (7548) on Tuesday November 08, 2005 @02:51PM (#13981377)
    One of Linux's great strengths is the flexibility of changing to meet new needs and not being hobbled by rigid backwards compatibility. This can only be done if all source is open and anyone can update drivers to meet new needs. When someone comes up with a patch to streamline a certain minor part of the kernel, it frequently has repercussions elsewhere in kernel land. It is these small changes which have made Linux better and better with breathtaking speed. A "stable" binary API removes the possibility of keeping everything up to date and would dramatically slow down the adoption of new features and general improvements.

    Continual refactoring is worth far more than some supposed binary API which prevents changes. Get rid of binary drivers! If companies are so paranoid that they want binary drivers, then the hell with them. Linux can advance better without that baggage.
  • Of two minds (Score:4, Interesting)

    by Kelson (129150) * on Tuesday November 08, 2005 @02:52PM (#13981382) Homepage Journal
    As someone who has tried to install various Linux distributions on RAID cards, and has had difficulty getting installers to use even third-party open-source drivers*, I'd love a binary driver API.

    As someone who supports free software, and has struggled with NVIDIA's video drivers (and they're at least trying to meet us halfway by making it as easy as possible to install their closed-source driver under the current system) I can see the negative consequences of encouraging binary-only drivers.

    *Example: Promise SX6000. Old cards work with I2O; newer ones use their own interface. An open source driver is available, at least for the 2.4 kernel, but good luck if you want to get your installer's kernel to use it. Unless you can create a driver disk, a byzantine task in itself, you're stuck with a few outdated versions of Red Hat, SuSE, and I think TurboLinux.
  • by Animats (122034) on Tuesday November 08, 2005 @02:54PM (#13981403) Homepage
    Drivers outside the kernel should be fully supported, at least for USB, FireWire, and printer devices. There's no reason for trusted drivers for any of those devices, since the interaction with memory-mapped and DMA hardware is at a lower, generic level.

    Actually, all drivers should be outside the kernel, as in QNX [qnx.com], and now Minix 3. But it's probably too late to do that to Linux.

  • These companies want a binary layer so they can build binary drivers.

    What people tend to forget about this is that it's a bad idea- from most every perspective.

    The Linux kernel was written as a Free Software alternative to the existing *nix systems.

    We have thousands of drivers in the kernel from a combination of development efforts. Sometimes a driver is written by an independent kernel developer, and sometimes it's written by the company producing the hardware, working alongside the community.

    What these companies want is to have their cake without giving back to the community. This is a slippery slope at the least, and illegal at worst, since these sorts of links to binary kernel drivers have long been known to be illegal to distribute alongside the kernel (unless special provisions are made, such as a userland driver).

    Also, binary drivers have been known to be buggy, and they essentially remove the kernel developers from a position where they have control over the kernel as a whole project. I won't even go into the issues associated with a possible security hole in a binary driver, or a binary driver with, for example, spyware in it.

    The argument for it is, of course, that this might mean more drivers. This is a test of our strength as a community. Doing the right thing is harder. It means we won't have all the hardware at all times, and certainly not the newest thing. But we retain control over our computers.

    It's hard to say no, but this looks like a clear case where we have to.
  • Hell, no! (Score:5, Informative)

    by cortana (588495) <<sam> <at> <robots.org.uk>> on Tuesday November 08, 2005 @02:58PM (#13981451) Homepage
    Please read The Linux Kernel Driver Interface (all of your questions answered and then some) [linux.no] by the same author before commenting...
    • Re:Hell, no! (Score:3, Insightful)

      by TheRaven64 (641858)
      Well done, you've just invented the microkernel. Now go back in time to the creation of Linux, and tell Linus that he should make Linux a microkernel. To make sure he listens, you should pretend to be a respected operating systems researcher like Andy Tanenbaum.

      Oh, wait, it's been done. He didn't listen.

  • Userspace, anyone? (Score:5, Insightful)

    by ettlz (639203) on Tuesday November 08, 2005 @02:58PM (#13981460) Journal
    IANAKH, but couldn't more drivers be moved into userspace (or other lower rings) --- especially for things like USB printers and miscellaneous gizmos? I think it would also be nice to not bundle thousands of drivers and support for architectures I don't have with the kernel. The kernel itself could provide a very minimal layer of hardware protection (like an exokernel?) and there'd be libraries exporting generic abstractions for particular classes of hardware. Is the context-switching penalty really so great these days? Educate me!
    • by oglueck (235089) on Tuesday November 08, 2005 @03:44PM (#13982008) Homepage
      For USB and Firewire that's already done. Both buses define device classes and the protocols those classes use. The actual devices either need no special drivers or the drivers can be implemented in user space. Printer drivers (CUPS) are completely user space. Scanner drivers for SANE are completely user space.
    • by 0xABADC0DA (867955)
      Instead of a binary kernel layer, what Linux needs is a bytecode interpreter layer... a Java-esque language for running processes in the kernel address space.

      For a simple "find ~ | ..." over 20% of the real time is wasted just between system call overhead (~1200 cycles) and actually copying data in/out of kernel mode. That doesn't even include the time setting up the copyin/out (it has to add the instruction address to a list for the interrupt handler in case there's a fault, which might have to do a lock I
    • by diegocgteleline.es (653730) on Tuesday November 08, 2005 @04:08PM (#13982259)
      It's already happening for scanners and cameras (see libusb) and serial/parallel port drivers (you don't need to insert a kernel driver for your parallel port printer, do you?). You don't really need a microkernel to write drivers in userspace, just export the necessary infrastructure in a sane way.
  • by poszi (698272) on Tuesday November 08, 2005 @02:59PM (#13981474)
    I've recently installed Linux on an iBook. No 3D acceleration, no wireless, no modem. All three drivers seem to exist (for Broadcom wireless in an ndiswrapper form) as a binary driver in the i386 world. But for PPC you are out of luck. ATI drivers and Broadcom wireless are being developed independently but they are not finished yet.

    In my opinion, binary drivers are worse than no drivers at all, because they release the pressure on the manufacturer. They can say they support Linux, which in the case of binary drivers is simply not true.

  • Absolutely (Score:5, Insightful)

    by WombatControl (74685) on Tuesday November 08, 2005 @03:08PM (#13981563)

    One of Linux's biggest problems is the lack of device drivers for common devices, especially newer video cards. Let's face it, companies like ATI and NVIDIA aren't going to release fully open-source drivers. It would be wonderful if they would, but it would also be wonderful if we had flying cars.

    Having a stable binary driver interface would make it easier for hardware manufacturers to embrace Linux, give things like wireless chipsets more usability on Linux, and drive further adoption of Linux as a viable competitor to more proprietary solutions.

    The perfect is the enemy of the good, and the more Linux gains a foothold the better it is for open source. Insisting that device manufacturers need to have on-staff kernel hackers in order to keep ahead of a frequently-changing kernel makes it that much harder for manufacturers to support Linux as a viable alternative.

    Provided Linux can have a stable binary driver infrastructure that doesn't harm stability, it would greatly help in the adoption of Linux worldwide.

    • Re:Absolutely (Score:3, Informative)

      Having a stable binary driver interface would make it easier for hardware manufacturers to embrace Linux

      You might as well read Greg's blog. Linux can't have a stable binary interface unless you: 1) lose performance (WDM-like interfaces come with a cost) or 2) lose the freedom to configure your kernel (you can't allow changes to some of the current kernel options, e.g. regparm).

  • Define "Stable" (Score:4, Insightful)

    by mad.frog (525085) <[moc.knilknirc] [ta] [nevets]> on Tuesday November 08, 2005 @03:18PM (#13981683)
    I did RTFA, and the author seems to have a different definition of "stable" than I do:

    Assuming that we had a stable kernel source interface for the kernel, a binary interface would naturally happen too, right? Wrong. Please consider the following facts about the Linux kernel:

    • Depending on the version of the C compiler you use, different kernel data structures will contain different alignment of structures, and possibly include different functions in different ways (putting functions inline or not.) The individual function organization isn't that important, but the different data structure padding is very important.
    • Depending on what kernel build options you select, a wide range of different things can be assumed by the kernel:
      • different structures can contain different fields
      • Some functions may not be implemented at all, (i.e. some locks compile away to nothing for non-SMP builds.)
      • Parameter passing of variables from function to function can be done in different ways (the CONFIG_REGPARM option controls this.)
      • Memory within the kernel can be aligned in different ways, depending on the build options.

    But see, the thing is... a "stable" binary interface requires that the structures used have fixed padding, alignment, and fields! If these can vary, then by definition it's not stable. Ditto the variations that depend on kernel build options.

    Now, if you want to make the case that it's not possible / practical to make an interface that can cover all of these conditions adequately, well, by all means, do so (though I'd say that the hundreds of existing operating systems with binary interfaces show that this isn't the case in the general sense).

    But what I see here is a relatively weak technical argument that is being used to justify an ideological decision.

  • by Julian Morrison (5575) on Tuesday November 08, 2005 @03:30PM (#13981823)
    Ignoring the ideological points for a moment, let's consider a frequently raised practical point: the idea that an API would either get out of sync with the kernel or be a drag on innovation.

    I agree that when the Linux kernel was young and untried, standardizing a binary API was bound to become a millstone in a short period of time, as the kernel internals were in a constant state of churn and iterative improvement. Nowadays though, surely, the kernel has been "shaken down" enough that it could afford to commit to binary APIs that are stable at least throughout each minor version number?

    Returning to ideology, I can see how a stable binary API would be useful even for open-source hardware. How much easier is it to say "drop this file under /lib/modules/binary/" than "you need to recompile and reinstall the whole kernel with this patch"? For obscure hardware which will never be in the official kernel, it would be nice to be able to use it easily with any Linux distro. No need for a slew of precompiled RPMs, DEBs, and a user-unfriendly source tarball; just one driver binary and you're ready to roll.

    In itself, that says nothing, either pro or anti, about the availability of driver source.
  • by bored (40072) on Tuesday November 08, 2005 @03:38PM (#13981931)
    Firstly, I get paid to do driver development. Secondly, I have never worked for a company that willingly said, ok lets open source the drivers we spend thousands of dollars writing and optimizing.

    That said, this whole discussion is as silly as the one about not putting a kernel debugger in the main kernel source. Frankly, Linux desperately needs both a kernel debugger and an ABI to be a REAL alternative for many customers. It also needs the ABI for driver developers, so that we can write a single driver and expect it to work on the dozens of flavors of Linux we are expected to support. Saying that everyone should open-source their drivers is like saying food should be free. It isn't going to happen, and wishing for it won't make it happen any sooner. In the interim, almost all the hardware out there has better support on Windows (our sysadmin can't even get support for major Linux distributions from major hardware vendors, even when they have little Linux logos on their hardware and websites), the Windows drivers tend to actually work, and they almost always have better feature sets. This isn't going to change as long as the "open source" community treats hardware vendors that think they have IP in the driver as second-class citizens.

    Oh, and for people who don't actually work for hardware companies that ship drivers: driver development is often an expensive process, not because the software engineers are expensive, but because the hardware and software need to be tested and certified in particular environments. There are orders of magnitude more Linux distributions, which makes them cost orders of magnitude more to test and support than a half dozen Windows environments, most of which can be tested at Microsoft or one of the major OEMs if the hardware isn't available on site. Putting 10x the money into a market that may be less than 1/10 of sales is not always a good idea, especially when resources are limited. Creating a proper ABI helps to solve this problem.

    That said, if the damn Linux kernel actually had a real architecture, it could support an ABI, and even isolate itself from rogue drivers. As it is, the kernel architecture is pretty much nonexistent: just a pile of code that tends to behave like a real kernel, except when you try to do something a little outside the mainstream desktop or small web server environment. This was fine when the whole kernel was just a few hundred thousand lines, but given its current size it's getting massively unmaintainable. This is proven by the fact that Linux system stability seems to have gotten really bad over the last few years. Getting to a stable system takes a lot of vendor testing by the likes of SuSE, Red Hat, etc.

    Lastly, the tainted concept works fine for the kernel developers, so why not carry it forward: any binary driver simply marks the kernel as having a binary module loaded, and uses the standard abstract interfaces instead of linking against all kinds of unneeded kernel internals that just provide the possibility to screw something up.

    • by ledow (319597) on Tuesday November 08, 2005 @04:59PM (#13982870) Homepage
      First, I think you're missing the fact that, overall, Linux doesn't care that you can't put your binary-only drivers on it.

      Linus has said publicly many times that the reason there is no ABI is that he doesn't want one. Binary drivers for *anything* end up screwing stuff up. When they do, there is NOTHING anyone but the original author can do. The code stagnates, users get shut out without any help, and nobody is any the wiser as to how that hardware actually worked in the first place.

      That's WHY there is stuff like kernel module license tainting: so that the kernel developers can look at a problem, see that you have the massive unknown of an in-kernel binary loaded, and instantly filter your report out. They don't care that your binary driver doesn't work. They can't help you.

      Additionally, setting anything into a static position means that development of it ends and stagnates. You'll never get a static interface that you can use to extend the drivers when new features come along. You end up with all sorts of kludges and interface versioning to try to take account of new things.

      Linux is developed as an independent operating system, not Windows 2006. No one wants to make you use it if you don't want to. I doubt Linux was ever intended as anything other than a "pure", almost theoretical system; that is, one that can be constantly redesigned from the ground up to be the way it should have been, not kludged to fit your eight-year-old driver (whose author is no longer available to update it) for a mouse that happens to still use the old interface.

      "Frankly, Linux desperately needs both a kernel debugger and an ABI to be a REAL alternative for many customers."

      Whoa, magic word customers. Linux doesn't have customers. Your company may have customers. There's no obligation on Linux to help you get/keep your customers. People use Linux because they want to. How often does Linus appear on your telly begging you to buy into Linux? Never. Because he doesn't care if you do or not. However, I do imagine it feels pretty nice to him that you do want to use it.

      "It also needs the ABI for driver developers so that we can write a single driver and expect it to work on the dozens of flavors of Linux we are expected to support."

      *You* are expected to support whatever you decide to make. Unfortunately, the Linux kernel developers are expected to support YOU, your hardware, and everyone else in the world. They don't, because they cannot and have no reason to. Even if they had your complete source code, they cannot be expected to maintain your driver for you (which is what will happen when your company goes bust or gets bored with the OS).

      Sometimes the best-written drivers in the world are not taken into the kernel because they don't quite fit, and the maintenance involved in keeping them in the kernel is too difficult. Your driver, if it is to have any support in the kernel, needs to be updatable on any kernel coder's whim in order to make the whole a better system. You can't do that with binaries; you can't even do it with stable interfaces. You have to have the source.

      The kernel coders have never promised that your stuff will always work (unless it is designed to run purely from userspace... Linus has said several times that userspace interfaces will not change much over time). They haven't because they cannot.

      The nature of the system is change that brings improvements. From the interrupt system to the IDE interfaces, from the schedulers to userspace interfaces such as sysfs or procfs, Linux changes and evolves over time, and the developers cannot guarantee that anything other than userspace syscalls and the like will not be broken, changed, or improved between one kernel release and the next.

      When the Linux kernel people discover a new way to write drivers that brings enhancements across the board, chances are they are going to break any of your "single driver" models. That's why they won't give you one.
      • by bored (40072) on Tuesday November 08, 2005 @07:10PM (#13984031)
        Oh, where do I start... I could talk about every one of your points; let's just pick a few. Firstly, the lack of hardware support for things like grandma's web cam and caching SATA controllers hurts Linux much more than it hurts Adaptec or Joe Blow's Webcam Co. If Linux is to be used just by hackers, for hackers, that's probably fine. That isn't the impression I get from "Linux people", who are constantly whining about lack of hardware support.

        Additionally, setting anything into a static position means that development of it ends and stagnates.

        Really? Could have fooled me... Windows can still run DOS and Windows applications from 20 years ago; Linux can still run applications from 10 years ago. When an API is created, usually there are hooks for extending it in the future. I wouldn't say either one has stagnated over the course of the last 10 years. If that were the case, Windows wouldn't run on my quad 64-bit Athlon.

        *You* are expected to support whatever you decide to make. Unfortunately, the linux kernel developers are expected to support YOU, your hardware and everyone else in the world.

        But it's our brand which gets damaged when our hardware is plugged into a Linux box and works at 1/4 speed, or loses data because the developer who implemented the driver forgot some important edge case.

        They really don't care about you and your binary drivers

        No, but the weekly calls by Linux users asking for them say that Linus and friends are only a small part of the community.

        Stick a Knoppix CD in a computer and see how much of the hardware is supported by default. 90% if not more? Include binary-only winmodem/USB ADSL/philips webcam etc. drivers and you get close to 99%.
        Sure, Linux supports a lot of hardware, but there is a lot it doesn't support, or supports in a seriously half-assed way. I would venture a guess that about 40% of the drivers in the kernel don't actually work, based on my experience running Linux since the early '90s. When I say they don't actually work, I'm saying the driver cannot recover from rare hardware error conditions (like FC cable pulls, for example), doesn't fully work, doesn't work in all situations (for example, SCSI adapters that work fine with hard drives but don't work with tape drives, because they cannot deal with the rare condition of a tape drive returning check conditions and data at the same time), or works at some fraction of its rated capability. This isn't counting the fact that the kernel developers often break functionality in the kernel drivers when they change stuff, and it doesn't get discovered for 6 months because the developer making the change didn't test the driver that was affected.

        Now, your average Slashdot weenie doesn't see this, because they have standard whitebox or Dell machines that are the same as the other 90% of Slashdot weenies'. In real life there is a vast amount of strange hardware out there: professional audio cards, hardware encryption engines, 4-port Fibre Channel cards, FICON, 10G Ethernet cards, large disk arrays, big-memory systems, tens of different types of system management interfaces for monitoring things like ECC soft errors, redundant power supply status, etc. The list goes on. In many cases the Linux driver was written and tested a year ago, a bunch of people are using it in a production environment, and it has been broken for 6 months in the mainstream kernels.

        I was responsible for fixing a number of race conditions in the VMM a couple of years ago that only occurred on an SMP box with more than a couple GB of RAM (a rarity back then). This was supposed to work, but it didn't, because the people with 6GB of RAM didn't test with SMP applications and the people with more than 2 CPUs didn't test with more than a few hundred megs of RAM. Recently I tried benchmarking a 3TB disk array on Linux, only to discover that it didn't work in over half of the file systems I tried (XFS, Reiser, JFS, ext2, ext3), and in at least one case worked but was so slow it was unusable.
  • by Khashishi (775369) on Tuesday November 08, 2005 @03:45PM (#13982012) Journal
    This is necessary if we ever want Linux to be ready for the desktop. The ability to have driver modules is certainly more advanced than having everything compiled into the kernel, but it's severely lacking in many regards. The module has to be custom made for each kernel, making binary distribution useless because there are 2^100 kernels out there. So unless the manufacturer open sources the driver, they can't make a driver for Linux. Or you could go with an open source interface to a closed source driver.

    Here's a sample of what I put up with. I downloaded Agnula Demudi 1.2.1 and installed it with the 2.6 kernel. I was ready to install some Nvidia drivers. But after some searching, I couldn't find any binary driver interfaces compatible with my kernel. Fine, I can compile my own. So I download the interface sources and launch module-assistant. It complains that riva driver support in my kernel conflicts with the nvidia driver, and that I need to recompile the kernel. (I then went through the joy of trying to find the hidden Demudi sources and figuring out how to patch and configure them, ultimately failing to compile, but this is getting away from the topic.) Finally, I said screw it.

    You might blame the distro, but it's really the kernel at fault here. Recompiling the kernel to support a driver is NOT something that a user should have to do. Windows does not require you to recompile your kernel to install drivers.
  • by Anonymous Coward on Tuesday November 08, 2005 @03:46PM (#13982030)
    First of all, I completely understand that drivers with source are better than binary-only kernel drivers. However, the fact of the matter is that companies do not feel comfortable freely releasing their intellectual property. Given the choice between a piece of hardware in a laptop which doesn't work with Linux at all and one that works with a binary-only driver, I would rather have the binary-only driver.

    As long as Linux kernel developers complain that binary-only drivers are "illegal", Linux will have less hardware support. One of the major complaints people have against Linux is that a lot of devices that one can attach to a Windows machine simply do not work in Linux (I still think Linux is far behind Windows when it comes to wireless drivers, for example). I want to see a true alternative to Windows on the desktop; GPL fanaticism and an inability to understand how big corporations work harm this.
  • User-level drivers (Score:3, Interesting)

    by spitzak (4019) on Tuesday November 08, 2005 @04:18PM (#13982385) Homepage
    Linux should support user-space drivers, probably through FUSE and some other APIs. These can then be binary, just like any other application. If they crash, they will not take the system down. The API is limited, but you will be able to open/read/write them. ioctl can be done with Plan 9-style names, i.e. open "/dev/neato_device/volume" and write the desired volume there, etc.

    Drivers that need more elaborate API's or need more speed will be stuck with the mutable binary interface and occasional GPL restrictions. Too bad. A lot of interesting drivers do not need this speed. And those that do may force the interface to user-level drivers to be improved until it is usable, which is a very desirable result.
  • by bhurt (1081) on Tuesday November 08, 2005 @04:36PM (#13982612) Homepage
    There's a joke: programming is a lot like sex. One mistake, and you're supporting it for the rest of your life. By this measure, writing a device driver for Linux is a mistake. Because even after the feature set and the hardware itself are stable, you are letting yourself in for a never-ending task of constantly changing the driver to keep up with the current API changes. At a previous job we spec'd it out at about 1/2 of a full-time position to keep up with all the kernel changes. This is your job, from now until eternity, or at least until you're willing to have the driver declared obsolete and abandoned (after all, we all know there are just three types of software: vapor, beta, and obsolete).

    Well, congratulations. You have your religious purity. And guess what: it's coming at a cost. You wonder why Linux isn't more popular on the desktop? Well, here's part of the reason. Its hardware support will always, ALWAYS, be behind that of Windows. Why? Because when the hardware ships, it ships with Windows drivers that the hardware vendor wrote for it. Note that Windows pulls the same sort of API-changing crap that Linux does. The difference is that the hardware vendors look at Windows and the half man-year per year cost of supporting it, and go "but we have to support Windows if we want to sell more than 3 units." They then look at Linux and the half man-year per year cost of supporting it, and go "supporting Linux is not cost-effective at this time." I know, because I've seen this happen. So now the hardware is out there. And now we wait for someone willing to step up and volunteer the time to write, and maintain indefinitely, the driver. Someone less capable of doing it than the hardware manufacturer (this isn't to question the capabilities of the current kernel developers, but the fact of the matter is that there is a huge advantage to being three cubes down from the hardware developers, able to wander over and ask direct questions instead of having to reverse-engineer what is really going on; I've worked both ways).

    So this is the fundamental question: which is worse, having binary-only proprietary drivers, or being forever behind in hardware support, with people declining to contribute simply because they don't feel like constantly updating a driver once they finish it, when they'd like to be able to move on? I come down on one side; Linus and the kernel developers come down on the other.

    Fine. Their kernel. Their problem.
    • By the way- another advantage to having a stable kernel API layer is the ability to write a dead-tree book about how to write Linux device drivers and not have it be obsolete by the time it gets published. I have half a dozen books on this subject- half of them were obsolete by the time they were published, they're all obsolete now.
  • by bandannarama (87670) on Tuesday November 08, 2005 @06:39PM (#13983760)
    This is about cost. The manufacturers' request is perfectly reasonable, given their costs, and the kernel developers' concerns are perfectly valid given their costs.

    Initial costs associated with a manufacturer-supported driver:

    • Initial development against a specific kernel version. This is the item of least concern to the manufacturers.
    • Initial testing against some specific subset of officially supported distributions. Tweaks or rewrites to handle additional kernel versions. This is where the costs start to go up.
    • Corresponding documentation, release notes, etc.
    • Make driver available on company website (ongoing cost).
    • Put driver on the CD in the box (include documentation on which releases it was tested against)

    Ongoing costs associated with a manufacturer-supported driver:

    • Development and test resources to check compatibility with subsequent kernel versions.
    • Test resources to check compatibility with subsequent specific officially supported distributions.
    • Update documentation for any changes.
    • Update website with updated versions. Handle support calls from confused users.
    • Slipstream CD in the box with updated versions.

    These costs exist even if a version of the driver is merged into the mainline kernel. The only problem solved by such source-level merging is compatibility with the latest kernel version. It is not acceptable to the manufacturers' customers to be required to update to the latest kernel/distribution to be able to use the device.

    Here's the key point: If there is no binary interface between the driver and the kernel, all of the above costs skyrocket. You have M kernel versions against N distributions, with the total increasing over the life of the product. If there is a binary interface guarantee from the kernel development team to change only very slowly and only extremely rarely breaking compatibility -- like the guarantee Windows provides -- then the incremental costs are containable. It is reasonable to expect that 95% of their testing on 2.6.5 is valid on 2.6.14.

    The perfectly reasonable response from kernel developers is that with closed-source drivers they get stuck debugging problems that aren't kernel-related (I don't hold ideology to be economically significant, so I'll ignore it here, without insult to people's strong opinions on the subject). Their proposed solution is to require the driver's source before they'll help with the debugging.

    From the manufacturers' point of view that's a very draconian requirement. They are justifiably concerned about intellectual property (availability of the source makes it much easier for competitors to reverse-engineer the hardware/firmware). Surely there must be a middle ground. Is there some way to have a relationship between the device manufacturers and the kernel developers that minimizes everyone's costs?

    I think there is. Note that all of the above costs and issues are just as valid in the Windows world as in the Linux world. Microsoft doesn't want to deal with bad drivers crashing their systems, costing them both development/debugging time and reduced perceived stability (--> lower sales). Their solution is the Windows Hardware Quality Lab (WHQL).

    The WHQL is a separate entity from Microsoft. Device manufacturers are required to submit their driver source (effectively under NDA) along with their device. The WHQL staff runs the driver through a battery of tests, probably mostly automated. If the device and driver meet stability standards set by Microsoft, the driver is signed by WHQL. Windows checks for this signature at installation time and warns the administrator if it is not present. Microsoft can reasonably refuse to support non-WHQL-signed drivers when crashes occur, for exactly the same reasons that Linux kernel developers refuse to support drivers without the source. This system has been the single most important factor in Windows' significan

Unix is the worst operating system; except for all others. -- Berry Kercheval
