Should Linux Have a Binary Kernel Driver Layer?
zerojoker writes "The discussion is not new but was heated up by a blog entry from Greg Kroah-Hartman: Three OSDL Japan members, namely Fujitsu, NEC and Hitachi are pushing for a stable Kernel driver layer/API, so that driver developers wouldn't need to put their drivers into the main kernel tree. GKH has several points against such an idea." What do you think?
No Thanks! (Score:5, Interesting)
Solves the reason why I gave up Linux (Score:5, Interesting)
If you don't force the manufacturers to include their driver source in the kernel, you might get them to release actual drivers for their new hardware.
Of two minds (Score:4, Interesting)
As someone who supports free software, and has struggled with NVIDIA's video drivers (and they're at least trying to meet us halfway by making it as easy as possible to install their closed-source driver under the current system) I can see the negative consequences of encouraging binary-only drivers.
*Example: Promise SX6000. Old cards work with I2O, newer ones use their own interface. An open source driver is available, at least for the 2.4 kernel, but good luck if you want to get your installer's kernel to use it. Unless you can create a driver disk, a byzantine task in itself, you're stuck with a few outdated versions of Red Hat, SuSE, and I think TurboLinux.
Re:of course it should (Score:3, Interesting)
Binary drivers should be outside the kernel (Score:3, Interesting)
Actually, all drivers should be outside the kernel, as in QNX [qnx.com], and now Minix 3. But it's probably too late to do that to Linux.
Re:Excellent suggestion! (Score:5, Interesting)
I am not a linux contributor, but I would think you'd kinda want to guard access to the kernel kinda closely. I mean, sure, anyone can fork it or grab a copy to putz around with, but contributing back into the kernel - that's gotta be just about as stable as a piece of code can be.
Despite some loss in efficiency, I've always been an advocate of abstracted access. For many of the pieces of software we write at my job, we add a logical API so that we don't have to open the main code branch every time we want to add a feature.
Driver developers hardly equal kernel developers. Keeping the two logically separated makes sense - not to mention that driver developers are hardly the only ones who would benefit from this API.
Re:Excellent suggestion! (Score:3, Interesting)
A stable binary driver API that doesn't mean putting the driver into the kernel already exists. It is called the syscall interface. If you have a stable module ABI as proposed (over and over again, about once a month for the past 10 years), the drivers actually *do* go into the kernel - and closed-source drivers at that, so if they are broken, they won't ever get fixed.
This is all a very, very bad idea for Linux stability. Crash and burn, baby, crash and BURN!
Re:Solves the reason why I gave up Linux (Score:3, Interesting)
The problem is that one shouldn't have to even try to make it work. A desktop operating system in the form of a desktop-oriented Linux distribution should be as painless as possible to set up. Preferably there should not even be a need to have a C compiler present. Shove in the installation CD/DVD, boot it up, and the installer should take care of everything hardware-configuration-wise.
With Linux and new hardware this is currently just a distant dream, as the drivers simply aren't there when the hardware is new. But once the drivers finally get into the kernel and distributions, the ride gets a whole lot smoother. The problem is that it can take a considerable time for this to happen: first a while for the drivers to appear in the standard kernel, and then a while for that kernel to appear in the various distribution installers.
A binary kernel driver layer could in some regard alleviate these issues, but only if the hardware manufacturers would actually start releasing Linux drivers themselves, and at the same time as the Windows drivers. And then there are the freedom concerns mentioned here already with such a binary layer.
Assuming binary drivers targeted at a binary kernel layer would be available from hardware manufacturers as soon as they release new hardware, one could just put them on a USB storage device or something like that, and the Linux installer could ask during installation whether one would like to load 3rd-party drivers from such a device.
I myself don't have huge problems with having to perform all kinds of dark magic to get things working, but the average user won't cope with most of this. Linux is not just a hacker's plaything anymore. But for it to start gaining ground, the installation and hardware configuration must be improved - e.g. it isn't amusing to have the installer not detect any hard drives when there are no drivers for the hard drive controller on the installation disk.
Re:GO FOR IT (Score:3, Interesting)
Having to "hunt down drivers" is an artifact of the old third-party binary driver world. When hardware specifications are available to developers, those developers can add the hardware support to the kernel -- which means it ships with the distribution.
If there's one thing you'll guarantee by providing a binary-only driver interface, it's that you'll have to spend a lot of time hunting down drivers.
Re:of course it should (Score:3, Interesting)
Show me one example of a piece of hardware where the specifications are available (not under some absurd NDA) that has no or poor Linux support. The kernel/driver developers seem more than willing to write the driver and keep up with the "esoteric internal API", so much so that they spend tons of time trying to reverse engineer hardware for which they get nothing. If you can find me just one such piece of hardware where Linux is the shortcoming, I'll donate to a fund for building that driver.
Re:No (Score:3, Interesting)
Then you go and fire the developer(s) who called it a "stable" ABI.
By definition, a "stable" ABI should change very rarely, and provide backwards compatibility.
Without this, it ain't stable.
Re:Stability like that leads to stagnation and dea (Score:1, Interesting)
Another benefit: vast improvements to the driver(s) as the community also assists in code addition/refinement and debugging. Moral: avoid the short-range (read: corporate) view; do everything possible to make and keep the future OPEN.
Remember George Santayana, who said (paraphrasing): 'those who refuse to learn from the mistakes of the past will be forced to relive them'.
Re:No (Score:2, Interesting)
I've got a solution for you: DON'T CHANGE THE DAMN DRIVER ABI EVERY FEW KERNELS! Other OSes manage to do just fine using the same driver ABI for many, many years; why not Linux?
Re:Only one word (Score:3, Interesting)
Why do we still have to have a user program (X) with device drivers in it? (Would anybody think it's a good idea if the Linux kernel didn't have any sound drivers, and required gstreamer to implement its own?)
It seems we have two competing driver models in Linux: some drivers are in the kernel and provide a consistent interface (sound cards, SCSI/IDE/... cards, network cards), and some aren't in the kernel at all; for those, the kernel exposes the hardware at a low level and relies on userspace programs to provide the actual drivers (X11 for video cards, CUPS for printers).
I'm against a binary API, not on philosophical grounds (I like gstreamer's binary API), but because it simply never works: I've tried to use binary-only drivers under Linux in the past, and they never work nearly as well as open-source drivers. But whether or not you agree with Linus' open-source philosophy, can we at least all agree that we need to put drivers at the same, correct abstraction level?
What about Graphics Cards? (Score:1, Interesting)
As much as I would like that source to be open, there is no way NVidia, for example, could compete if they gave all their intellectual property away for free. That's how they make their money and can afford to pour more back into developing the next generation of cards.
And what about software modems? I know most people hate them, but the companies who develop them do so to make them cheaper, which is reflected in their retail price. The modems are just a DSP connected to a phone line. If they gave away their source, anyone could buy the same processor, stick it on a board, and sell it out from under them without any of the R&D costs.
I say bring on a static api for binary drivers!
A good portion of hardware implemented in software (Score:2, Interesting)
The problem for the open-source community is that these drivers are increasingly not just the way to talk to autonomous hardware, but actually implement a lot of the functionality of the device. With this in mind, the manufacturers are unlikely to give out the source code for these drivers, as they would be giving their competitors a far more significant view into their playbook. People in the Linux community complain about the lack of certain drivers, like those for the wireless cards in many notebooks, and are left hacking together a solution to run the Windows drivers under Linux in order to get them to work. If you want things like that to work, and work well, in Linux, then you have to give manufacturers a stable subsystem and make it as easy as possible to port their drivers, while providing them the means to not give away the source which contains half of their work in creating the hardware.
I know that is not the ideal solution, but them's the breaks. Either Linux steps up with an API and binary subsystem, or its users will be left with fewer hardware options, consisting of whatever more expensive all-hardware alternatives remain for many of these peripherals.
Re:Oh, I'm all for it. (Score:4, Interesting)
Or maybe it improves its drivers so frequently that it can't keep trying to certify them every single time?
Backfire. (Score:2, Interesting)
Re:Userspace, anyone? (Score:3, Interesting)
For a simple "find ~ |
Re:Absolutely (Score:2, Interesting)
(Am I the only one with an 8-bit SoundBlaster? JK =P)
Re:Oh, I'm all for it. (Score:2, Interesting)
Not trolling, just wanting some enlightenment here.
User-level drivers (Score:3, Interesting)
Drivers that need more elaborate API's or need more speed will be stuck with the mutable binary interface and occasional GPL restrictions. Too bad. A lot of interesting drivers do not need this speed. And those that do may force the interface to user-level drivers to be improved until it is usable, which is a very desirable result.
It'll happen eventually.. (Score:2, Interesting)
API/ABI compatibility obviously has its own pros and cons; sometimes it's impossible to break things - take Windows, for example. The world is going with the LP64 model for 64-bit machines, but Windows developers had to stick with LLP64 just because they made some design mistakes and now they cannot break the tons of existing applications. (See http://blogs.msdn.com/oldnewthing/archive/2005/01
Linux, on the other hand, can afford to break and fix things until the time when binary and out-of-tree drivers grow to outnumber the in-tree stuff. By that time, I guess there will be much less need to break things such as driver interfaces and the like.
And I think the mad rush to put everything into the official kernel tree is not a good idea from a maintenance and complexity standpoint. So if and when the Linux ABI/API stabilizes, that will be a good thing for out-of-tree kernel drivers and Linux itself.
Re:Oh, I'm all for it. (Score:3, Interesting)
Bruce
Re:Excellent suggestion! (Score:3, Interesting)
Having actually done Windows driver co-development in partnership with a Japanese hardware vendor before, I can tell you they do consider the interfaces and protocols into their hardware proprietary. Remember, some "hardware" products are actually computers in and of themselves, with mini OSes and complicated protocols for communication with the host PC. This isn't your father's PIO serial port stuff. Even as a true-blue co-developing partner, the best we could get were software APIs to a binary library that we had to link into our drivers.
Re:GO FOR IT (Score:2, Interesting)
Maintaining device drivers (Score:3, Interesting)
Let's see here. Manufacturers want us to create a kernel that allows them to infect and interfere with its integrity, reliability, performance, and security, just so they don't have to keep maintaining their driver as the design of the kernel continues to be improved? They want us to stagnate the design of the kernel so they can let us use their stagnant device drivers? And they want us to have a system that is no longer viably supported by staff or consultants, while they are most likely never going to provide system support (if they can't keep the driver maintained, how the hell are they going to provide support for an old driver)?
I'd just stay away from their hardware.
Apple's KPI. Why not? (Score:2, Interesting)
If you are not familiar with the "KPI" thing, here is a short summary from http://arstechnica.com/reviews/os/macosx-10.4.ars
"With Tiger, Apple is finally ready to put some kernel interface stakes in the ground. For the first time, there are stable, officially supported kernel programming interfaces (KPIs). Even better, there's an interface versioning mechanism and migration policy in place that will ensure that the pre-Tiger situation never happens again.
From Tiger forward, kernel extensions will link against KPIs, rather than directly against the kernel. The KPIs have been broken down into smaller modules, so kexts can link against only the interfaces that they actually need to use.
Each KPI has a well-defined life cycle made up of the following stages.
* Supported - The KPI is source and binary compatible from release to release.
* Deprecated - The interface may be removed in the following major release. Compiler warnings are generated on use.
* Obsolete - It's no longer possible to build new kernel extensions using this KPI, but binary compatibility for existing kexts that use this KPI is still assured.
* Unsupported - Kexts using this KPI will no longer work, period.
The most significant part of this new system is that the kernel itself can and will change behind the scenes. KPIs will descend towards the "unsupported" end of the life cycle only as kernel changes absolutely demand.
Best of all, multiple versions of a KPI can coexist on the same system. This allows a KPI to move forward with new abilities and a changed interface without breaking kernel extensions that link to the older version of the KPI. The expectation is that the kernel can undergo a heck of a lot of changes while still supporting all of the KPIs."
Re:Excellent suggestion! (Score:3, Interesting)
As opposed to hardware engineers writing code for a kernel they know nothing about? I would much rather have a kernel hacker write a driver off of a spec sheet than an engineer copy a device driver template and stick in the appropriate bits. In the absence of a good spec sheet, I'll take reverse-engineered drivers any day. They may not support all of the features, but they are usually better quality than a lot of closed-source binary drivers. Just look through the ALSA project. The SoundBlaster Live driver is a good example. Creative released one of their own (open source even, I believe), but it blew compared to the one written by the ALSA hackers.
The solution for vendors not wanting to support multiple kernels is to open source their drivers. Once a driver is in the kernel, the maintenance is taken care of by somebody else. There really is no reason not to open source a driver. The PHBs are just stuck in a rut... "It has to be closed source because... intellectual property... trade secrets... err, it's mine
Re:Learn to read. (Score:3, Interesting)
Re:Oh, I'm all for it. (Score:2, Interesting)
With a bit of luck, you can catch the second type of error by running a well-thought-out test suite. Not always, though; I think it's an example of the halting problem. The first sort is much harder - there is a lot of variability in PC hardware, even to the point of it being completely broken.
where you miss (Score:4, Interesting)
Without them you aren't guaranteed support from Microsoft.
If you are running machines with all certified drivers and WMI/MSI-installed applications, then Microsoft will be right there with you until the problem is solved. You won't find it written anywhere, but Microsoft guarantees that your machine will not crash (BSOD) if you use certified drivers and MSI-installed software. At home this isn't possible, but in some environments it is possible (and a good idea in other places).
In a way you are locked in to what Microsoft has approved, but if they've approved it then the problem is theirs to fix - not yours. Good luck meeting those two requirements, but if you can: hold them to it.
Re:*I* have an idea! (Score:5, Interesting)
A hybrid kernel. Open source drivers are compiled into the kernel. There is an API for closed-source drivers to run in user-space.
Does not violate GPL.
Little compromise to stability.
Developers who only want to do closed-source drivers can do so.
Developers have incentive to open source their drivers in order to have better performance and take advantage of newer kernel features (the internal APIs are updated with the kernel, the external APIs stay fixed and fall behind the feature curve).
Win.
Win.
Win.
Unless it's just a philosophical question, in which case
Re:GO FOR IT (Score:2, Interesting)
Since the Linux community has clearly not provided a system that matches your needs, I will again ask that you do not attempt to interfere in its development by advocating changes that could end up dumping binary drivers on us. We do not want them. We do not want what they will bring to our system.
I am glad your experience with Windows XP has been so positive. Hopefully you will continue to use it rather than attempt to subvert the Open Source movement with your incompatible agenda.
Linux (and others!) should embrace Project UDI... (Score:3, Interesting)
Without open standards, the Internet would not exist. Various proprietary networking standards (Novell Netware, IBM channel architecture, Banyan Vines, etc.) used to work in their own isolated worlds, rarely speaking to each other. And people generally thought that was okay, because that's how it had always had been, and look how well each one works in its own little world! Similarly, email systems were proprietary and incompatible. Then along comes TCP/IP, SMTP and other open Internet protocols, and the world is transformed. Suddenly, everything can talk to everything, and with the 20/20 benefit of hindsight, it's clear to all how much better it is with the Internet than it was with all those proprietary islands.
Many implementations of the Internet protocols were proprietary and it didn't matter. There were always both free and proprietary implementations of the Internet protocols, but the important thing was that they all agreed on the same standards and (more or less) followed them, which bridged all those little proprietary islands into this wondrous whole we have today, where virtually any networked device is capable of communicating with any other. What mattered was that the standard was open, no matter how many of the implementations were proprietary. (And, of course, natural evolution tends to favor the extinction of most of the proprietary systems in favor of free software whenever such competition occurred, especially since vendor lock-in fails when customers demand conformance with open standards.)
The computer industry is starting to realize that XML, like TCP/IP, can bridge proprietary islands. Look at the number of legacy systems, interfaces, protocols and file formats which are being interfaced with XML to achieve at the application level what TCP/IP achieved at the networking level. Legacy systems, proprietary systems and even free systems, each with its own way of doing things, can suddenly be made to talk to each other in a robust, loosely-coupled fashion which was unfathomable just a decade or two ago. This process appears to be well on the way to revolutionizing the computer industry yet again.
Operating systems and device drivers are full of proprietary islands just waiting to be bridged, and it could revolutionize operating systems as much as TCP/IP revolutionized computer networking. Not all of these proprietary islands are "proprietary" in the closed-source sense -- many are also free-software islands which are "proprietary" in the "only works with this system" sense. Just in the domain of free software, there are countless little proprietary islands between various versions of Linux, FreeBSD, OpenBSD, NetBSD, Dragonfly BSD, Darwin, HURD, etc. These aren't "proprietary" as Stallman uses the term, but just try to take a random device driver from one of these random islands and dump it on another at random and see how likely it is to work without changes. Then, of course, there are also the truly proprietary systems such as Windows.
Bridging all those islands would benefit free software immensely, regardless of whether or not proprietary closed-source vendors jump on the bandwagon. Imagine if every device driver only needed to be implemented once to a common API, and it worked without source code changes on every operating system that supports that API? That's exactly the promise that Project UDI holds for operating systems and device drivers, and it's as revolutionary as the promise of TCP/IP.
The Internet wouldn't be where it is today without free software, yet free software wouldn't be where it is today without the Internet! This seems like a conundrum -- a chicken-and-egg problem. Actually, it's a truly symbiotic relationship, and it