Linux Software

Should Linux Have a Binary Kernel Driver Layer? 944

zerojoker writes "The discussion is not new but was heated up by a blog entry from Greg Kroah-Hartman: Three OSDL Japan members, namely Fujitsu, NEC and Hitachi are pushing for a stable Kernel driver layer/API, so that driver developers wouldn't need to put their drivers into the main kernel tree. GKH has several points against such an idea." What do you think?
  • No Thanks! (Score:5, Interesting)

    by Shads ( 4567 ) <shadusNO@SPAMshadus.org> on Tuesday November 08, 2005 @03:48PM (#13981339) Homepage Journal
    No thanks, this is just a great way to promote closed source inside the linux kernel and to make debugging problems totally impossible.
  • by s20451 ( 410424 ) on Tuesday November 08, 2005 @03:50PM (#13981357) Journal
    I gave up Linux mostly because I was tired of getting punished for having new hardware, which is often unsupported. Especially on laptops.

    If you don't force the manufacturers to include their driver source in the kernel, you might get them to release actual drivers for their new hardware.
  • Of two minds (Score:4, Interesting)

    by Kelson ( 129150 ) * on Tuesday November 08, 2005 @03:52PM (#13981382) Homepage Journal
    As someone who has tried to install various Linux distributions on RAID cards, and has had difficulty getting installers to use even third-party open-source drivers*, I'd love a binary driver API.

    As someone who supports free software, and has struggled with NVIDIA's video drivers (and they're at least trying to meet us halfway by making it as easy as possible to install their closed-source driver under the current system) I can see the negative consequences of encouraging binary-only drivers.

    *Example: Promise SX6000. Old cards work with I2O, newer ones use their own interface. An open source driver is available, at least for the 2.4 kernel, but good luck if you want to get your installer's kernel to use it. Unless you can create a driver disk, a byzantine task in itself, you're stuck with a few outdated versions of Red Hat, SuSE, and I think TurboLinux.
  • by Spy der Mann ( 805235 ) <`moc.liamg' `ta' `todhsals.nnamredyps'> on Tuesday November 08, 2005 @03:54PM (#13981401) Homepage Journal
    But what's the problem with hardware manufacturers? Their profits are in the HARDWARE, not the software.
  • by Animats ( 122034 ) on Tuesday November 08, 2005 @03:54PM (#13981403) Homepage
    Drivers outside the kernel should be fully supported, at least for USB, FireWire, and printer devices. There's no reason for trusted drivers for any of those devices, since the interaction with memory-mapped and DMA hardware is at a lower, generic level.

    Actually, all drivers should be outside the kernel, as in QNX [qnx.com], and now Minix 3. But it's probably too late to do that to Linux.

  • by IAmTheDave ( 746256 ) <basenamedave-sd@yaho[ ]om ['o.c' in gap]> on Tuesday November 08, 2005 @03:56PM (#13981430) Homepage Journal
    Having a kernel API for drivers allows developers to stay away from the mainstream kernel. This will enhance the stability of the kernel in general and also allow hardware vendors to support Linux with less effort.

    I am not a linux contributor, but I would think you'd kinda want to guard access to the kernel kinda closely. I mean, sure, anyone can fork it or grab a copy to putz around with, but contributing back into the kernel - that's gotta be just about as stable as a piece of code can be.

    Despite some loss in efficiency, I've always been an advocate of abstracted access. At my job, we add a logical API to many of the pieces of software we write, so that we don't have to open the main code branch every time we want to add a feature.

    Driver developers hardly equal kernel developers. Keeping the two logically separated makes sense - not to mention that driver developers are hardly the only ones that would benefit from this API.

  • by Bloater ( 12932 ) on Tuesday November 08, 2005 @04:16PM (#13981663) Homepage Journal
    > I am not a linux contributor, but I would think you'd kinda want to guard access to the kernel kinda closely. I mean, sure, anyone can fork it or grab a copy to putz around with, but contributing back into the kernel - that's gotta be just about as stable as a piece of code can be.

    A stable binary driver API that doesn't mean putting the driver into the kernel is already there. It is called the syscall interface. If you have a stable module ABI as proposed (over and over again, about once a month for the past 10 years) the drivers actually *do* go into the kernel - and closed-source drivers, if they are broken, won't ever get fixed.

    This is all a very, very bad idea for Linux stability. Crash and burn, baby, crash and BURN!
  • by dastrike ( 458983 ) on Tuesday November 08, 2005 @04:17PM (#13981679) Homepage

    The problem is that one shouldn't have to even try to make it work. A desktop operating system in the form of a desktop oriented Linux distribution should be as painless as possible to set up. Preferably there should not be even the need to have a C compiler present. Shove in the installation CD/DVD and boot it up, and the installer should take care of everything hardware-configuration-wise.

    With Linux and new hardware this is currently just a distant dream as the drivers simply aren't there when the hardware is new. But after a while after the drivers finally get into the kernel and distributions, the ride gets a whole lot smoother. The problem is that it can take a considerable time for this to happen. First a while for the drivers to appear in the standard kernel, and then a while for that kernel to appear in the various distribution installers.

    A binary kernel driver layer could in some regard alleviate these issues, but only if the hardware manufacturers would actually start releasing Linux drivers themselves, and at the same time as the Windows drivers. And then there are the freedom concerns mentioned here already with such a binary layer.

    Assuming binary drivers targeted for a binary kernel layer would be available from hardware manufacturers as soon as they release new hardware, one could just put them on a USB storage device or something like that and the Linux installer could just ask during the installation if one would like to load 3rd party drivers from just such a device.

    I myself don't have huge problems with having to perform all kinds of dark magic to get things working, but the average user won't cope with most of this. Linux is not just a hacker's plaything anymore. But for it to start gaining ground the installation and hardware configuration must be improved - e.g. it isn't amusing to have the installer not detect any hard drives when there are no drivers for the hard drive controller on the installation disk.

  • Re:GO FOR IT (Score:3, Interesting)

    by croddy ( 659025 ) on Tuesday November 08, 2005 @04:19PM (#13981697)
    As a user, the last thing I want is to hunt down drivers that will work with kernel version X or Y.

    Having to "hunt down drivers" is an artifact of the old third-party binary driver world. When hardware specifications are available to developers, those developers can add the hardware support to the kernel -- which means it ships with the distribution.

    If there's one thing you'll guarantee by providing a binary-only driver interface, it's that you'll have to spend a lot of time hunting down drivers.

  • by Kjella ( 173770 ) on Tuesday November 08, 2005 @04:21PM (#13981716) Homepage
    one of the main problems for getting device manufacturers to support linux is the fact that they either have to release a new version of their driver every time the linux kernel changes some esoteric internal API, or be badmouthed for not having good linux support.

    Show me one example of a piece of hardware where the specifications are available (not under some absurd NDA), that has no or poor Linux support. The kernel/driver developers seem more than willing to write the driver and keep up with the "esoteric internal API", even so much that they spend tons of time trying to reverse engineer hardware where they get nothing. If you can find me just one such piece of hardware where Linux is the shortcoming, I'll donate to a fund for building that driver.
  • Re:No (Score:3, Interesting)

    by mad.frog ( 525085 ) <steven&crinklink,com> on Tuesday November 08, 2005 @04:25PM (#13981763)
    It's a bad idea because what happens when the driver ABI changes?

    Then you go and fire the developer(s) who called it a "stable" ABI.

    By definition, a "stable" ABI should change very rarely, and provide backwards compatibility.

    Without this, it ain't stable.
  • by Anonymous Coward on Tuesday November 08, 2005 @04:30PM (#13981824)
    Amen bro. In the past I worked as an engineer at one of those "closed source" corporations which manufactured SCSI host adapters. At the time I was using a competitor's HBA (host bus adapter) for my drives. Finally a member of the Linux community volunteered his time and effort and delivered an excellent driver which the company finally took over maintenance on. When the next gen of newly architected HW was released a brand new driver was necessary. The internal debate was whether or not the new 'whiz-bang' driver architecture should be given away to the open source community. As this company never sells its SW anyway the leap of faith was finally made, which brought it extra revenue as shortly thereafter all the large vendors (read customers) were insisting on a Linux presence.
    Another benefit: vast improvements to the driver(s) as the community also assisted in code addition/refinement and debugging. Moral: Avoid the short range (read corporate) view, do everything possible to make and keep the future OPEN.

    Remember George Santayana who said (paraphrasing): 'those who refuse to learn from the mistakes of the past, will be forced to relive them'.
  • Re:No (Score:2, Interesting)

    by dnaumov ( 453672 ) on Tuesday November 08, 2005 @04:31PM (#13981844)
    It's a bad idea because what happens when the driver ABI changes?

    I've got a solution for you: DON'T CHANGE THE DAMN DRIVER ABI EVERY FEW KERNELS! Other OSes manage to do just well using the same driver ABI for many many years, why not Linux?
  • Re:Only one word (Score:3, Interesting)

    by Anonymous Coward on Tuesday November 08, 2005 @04:31PM (#13981846)
    What are his reasons for not putting video card drivers in the kernel, like other Unix operating systems?

    Why do we still have to have a user program (X) with device drivers in it? (Would anybody think it's a good idea if the Linux kernel didn't have any sound drivers, and required gstreamer to implement its own?)

    It seems we have two competing driver models in Linux: some are in the kernel, and provide a consistent interface (sound cards, SCSI/IDE/... cards, network cards), and some aren't in the kernel at all, but expose them at a low level and rely on userspace programs to provide actual drivers (X11 for video cards, CUPS for printers).

    I'm against a binary API, not on philosophical grounds (I like gstreamer's binary API), but because it simply never works: I've tried to use binary-only drivers under Linux in the past, and they never work nearly as well as open-source drivers. But whether or not you agree with Linus' open-source philosophy, can we at least all agree that we need to put drivers at the same, correct abstraction level?
  • by Anonymous Coward on Tuesday November 08, 2005 @04:32PM (#13981863)
    There are very good reasons for closed source drivers. Take modern graphics cards as an example. There is so much IP in the drivers it's not funny. A graphics card these days is little more than a specialised CPU with really fast RAM. So much of the rendering process is in the driver; that's why a small driver revision can make so much difference to performance.

    As much as I would like that source to be open, there is no way NVidia, for example, could compete if they gave all their intellectual property away for free. That's how they make their money and can afford to pour more back into developing the next generation of cards.

    And what about software modems? I know most people hate them, but the companies that develop them do so to make them cheaper, which is reflected in their retail price. The modems are just a DSP connected to a phone line. If they gave away their source, anyone could buy the same processor, stick it on a board, and sell it from under them without any of the R&D costs.

    I say bring on a static api for binary drivers!
  • by Deviant ( 1501 ) on Tuesday November 08, 2005 @04:34PM (#13981883)
    More and more hardware is being implemented in software. This makes sense because as the CPU has increased in speed and capability it has gained the resources and free time to pick up the slack for less-expensive hardware implementations. If you look at the new push for multi-core and multi-CPU systems this will be even more true. Plus, as a bonus, if they mess something up then they can more easily get people to install a new driver revision successfully than to flash a device or fix a problem in the silicon.

    The problem for the open-source community is that these drivers are increasingly not just the way to talk to autonomous hardware but actually implement a lot of the functionality of the device. Taking this in mind, the manufacturers are unlikely to give out the source code for these drivers as they will be giving their competitors a more significant view into their playbook. People in the Linux community complain about the lack of certain drivers like those for wireless cards in many notebooks, which in my understanding work as described, and are left hacking together a solution to run the Windows drivers under Linux in order to get them to work. If you want things like that to work and work well in Linux then you have to give them a stable subsystem and make it as easy to port their drivers as possible while providing them the means to not give away the source which contains half of their work in creating the hardware.

    I know that is not the ideal solution, but them's the breaks. Either Linux steps up with an API and binary subsystem, or its users will be left with fewer hardware options: whatever more expensive all-hardware alternatives remain for many of these peripherals.
  • by shadowmas ( 697397 ) on Tuesday November 08, 2005 @04:39PM (#13981939)
    "....if a vendor doesn't bother to certify the driver (it's not that expensive after all) it's a good indication that they might not care about driver improvement as well...."

    Or maybe it improves its drivers so frequently that it can't keep trying to certify them every single time?
  • Backfire. (Score:2, Interesting)

    by Bezben ( 877719 ) on Tuesday November 08, 2005 @04:47PM (#13982045)
    At the end of the day, there might not be a choice at all. What's to stop them forking the code and developing their own binary driver api? If people (and by people I mean businesses) want to use the hardware of these companies, it might become widespread.
  • by 0xABADC0DA ( 867955 ) on Tuesday November 08, 2005 @05:01PM (#13982199)
    Instead of a binary kernel layer what Linux needs is a bytecode interpreter layer... a Java-esque language for running processes in the kernel address space.

    For a simple "find ~ | ..." over 20% of the real time is wasted just between system call overhead (~1200 cycles) and actually copying data in/out of kernel mode. That doesn't even include the time setting up the copyin/out (it has to add the instruction address to a list for the interrupt handler in case there's a fault, which might have to do a lock I didn't check). And that doesn't even include overhead of context switching time (ie 2 context switches every 4k or 16k for the pipe read/write)! There's no reason a context switch should be taking milliseconds on a modern system. It should be microseconds.
  • Re:Absolutely (Score:2, Interesting)

    by freedom_surfer ( 203272 ) on Tuesday November 08, 2005 @05:04PM (#13982222) Homepage
    Have you ever tried to use an older Windows binary driver with a newer version of Windows? It doesn't work so well. Do hardware manufacturers ever go back and pump out a new driver for newer Windows? Rarely. Only in the case of very popular products, and often it's Microsoft who necessitates this. Now contrast that with Linux. Linux runs fantastically on older hardware... rarely will a piece of hardware be supported in an older version of Linux and not work on a newer rendition. Granted, new hardware support in Linux often lags behind driver support in Windows; however, this is not because of any inherent problem with Linux, it's an inherent problem with IP and how companies protect said IP. If you want good open source driver support, then only buy from companies that embrace the concept. A company makes NO commitment to your long term support if they weld the hood shut on you. Also, is hardware obsolete because the company that made it disappears or no longer provides a driver for the device? Of course not. But, in actuality, it does become so, often through planned obsolescence.

    (Am I the only one with an 8-bit soundblaster? JK =P)
  • by Afrosheen ( 42464 ) on Tuesday November 08, 2005 @05:17PM (#13982365)
    That's fascinating, as the majority of Windows defenders here blame bluescreens on poor drivers and badly written applications. If Microsoft themselves are certifying the drivers and putting them through 'very rigorous driver tests', why the blue screens? Or is it that un-certified drivers are to blame? In my experience, there are 2 kinds of Linux drivers for any piece of hardware...ones that work, and ones that don't. I have yet to see any half-assed, barely working drivers in Linux thus far. Windows, OTOH, has always had flaky drivers. How can it be explained?

      Not trolling, just wanting some enlightenment here.
  • User-level drivers (Score:3, Interesting)

    by spitzak ( 4019 ) on Tuesday November 08, 2005 @05:18PM (#13982385) Homepage
    Linux should support user-space drivers, probably through FUSE and some other APIs. These can then be binary, just like any other application. If they crash, they will not take the system down. The API is limited, but you will be able to open/read/write them. ioctl can be done with Plan 9-style names, i.e. open "/dev/neato_device/volume" and write the desired volume there, etc.

    Drivers that need more elaborate API's or need more speed will be stuck with the mutable binary interface and occasional GPL restrictions. Too bad. A lot of interesting drivers do not need this speed. And those that do may force the interface to user-level drivers to be improved until it is usable, which is a very desirable result.
  • by parryFromIndia ( 687708 ) on Tuesday November 08, 2005 @05:29PM (#13982515)
    For an OS which is continually evolving and was not designed with many future developments in mind, it is very natural to say no to the stable binary API/ABI concept for drivers. But as it matures and there is no longer a need to fix interfaces to support some out-of-world functionality, the driver interfaces are automatically going to be stabilized. (Unless kernel folks decide they get bored with having one function name for more than a year, or that they want to keep driver writers continuously on their toes - all of which is unlikely.)

    API/ABI compatibility obviously has its own pros and cons - sometimes it's impossible to break things; take Windows for example. The world is going with the LP64 model for 64-bit machines, but Windows developers had to stick with LLP64 just because they made some design mistakes and now they cannot break the tons of applications. (See http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx [msdn.com]).

    Linux on the other hand can afford to break and fix things until the time when binary and out-of-tree drivers grow to outnumber the in-tree stuff. By that time I guess there will be much less need to break things such as driver interfaces and the like.

    And I think the mad rush to put everything in the official kernel tree is not a good idea from maintenance and complexity stand point. So if and when the Linux ABI/API stabilizes that will be a good thing for out-of-tree kernel drivers and Linux itself.
  • by SirBruce ( 679714 ) on Tuesday November 08, 2005 @05:32PM (#13982562) Homepage
    In my experience, most end-users don't run WHQL certified drivers. This is usually because certification takes a long time. In the case of graphics drivers (which is what is commonly the problem), there will be many updates that improve performance or fix a specific game-related bug, and these are installed by power users long before such fixes make it into an updated driver that's officially certified.

    Bruce
  • by bataras ( 169548 ) on Tuesday November 08, 2005 @05:36PM (#13982602)
    Hardware vendors MUST write (or supply) drivers at least for Windows. That's the reality. Sure they could release specs and hope someone in the open source windows kernel driver world codes it up. But that ain't going to happen.

    Having actually done Windows driver co-development in partnership with a Japanese hardware vendor before, I can tell you they do consider interfaces and protocols into their hardware proprietary. Remember some "hardware" products are actually computers in and of themselves with mini OSs and complicated protocols for communication with the host PC. This isn't your father's PIO serial port stuff. Even as a true blue co-developing partner the best we could get were software APIs to a binary library that we had to link into our drivers.

  • Re:GO FOR IT (Score:2, Interesting)

    by croddy ( 659025 ) on Tuesday November 08, 2005 @05:58PM (#13982843)
    In that case, I would prefer that you use some other OS and not try to influence the direction of Linux development. I am quite happy with the driver situation exactly as it is, and I am not interested in giving up any ground to closed-source drivers -- it is far better to have a choice of three wireless cards with open software than thirty with closed software.
  • by Skapare ( 16644 ) on Tuesday November 08, 2005 @06:12PM (#13982985) Homepage

    Let's see here. Manufacturers want us to create a kernel that allows them to infect and interfere with its integrity, reliability, performance, and security, just so they don't have to keep maintaining their driver as the design of the kernel continues to be improved? They want us to stagnate the design of the kernel so they can let us use their stagnant device drivers? And they want us to have a system that is no longer viably supported by staff or consultants, while they are most likely not ever going to provide system support (if they can't keep the driver maintained, how the hell are they going to provide support for an old driver)?

    I'd just stay away from their hardware.

  • by Stalin ( 13415 ) on Tuesday November 08, 2005 @06:27PM (#13983112)
    This is a legitimate question I have. Why not support a system like Apple introduced with their kernel in OS 10.4? When it comes to operating systems, I am just a user. I don't hack on them. So, I could be missing something in the whole "the drivers must be open source so that they can be included in the kernel and updated along with it" thing. I would like a clear explanation why doing things the current way is better than implementing a new system that supports binary drivers in a clean way.

    If you are not familiar with the "KPI" thing, here is a short summary from http://arstechnica.com/reviews/os/macosx-10.4.ars/4 [arstechnica.com]. I think it is a rather neat solution:

    "With Tiger, Apple is finally ready to put some kernel interface stakes in the ground. For the first time, there are stable, officially supported kernel programming interfaces (KPIs). Even better, there's an interface versioning mechanism and migration policy in place that will ensure that the pre-Tiger situation never happens again.

    From Tiger forward, kernel extensions will link against KPIs, rather than directly against the kernel. The KPIs have been broken down into smaller modules, so kexts can link against only the interfaces that they actually need to use.

    Each KPI has a well-defined life cycle made up of the following stages.

            * Supported - The KPI is source and binary compatible from release to release.
            * Deprecated - The interface may be removed in the following major release. Compiler warnings are generated on use.
            * Obsolete - It's no longer possible to build new kernel extensions using this KPI, but binary compatibility for existing kexts that use this KPI is still assured.
            * Unsupported - Kexts using this KPI will no longer work, period.

    The most significant part of this new system is that the kernel itself can and will change behind the scenes. KPIs will descend towards the "unsupported" end of the life cycle only as kernel changes absolutely demand.

    Best of all, multiple versions of a KPI can coexist on the same system. This allows a KPI to move forward with new abilities and a changed interface without breaking kernel extensions that link to the older version of the KPI. The expectation is that the kernel can undergo a heck of a lot of changes while still supporting all of the KPIs."
  • by Rutulian ( 171771 ) on Tuesday November 08, 2005 @07:14PM (#13983534)
    Instead, we get reverse-"engineered" (i.e. hacked-together) drivers made by people doing their best to get devices working with no real understanding of how the device works.

    As opposed to hardware engineers writing code for a kernel they know nothing about? I would much rather have a kernel hacker write a driver off a spec sheet than an engineer copy a device driver template and stick in the appropriate bits. In the absence of a good spec sheet, I'll take reverse-engineered drivers any day. They may not support all of the features, but they are usually better quality than a lot of closed source binary drivers. Just look through the Alsa project. The Soundblaster Live driver is a good example. Creative released one of their own (open source even, I believe), but it blew compared to the one written by the Alsa hackers.

    The solution for vendors not wanting to support multiple kernels is to open source their drivers. Once it is in the kernel, the maintenance is taken care of by somebody else. There really is no reason to not open source a driver. The PHBs are just stuck in a rut... "It has to be closed source because... intellectual property... trade secrets... err, it's mine."
  • Re:Learn to read. (Score:3, Interesting)

    by Peter La Casse ( 3992 ) on Tuesday November 08, 2005 @07:44PM (#13983807)
    One of the reasons for releasing closed-source drivers is to avoid revealing that you're violating your competitor's patents. The more obfuscation there is, the less likely you are to be sued, if you are indeed doing something illegal. Releasing documentation does not increase obfuscation, so it's not an option for criminals.
  • by Hal_Porter ( 817932 ) on Tuesday November 08, 2005 @09:03PM (#13984373)
    You do realise that most of these problems are more like "with the POS 417 AGP host bridge and a Froon 812 graphics card, writes to off-screen memory fail 1% of the time when the graphics accelerator is doing a lengthy operation, and we see it often now that we've altered the driver" than "dude, I forgot a break; statement".

    With a bit of luck, you can catch the second type of error by running a well thought out test suite. Not always, though; I think it's an example of the halting problem. The first sort is much harder - there is a lot of variability in PC hardware, even to the point of it being completely broken.
  • where you miss (Score:4, Interesting)

    by ImaLamer ( 260199 ) <john.lamar@gma[ ]com ['il.' in gap]> on Tuesday November 08, 2005 @10:53PM (#13985003) Homepage Journal
    There is one thing you all keep leaving out about certified drivers:

    Without them you aren't guaranteed support from Microsoft.

    If you are running machines with all certified drivers and WMI/MSI installed applications, then Microsoft will be right there with you until the problem is solved. You won't find it written anywhere, but Microsoft guarantees that your machine will not crash (BSOD) if you use certified drivers and MSI-installed software. At home this isn't possible, but in some environments it is possible (and a good idea in other places).

    In a way you are locked in to what Microsoft has approved, but if they've approved it then the problem is theirs to fix - not yours. Good luck meeting those two requirements, but if you can: hold them to it.
  • Re:*I* have an idea! (Score:5, Interesting)

    by Chuckstar ( 799005 ) on Wednesday November 09, 2005 @12:32AM (#13985846)
    I know you're joking, but how about this for an idea:

    A hybrid kernel. Open source drivers are compiled into the kernel. There is an API for closed-source drivers to run in user-space.

    Does not violate GPL.
    Little compromise to stability.
    Developers who only want to do closed-source drivers can do so.
    Developers have incentive to open source their drivers in order to have better performance and take advantage of newer kernel features (the internal APIs are updated with the kernel, the external APIs stay fixed and fall behind the feature curve).

    Win.
    Win.
    Win.

    Unless it's just a philosophical question, in which case ... rant away, open source crazies. ;)
  • Re:GO FOR IT (Score:2, Interesting)

    by croddy ( 659025 ) on Wednesday November 09, 2005 @01:11AM (#13986122)
    Your list of technologies and dates is marginally impressive -- but I do not see how it relates to the issue at hand.

    Since the Linux community has clearly not provided a system that matches your needs, I will again ask that you do not attempt to interfere in its development by advocating changes that could end up dumping binary drivers on us. We do not want them. We do not want what they will bring to our system.

    I am glad your experience with Windows XP has been so positive. Hopefully you will continue to use it rather than attempt to subvert the Open Source movement with your incompatible agenda.

  • by Deven ( 13090 ) <deven@ties.org> on Wednesday November 09, 2005 @04:34AM (#13987083) Homepage
    I'm glad I'm not the only one who recognizes that Project UDI would benefit Linux and free software developers in general. Isn't it obvious that every time someone successfully standardizes open interfaces, major leaps in productivity follow?

    Without open standards, the Internet would not exist. Various proprietary networking standards (Novell Netware, IBM channel architecture, Banyan Vines, etc.) used to work in their own isolated worlds, rarely speaking to each other. And people generally thought that was okay, because that's how it had always been, and look how well each one works in its own little world! Similarly, email systems were proprietary and incompatible. Then along comes TCP/IP, SMTP and other open Internet protocols, and the world is transformed. Suddenly, everything can talk to everything, and with the 20/20 benefit of hindsight, it's clear to all how much better it is with the Internet than it was with all those proprietary islands.

    Many implementations of the Internet protocols were proprietary and it didn't matter. There were always both free and proprietary implementations of the Internet protocols, but the important thing was that they all agreed on the same standards and (more or less) followed them, which bridged all those little proprietary islands into this wondrous whole we have today, where virtually any networked device is capable of communicating with any other. What mattered was that the standard was open, no matter how many of the implementations were proprietary. (And, of course, natural evolution tends to favor the extinction of most of the proprietary systems in favor of free software whenever such competition occurred, especially since vendor lock-in fails when customers demand conformance with open standards.)

    The computer industry is starting to realize that XML, like TCP/IP, can bridge proprietary islands. Look at the number of legacy systems, interfaces, protocols and file formats which are being interfaced with XML to achieve at the application level what TCP/IP achieved at the networking level. Legacy systems, proprietary systems and even free systems, each with its own way of doing things, can suddenly be made to talk to each other in a robust, loosely-coupled fashion which was unfathomable just a decade or two ago. This process appears to be well on the way to revolutionizing the computer industry yet again.

    Operating systems and device drivers are full of proprietary islands just waiting to be bridged, and it could revolutionize operating systems as much as TCP/IP revolutionized computer networking. Not all of these proprietary islands are "proprietary" in the closed-source sense -- many are also free-software islands which are "proprietary" in the "only works with this system" sense. Just in the domain of free software, there are countless little proprietary islands between various versions of Linux, FreeBSD, OpenBSD, NetBSD, Dragonfly BSD, Darwin, HURD, etc. These aren't "proprietary" as Stallman uses the term, but just try to take a random device driver from one of these random islands and dump it on another at random and see how likely it is to work without changes. Then, of course, there are also the truly proprietary systems such as Windows.

    Bridging all those islands would benefit free software immensely, regardless of whether or not proprietary closed-source vendors jump on the bandwagon. Imagine if every device driver only needed to be implemented once to a common API, and it worked without source code changes on every operating system that supports that API? That's exactly the promise that Project UDI holds for operating systems and device drivers, and it's as revolutionary as the promise of TCP/IP.

    The Internet wouldn't be where it is today without free software, yet free software wouldn't be where it is today without the Internet! This seems like a conundrum -- a chicken-and-egg problem. Actually, it's a truly symbiotic relationship, and it
