Linux Kernel to Fork?

Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
  • by Anonymous Coward on Sunday November 21, 2004 @11:01AM (#10880632)
    > Each version of the kernel requires applications to be compiled specifically for it.

    FUD FUD FUD. No. No no no. NO! Who writes this generic shit? There's no truth behind the above statement; it implies a problem that simply doesn't exist.
  • About time.... (Score:5, Insightful)

    by ThisNukes4u ( 752508 ) <tcoppi@gmail. c o m> on Sunday November 21, 2004 @11:02AM (#10880640) Homepage
    I say it's about time to get a real development branch going. I'm sick of 2.6 being less than optimally stable; it's time for 2.7 to take the untested patches.
  • by MartinG ( 52587 ) on Sunday November 21, 2004 @11:03AM (#10880643) Homepage Journal
    The kernel will fork to a new 2.7 branch. This is exactly what happens at every iteration of kernel development. This looks like a case of poor journalistic understanding of the usual linux process and/or fear-inducing, sensationalist headlines.

    Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.
  • by Anonymous Coward on Sunday November 21, 2004 @11:05AM (#10880653)
    It strains credulity to call the 2.7 linux kernel a "fork" of linux. Every new development version of linux always starts out by forking the old stable kernel. This is how linux 1.3, 2.1, 2.3, and 2.5 all started. It is quite irresponsible for a journalist to proclaim all this doom and gloom over what is in fact a normal development fork in a proven development process.

    In fact, out of all the news articles out there about linux 2.7, it seems (not that this surprises me) that slashdot went out of its way to pick one laden with the most possible negative FUD and the least possible useful information about what really is news with 2.7. A much better writeup can be found at LWN [lwn.net]. In summary, the present situation is:

    • The -mm tree of Andrew Morton is now the Linux development kernel, and the 2.6 tree of Linus is now the stable kernel. This represents a role reversal from what people were expecting last year when Andrew Morton was named 2.6 maintainer.
    • Andrew Morton is managing the -mm tree very well. Unlike all the previous development kernels, the -mm tree is audited well enough that it is possible to remove patches that prove to have no benefit (and this does often happen). Bitkeeper is to some degree contributing to this flexibility, although not every kernel developer uses it.
    • The development process is going so smoothly that there may not need to be a 2.7 at all; for the first time in linux development history the developers are able to make useful improvements to linux while keeping it somewhat stable. If there is a 2.7 at all, it will be used for major experimental forks and there is no guarantee that the result will be adopted for 2.8.
    There is a story here, but you could easily be forgiven for missing it if you follow the link. The story is that linux development has changed, it is better than ever, and if (not when) 2.7 shows up, it's not gonna be the 2.7 that you're used to seeing.
  • Idiot. (Score:5, Insightful)

    by lostlogic ( 831646 ) on Sunday November 21, 2004 @11:05AM (#10880656) Homepage
    The writer of that article is an idiot. The linux kernel forks after every major release in order to accommodate large patches. How did we get to where we are today? Linux 2.4 forked into 2.4 and 2.5 to allow the major scheduler and other changes to be made on a non-production branch. Then 2.5 became 2.6, which was the new stable branch. Currently there are four maintained stable branches that I am aware of (2.0, 2.2, 2.4, and 2.6); having a new unstable branch is just the same path that Linux has been following for years. That writer needs to STFU and get a brain.
  • i don't get it (Score:3, Insightful)

    by Anonymous Coward on Sunday November 21, 2004 @11:06AM (#10880660)
    I think either the writers of this article are missing something, or I am.

    A couple of months ago there was a general upheaval over the fact that Torvalds et al. had decided not to fork a development tree off of 2.6.8, but rather do feature development in the main kernel tree. The message of the article (brushing aside the compiling-applications-for-each-kernel FUD) seems to be that they have changed their minds and will fork an unstable kernel branch after all.

    What am I missing?
  • Re:Uh-oh (Score:5, Insightful)

    by MartinG ( 52587 ) on Sunday November 21, 2004 @11:07AM (#10880662) Homepage Journal
    Firstly, the article is talking about linux itself, not linux distributions which are another issue and may or may not have "massive problems" of their own.

    Secondly, linux (the kernel) already "forks" every time a new development version is opened, i.e. 2.1, 2.3, 2.5, etc. All this is saying is that 2.7 is about to open.

    "Fork" is not a dirty word.
  • by MartinG ( 52587 ) on Sunday November 21, 2004 @11:13AM (#10880699) Homepage Journal
    I imagine a future where I can download a copy of Linux and it would install on my system without any configuration

    erm.. when did you last try installing linux, and which distro did you use?

    I have recently installed ubuntu and fedora 3 on hardware ranging from a fairly old PII 400 with matrox gfx and scsi to an amd64 3000 with radeon 9200 gfx and serial ata, to an ibm thinkpad r40e.

    All of these installed with almost no effort and Just Worked (apart from power management on the laptop, which took about 30 mins of googling to solve).

    I even had hardware accelerated gfx on _all_ of the above machines with no extra configuration of drivers to download or install.

    Really, if you want "easy to install and get running" give something like ubuntu or fedora a try. You might be pleasantly surprised.
  • by asciiRider ( 154712 ) on Sunday November 21, 2004 @11:19AM (#10880717)
    Why is it that every Windows XP user thinks the goal of the Linux community is to convince windows user to make the switch?

    Dude - just stick with Winblows. You have no time to "know linux", as you put it, so just stick with what you know. You can post on Slashdot either way.

    Please, developers, don't dumb Linux apps/distros down so much that it looks and feels like Windows.
  • by boaworm ( 180781 ) <boaworm@gmail.com> on Sunday November 21, 2004 @11:22AM (#10880734) Homepage Journal
    Perhaps he is referring to "Applications" such as the "Nvidia Driver Software" for Linux? That has to be rebuilt/recompiled when you switch kernels, even when going from 2.6.9-r1 to -r2, etc. (Gentoo!).

    Perhaps he is not talking about applications such as Emacs or vim? (Or he just finished his crackpipe :-)
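    (The rebuild itself is mechanical. A minimal sketch of the standard 2.6 out-of-tree module build, assuming the module source sits in the current directory -- nvidia's actual installer wraps steps like these up for you:)

        # Build and install an external module against the running kernel.
        # 'M=' points the kernel build system at this source directory.
        make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
        make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install   # as root
        depmod -a   # refresh module dependency info for the new kernel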
  • by IGnatius T Foobar ( 4328 ) on Sunday November 21, 2004 @11:23AM (#10880742) Homepage Journal
    Hold on, take this into consideration before you hit that "flamebait" button. I'm responsible for a large number of Linux systems at a hosting center, and this is our single biggest complaint:

    There needs to be a consistent driver API across each major version of the kernel.

    A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

    The current situation is completely ridiculous. Anything which requires talking to the kernel (mainly drivers, but there are other things) needs either driver source code (watch your Windows people laugh at you when you tell them that) or half a dozen different modules compiled for the most popular Linux distributions. These days, that usually means you're going to get a RHEL version, and possibly nothing else. What happens when you're competent enough to maintain Fedora or Debian, but you don't have driver binaries? (Yeah I know, White Box or Scientific, but that's not the point.)

    In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

    Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHVs to release binary-only drivers. So what? They're going to release binary-only drivers anyway. Others will simply avoid supporting Linux at all. LSB is going to make distributing userland software for Linux a lot easier, but until Linus grows up and stabilizes the driver API, anything which requires talking to the kernel is still stuck in the bad old days of the 1980s and 1990s. Come on people, it's 2004, and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.
  • by DanTilkin ( 129681 ) on Sunday November 21, 2004 @11:26AM (#10880759)
    FUD [catb.org] generally implies deliberate disinformation. All I see here is a clueless reporter. To anyone who knows enough about Linux to make major decisions, the sentence "Linus Torvalds will fork off Version 2.7 to accommodate the changes" should make it clear what's going on, and then the rest of the article makes much more sense.
  • by Anonymous Coward on Sunday November 21, 2004 @11:32AM (#10880798)
    Dependency hell?

  • by Darren Winsper ( 136155 ) on Sunday November 21, 2004 @11:40AM (#10880843)
    The only part that needs to be recompiled is the kernel module, and it's not an application, it's a fucking kernel module!
  • by geg81 ( 816215 ) on Sunday November 21, 2004 @11:43AM (#10880866)
    A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

    That's deliberate...

    In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

    ... and that's the reason why. If it were easy to use binary drivers, more and more drivers would become binary. It would make Linux distributions easier to manage if binary drivers were easier to build and distribute, but the fact that this would also make binary-only drivers easier to distribute is considered a disadvantage by many.

    Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system. You have chosen the second option, so deal with it.
  • by Bombcar ( 16057 ) <racbmob@@@bombcar...com> on Sunday November 21, 2004 @11:49AM (#10880887) Homepage Journal
    Yes, but drivers are part of the kernel, so saying you need to recompile drivers every time you recompile the kernel doesn't say much.

    Now the binary parts of those modules mean that the kernel can't autorecompile them for you, but that's not the kernel's fault.

    And in fact, the 2.4->2.6 kernel change did require a new version of modutils, and you could also get improvements to some applications if you recompiled.
  • by Reality Master 101 ( 179095 ) <RealityMaster101@gmail. c o m> on Sunday November 21, 2004 @11:54AM (#10880921) Homepage Journal
    Maybe not the kernel, but one thing that I despise about Linux is the library dependency hell. I can download a binary onto Windows, and it just works. For a hell of a lot of binaries, I simply can't under Linux. I have to recompile the f-ing source for it to link to the right libraries.

    Gah, I get irritated just thinking about it. I hate, hate, HATE this about Linux.

  • Re:Uh-oh (Score:4, Insightful)

    by Angstroem ( 692547 ) on Sunday November 21, 2004 @11:59AM (#10880941)
    I think at some point everyone needs to get together and say OK. Everything from this point on will be compatible with everything from this point on. No more of this crap.
    Actually, we had this like 20 years ago. Whenever you upgraded your machine (at least in the home and semi-pro market) there was a sharp cut. Hardware changed, the OS was not even remotely the same, you had to get new software. Sometimes you even got converters to re-use old data.

    Someone decided that this was "bad" (which finally opened the market for DOS/Windows), and I still don't fully get it. If the software/system is still usable to me, I keep on using it (I'm still running my trusty old Atari in the studio for average MIDI sequencing). If I need to get a more powerful machine and/or the software will only be supported on this new machine -- how is that any different from today's Windows/Office situation?

    With each new Windows the user interface changes (think of 3.11->95; XP anyone?), new data formats which are not backward compatible are introduced (.doc), and all they ensure is that you can load your old documents -- and please, please use the new formats as quickly as possible, to make a lot of people buy the latest release...

    If your Linux application breaks because it requires some stone-age whatever library, then just install it. For instance, people are used to carrying a shitload of same-but-different-version DLLs on Windows systems and don't seem to object.

    With wide acceptance of RPMs we also accepted the breaks-if-lib-version-of-the-day-is-not-present kind of behavior... (The next logical step would be including required libraries in the RPMs, just as every Windows program comes with all required DLLs.)

  • by discord5 ( 798235 ) on Sunday November 21, 2004 @12:00PM (#10880942)

    Well, this was fun to read. This article is about as educated about the subject as the average donkey.

    In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft

    Uhm, what gave MS the edge in the 80s was cheap i386 (well, actually 8088) hardware, and a relatively cheap OS (MS-DOS). Unix servers cost an arm and a leg in those days, and many companies/people wanted a pc as cheap as possible. Buying an i386 in those days meant running DOS, and the "marketplace" standardized around MS-DOS.

    Each version of the kernel requires applications to be compiled specifically for it.

    Utter bull. Upgrade kernels as much as you like; it won't break software, except perhaps across major/minor number changes. The same thing would happen on Windows if you started running win2k software on win'95. But this is a matter of features in the kernel, not compilation against the kernel.

    So at some point, Linux founder Linus Torvalds will fork off version 2.7 to accommodate the changes, Morton said at the SDForum open source conference.

    And the big news is? This happens every couple of years, with stable versions having even minor version numbers and unstable versions having odd ones. This helps admins and users actually KNOW which versions are good for everyday use and which are experimental, for developers.

    He cited clustering as a feature sought for Linux.

    Well, imagine a Beowulf cluster... How long have those patches existed? There are several ways to build a cluster, as long as you patch your kernel.

    OSDL does not anticipate, for example, having to ever rewrite the kernel, which would take 15 years, Morton said.

    And why on earth would they want to do that? Linux is on the right track, so why bother with an entire rewrite of good, functional, well-designed code?

    Open source has focused on software such as the operating system, kernels, runtime libraries, and word processors.

    It's also focused on multimedia (xmms, mplayer, xine), webservers (apache), mailservers (sendmail, qmail, postfix)... I'd rather have people say that open source has focused on internet servers than on the stuff it needs to make an OS run, plus word processors. This is like saying that an oven is primarily used for making toast, while it actually also bakes cake, pizza, and whatever you toss inside.

    I'm sorry, this kind of article belongs in the trashbin. Either the journalist doesn't know what he's writing about, or he's being paid to know nothing about the subject. One of the things that keeps surprising me in business IT journalism is the lack of knowledge these people have about the subjects they're writing about.

  • by Cheeze ( 12756 ) on Sunday November 21, 2004 @12:04PM (#10880963) Homepage
    you're placing the blame on linux. Windows doesn't create drivers for that fancy new camera you bought; the camera company does. Is the problem really linux, or is it the companies that don't release the drivers?

    I think linux gets the blame, but you wouldn't expect microsoft to write drivers for your camera.

    Case in point: I bought an HP scanner/copier/printer about a week ago, and it took about 2 hours of constant reboots, driver conflict errors, and other problems to get it to work correctly. The end result had me download almost 400MB worth of drivers from hp.com, uninstall the printer, and reinstall it with the new drivers. The drivers on the CD were bad. That's not an "everything works" scenario. Yeah, and that's with Windows XP Home on an HP workstation connected to the printer with USB. A problem like that is NEVER a Windows problem; it's always a problem with the device. If I were using linux, it would be linux's problem, and not the device's.
  • by Gannoc ( 210256 ) on Sunday November 21, 2004 @12:04PM (#10880965)
    For a straight FAQ Q&A style of answering the question: http://www.tldp.org/FAQ/Linux-FAQ/kernel.html#linux-versioning

    Christ.

    I'm not making fun of you. What you said was completely accurate, but when you're dealing with clueless people, you need to speak simply and plainly. "Holy penguin pee?" C'mon.

    Quick example:

    To Whomever:

    Your most recent article regarding the upcoming linux fork may be confusing to your readers. The current version of Linux is 2.6. As new enhancements and bug fixes are developed and tested, they are added to this 2.6 kernel. This is similar to the way Microsoft puts out service packs on their current version of the Windows XP operating system.

    When significant or cutting-edge features are added, the team in charge of maintaining the linux kernel needs to decide whether to "fork" the kernel to a new version. Again, this is similar to how Microsoft may decide to put a new feature into Longhorn instead of patching it into Windows XP in a service pack.

    Forking simply means that a new release of Linux is being actively prepared.

  • FUD (Score:3, Insightful)

    by erroneus ( 253617 ) on Sunday November 21, 2004 @12:08PM (#10880976) Homepage
    Knowing the kernel development community as I do, the threat of forking would result in a movement to unite in some way. Even if a fork occurred, the population would essentially choose one and ignore the other, leaving it to die.

    The fact that patches exist, large or small, is what keeps the main kernel working. So for special implementations, patched kernels exist and everyone is cool with that. I have yet to see a patched kernel that isn't derived from the main one, and I don't foresee a situation necessitating that it not be.

    I think we should look into the motivation of this article that cites no specific information or sources. It's pure speculation.
  • by IO ERROR ( 128968 ) <errorNO@SPAMioerror.us> on Sunday November 21, 2004 @12:14PM (#10881006) Homepage Journal
    I think this doesn't get enough attention because most of the hardware Linux people use on a daily basis has drivers included with the kernel, and for the most part they work from version to version. So what if a vendor puts out binary-only drivers? It's a free market; go buy their competitor's hardware. That's the only way to get their attention.
    Hello, ${VENDOR}, I am writing to you today to let you know that I looked at your ${HARDWARE} and was very impressed. It's the best thing since sliced bread! However, I bought your competitor ${VENDOR_2}'s ${HARDWARE_2} today, because they provide source code with their Linux drivers, and you do not.
    In short, don't like binary-only drivers? Don't give them any money.
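    (Or, as a shell sketch of that form letter -- every value here is a placeholder to fill in yourself:)

        #!/bin/sh
        # Placeholders throughout; substitute real names yourself.
        VENDOR="Acme";      HARDWARE="GigaWidget"    # the vendor you're nudging
        VENDOR_2="Initech"; HARDWARE_2="OpenWidget"  # the competitor you paid
        echo "Hello, ${VENDOR}, I looked at your ${HARDWARE} and was very impressed."
        echo "However, I bought ${VENDOR_2}'s ${HARDWARE_2} today, because they"
        echo "provide source code with their Linux drivers, and you do not."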
  • by Reality Master 101 ( 179095 ) <RealityMaster101@gmail. c o m> on Sunday November 21, 2004 @12:15PM (#10881009) Homepage Journal

    which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".

    Sorry, I've never had this happen in my life. Ever. It's simply not an issue that comes up all that often. And I think the weight of evidence is on my side... people download stuff for Windows all the time.

  • by William Baric ( 256345 ) on Sunday November 21, 2004 @12:15PM (#10881013)
    Maybe it's just "fucking kernel modules" that need to be recompiled, but this is why linux is not supported by most hardware companies. And this is why linux has only 1 or 2% market share on the desktop.
  • by discord5 ( 798235 ) on Sunday November 21, 2004 @12:24PM (#10881061)
    Why is it that every Windows XP user thinks the goal of the Linux community is to convince windows user to make the switch?

    Because some people are overzealous in their free software speeches to the masses. Linux users have a bad rep because of a few bad elements.

    Dude - just stick with Winblows. You have no time to "know linux", as you put it, so just stick with what you know.

    Everyone should use what they want to use (at home at least). You like MacOS? Be my guest. Windows? Go right ahead. Linux? Hell yeah! People should be encouraged to try and use open source software, not forced. If people don't have the time to learn new things, let them use whatever they want.

    Please, developers, don't dumb Linux apps/distros down so much that it looks and feels like Windows.

    Please, end-users, stop having this elitist feeling because you're running linux. If apps and distros want to dumb down their applications to increase the number of users, let them. A good example is perhaps lprng versus cups. Cups is easy to set up and use; lprng is not. If normal users can set up their print servers using an easy tool, and power users can set theirs up with their favourite tool, who is going to complain? It's a matter of choice.

    As soon as we make linux distributions easy enough for Joe Common to use, and decide that Random J. Hacker can't do things the way he wants to do them, then we're in trouble. Then it's no longer a matter of choice, but a matter of locking people in to solutions that only work in 80% of all cases.

  • by DaHat ( 247651 ) on Sunday November 21, 2004 @12:47PM (#10881172)
    I agree; even with a package manager, updating a single app under linux can be a nightmare. If you check out my blog in my sig, you'll see some rantings related to that.

    One of my biggest gripes was how, when you try to install the latest version of Foo, it requires the latest version of Bar, which in turn requires newer versions of X and Y, and so on.

    If you are using a more recent distro, this is far less of a problem, but the moment you move back to something older that cannot be updated as far as required... you end up with problems.

    Specifically, I was trying to get some things working under Red Hat 6.2, a 5-year-old distro. Many called me dumb for even trying such a thing, which I find quite entertaining considering how many still use 6.2 in server back ends, not unlike how many still use NT4, because it works.

    Speaking of NT4, I found it far easier to backport a Windows app written for XP or 2k to NT4, jumping back 5-9 years, than to go from Fedora Core 2 to Red Hat 6.2, a jump of only 5.

    This is why I so love Windows: consistent targets (within reason), where the number of system updates is finite and can be controlled.

    As for this so-called "DLL hell" people like to bad-mouth Windows for... I can't say I've ever had that issue myself. However, I did find it worse than hell to try to figure out how to run 2 different versions of glibc on a system without recompiling every single application requiring one or the other... Windows has many simple solutions for a problem like that.
  • by be-fan ( 61476 ) on Sunday November 21, 2004 @01:00PM (#10881230)
    Please, try and get out of the early 1990s. Manual dependency resolution hasn't been a problem with Linux since at least then, and even then, it was only a problem on RedHat systems. You wanna know how I install software (I'm on Ubuntu)? I start up Synaptic, click on what I want, click install, and have the app and its dependencies all downloaded and installed automatically. Quite a bit easier than any Windows installer I've ever used.
  • by smallfeet ( 609452 ) on Sunday November 21, 2004 @01:08PM (#10881279) Journal
    I think we have isolated your Linux problem. It's you.

  • by rogabean ( 741411 ) on Sunday November 21, 2004 @01:10PM (#10881290)
    then use Windows... no one's twisting your arm.

    some of us prefer to... LEARN.
  • by Anonymous Coward on Sunday November 21, 2004 @01:10PM (#10881291)
    Enforcing in software the things that the open source people think are good means that people who disagree in principle have less chance of corrupting the process.

    You are assuming that Linus and co. are doing everything to accommodate the business model that led to the wild success of Windows. I find that notion silly, and it amounts to asking them to betray the things they believe in.

    The way of thinking you completely trash in your post is the reason for Linux's success, and to me it means not betraying your roots.
    Remember that Linux ships more drivers than ANY OS out there.
  • by Darren Winsper ( 136155 ) on Sunday November 21, 2004 @01:19PM (#10881340)
    NVIDIA do rather well with their kernel module wrapper.
  • by trashme ( 670522 ) <{moc.liamg} {ta} {elbbirt}> on Sunday November 21, 2004 @01:23PM (#10881363)
    OK, I'll bite. Someone needs to teach you how this works.

    You aren't going through bullshit. How is 'apt-get install foo' or 'yum install foo' or 'emerge foo' going through bullshit? It's one command! Do you want something easier? Must the OS read your mind and install the package for you?

    These "200 other barely-related packages" are called dependencies. Pakcage managers don't just start downloading other packages willy-nilly. It installs those packages that your new package is dependant on. Some package managers can also download packages marked as suggested or recommended, but that is easily changed via a config option, menu choice, or dialog box.
  • Re:BS.. (Score:2, Insightful)

    by ickpoo ( 454860 ) on Sunday November 21, 2004 @01:23PM (#10881364)
    I agree. I recently installed Windows on an old laptop that I have. The laptop is old and slow, so I installed Windows 98 on it (my son needs it for writing papers in Word). The machine has a NetLux PCMCIA ethernet card. Under Linux the network card worked out of the box; under Windows it needs a driver disk. The driver disk doesn't exist anymore, and neither does NetLux. So: no driver, and no ethernet.
  • by Kent Recal ( 714863 ) on Sunday November 21, 2004 @01:35PM (#10881439)
    Nice summary of DLL-Hell.
    But how exactly does that collide with the grandparent's point?

    You have stated yourself that any installer is free to use any of the quirks you've described (in short: rely on the registry and hope it's not messed up yet again, overwrite DLLs that other programs may be using, or waste disk space and memory by dumping yet another copy of bozo.dll to be loaded at runtime).

    So it's only a matter of time until you run into a piece of software that picks the route that breaks your system.
  • by Mnemia ( 218659 ) on Sunday November 21, 2004 @01:41PM (#10881467)

    I think you may be missing the point of OSS. These things (breaks to backwards compatibility) aren't really as much of a problem on Linux as they would be on Windows because virtually all of the code in question is available in source form. You can always fix the problem by recompiling stuff. On Windows if an operating system API changes you have to wait for whoever made all of your software to fix and recompile it and then redownload/repurchase it. This is part of why Microsoft is unable to fix many longstanding problems with Windows and the Windows API: they are slaves to backwards compatibility. In fact the whole .NET thing seems to me to be an attempt to escape from this limitation and enjoy some of the benefits that open source now does.

    Virtually all of the problems you describe are problems with binary packaging rather than with the core Linux software itself (with a very small number of exceptions, such as the GCC 2.95.x -> GCC 3.x transition; and even that was fixable through recompilation). All I can say is get a better distro. Debian doesn't have so many problems with this, and Gentoo and other source-based distros certainly don't either. This is in fact why I stopped using Red Hat and switched to Gentoo in early 2002. You don't have many binary compatibility problems on Gentoo because it doesn't use binary packages except where the software is not available in any other form. It thus parallels the development of most open source software very well: it isn't a problem when developers break an API... you just use a single command to recompile everything that was broken.

    Open source developers often don't worry about maintaining binary compatibility because it isn't a problem if you just recompile. Using binary packages just invites problems: whoever makes your distro has to stay on top of the constant changes in the API. So if you do use binary packages, at least do yourself a favor and use a well-tested distro like Debian.

  • by Etyenne ( 4915 ) on Sunday November 21, 2004 @01:51PM (#10881515)
    You don't understand. I don't WANT to go through all this bullshit.

    Going thru this "bullshit" is actually easier than installing software in Windows. Assuming you use an apt-based distro, just type apt-get install foo. You don't even need to download the software; apt does it for you. The only interaction it requires is a confirmation if your package has dependencies. A minute or two later (depending on the size of the software and the speed of your connection), the magic happens: the software is installed! No chasing software on the Web, no downloading, almost no interaction (don't you find clicking Next, Next, Next stupid after a while?). It's the best thing since sliced bread, yet you fail to see it. Again, which distro do you use, so I can give you clear instructions on how to use your package manager properly?

  • by JamieF ( 16832 ) on Sunday November 21, 2004 @02:11PM (#10881589) Homepage
    Yes, it is really annoying how, when you try to create a "DLL hell"-like library situation under Linux, the package manager prevents you from shooting yourself in the foot like that.

    If you want to target Red Hat 6.2, target Red Hat 6.2. If you want to have it both ways and depend on something that's much newer (and thus has lots of dependencies that have to be updated), that's your choice. You can't have it both ways: you want to target an old OS, but you want to use the newest libraries to save yourself some effort, and you also don't want to have to update those libraries. Somehow they're supposed to magically appear.

    I suspect that your IDE and/or installer maker on Windows makes your dependencies magically appear on NT4, by figuring out what you need and bundling it with your executable. That doesn't make the API consistent; it just means that the development tools you're using are convenient in that respect.

    Also, you picked a distro that doesn't come with an automatic package downloader (at least, as far as I know it doesn't). It has a local package manager (RPM) that stops you from shooting yourself in the foot, but it doesn't go get stuff for you.

    I think you need to learn the difference between using developer tools that insulate you from dependency horrors, and a package manager that insulates you from dependency horrors. It sounds like on Windows you had the first, and on RH 6.2 you had neither.
  • by Allen Zadr ( 767458 ) * <Allen.Zadr@g m a i l . com> on Sunday November 21, 2004 @02:38PM (#10881737) Journal
    Even though I'm replying to an AC post, I'm going to assume that it isn't just trolling...

    Very few of the top kernel developers are actually paid to do what they do. The rest of the developers (the countless number of real folks with other things to do) submit patches, many of which actually end up in the kernel after a few bounces back and forth with a lead.
    From the perspective of these folks, the kernel does exist for them to code.

    I think what you are forgetting, is that nobody can lock the Linux kernel up into an ivory tower. It is a community effort. When it's really, really important to someone with resources (IBM, HP), that someone will assign a few developers to get it done.

    I think the biggest thing your argument forgets is that -- by the nature of open source development -- the implementation of something someone else has already done (often the case in Linux) must be done in a vacuum to avoid IP infringement. So, when it was time to do USB support, decisions had to be made. For most devices the USB stuff does work; the fact that it isn't done the same way as Windows is important.

  • by r00t ( 33219 ) on Sunday November 21, 2004 @02:41PM (#10881761) Journal
    It's better to break binary drivers early than to break them late.

    If all 2.6.x kernels supported a driver, you'd just accept that driver... until the 2.8.0 kernel comes out. Then what? The vendor doesn't care; they got their money. They either want to sell you new hardware, or they've gone out of business. So you'd then expect Linus to add some serious bloat for a driver ABI translation layer to let you run ancient drivers on modern kernels.

    Then what if you upgrade to a 64-bit processor? You want Linux to emulate the old stuff???? That's what made Windows 95 so lovely.

    The way Linus does things, you and these corporations can't ever forget that binary drivers are 2nd-class.

  • by Jameth ( 664111 ) on Sunday November 21, 2004 @03:05PM (#10881929)
    I've seen a lot of the flaws in the article pointed out, but I'd like to note this too:

    "Top contributors to the Linux kernel have been Red Hat and SuSE, he said. Also contributing have been IBM, SGI, HP, and Intel."

    Usually, when talking about the Kernel, it's valid to at least note some individuals, such as, say, Linus.
  • by Tim C ( 15259 ) on Sunday November 21, 2004 @03:10PM (#10881952)
    Well, if that's FUD, then so are upwards of 70% of anti-Windows comments here.
  • Re:Uh-oh (Score:3, Insightful)

    by rogabean ( 741411 ) on Sunday November 21, 2004 @03:29PM (#10882052)
    So if an application I wrote took advantage of, say, a GDI bug in Windows, and my app was dependent on that bug being present, MS shouldn't fix the bug??

    You see there's a flaw in your logic.

    Not fixing a bug to allow some bad code that uses said bug to run is just plain ignorant.
  • by AaronW ( 33736 ) on Sunday November 21, 2004 @03:31PM (#10882070) Homepage
    I agree. It's about time that Linux provided a generic API for drivers. Two types of drivers could exist: one type that sticks strictly to the API, and another that is not bound to it. Drivers that stick to the API wouldn't need to worry about being recompiled for each kernel version, only for each API version. Different sets of APIs could be provided for different driver types; for example, network drivers have very different requirements than RAID drivers.

    Most other operating systems do this and it's about time Linux provided some standards for drivers.

    As much as we hate it, we do need to support binary only drivers.

    It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel, and for legal reasons can't release the source code.

    -Aaron
  • I'm only really replying to give your comment a bit more weight -- that writer is as dense as lead.

    I'm not sure he's ever actually followed kernel development before.

    For all those wanting to know what's going on without reading the linux-kernel mailing list, just run over to Kernel Traffic [kerneltraffic.org] -- a summary of the week's happenings on the list.
  • by Foolhardy ( 664051 ) <`csmith32' `at' `gmail.com'> on Sunday November 21, 2004 @04:36PM (#10882415)
    If I write software, all I should release is the source code. Let the distributions package it for their architecture.
    So then I'm at the mercy of the distro people? I have to wait for them to support the app (if they ever do); I have to wait for them for new versions, long after the creator has released it. I thought free software was supposed to be decentralized.

    The people who are motivated to create the app (the authors) should be the ones releasing packages, not some third party. Think what it would be like if you couldn't install anything on Windows without Microsoft individually making a special installer package for it; you could still install things, but not without a major headache.
  • by damiam ( 409504 ) on Sunday November 21, 2004 @04:48PM (#10882493)
    This is one of the biggest concepts that Windows users can't get their heads around when trying out Linux.

    If I write software, all I should release is the source code. Let the distributions package it for their architecture.

    I understand the concept perfectly, thank you. I just think it's wrong. It's unrealistic to expect distros to package every conceivable piece of software a user might want, and a whole lot of wasted effort if each distro packages its own version. There's no reason that a binary that runs on one distro shouldn't run on a different distro or even another version of the same distro.

  • by slux ( 632202 ) on Sunday November 21, 2004 @04:49PM (#10882496)
    No.
    If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers. Many of the stability issues remaining in Windows today are, I believe, actually driver issues.

    Giving in to the hardware companies' (pointless) fear of losing so-called "intellectual property" by opening up their drivers would pass part of the control of the kernel from Linus & co. to countless programmers who may or may not have special interest in improving Linux specifically. The quality assurance that currently takes place for the free software drivers that get into the kernel is valuable.

    Giving up on free/open source software at every turn where it is convenient would lead us to having an OS that is an assortment of non-free parts a bit like the current proprietary UNIXes. It might even lead to someone eventually getting into a position where they could charge for an essential part of the system thus rendering it non-free even in the beer sense.

    For a kernel developer's take on this, read the following; it's from Greg Kroah-Hartman's blog [kroah.com]:

    But the issue of driver compatibility. For all of the people that seem to get upset about this, I really don't see anyone understand why Linux works this way. Here's why the Linux kernel does not have binary driver compatibility, and why it never will:

    • We want to fix the bugs that we find. If we find a bug in a kernel interface, we fix it, fix up all drivers that use that api call, and everyone is happy.
    • We learn over time how to write better interfaces. Take a look at the USB driver interface in Windows (as an example). They have rewritten the USB interface in Windows at least 3 times, and changed the driver interface a bit every time. But every time they still have to support that first interface, as there are drivers out there somewhere that use it. So they can never drop old driver apis, no matter how buggy or broken they are. So that code remains living inside that kernel forever. In Linux we have had at least 3 major changes in the USB driver interface (and it looks like we are due for another one...) Each time this happened, we fixed up all of the drivers in the kernel tree, and the api, and got rid of the old one. Now we don't have to support an old, broken api, and the kernel is smaller and cleaner. This saves time and memory for everyone in the long run.
    • compiler versions and kernel options. If you select something as simple as CONFIG_SMP, that means that core kernel structures will be different sizes, and locks will either be enabled, or compiled away into nothing. So, if you wanted to ship a binary driver, you would have to build your driver for that option enabled, and disabled. Now combine that with the zillion different kernel options that are around that change the way structures are sized and built, and you have a huge number of binary drivers that you need to ship. Combine that with the different versions of gcc which align things differently (and turn on some kernel options themselves, based on different features available in the compiler) and there's no way you can successfully ship a binary kernel driver that will work for all users. It's just an impossible dream of people who do not understand the technology.
    • Drivers outside the kernel tree and binary drivers take away from Linux, they give nothing back. This was one of the main statements from Andrew Morton's 2004 OLS keynote, and I agree. Out of the box, Linux supports more hardware devices than any other operating system. That is very important, and is something that we could not have done without the drivers being in our tree.
    • If a kernel api is not being used by anyone in the tree, we delete it. We have no way of knowing if there is some user of this api in a driver living outside on some sf.net site somewhere. I have been yelled at for removing apis like this, when there was no physical way I could have possibly known that someone was using th
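    The CONFIG_SMP point above is easy to check for yourself: the kernel embeds a "vermagic" string (version, SMP flag, compiler) in every module and refuses to load one that doesn't match the running kernel. A quick look, with 'foo.ko' as a placeholder path (the exact string format varies by kernel and compiler):

        modinfo -F vermagic foo.ko   # e.g. "2.6.9 SMP gcc-3.4"
        uname -r                     # compare against the running kernel
        # a mismatched insmod/modprobe is refused with a "version magic" error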
  • Serious question: (Score:5, Insightful)

    by deadlinegrunt ( 520160 ) on Sunday November 21, 2004 @04:55PM (#10882542) Homepage Journal
    I am not trolling nor do I disagree with the majority of your post. I am however a bit curious about this statement:

    "It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel..."

    Since this is a webcam, I am assuming this is more of a personal/desktop/workstation type role. With that in mind, is there any compelling reason that you must upgrade to the latest and greatest kernel, as opposed to sticking with a previous kernel that worked, along with the webcam driver that worked as well?

    I am under the impression that a lot of users upgrade to or acquire the latest and greatest software, and that in and of itself is not a bad thing, but it is not always the smart thing. I'm referring to an "if it ain't broke, don't fix it" line of thinking.

    Can you or someone else inform me what the other part of this issue is I seem to completely miss?
  • by ComputerSlicer23 ( 516509 ) on Sunday November 21, 2004 @05:39PM (#10882831)
    There needs to be a consistent driver API across each major version of the kernel.

    Why? You tell a nice little story about running 4 different binary only drivers. You represent a very, very niche market.

    Of anyone I've ever read or heard of, you are absolutely unique. I've run and installed Linux on hundreds if not thousands of computers. The only binary only driver I've ever had to install is the nVidia one. Even that is merely just because I wanted better performance. I could easily use the machine just fine without it.

    I've installed SCSI cards, IDE cards, RAID controllers for SCSI drives. I've installed SAN drivers. I've installed everything up to Gigabit cards. I've installed scanners, mice and keyboards, printers, CD burners, soundcards, USB and FireWire drives. I've had desktop cameras. I've had digital cameras that used USB connectors. I've used magneto-optical drives. Heck, the parallel-port Zip drive worked better under Linux than it did under Windows. I've hooked up flash readers. I've connected a myriad of peripherals. What on earth are you connecting to a machine that you need 4 separate binary drivers? Are you sure they aren't replaceable with components that would have worked with open drivers?

    I'd much rather have the developers have the ability to fix and solve bugs than kludge together something that might or might not work.

    For the most part, 99% of the driver API is in fact stable inside of a single kernel minor release (technically speaking, 2 is the major release and 4 is the minor release) kernel series (the single exception I can think of is that 2.4.10 completely broke the VM subsystem from 2.4.9). Most of the other things are merely your vendor being too lazy to get off their butt and release the binary-only driver. It's not particularly hard. If what you are using doesn't support both SuSE and RedHat, put down the box and back away.

    Finally, just like the Apache guys have been saying for years: "We have a lot more drivers to maintain than you do; if we change the API we have a very good reason to". It's not like Linus goes and makes changes willy-nilly. Generally speaking, it's been to make the API easier to use, to refactor common parts into higher layers for code reuse, or to use a more efficient/scalable model.

    You can keep saying that it really needs a decent driver model. No, it doesn't. It is what it is precisely because they refuse to have choke points on when innovation can happen. You might love it if it had them; personally, I like it not having choke points. I review the hardware I plan on hooking up to a machine and ensure that all the peripherals do what I need them to do, and that they are usable under the OS I run.

    Next you'll complain because your x86 machine doesn't support NuBus, or Altivec instructions. Pick things that have OS drivers and your problems go away.

    Lastly, you are complaining to the wrong guy. Linus isn't your man there; try complaining to the distributors. The distro people are the ones who make the actual final release you are using. They could just wait until the tail end of the kernel life cycle and release it then, with a stable API. They could maintain a stable API for you. However, you are not common in terms of users; if everyone clamoured for it, they'd get it. Heck, if that's really what you want, start using the 2.2 series kernel. It has a world-class stable driver API. The problem is that Linux moves fast and you want to stay on the cutting edge. It's not as if Windows releases its development stuff as early or as often as Linux does. I'm fairly sure that by the time you get the Windows kernel, it's probably similar to roughly where the 2.4 kernel is right now (or was 6 months or a year ago). By the time you get Windows software, it's old and stale (in terms of true innovation and change; Microsoft is fairly serious about limiting architectural change, just like the 2.2 and 2.4 guys are right about now). Pick what you want.

    Kirby

  • by Etyenne ( 4915 ) on Sunday November 21, 2004 @05:56PM (#10882939)
    I know you are making an attempt at sarcasm, but just to clarify: yes, apt takes care of the downloading. You do not have to open your browser, look for the software, download it, and install it by hand. All these steps are automated, including fetching the software from the Internet. If you are on a slow link, most package managers can be configured to use local media (CD-ROM) as the software repository. But if you happen to have a fast link, then package managers and online software repositories really shine in ease of use.

  • by 0racle ( 667029 ) on Sunday November 21, 2004 @06:11PM (#10883057)
    What you're referring to is a stable ABI, not API. A stable, locked-down binary interface.
    Yes, you're right, my mistake.

    strong bias toward requiring people and organizations to release their software in source form
    And this is the childishness of Linux. My Linux system has an nvidia TNT2 card because that's what I had around when I put the system together. Now I have 2 choices of drivers: nvidia's official one, or the barely-works nv in X. Now if I felt like being a childish zealot, the nv driver would be a no-brainer; however, I like to use what works best, and that's the nvidia binary driver. Anywhere else this is fine, but not with Linux, because Linus and others have decided that I shouldn't use the best choice for my card; I should either use an inferior solution (nv) or have bought another card, also an inferior solution (spending money I didn't have on an open card that doesn't exist). They seem to go out of their way to break every binary driver they can with every release, without even considering that the open source alternatives range from almost alright to completely useless. Linux can be a little hobby or an actual, useful OS product, and at the moment the kernel devs have gone with acting like children and developing Linux like a little hobby.
  • by Pandora's Vox ( 231969 ) on Sunday November 21, 2004 @06:14PM (#10883079) Homepage Journal
    i have the same webcam issue. i need the latest kernel to support other bits of hardware, most importantly my BIOS's ACPI implementation. so the webcam is a no-go.

    -Leigh
  • by omicronish ( 750174 ) on Sunday November 21, 2004 @06:38PM (#10883259)
    One potential problem I'm seeing is when thousands of hardware manufacturers want and insist on proprietary drivers for Linux. What happens if the driver interface changes? Will they spend the time and energy to port their drivers to the new interface? I agree that fixed interfaces result in legacy code, but you really need some sort of stable platform if companies are to develop Linux drivers while refusing to submit them to the kernel tree.
  • Re:Why upgrade... (Score:3, Insightful)

    by Allen Zadr ( 767458 ) * <Allen.Zadr@g m a i l . com> on Sunday November 21, 2004 @09:01PM (#10884138) Journal
    I'm fully aware that 2.4 to 2.6 _was_ a major update. However, 2.6.8 to 2.6.9 should not have been. I recently had to make this switch. It went pretty flawlessly, but it did require driver rebuilds...

    Under Fedora (on the other hand), the NTFS driver (fully open, and PART OF the kernel) is not a default-included module (Fedora is not alone in this distinction), so the module must be rebuilt (or you wait for a new RPM and download that). It's not the fault of 'Linux', per se, but the kernel developers could alleviate this problem with better structure versioning within the drivers: let the driver itself determine if the kernel is close enough.

    On my RHEL 3.0/Oracle 9i server, you are certainly right - RedHat does a great job back-porting all 'patches' into the same build-number code base as the original release. This server was also purchased with RedHat in mind, and I had the freedom at the time to make sure that everything would be fully supported by the default 2.4 RedHat Enterprise kernel.

    Finally, as a working manager, I'm happy when users can answer their own questions. On the other hand, I get a lot of technical respect from those who work with me, and the requisite questions that go with that. It's too bad you don't have managers deserving of respect where you work.
    In IT it's part of my job to know what is available, and how it works. I take that part of my job seriously.

  • by XO ( 250276 ) <blade,eric&gmail,com> on Sunday November 21, 2004 @10:05PM (#10884435) Homepage Journal
    Yes.

    A better implementation would allow binary drivers, without any of these issues.

    Many of the issues out there may BE the binary drivers... but if Joe User goes out and buys a piece of hardware for his computer, plugs it in, and can just install a little driver program to make it work, then Joe User is happy.

    I'm someone who has been using Unix since System III was common, who has been using computers for the last 25 years (I'm 28..), who has done kernel hacking, worked on major products, and even made quite a bit of income from my own projects back in the day. When I have to set aside an entire day just to get my USB webcam working in Linux (which I haven't done yet, because I'm too busy to spend an entire day fucking with my computer), and I --know-- it's not going to be as simple as "plug it in, install the driver, recompile"... why the fuck should it require a recompile, anyway? If I'm using common hardware and common software, there should be binary driver compatibility.

    To Greg's points:

    re: fixing bugs. Yes, you fix the API bugs, you fix the drivers you have control over, you bump the API version, and now drivers that can't work with that change refuse to load. I highly doubt that my Diamond Stealth 64 Windows '95 video driver will work if I try to load it into XP. It ain't gonna happen.

    re: building better apis. Yes, you fix the API, bump the version number, and now drivers that can't work with the new version refuse to load.

    re: CONFIG_SMP. There must be an API to deal with "core kernel structures". What the fuck is a driver doing with "core kernel structures"?

    re: GCC alignment issues- GCC is obviously not the best compiler to do this with. GCC is quite obviously not the best at anything, except being compatible across a bazillion operating systems. That's ALL GCC is good for.

    re: drivers outside the kernel tree give nothing back. Maybe they have nothing useful to give, either? WHY is it important that we know how everything the hardware manufacturer does works, if they are competent enough to make it work? YES, I agree that open source drivers shipped with the kernel are the BETTER option, but that doesn't mean that some people aren't going to want a different option.

    re: deleting functions; See also API versioning. I think I've repeated that a few times now.

    Why do you hate Linux? Why do you not want it to succeed?

    Or do you enjoy spending an entire day or longer just making some USB gadget work?

  • by PastaLover ( 704500 ) on Monday November 22, 2004 @06:21AM (#10886550) Journal
    Get your facts straight. linux is not supported by major hardware (and software) companies _because_ it only has 1 or 2% market share. And don't point to Macintoshes either. It took big industry a looong while to start supporting hardware on the Mac. But at some point they realised that Apple was capitalizing on hardware they could have been building. The same will probably one day be true for linux.
