Operating Systems Software Linux

Linux Kernel to Fork? 578

Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward on Sunday November 21, 2004 @11:01AM (#10880632)
    > Each version of the kernel requires applications to be
    > compiled specifically for it.

    FUD FUD FUD. No. No, no, no. NO! Who writes this generic shit? There's no truth behind the above statement, and it implies something that is not a problem.
    • by boaworm ( 180781 ) <> on Sunday November 21, 2004 @11:22AM (#10880734) Homepage Journal
      Perhaps he is referring to "applications" such as the "Nvidia Driver Software" for Linux? That has to be rebuilt/recompiled if you switch kernels, even when switching between 2.6.9-r1 and -r2, etc. (Gentoo!).

      Perhaps he is not talking about applications such as "Emacs" or "vim" ? (Or, he just finished his crackpipe :-)
      • The only part that needs to be recompiled is the kernel module, and it's not an application, it's a fucking kernel module!
      • That recompile of modules is just a safety check by the kernel - it doesn't want to load modules not compiled for it with the exact same compiler version, because otherwise the behaviour of the module can't be certain...

        So if you switch from gcc 3.3.1 to 3.3.1-r1 or something, you compile your new nvidia module with it, then you *also* need to recompile your kernel, otherwise the module won't load...

        Really, this is the same for every kernel module, so I don't know what the big deal is with that nvidia module bitc
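The check described above can be sketched as a toy model (the helper names below are my own, not real kernel code; the actual kernel embeds a "vermagic" string in each module, which you can inspect with `modinfo`, and refuses to load modules whose string doesn't match):

```python
# Toy model of the kernel's module "vermagic" check (simplified sketch).
# A real .ko carries a string like "2.6.9 gcc-3.3"; the kernel rejects
# any module whose string differs from its own.

def vermagic(kernel_release: str, gcc_version: str) -> str:
    """Build a simplified vermagic-style string (hypothetical format)."""
    return f"{kernel_release} gcc-{gcc_version}"

def can_load(running_kernel_magic: str, module_magic: str) -> bool:
    """A module loads only if its vermagic matches the running kernel's."""
    return running_kernel_magic == module_magic

kernel = vermagic("2.6.9-r2", "3.3.1")
old_module = vermagic("2.6.9-r1", "3.3.1")   # built for the previous kernel
rebuilt = vermagic("2.6.9-r2", "3.3.1")      # rebuilt after the upgrade

print(can_load(kernel, old_module))  # False: stale module is rejected
print(can_load(kernel, rebuilt))     # True: rebuilding makes it loadable
```

This is why upgrading either the kernel or the compiler forces a module rebuild, as the comment above describes.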

      • by RedWizzard ( 192002 ) on Sunday November 21, 2004 @06:02PM (#10882986)
        Perhaps he is refering to "Applications" such as the "Nvidia Driver Software" for Linux? That has to be rebuilt/recompiled if you switch kernels, even when switching between 2.6.9-r1 to -r2 etc (Gentoo!).
        No, the author is just clueless. Look at the first paragraph:
        In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft - a recent open source conference saw a leading Linux kernel developer predict that there could soon be two versions of the Linux kernel.
        He's obviously not aware that parallel development and stable branches are the norm for Linux, and indeed most open source software.
    • by DanTilkin ( 129681 ) on Sunday November 21, 2004 @11:26AM (#10880759)
      FUD [] generally implies deliberate disinformation. All I see here is a clueless reporter. To anyone who knows enough about Linux to make major decisions, the article's own sentence "Linus Torvalds will fork off Version 2.7 to accommodate the changes" should make it clear what's going on, and then the rest of the article makes much more sense.
      • by Doc Ruby ( 173196 ) on Sunday November 21, 2004 @11:35AM (#10880812) Homepage Journal
        We don't know why this reporter is spreading Fear, Uncertainty and Doubt. Maybe they were misinformed, and lack critical skills required to be a journalist. Maybe they were informed, but are looking for something sensational to get readers (it worked). Maybe they're trying to impress their mother somehow, without even realizing they're making up for a playground trauma from 1983. Who knows? Who cares? They're a FUDder - we're interested in the damage they cause, not the damage that was done to them. That's their problem, unless we propose a massive mental health makeover for the world's journalists. That would probably decimate the ranks of the industry, allowing them to get real jobs.
    • by elendril ( 15418 ) on Sunday November 21, 2004 @11:32AM (#10880800) Homepage
      You're right: each version of the kernel doesn't require applications to be compiled specifically for it.

      Yet, where I work, the applications have to be specifically recompiled for each of the three versions of the Linux distribution currently in use.

      While it may be mainly the in-house distribution designers' fault, it is a real mess, and a major reason many of the engineers stay away from Linux.
      • by Anonymous Coward
        That's why a bunch of guys are assembling a new project to embrace as many Linux distributions as possible, adding FreeBSD and Windows to the mix.

        Oops! The link:

        Please, have a look at it. Its perspective is smarter than it seems at first glance, and very promising as well.

      • The compatibility problem is due to glibc. glibc is the software developer's worst compatibility nightmare. Code compiled under one version won't work under another version, regardless of whether you use dynamic or static linking. This problem is so severe that even different minor versions of glibc don't work together. They are continually changing their symbol names. It's gotten so bad that we write our own versions of C library calls so we can have some minimal level of compatibility. By way of cont
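The symbol-versioning problem described above can be modeled as a short sketch (the data below is hypothetical; real glibc binds each reference to a versioned symbol such as `realpath@GLIBC_2.3` at link time, which you can inspect with `objdump -T`):

```python
# Toy model of ELF versioned-symbol resolution (simplified sketch,
# hypothetical symbol sets). A binary linked against a newer glibc
# records versioned references; an older glibc that only exports the
# symbol at an earlier version cannot satisfy them, so the binary
# fails to start even though the function "exists" by name.

old_glibc = {("realpath", "GLIBC_2.0")}
new_glibc = {("realpath", "GLIBC_2.0"), ("realpath", "GLIBC_2.3")}

def links_ok(binary_refs, libc_exports) -> bool:
    """Every versioned reference in the binary must exist in the libc."""
    return all(ref in libc_exports for ref in binary_refs)

binary = {("realpath", "GLIBC_2.3")}   # built on a newer system

print(links_ok(binary, new_glibc))  # True: runs where it was built
print(links_ok(binary, old_glibc))  # False: fails on the older glibc
```

This is the mechanism behind the "compiled under one version won't work under another" complaint: the mismatch is in the versioned symbol table, not the function names themselves.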
  • by Anonymous Coward on Sunday November 21, 2004 @11:01AM (#10880633)
    They just got too many weird patches, and had to put them somewhere.
    Business as usual.
  • Huh? (Score:2, Interesting)

    by laptop006 ( 37721 )
    Of course it will happen; whether it's now or later is a different matter. The problem this time is that several of the core kernel devs want to keep 2.6 under active feature development, and moving that work into 2.7 would mean the changes don't get nearly as much testing.

    But it will happen, and probably this year (or early next).
  • About time.... (Score:5, Insightful)

    by ThisNukes4u ( 752508 ) <`moc.liamg' `ta' `ippoct'> on Sunday November 21, 2004 @11:02AM (#10880640) Homepage
    I say it's about time to get a real development branch going. I'm sick of 2.6 being less than optimally stable; it's time for 2.7 to take the untested patches.
  • by MartinG ( 52587 ) on Sunday November 21, 2004 @11:03AM (#10880643) Homepage Journal
    The kernel will fork to a new 2.7 branch. This is exactly what happens every iteration of kernel development. This looks like a case of poor journalistic understanding of the usual linux process and/or fear inducing sensationalist headlines.

    Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.
    • "Paul Krill, Infoworld" seems to specialize in breathless, high-anxiety stories about rather ordinary events.

      InfoWorld []
      PC World []
  • Utter bunk (Score:5, Informative)

    by Rysc ( 136391 ) * <> on Sunday November 21, 2004 @11:04AM (#10880648) Homepage Journal
    The Linux kernel forks all the time. 2.5 was a fork of 2.4, made when big patches couldn't be merged otherwise. This is all terribly normal; the article was obviously written by an uninformed outsider. 2.6 will fork into 2.7, which most people won't use while big changes are made, and eventually 2.7 will become 2.8, and then for a while there will be one version. Until the next "fork," also known in Linux land as a "development version."
  • Why fork 2.6? (Score:3, Interesting)

    by demon_2k ( 586844 ) on Sunday November 21, 2004 @11:04AM (#10880652) Journal
    I'm not sure if I like the idea. Developers have lives; that's why development is moving at the pace it is. And I like the pace development is at. Forking another kernel tree will split the developers apart and slow down the development of the 2.6 kernel.

    What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking and another does video, and maybe one that checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak. One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.
    • Re:Why fork 2.6? (Score:3, Informative)

      You'd be surprised how modular the Linux kernel actually is. One time, a couple of years ago, there was some bug with the VM that was crashing my computer for some reason, so what I did was drag and drop the VM section from an old version into the new version, and it actually worked. All without having to even use a terminal (except for the compilation, of course), let alone a text editor.

      I think there are some assumptions going around that because linux is monolithic, it is also a mess of spaghett

    • Re:Why fork 2.6? (Score:3, Interesting)

      by Eil ( 82413 )

      Forking another kernel tree will split the developers apart and slow down the developement of the 2.6 kernel.

      Ideally, actual development should have been all over with at 2.6.0. Patchlevels would only fix bugs, not introduce new capabilities and thus unstable code.

      Too bad it doesn't work that way with Linux. :(

      What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support.

  • by Anonymous Coward on Sunday November 21, 2004 @11:05AM (#10880653)
    It strains credulity to call the 2.7 linux kernel a "fork" of linux. Every new development version of linux always starts out by forking the old stable kernel. This is how linux 1.3, 2.1, 2.3, and 2.5 all started. It is quite irresponsible for a journalist to proclaim all this doom and gloom over what is in fact a normal development fork in a proven development process.

    In fact, out of all the news articles out there about linux 2.7, it seems (not that this surprises me) that slashdot went out of its way to pick one laden with the most possible negative FUD and the least possible useful information about what really is news with 2.7. A much better writeup can be found at LWN []. In summary, the present situation is:

    • The -mm tree of Andrew Morton is now the Linux development kernel, and the 2.6 tree of Linus is now the stable kernel. This represents a role reversal from what people were expecting last year when Andrew Morton was named 2.6 maintainer.
    • Andrew Morton is managing the -mm tree very well. Unlike all the previous development kernels, the -mm tree is audited well enough that it is possible to remove patches that prove to have no benefit (and this does often happen). Bitkeeper is to some degree contributing to this flexibility, although not every kernel developer uses it.
    • The development process is going so smoothly that there may not need to be a 2.7 at all; for the first time in linux development history the developers are able to make useful improvements to linux while keeping it somewhat stable. If there is a 2.7 at all, it will be used for major experimental forks and there is no guarantee that the result will be adopted for 2.8.
    There is a story here, but you could easily be forgiven for missing it if you follow the link. The story is that linux development has changed, it is better than ever, and if (not when) 2.7 shows up, it's not gonna be the 2.7 that you're used to seeing.
  • Idiot. (Score:5, Insightful)

    by lostlogic ( 831646 ) on Sunday November 21, 2004 @11:05AM (#10880656) Homepage
    The writer of that article is an idiot. The Linux kernel forks after every major release in order to accommodate large patches. How did we get to where we are today? Linux 2.4 forked into 2.4 and 2.5 to allow the major scheduler and other changes to be made on a non-production branch. Then 2.5 became 2.6, which was the new stable branch. Currently there are four maintained stable branches that I am aware of (2.0, 2.2, 2.4, and 2.6); having a new unstable branch is just the same path that Linux has been following for years. That writer needs to STFU and get a brain.
  • i don't get it (Score:3, Insightful)

    by Anonymous Coward on Sunday November 21, 2004 @11:06AM (#10880660)
    I think that either the writers of this article, or myself are not getting something here.

    A couple of months ago there was a general upheaval over the fact that Torvalds et al. had decided not to fork a development tree off of 2.6.8, but rather do feature development in the main kernel tree. The message of the article (brushing aside the compiling-applications-for-each-kernel FUD) seems to be that they have made up their minds and will fork off an unstable kernel branch anyway.

    What am I missing?
  • by mindstrm ( 20013 ) on Sunday November 21, 2004 @11:06AM (#10880661)
    No details, no important names.. no nothing.

    There are plenty of forked kernel trees out there. Most continually merge in changes from Linus' tree, though.

    A fork doesn't matter. What matters is what it represents. If there is enough popularity that the Linux community ends up using incompatible forks, then yes, we have a problem... but forking in no way necessarily leads to this.

    As always, the available kernels in wide use will reflect what people actually want to use.

  • Is this the beginning of FreeLinux, OpenLinux and NetLinux?

    What about SCOLinux or MSLinux?

  • by Minwee ( 522556 ) <> on Sunday November 21, 2004 @11:14AM (#10880704) Homepage
    Oh no! If this sort of thing is allowed to happen, then before long we will start seeing separate kernel forks for people like Alan Cox, Andrea Arcangeli and Hans Reiser. It could even lead to every major Linux distribution applying their own patches to their own forked kernels.

    Then where would we be?

  • "Each version of the kernel requires applications to be compiled specifically for it. "

    I'm sorry but that's utter bullshite[sic]. I've never had to recompile applications because I upgraded the kernel...... have you?

  • Letter to Editor... (Score:5, Informative)

    by runswithd6s ( 65165 ) on Sunday November 21, 2004 @11:19AM (#10880722) Homepage
    Here's a copy of a letter I wrote to the online Magazine editor that this article was posted on.
    From: Chad Walstrom

    Subject: Comment: Is Linux about to fork?
    Date: Fri, 19 Nov 2004 19:43:15 -0600

    I'm writing to comment on the article "Is Linux about to fork?" written by Paul Krill, posted on the 18th of November, 2004. Paul really doesn't do his homework, does he? Nor does he understand the development process of the Linux kernel. Linux has only been around for ten years, with a well-documented idea behind the "fork" he is speaking about.

    Currently, the Linux kernel is at version 2.6.9, with 2.6.10 peeking around the corner. This is the STABLE kernel, the one receiving most of the attention over the last year or so. The kernel eventually always forks to a DEVELOPMENT branch, in this case the 2.7 branch. Is Linux about to fork? Yes! Does this have any correlation to the Unix idea of forking? No! covered the recent possible changes to the Linux Development Model in In general, forks are good things in the Free Software environment; it's part of life.

    For a straight FAQ Q&A style of answering the question: x-versioning

    Q: How Does Linux Kernel Versioning Work?

    A: At any given time, there are several "stable" versions of Linux, and one "development" version. Unlike most proprietary software, older stable versions continue to be supported for as long as there is interest, which is why multiple versions exist.

    Linux version numbers follow a longstanding tradition. Each version has three numbers, i.e., X.Y.Z. The "X" is only incremented when a really significant change happens, one that makes software written for one version no longer operate correctly on the other. This happens very rarely -- in Linux's history it has happened exactly once.

    The "Y" tells you which development "series" you are in. A stable kernel will always have an even number in this position, while a development kernel will always have an odd number.

    The "Z" specifies which exact version of the kernel you have, and it is incremented on every release.

    The current stable series is 2.4.x, and the current development series is 2.5.x. However, many people continue to run 2.2.x and even 2.0.x kernels, and they also continue to receive bugfixes. The development series is the code that the Linux developers are actively working on, which is always available for public viewing, testing, and even use, although production use is not recommended! This is part of the "open source development" method.

    Eventually, the 2.5.x development series will be "sprinkled with holy penguin pee" and become the 2.6.0 kernel and a new stable series will then be established, and a 2.7.x development series begun. Or, if any really major changes happen, it might become 3.0.0 instead, and a 3.1.x series begun.
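The even/odd numbering convention the FAQ above describes can be expressed as a short sketch (the function name is my own, not from any kernel tool):

```python
# Classify a Linux 2.x-era kernel version by the convention described
# above: even minor number ("Y" in X.Y.Z) = stable series, odd = development.

def kernel_series(version: str) -> str:
    """Return 'stable' or 'development' for an X.Y.Z version string."""
    major, minor, patch = (int(part) for part in version.split("."))
    return "stable" if minor % 2 == 0 else "development"

print(kernel_series("2.4.18"))  # stable
print(kernel_series("2.5.3"))   # development
print(kernel_series("2.6.9"))   # stable
```

Note that this scheme applies to the era discussed in the thread; it is exactly why the planned 2.7 branch is a development fork by definition, not a schism.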

    • by Gannoc ( 210256 )
      For a straight FAQ Q&A style of answering the question: u x-versioning


      I'm not making fun of you. What you said was completely accurate, but when you're dealing with clueless people, you need to speak simply and plainly. "Holy penguin pee"? C'mon.

      Quick example:

      To Whomever:

      Your most recent article regarding the upcoming linux fork may be confusing to your readers. The current version of Linux is 2.6. As new enhancements and bug fixes are
  • by IGnatius T Foobar ( 4328 ) on Sunday November 21, 2004 @11:23AM (#10880742) Homepage Journal
    Hold on, take this into consideration before you hit that "flamebait" button. I'm responsible for a large number of Linux systems at a hosting center, and this is our single biggest complaint:

    There needs to be a consistent driver API across each major version of the kernel.

    A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

    The current situation is completely ridiculous. Anything which requires talking to the kernel (mainly drivers, but there are other things) needs either driver source code (watch your Windows people laugh at you when you tell them that) or half a dozen different modules compiled for the most popular Linux distributions. These days, that usually means you're going to get a RHEL version, and possibly nothing else. What happens when you're competent enough to maintain Fedora or Debian, but you don't have driver binaries? (Yeah I know, White Box or Scientific, but that's not the point.)

    In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

    Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHV's to release binary-only drivers. So what? They're going to release binary-only drivers anyway. Others will simply avoid supporting Linux at all. LSB is going to make distributing userland software for Linux a lot easier, but until Linus grows up and stabilizes the driver API, anything which requires talking to the kernel is still stuck in the bad old days of 1980's-1990's. Come on people, it's 2004 and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.
    • by geg81 ( 816215 ) on Sunday November 21, 2004 @11:43AM (#10880866)
      A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

      That's deliberate...

      In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

      ... and that's the reason why. If it were easy to use binary drivers, more and more drivers would become binary. For making Linux distributions easier to manage, it would be nice if binary drivers were easier to manage and distribute for Linux. But the fact that that would make distribution of binary-only drivers easier is considered a disadvantage by many.

      Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system. You have chosen the second option, so deal with it.
    • by drinkypoo ( 153816 ) <> on Sunday November 21, 2004 @12:09PM (#10880984) Homepage Journal

      There needs to be a consistent driver API across each major version of the kernel.
      A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

      The first digit is the major version; aka 1.x, 2.x. The second digit is known as the minor version. From your examples, you appear to be asking for a consistent driver API across each minor version.

      HTH, HAND.

    • I think this doesn't get enough attention because most of the hardware Linux people use on a daily basis has drivers included with the kernel, and for the most part they work from version to version. So what if a vendor puts out binary-only drivers. It's a free market; go buy their competitor's hardware. That's the only way to get their attention.

      Hello, ${VENDOR}, I am writing to you today to let you know that I looked at your ${HARDWARE} and was very impressed. It's the best thing since sliced bread! Ho

    • by captaineo ( 87164 ) on Sunday November 21, 2004 @12:20PM (#10881037)
      The first thing that disappears when you don't get paid as a developer is backwards compatibility. It's the type of thing only paying users (vs casual users and developers) care about.

      I completely agree and wish the kernel API were kept more stable. Which is saying a lot, as the Linux kernel API is currently way more stable than glibc, GCC, and most user-space libraries. Virtually all of my Linux trouble-shooting time over the last few years has been caused by API versioning issues in glibc and/or GCC.
      • I think you may be missing the point of OSS. These things (breaks to backwards compatibility) aren't really as much of a problem on Linux as they would be on Windows because virtually all of the code in question is available in source form. You can always fix the problem by recompiling stuff. On Windows if an operating system API changes you have to wait for whoever made all of your software to fix and recompile it and then redownload/repurchase it. This is part of why Microsoft is unable to fix many longst

    • by Cyno ( 85911 )
      but until Linus grows up and stabilizes the driver API

      How did this get modded insightful? Are you saying you know more about designing a kernel than Linus? Most hardware either has GPL drivers embedded in the kernel which automatically get updated to new changes in the API, or no driver at all. For those binary-only models I don't see nVidia having any problems. Maybe the people making the binary-only drivers need to learn how to do their job. Ever think of that?

      Come on people, it's 2004 and it's n
    • There needs to be a consistent driver API across each major version of the kernel.

      Why? You tell a nice little story about running 4 different binary only drivers. You represent a very, very niche market.

      Of anyone I've ever read or heard of, you are absolutely unique. I've run and installed Linux on hundreds if not thousands of computers. The only binary only driver I've ever had to install is the nVidia one. Even that is merely just because I wanted better performance. I could easily use the m

  • Irresponsible (Score:3, Interesting)

    by Craig Maloney ( 1104 ) * on Sunday November 21, 2004 @11:35AM (#10880823) Homepage
    I'd imagine any article that talks about Linux Forking would have at the very least grabbed one or two quotes from Linus before going to print. Linus is only mentioned once in the article, and that is a passing reference as the owner of the Linux Kernel. And while Andrew Morton may have mentioned what was going on in the interview, the reporter made sure it didn't show up in the article. Irresponsible.
  • kernel panic (Score:3, Informative)

    by Doc Ruby ( 173196 ) on Sunday November 21, 2004 @11:39AM (#10880837) Homepage Journal
    The reporter says that some developers have made big changes, in different directions, to their copies of the kernel source that Linus won't accommodate in a single encompassing kernel - like desktop and server versions - so he'll have to fork it. Why forking the kernel is the solution, rather than just the magic "#ifdef DESKTOP_KERNEL_" that keeps all the manageability of a single kernel source version, is not addressed. Combined with the rest of the bad logic and information in the article, this is just journalistic kernel panic, and probably not a real issue for the kernel. At least the fork/divergent-execution scenarios are a valid issue for maintainers. But there are so many ways to manage source control that punting with a fork seems desperate, and unlikely.
  • News in disguise ... (Score:3, Interesting)

    by foobsr ( 693224 ) on Sunday November 21, 2004 @11:43AM (#10880865) Homepage Journal
    ... []

    erm ...

    "We all assume that the kernel is the kernel that is maintained by and that Linux won't fork the way UNIX did.. right? There's a great story at about the SuSe CTO taking issue with Red Hat backporting features of the 2.6 kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason, this work is not going to be supported by the open source community because it's not interesting anymore because everyone else is working on 2.6." My read on this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also gives a bit of background on SuSe's recent decision to GPL their setup tool YAST, which they hope other distros will adopt too."

  • by Maljin Jolt ( 746064 ) on Sunday November 21, 2004 @11:44AM (#10880870) Journal
    I have read the article three times but still it looks to me like a random collection of irrelevant sentences unrelated to each other. Maybe it would make more sense if Paul Krill himself was written in lisp, or drank less if he's a real biological entity. This article looks like a random google cache copy and paste made in php.
  • by discord5 ( 798235 ) on Sunday November 21, 2004 @12:00PM (#10880942)

    Well, this was fun to read. This article is about as educated about the subject as the average donkey.

    In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft

    Uhm, what gave MS the edge in the 80s was cheap i386 (well, actually 8088) hardware, and a relatively cheap OS (MS-DOS). Unix servers cost an arm and a leg in those days, and many companies/people wanted a pc as cheap as possible. Buying an i386 in those days meant running DOS, and the "marketplace" standardized around MS-DOS.

    Each version of the kernel requires applications to be compiled specifically for it.

    Utter bull. Upgrade kernels as much as you like; it won't break software, unless perhaps you change major/minor numbers. The same thing would happen on Windows if you started running stuff built for Win2k on Win95. But this is a matter of features in the kernel, not compilation against the kernel.

    So at some point, Linux founder Linus Torvalds will fork off version 2.7 to accommodate the changes, Morton said at the SDForum open source conference.

    And the big news is? This happens every couple of years, with stable versions having even minor version numbers and unstable versions having odd minor version numbers. This helps admins and users to effectively KNOW which versions are good for everyday use, and which versions are experimental and for developers.

    He cited clustering as a feature sought for Linux.

    Well, imagine a Beowulf cluster... How long have those patches existed? There's several ways to build a cluster as long as you patch your kernel.

    OSDL does not anticipate, for example, having to ever rewrite the kernel, which would take 15 years, Morton said.

    And why on earth would they want to do that? Linux is on the right track, so why bother with an entire rewrite of good functional code with good design.

    Open source has focused on software such as the operating system, kernels, runtime libraries, and word processors.

    It's also focussed on multimedia (xmms, mplayer, xine), webservers (apache), mailservers (sendmail, qmail, postfix)... I'd rather have people say that open source has focussed on internet servers than on the stuff it needs to make an OS run, plus word processors. This is like saying that an oven is primarily used for making toast, while actually it also bakes cake, pizza, and whatever you toss inside.

    I'm sorry, this kind of article belongs in the trashbin. Either the journalist doesn't know what he's writing about, or he's being paid to know nothing about the subject. One of the things that keeps surprising me in business IT journalism is the lack of knowledge these people have about the subjects they're writing about.

    • I'm only really replying to give your comment a bit more weight -- that writer is as dense as lead.

      I'm not sure he's ever actually followed kernel development before.

      For all those wanting to know whats going on without reading the linux-kernel mailing list, just run over to Kernel Traffic [] -- a summary of the week's happenings on the list.
  • FUD (Score:3, Insightful)

    by erroneus ( 253617 ) on Sunday November 21, 2004 @12:08PM (#10880976) Homepage
    Knowing the kernel-developing community as I do, the threat of forking would result in a movement to unite in some way. Even if a fork occurred, the population would essentially choose one and ignore the other, leaving it to die.

    The fact that patches exist, large or small, is what keeps the main kernel working. So for special implementations, patched kernels exist and everyone is cool with that. I have yet to see a patched kernel that isn't based on the main kernel, and I don't foresee a situation necessitating that it not be.

    I think we should look into the motivation of this article that cites no specific information or sources. It's pure speculation.
  • by 3seas ( 184403 ) on Sunday November 21, 2004 @12:13PM (#10880999) Homepage Journal
    ....But fail to realize that the spoon is like MS windows without competition... grows fat, bloated and contains the manifestation of the user frustration function so as to make people need to upgrade the spoon....and plug up holes in it...

    Forking is a better evolutionary process, as forking is only part of the process. The other part is the re-integration of the new and wonderful things resulting from forking.
  • by YetAnotherName ( 168064 ) on Sunday November 21, 2004 @12:17PM (#10881021) Homepage
    For the kernel itself to support fork(2), you'd have to have a meta-OS running the kernel, similar to a supervisor OS running as a user task in Mach [].

    But I can see things deteriorating rapidly: someone will want vfork for kernels, someone else will implement kernel-to-kernel pipes, someone else will make vfork obsolete, someone will complain about kernels not getting SIGCHLDs from their child kernels, etc.

    What? No, of course I didn't read the fsck'n article ... not even the summary!
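    [Ed.: for anyone who missed the pun above, it trades on the semantics of the real fork(2) call. A minimal sketch of what fork(2) and SIGCHLD actually do at the process level, using only standard POSIX calls; the exit code 42 is an arbitrary illustration:]

    ```c
    #include <stdio.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/wait.h>

    static volatile sig_atomic_t got_sigchld = 0;

    /* The parent is notified asynchronously when a child terminates. */
    static void on_sigchld(int sig) {
        (void)sig;
        got_sigchld = 1;
    }

    int main(void) {
        signal(SIGCHLD, on_sigchld);

        pid_t pid = fork();          /* duplicate the calling process */
        if (pid == 0) {
            /* child: runs independently of the parent */
            _exit(42);
        }

        int status = 0;
        waitpid(pid, &status, 0);    /* reap the child, collecting its status */

        printf("child exited with %d, SIGCHLD seen: %d\n",
               WEXITSTATUS(status), (int)got_sigchld);
        return 0;
    }
    ```
    
    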
  • by Morosoph ( 693565 ) on Sunday November 21, 2004 @12:25PM (#10881069) Homepage Journal
    Journalists tend to be ignorant, so a little education can come in useful. Here's my letter to the editor:

    Re: Is Linux about to fork?

    Dear Kieren McCarthy,

    I cannot believe this article: ID=2648&Page=1&pagePos=2 []

    The Linux kernel has historically alternated between stable
    (even-numbered) sets: 2.0, 2.2, 2.4, 2.6, and odd-numbered development
    sets. For this to be cast as a major disaster now that the next
    development kernel is expected to be starting up is extremely odd. If
    this is forking, it is forking only in the most pedantic sense, and yet
    Paul Krill's article paints this as a major problem. This betrays a
    simple lack of understanding of the Linux development process. The
    article is therefore more confusing than informative.

    Yours sincerely,
  • by borgheron ( 172546 ) on Sunday November 21, 2004 @12:27PM (#10881079) Homepage Journal
    Apparently the author is entirely unfamiliar with the concept of creating a branch to isolate disruptive changes to a system.

    Odd-numbered kernels are never declared stable; only even-numbered ones are. The scheme in Linux development is odd = unstable/development, even = stable.

    I won't be surprised to see something from OSDL calling this article a piece of crap by tomorrow.

  • by Theovon ( 109752 ) on Sunday November 21, 2004 @01:07PM (#10881273)
    Forking the kernel is a normal part of the development process. It's happened numerous times before, and it's the usual way of making significant leaps in functionality.

    These guys are making it out like some major new thing is going to happen that will change everything. Did everyone suddenly forget how 2.4 forked to 2.5, which became 2.6? Give me a break.
  • by Jameth ( 664111 ) on Sunday November 21, 2004 @03:05PM (#10881929)
    I've seen a lot of the flaws in the article pointed out, but I'd like to note this too:

    "Top contributors to the Linux kernel have been Red Hat and SuSE, he said. Also contributing have been IBM, SGI, HP, and Intel."

    Usually, when talking about the Kernel, it's valid to at least note some individuals, such as, say, Linus.
  • Kernel Fork (Score:5, Informative)

    by loconet ( 415875 ) on Sunday November 21, 2004 @03:34PM (#10882085) Homepage
    I notice a number of posts indicating that this is just uninformed journalism, but is it? Or is the author just blowing a different, related issue out of proportion?

    In the Linux Kernel Development Summit back in July, the core developers announced they weren't creating a 2.7 development kernel any time soon (discussed here [] and here []).

    Developers liked the way things were going with the BitKeeper workflow Linus was using, and at the time they didn't see the need to fork a 2.7.

    Traditionally, before BitKeeper, kernel maintainers would send Linus 10-20 patches at once, then wait for him to release a snapshot to determine whether or not each patch had made it in. If not, they would try again. During the 2.5 development cycle, problems with dropped patches came to a head, and that is when Linus decided to try BitKeeper.

    According to kernel maintainer Greg Kroah-Hartman, BitKeeper has increased the amount of development and improved efficiency. Between 2.5 and 2.6, they were merging 1.66 changes per hour for 680 days; from 2.6.0 to 2.6.7 they were at 2.2 patches per hour, thanks to the ability to test patches more widely before they went into the tree. The new process is: 1) Linus releases a 2.6 kernel. 2) Maintainers flood Linus with patches that have been proven in the -mm tree. 3) After a few weeks, Linus releases an -rc kernel. 4) Everyone recovers from the load of changes and starts to fix any bugs found in the -rc kernel. 5) A few weeks later, the next 2.6 kernel is released and the cycle starts again.

    Because this new process has proved to be pretty efficient and is keeping maintainers happy, it was predicted that no new 2.7 kernel would be forked any time soon, unless a set of changes appeared big and intrusive enough that a 2.7 fork was needed. If that happens, Linus will apply the experimental patches to the new 2.7 tree, then continue to pull all of the ongoing 2.6 changes into the 2.7 kernel as it stabilizes. If it turns out that the 2.7 kernel is taking an incorrect direction, 2.7 will be deleted and everyone will continue on 2.6. If 2.7 becomes stable, it will be merged back into 2.6 or declared 2.8.

    In conclusion, there was no plan for a 2.7 any time soon, thanks to maintainers working well under the current setup, but this was not carved in stone. It may just be that changes big enough to call for a fork have now appeared.
  • by neurophys ( 13737 ) on Sunday November 21, 2004 @04:13PM (#10882302)
    PJ at [] has a comment on the forking. It seems the fork rumor is due to a misunderstanding: the forking in the talk was about 2.7 being a fork off of 2.6.

  • by Master of Transhuman ( 597628 ) on Sunday November 21, 2004 @04:53PM (#10882525) Homepage
    that the use of the pronoun "he" in the article in reference to Kim Polese's remarks was wrong - Kim is (very) female.

  • by compwiz ( 21231 ) on Sunday November 21, 2004 @09:11PM (#10884180)
    Groklaw [] clears this mess up. Turns out someone doesn't understand the word "fork."
