
Linux Kernel to Fork? 578

Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision? "
This discussion has been archived. No new comments can be posted.

Linux Kernel to Fork?

Comments Filter:
  • Utter bunk (Score:5, Informative)

    by Rysc ( 136391 ) * <sorpigal@gmail.com> on Sunday November 21, 2004 @11:04AM (#10880648) Homepage Journal
    The Linux kernel forks all the time. 2.5 was a fork of 2.4 when big patches couldn't be merged otherwise. This is all terribly normal; the article was obviously written by an uninformed outsider. 2.6 will fork into 2.7, which most people won't use while big changes are made, and eventually 2.7 will become 2.8, and then for a while there will be one version. Until the next "fork," also known in Linux land as a "development version."
  • by rwebb ( 732790 ) on Sunday November 21, 2004 @11:16AM (#10880709)
    "Paul Krill, Infoworld" seems to specialize in breathless, high-anxiety stories about rather ordinary events.

    InfoWorld [infoworld.com]
    PC World [idg.com.au]
  • Letter to Editor... (Score:5, Informative)

    by runswithd6s ( 65165 ) on Sunday November 21, 2004 @11:19AM (#10880722) Homepage
    Here's a copy of a letter I wrote to the editor of the online magazine where this article was posted.
    From: Chad Walstrom

    Subject: Comment: Is Linux about to fork?
    Date: Fri, 19 Nov 2004 19:43:15 -0600
    To: kierenm@techworld.com

    I'm writing to comment on the article "Is Linux about to fork?" written by Paul Krill, posted on the 18th of November, 2004. Paul really doesn't do his homework, does he? Nor does he understand the development process of the Linux kernel. Linux has ONLY been around for ten years, and the idea behind the "fork" he is speaking about is well documented.

    Currently, the Linux kernel is at version 2.6.9, with 2.6.10 peeking around the corner. This is the STABLE kernel, the one receiving most of the attention over the last year or so. The kernel eventually always forks to a DEVELOPMENT branch, in this case the 2.7 branch. Is Linux about to fork? Yes! Does this have any correlation to the Unix idea of forking? No!

    KernelTrap.org covered the recent possible changes to the Linux Development Model in http://kerneltrap.org/node/view/3513. In general, forks are good things in the Free Software environment; it's part of life.

    For a straight FAQ Q&A style of answering the question: http://www.tldp.org/FAQ/Linux-FAQ/kernel.html#linux-versioning

    Q: How Does Linux Kernel Versioning Work?

    A: At any given time, there are several "stable" versions of Linux, and one "development" version. Unlike most proprietary software, older stable versions continue to be supported for as long as there is interest, which is why multiple versions exist.

    Linux version numbers follow a longstanding tradition. Each version has three numbers, i.e., X.Y.Z. The "X" is only incremented when a really significant change happens, one that makes software written for one version no longer operate correctly on the other. This happens very rarely -- in Linux's history it has happened exactly once.

    The "Y" tells you which development "series" you are in. A stable kernel will always have an even number in this position, while a development kernel will always have an odd number.

    The "Z" specifies which exact version of the kernel you have, and it is incremented on every release.

    The current stable series is 2.4.x, and the current development series is 2.5.x. However, many people continue to run 2.2.x and even 2.0.x kernels, and they also continue to receive bugfixes. The development series is the code that the Linux developers are actively working on, which is always available for public viewing, testing, and even use, although production use is not recommended! This is part of the "open source development" method.

    Eventually, the 2.5.x development series will be "sprinkled with holy penguin pee" and become the 2.6.0 kernel and a new stable series will then be established, and a 2.7.x development series begun. Or, if any really major changes happen, it might become 3.0.0 instead, and a 3.1.x series begun.
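
    To make the even/odd rule concrete, here is a minimal C sketch (not from the FAQ itself) that classifies a version string as the FAQ describes; the "2.6.9" input is just an example, not something read from a running system.

    /* Minimal sketch: classify a kernel version string as stable or
     * development using the even/odd minor-number rule. */
    #include <stdio.h>

    int main(void)
    {
        const char *version = "2.6.9"; /* example input, e.g. from uname -r */
        int major, minor, patch;

        if (sscanf(version, "%d.%d.%d", &major, &minor, &patch) != 3) {
            fprintf(stderr, "unrecognised version format\n");
            return 1;
        }

        printf("%d.%d.%d is a %s kernel\n", major, minor, patch,
               (minor % 2 == 0) ? "stable" : "development");
        return 0;
    }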

  • kernel panic (Score:3, Informative)

    by Doc Ruby ( 173196 ) on Sunday November 21, 2004 @11:39AM (#10880837) Homepage Journal
    The reporter says that some developers have made big changes, in different directions, to their copies of the kernel source that Linus won't accommodate in a single encompassing kernel. Like desktop and server versions. So he'll have to fork it. Why forking the kernel is the solution, rather than just the magic "#ifdef DESKTOP_KERNEL_" that keeps all the manageability of a single kernel source, is not addressed. Combined with the rest of the bad logic and information reported in the article, this is just journalistic kernel panic, and probably not a real issue for the kernel. At least the fork/divergent-execution scenarios are a valid issue for maintainers. But there are so many ways to manage source control that punting with a fork seems desperate, and unlikely.
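
    For illustration, here is roughly what that "#ifdef" approach would look like in plain C. DESKTOP_KERNEL and the latency numbers are invented for the example, not real kernel options; in the real tree such choices are steered through CONFIG_* switches at build time.

    /* Toy example: one source tree, two build flavours selected at
     * compile time. DESKTOP_KERNEL is a made-up macro, not a real
     * kernel config option. */
    #include <stdio.h>

    #ifdef DESKTOP_KERNEL
    #define SCHED_LATENCY_MS 5   /* favour interactive response */
    #else
    #define SCHED_LATENCY_MS 20  /* favour raw throughput */
    #endif

    int main(void)
    {
        printf("scheduler latency target: %d ms\n", SCHED_LATENCY_MS);
        return 0;
    }

    Build with "cc -DDESKTOP_KERNEL flavour.c" for the desktop flavour, or without the define for the server flavour.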
  • Not at all like Unix (Score:1, Informative)

    by Anonymous Coward on Sunday November 21, 2004 @11:42AM (#10880852)
    "In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft."

    As long as everything stays open source, this won't be a problem. When you get an application now, it is likely a binary built for a particular distro. I don't see that changing. You will still run urpmi or apt-get and, for most people, things won't change. Really, how would forking create a different situation than we currently have with Gnome/KDE?

    Unix in the 80's was not open source and that makes all the difference.
  • by geg81 ( 816215 ) on Sunday November 21, 2004 @11:48AM (#10880881)
    I imagine a future where I can download a copy of Linux and it would install on my system without any configuration and every option would be through an option menu, like our Slashdot prefs. If this could be a reality today, I would drop XP in a heartbeat.

    Install SuSE, RedHat, or Ubuntu: they are easier to install than Windows XP and come with tons of applications. They even come with excellent printed documentation in case you do need to look something up.

    Even easier, buy a PC with Linux pre-installed: you just plug it in and it works.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday November 21, 2004 @12:09PM (#10880984) Homepage Journal

    There needs to be a consistent driver API across each major version of the kernel.
    A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

    The first digit is the major version; aka 1.x, 2.x. The second digit is known as the minor version. From your examples, you appear to be asking for a consistent driver API across each minor version.

    HTH, HAND.

  • by WoodstockJeff ( 568111 ) on Sunday November 21, 2004 @12:10PM (#10880987) Homepage
    I can download a binary onto Windows, and it just works.

    Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".

    Linux programs are more proactive about checking library versions. But you can install multiple versions, because the shared libraries usually have different names. Not so under Windows, and Windows will only load the first version of a named DLL it finds, and hang onto it until you reboot. If that version fits your program, life is good; if not, well...
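
    As a rough sketch of what that looks like from a program's point of view, here is how a Linux application can ask the dynamic loader for one specific, versioned library by name. "libfoo.so.1.2" is an invented soname for the example, and the program assumes a POSIX system with libdl (build with "cc demo.c -ldl").

    /* Sketch: load an explicitly versioned shared library, so two
     * programs on the same box can each use the version they were
     * built against. libfoo.so.1.2 is a made-up name. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libfoo.so.1.2", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        /* ... resolve entry points with dlsym(handle, "foo_init") ... */
        dlclose(handle);
        return 0;
    }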

  • by atavus ( 256938 ) on Sunday November 21, 2004 @12:10PM (#10880988)
    Or, you can USE A PACKAGE MANAGER!!! This is MY biggest gripe about idiots that think they are "Power Users". Use a package manager and install the programs. I realize not every program is packaged for your particular variant, but a large percentage of libraries ARE. So USE A PACKAGE MANAGER!

    And, BTW, have you ever noticed that little pesky message on your beloved Windows, "unable to locate foo.dll"? Same thing. Except for when program bar needs foo version 1.2 and program baz needs foo version 1.4. Then you're fucked because, without renaming the library (aka: access to the source of the application), you can't have multiple versions of a library installed. On Linux, however, you'll link against libfoo.so-1.2 and libfoo.so-1.4 and your PACKAGE MANAGER will make symlinks to the correct lib.

    Yes, I realize it can be annoying. However, you know that jpeg problem in Windows? And the corresponding png problem in libpng? Well, in Linux it's a matter of putting in a fixed libpng. In Windows it's a matter of recompiling all of those static applications you love so much, then downloading and reinstalling each and every one of them.

    Then again, you're an idiot replying to an idiot article. Does that make me an idiot for replying to you? Most likely.
  • by Anonymous Coward on Sunday November 21, 2004 @12:12PM (#10880996)
    Use a distribution with a decent package management system, such as Debian (or Ubuntu), Red Hat, SuSE, Mandrake, Xandros, etc...

    Have you ever tried compiling software on Windows without experiencing dependency hell?
  • by leonmergen ( 807379 ) * <lmergen@gmaEEEil.com minus threevowels> on Sunday November 21, 2004 @12:18PM (#10881022) Homepage
    That recompile of modules is just a safety measure from the kernel - it doesn't want to load modules not compiled for it with the exact same compiler version. Otherwise the behaviour of the module can't be certain...

    So if you switch from gcc 3.3.1 to 3.3.1-r1 or something, you compile your new nvidia module with it, then you *also* need to recompile your kernel, otherwise the module won't load...

    Really, this is the same for every kernel module, so I don't know what the big deal is with that nvidia module bitching all the time...
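
    As a toy illustration of that idea (the real check lives in the kernel's module loader, which compares a "vermagic" string recording the kernel version and compiler; the strings below are invented):

    /* Toy version of the module-load check: refuse a module whose
     * recorded build environment doesn't match the running kernel.
     * The real kernel does this with its "vermagic" string. */
    #include <stdio.h>
    #include <string.h>

    static const char *running_kernel = "2.6.9 gcc-3.3.1";

    static int try_load_module(const char *module_vermagic)
    {
        if (strcmp(module_vermagic, running_kernel) != 0) {
            fprintf(stderr, "refusing to load: built for \"%s\", running \"%s\"\n",
                    module_vermagic, running_kernel);
            return -1;
        }
        printf("module loaded\n");
        return 0;
    }

    int main(void)
    {
        try_load_module("2.6.9 gcc-3.3.1");    /* matches: loads */
        try_load_module("2.6.9 gcc-3.3.1-r1"); /* compiler differs: refused */
        return 0;
    }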

  • by captaineo ( 87164 ) on Sunday November 21, 2004 @12:20PM (#10881037)
    The first thing that disappears when you don't get paid as a developer is backwards compatibility. It's the type of thing only paying users (vs casual users and developers) care about.

    I completely agree and wish the kernel API were kept more stable. Which is saying a lot, as the Linux kernel API is currently way more stable than glibc, GCC, and most user-space libraries. Virtually all of my Linux trouble-shooting time over the last few years has been caused by API versioning issues in glibc and/or GCC.
  • by Morosoph ( 693565 ) on Sunday November 21, 2004 @12:25PM (#10881069) Homepage Journal
    Journalists tend to be ignorant, so a little education can come in useful. Here's my letter to the editor:

    Re: Is Linux about to fork?

    Dear Kieren McCarthy,

    I cannot believe this article:
    http://www.techworld.com/opsys/news/index.cfm?NewsID=2648&Page=1&pagePos=2 [techworld.com]

    The Linux kernel has historically alternated between stable (even-numbered) sets: 2.0, 2.2, 2.4, 2.6, and odd-numbered development sets. For this to be cast as a major disaster now that the next development kernel is expected to be starting up is extremely odd. If this is forking, it is forking only in the most pedantic sense, and yet Paul Krill's article paints this as a major problem. This portrays a simple lack of understanding of the Linux development process. The article is therefore more confusing than informative.

    Yours sincerely,
  • by daniil ( 775990 ) * <evilbj8rn@hotmail.com> on Sunday November 21, 2004 @12:33PM (#10881115) Journal
    In fact, I have just looked up the article, and it pretty much says just that. No 2.7 development branch plans!

    If you look a level deeper -- ie read the article linked in the /. blurb -- you'll find that what they said was "2.7 will only be created when it becomes clear that there are sufficient patches which are truly disruptive enough to require it." Must be that this critical mass of patches is about to be reached.

  • by Etyenne ( 4915 ) on Sunday November 21, 2004 @12:35PM (#10881121)
    Oh, man. You hit a button... I F-ING HATE PACKAGE MANAGERS UNDER LINUX. I try and update program X. Oh oh, dependency. Chase that down. That creates two more dependencies. Chase those down. Soon it cascades into a total nightmare. And then I frankly give up, just download the source, recompile it in parallel to the package, and delete the package.

    You don't understand. A package manager is a piece of software that resolves dependencies, downloads packages (from the Internet or local media) and installs them for you. That is why they are called package managers. Using these, you never have to "chase down" packages; it's all automated. There are many of them: apt, yum, up2date, urpmi, emerge, etc.

    Please get current instead of making a fool of yourself on the Web; this problem was solved a few years ago. Your favorite distro probably uses one, and you don't even know. Which one is it, anyway, so I can give you the executive summary on its usage?

  • Re:Why fork 2.6? (Score:3, Informative)

    by donscarletti ( 569232 ) on Sunday November 21, 2004 @12:58PM (#10881224)
    You'd be surprised how modular the Linux kernel actually is. One time, a couple of years ago, there was some bug with the VM that was crashing my computer for some reason, so what I did was drag and drop the VM section from an old version into the new version, and it actually worked. All without having to even use a terminal (except for the compilation, of course), let alone a text editor.

    I think there are some assumptions going around that because Linux is monolithic, it is also a mess of spaghetti code with no real structure. This is simply not true. My guess at why Linux undergoes a complete branching process rather than a subsystem split-up is that it is simply easier and safer. Packaging everything in modules would require plenty of cross-version testing before anything is considered stable. Why bother? Who wants just a single chunk of new technology when you can have the whole thing?

    Plus, a fully split-up design would make it far harder to make fast structural changes without huge amounts of negotiating between teams. As it stands, Torvalds can just do what he wants and everything goes right.

  • by Etyenne ( 4915 ) on Sunday November 21, 2004 @01:05PM (#10881258)
    Software distributed on Windows is either compiled statically or ships with its own kit of .dlls. That's necessary because Windows does not have *any* dependency resolution facility and, up until recently, had no library versioning mechanism either. There are drawbacks to shipping "ready-to-run" software: wasted disk space (static executables are larger, and duplicate .dlls take up room), having libraries spread all over your disk instead of in a central location, potential library overwriting (aka "DLL Hell"), and no way to update libraries centrally (witness the recent security hole in GDI+; you had to run a utility that scanned your entire disk in search of vulnerable applications). The Windows way *does* work, but it is certainly not the optimal way. People have gotten used to the drawbacks, thus they don't complain.

    Just for fun, search for files named "mfc42.dll" on your disk (or any other common Windows DLL; I'm not very up-to-date on these). How many are there? Are all of them up-to-date? Do any of them have security issues (known buffer overflows, for example)? How much disk space do they use collectively?

    You could distribute applications the same way on Linux, but people don't, because it would break the architecture of having your libraries centrally stored and managed. The Linux approach to library management is much superior, but it has the drawback of requiring that you use a dependency-aware package manager correctly. Apparently, you don't.
  • by Theovon ( 109752 ) on Sunday November 21, 2004 @01:07PM (#10881273)
    Forking the kernel is a normal part of the development process. It's happened numerous times before, and it's the usual way of making significant leaps in functionality.

    These guys are making it out like some majorly new thing's going to happen that's going to change everything. Did everyone suddenly forget about how 2.4 forked to 2.5 which became 2.6? Give me a break.
  • by Kent Recal ( 714863 ) on Sunday November 21, 2004 @01:38PM (#10881454)
    Obviously not.
    For his enlightenment: apt (the debian package manager) does all the "dependency-chasing" for you. You say "apt-get install kde" and it happens.
  • by caveat ( 26803 ) on Sunday November 21, 2004 @02:19PM (#10881629)
    ...here's what it told me when I installed NetHack on OS X:
    The following package will be installed or updated:

    nethack
    The following 47 additional packages will be installed:
    audiofile audiofile-bin audiofile-shlibs bzip2-shlibs cctools-extra docbook-dsssl-nwalsh docbook-dtd docbook-xsl esound fink-mirrors fink-prebinding gdbm3 gdbm3-shlibs gettext gettext-bin gettext-dev giflib gmp gmp-shlibs gnome-libs-dev gnome-libs-shlibs gtk+ gtk+-data gtk+-shlibs gtk-doc imlib imlib-shlibs libiconv libiconv-bin libiconv-dev libxml2 libxml2-bin libxml2-shlibs libxslt libxslt-shlibs ncurses-shlibs netpbm netpbm-shlibs openjade opensp3 opensp3-shlibs orbit orbit-dev orbit-shlibs sgml-entities-iso8879 xfree86 xfree86-shlibs
    Do you want to continue? [Y/n]
    FORTY-SEVEN DEPENDENCIES. It ended up taking about an hour to finish, but I don't even want to think about how hellish it would have been to do by hand.
  • by lazy_playboy ( 236084 ) on Sunday November 21, 2004 @02:32PM (#10881698)
    $ apt-get install foobar

    There. Was that so difficult?
  • by Kent Recal ( 714863 ) on Sunday November 21, 2004 @02:44PM (#10881786)
    Dear Anonymous Coward,

    apt will not put you into 'dependency hell' unless at least one of the following preconditions is met:

    1) You are running debian/unstable
    2) You are overriding warnings (using the --force switch)
    3) You are doing something stupid, as root

    sincerely,
    the truth
  • by automatix ( 664568 ) on Sunday November 21, 2004 @03:18PM (#10881989) Homepage
    > waste diskspace and memory by dumping yet another copy of bozo.dll
    >> That's basically the Linux solution (short of recompiling everything).

    If you've read the other comments you'll know this is NOT the way Linux manages it! Linux distributions have binary packages that contain ONLY the application's code, and they depend on other library packages, which are shipped and updated separately. There is usually only one copy of a library package, shared by all the apps on the system.

  • Kernel Fork (Score:5, Informative)

    by loconet ( 415875 ) on Sunday November 21, 2004 @03:34PM (#10882085) Homepage
    I notice a number of posts indicating that this is just pure uninformed journalism, but is it? Or is he actually just blowing a different but related issue out of proportion?

    In the Linux Kernel Development Summit back in July, the core developers announced they weren't creating a 2.7 development kernel any time soon (discussed here [linuxjournal.com] and here [slashdot.org]).

    Developers liked the way things were going with the new BitKeeper in use by Linus and at the time, they didn't see the need to fork a 2.7.

    Traditionally, before BitKeeper, kernel maintainers would send Linus 10-20 patches at once, then wait for him to release a snapshot to determine whether or not the patches made it in. If not, they would try again. During the 2.5 development cycle, problems started over dropped patches, and that is when Linus decided to try BitKeeper.

    According to kernel maintainer Greg Kroah-Hartman, BitKeeper has increased the amount of development and improved efficiency. From 2.5 to 2.6, they were doing 1.66 changes per hour for 680 days; from 2.6.0 to 2.6.7, they were at 2.2 patches per hour, thanks to a wider range of testing of the patches that went into the tree. The new process is: 1) Linus releases a 2.6 kernel. 2) Maintainers flood Linus with patches that have been proven in the -mm tree. 3) After a few weeks, Linus releases an -rc kernel. 4) Everyone recovers from the load of changes and starts to fix any bugs found in the -rc kernel. 5) A few weeks later, the next 2.6 kernel is released and the cycle starts again.

    Because this new process has proved to be pretty efficient and is keeping maintainers happy, it was predicted that no new 2.7 kernel would be forked any time soon unless a set of changes appeared that was big and intrusive enough to require a 2.7 fork. If that happens, Linus will apply the experimental patches to the new 2.7 tree, then continue to pull all of the ongoing 2.6 changes into the 2.7 kernel as the version stabilizes. If it turns out that the 2.7 kernel is taking an incorrect direction, the 2.7 tree will be deleted and everyone will continue on 2.6. If 2.7 becomes stable, it will be merged back into 2.6 or be declared 2.8.

    In conclusion, there was no plan for a 2.7 any time soon, thanks to maintainers working well in the current setup, but this was not carved in stone. It might just be that big enough changes are calling for a fork.
  • Libraries? (Score:1, Informative)

    by Anonymous Coward on Sunday November 21, 2004 @03:47PM (#10882160)
    "This is, for example, why amaroK depends on such things as taglib - because there was no point in writing their own tag editing library when there was one already written they could use."

    I was not aware that Linux had this install mechanism. It makes sense though; it's kinda like CVS compiling one time where I had to get some code from this other project's tree since they used code from it. (The first and currently the last compile I did ;) )

    So these libraries are like DLLs in that they are required to make the program run? What was this recompiling stuff I heard?
  • by neurophys ( 13737 ) on Sunday November 21, 2004 @04:13PM (#10882302)
    PJ at http://groklaw.net/ [groklaw.net] has a comment on the forking. Seems like the fork rumor is due to a misunderstanding: the forking in the talk was about 2.7 being a fork off of 2.6.

  • by Kevinb ( 138146 ) on Sunday November 21, 2004 @04:23PM (#10882349) Homepage
    I hate it when people spew information about Windows that hasn't been true in over 6 years and get modded up as +5 Informative.

    Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".

    This hasn't been true since Windows 98. See Windows File Protection [microsoft.com] on MSDN.

    Linux programs are more proactive about checking library versions. But, you can install multiple versions, because the shared libraries usually have different names. Not so under Windows, and windows will only load the first version of a named DLL it finds, and hang onto it until you reboot. If that version fits your program, life is good; if not, well...

    This hasn't been true since Windows 3.1.

    1. Each process under Win32 has its own address space and thus can load its own version of a DLL.
    2. Each program has control over which version of a DLL it loads; see LoadLibraryEx [microsoft.com] for details.
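
    As a rough sketch of point 2, assuming a Win32 build environment (the DLL path below is only an example):

    /* Sketch: load a DLL from an explicit path instead of relying on
     * the default search order, so the program controls which copy
     * it gets. The path below is made up for illustration. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HMODULE lib = LoadLibraryExA("C:\\myapp\\mfc42.dll", NULL,
                                     LOAD_WITH_ALTERED_SEARCH_PATH);
        if (lib == NULL) {
            printf("LoadLibraryEx failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... resolve entry points with GetProcAddress(lib, "...") ... */
        FreeLibrary(lib);
        return 0;
    }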
  • by Master of Transhuman ( 597628 ) on Sunday November 21, 2004 @04:53PM (#10882525) Homepage
    that the use of the pronoun "he" in the article in reference to Kim Polese's remarks was wrong - Kim is (very) female.

  • by Mornelithe ( 83633 ) on Sunday November 21, 2004 @05:41PM (#10882843)
    I worked at a company where we used a certain web-based collaboration suite. Part of my job was to fix existing bugs in said suite some of which were caused by our particular setup. For our particular version of the suite, this was their supported setup:

    Weblogic 7
    JDK 1.3.x

    Now, our Weblogic people, whom we had no control over, decided to upgrade to Weblogic 8. Weblogic 8 requires JDK 1.4. It won't work with 1.3, or at least, I couldn't get it to work in my hours of trying (Note, 1.3 to 1.4 is just a "minor" version change, not a major revision like 1.x to 2.x (which, I might add, usually signals a significant break in API compatibility, so you probably should have known), although with Sun's naming conventions, you never really know).

    Now, when you use our particular version of the collaboration software with JDK 1.4 and Weblogic 8, it breaks various parts of the application. The solution? "We don't know what the problem is but it's fixed in our upgrade release." Now, we could just upgrade our suite of collaboration software, except it will break tons of stuff that we built on top of it, I assure you.

    I had to dig through their source and track down what was broken and fix it myself. Keep in mind, this suite is the best in its realm, so with anyone else, you'd probably be worse off.

    So don't tell me that Open Source has problems with version compatibility that commercial software just doesn't have. These systems are all produced by big corporations, and they just flat don't work between revisions. It's a struggle whatever you use.
  • by RedWizzard ( 192002 ) on Sunday November 21, 2004 @06:02PM (#10882986)
    Perhaps he is referring to "Applications" such as the "Nvidia Driver Software" for Linux? That has to be rebuilt/recompiled if you switch kernels, even when switching from 2.6.9-r1 to -r2, etc. (Gentoo!).
    No, the author is just clueless. Look at the first paragraph:
    In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft - a recent open source conference saw a leading Linux kernel developer predict that there could soon be two versions of the Linux kernel.
    He's obviously not aware that parallel development and stable branches are the norm for Linux, and indeed most open source software.
  • by branchingfactor ( 650391 ) on Sunday November 21, 2004 @06:55PM (#10883367)
    The compatibility problem is due to glibc. glibc is the software developer's worst compatibility nightmare. Code compiled under one version won't work under another version, regardless of whether you use dynamic or static linking. This problem is so severe that even different minor versions of glibc don't work together. They are continually changing their symbol names. It's gotten so bad that we write our own versions of C library calls so we can have some minimal level of compatibility. By way of contrast, most proprietary Unix operating systems (such as Solaris or Tru64) make a huge effort to ensure compatibility. Code I compiled ten years ago under SunOS or OSF still runs on the latest versions of those operating systems.
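
    One common workaround (not necessarily what the parent poster's shop does) is to pin a call to an older symbol version at link time, so the binary also runs against older glibc builds. The GLIBC_2.2.5 tag below is only an example; the right tag depends on the platform and the symbol, and the pinned version must actually exist in the glibc you link against.

    /* Sketch: ask the linker for a specific, older version of a glibc
     * symbol via a .symver directive. The version tag is an example
     * and varies by architecture and glibc release. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[4];
        memcpy(dst, "abc", 4); /* resolved against the pinned version */
        return 0;
    }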
  • by compwiz ( 21231 ) on Sunday November 21, 2004 @09:11PM (#10884180)
    Groklaw [groklaw.net] clears this mess up. Turns out someone doesn't understand the word "fork."

