Operating Systems | Software | Linux

Linux Kernel to Fork? 578

Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
This discussion has been archived. No new comments can be posted.

Linux Kernel to Fork?

Comments Filter:
  • by Anonymous Coward on Sunday November 21, 2004 @11:01AM (#10880633)
    They just got too many weird patches, and had to put them somewhere.
    Business as usual.
  • Huh? (Score:2, Interesting)

    by laptop006 ( 37721 ) on Sunday November 21, 2004 @11:01AM (#10880637) Homepage Journal
    Of course it will happen; whether it's now or later is a different matter. The problem this time is that several of the core kernel devs want to keep 2.6 under active feature development, since doing that work in 2.7 means the new features don't get nearly as much testing.

    But it will happen, and probably this year (or early next).
  • Is it just me or... (Score:1, Interesting)

    by A beautiful mind ( 821714 ) on Sunday November 21, 2004 @11:02AM (#10880641)
    ...did someone really overuse "said" in that article? It got really annoying from the middle of the article onward.
  • Why fork 2.6? (Score:3, Interesting)

    by demon_2k ( 586844 ) on Sunday November 21, 2004 @11:04AM (#10880652) Journal
    I'm not sure if I like the idea. Developers have lives; that's why development is moving at the pace it is. And I like the pace development is at. Forking another kernel tree will split the developers apart and slow down development of the 2.6 kernel.

    What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak. One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.
  • by mindstrm ( 20013 ) on Sunday November 21, 2004 @11:06AM (#10880661)
    No details, no important names.. no nothing.

    There are plenty of forked kernel trees out there. Most continually merge in changes from Linus' tree, though.

    A fork doesn't matter. What matters is what it represents. If there is enough popularity that the Linux community ends up using incompatible forks, then yes, we have a problem... but forking in no way necessarily leads to this.

    As always, the available kernels in wide use will reflect what people actually want to use.

  • Re:About time.... (Score:4, Interesting)

    by Rysc ( 136391 ) * <sorpigal@gmail.com> on Sunday November 21, 2004 @11:11AM (#10880689) Homepage Journal
    I second that. After having the nvidia driver broken four times I'm starting to get frustrated.

    And, besides, we're approaching the time Linux kernels typically fork: a few versions into the series, the developers start to feel restricted by what they can't change in a stable kernel.

    I just want to know how crap like this makes it to Slashdot. You'd think Taco would know better.
  • by Anonymous Coward on Sunday November 21, 2004 @11:11AM (#10880692)
    Bullshit. The only thing that breaks is system utilities.
  • Re:Uh-oh (Score:4, Interesting)

    by Epistax ( 544591 ) <<moc.liamg> <ta> <xatsipe>> on Sunday November 21, 2004 @11:14AM (#10880702) Journal
    Already there are massive problems with dependencies.

    Tell me about it. When I try installing older programs I get compile errors because the libraries aren't backwards compatible, or ./configure won't be able to find the libraries because the version installed is too new.

    I think at some point everyone needs to get together and say, "OK, from this point on, everything will be compatible with everything else." No more of this crap. One standard installation procedure for every distribution (instead of each distribution doing things its own way). If RPMs are so horrible, then stop releasing everything as RPMs!
  • by Bralkein ( 685733 ) on Sunday November 21, 2004 @11:23AM (#10880738)
    Hey, but didn't the kernel developers say a while ago that there would not be stable and development branches of the kernel, and that it would be up to the distributions to look after final stability, or something like that?

    In fact, I have just looked up the article [slashdot.org], and it pretty much says just that. No 2.7 development branch plans! Did they just change their minds or what? Or are they genuinely doing a proper fork of the kernel? I am confused!
  • by nayigeta ( 792068 ) on Sunday November 21, 2004 @11:24AM (#10880745) Homepage Journal
    I am of similar opinion.

    The kernel has been forking since its early days - consider the even numbered versions running concurrently.

    The key is whether there will be enough following and momentum behind each fork to push it into the mainstream as a focal point.

    In my mind, for each fork to be successful, it needs some single reliable individual to shepherd things - a focal-person role pretty much like those of Alan, Marcelo, and Andrew.

    We must not forget that there are also special Linux kernels forked for varying purposes - like firewalls, small devices, high availability, etc.

    All said, fragmentation - an unhealthy kind of forking - is definitely not desirable.
  • by elendril ( 15418 ) on Sunday November 21, 2004 @11:32AM (#10880800) Homepage
    You're right: each version of the kernel doesn't require applications to be compiled specifically for it.

    Yet, where I work, the applications have to be specifically recompiled for each of the three versions of the Linux distribution currently in use.

    While it may be mainly the in-house distribution designers' fault, it is a real mess, and a major reason why many of the engineers stay away from Linux.
  • by Doc Ruby ( 173196 ) on Sunday November 21, 2004 @11:35AM (#10880812) Homepage Journal
    We don't know why this reporter is spreading Fear, Uncertainty and Doubt. Maybe they were misinformed, and lack critical skills required to be a journalist. Maybe they were informed, but are looking for something sensational to get readers (it worked). Maybe they're trying to impress their mother somehow, without even realizing they're making up for a playground trauma from 1983. Who knows? Who cares? They're a FUDder - we're interested in the damage they cause, not the damage that was done to them. That's their problem, unless we propose a massive mental health makeover for the world's journalists. That would probably decimate the ranks of the industry, allowing them to get real jobs.
  • Strength of Linux (Score:2, Interesting)

    by b0lt ( 729408 ) on Sunday November 21, 2004 @11:35AM (#10880822)
    I believe that the strength of Linux comes from the fact that it has a central core, which is compatible with basically everything across distros. This makes for faster development, and combined with a charismatic leader (Go Linus! :o) it makes for a very strong platform for an OS. These are my personal beliefs, so feel free to flame me if you disagree :)

    -b0lt
  • Irresponsible (Score:3, Interesting)

    by Craig Maloney ( 1104 ) * on Sunday November 21, 2004 @11:35AM (#10880823) Homepage
    I'd imagine any article that talks about Linux Forking would have at the very least grabbed one or two quotes from Linus before going to print. Linus is only mentioned once in the article, and that is a passing reference as the owner of the Linux Kernel. And while Andrew Morton may have mentioned what was going on in the interview, the reporter made sure it didn't show up in the article. Irresponsible.
  • News in disguise ... (Score:3, Interesting)

    by foobsr ( 693224 ) on Sunday November 21, 2004 @11:43AM (#10880865) Homepage Journal
    ... [slashdot.org]

    erm ...

    "We all assume that the kernel is the kernel that is maintained by kernel.org and that Linux won't fork the way UNIX did..right? There's a great story at internetnews.com about the SuSe CTO taking issue with Red Hat backporting features of the 2.6 Kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason, this work is not going to be supported by the open source community because it's not interesting anymore because everyone else is working on 2.6." My read on this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also give a bit of background on SuSe's recent decision to GPL their setup tool YAST, which they hope other distros will adopt too."

    CC.
  • by m50d ( 797211 ) on Sunday November 21, 2004 @12:23PM (#10881052) Homepage Journal
    The reality is more like "When there's a 2.6 that is actually useable on my system (crashes less than once every 30 mins) I'll believe that. 2.7 is not ready to split off until then."

    Well, I actually think it's "2.5 should not have become 2.6 until it didn't crash so frequently on my system". But I can understand that 2.6.0 had bugs, it was after all the first stable release, and so had a much wider audience than 2.5 did. The problem is that they've kept adding new features to it, and, like it or not, new code has bugs. What they need to do is put the cool new things in 2.7, and just concentrate on squashing bugs in 2.6. After about 3 releases of pure bugfix, no more features, the kernel tends to become stable. But as long as they're constantly adding new features to 2.6, it will be unstable.

  • by borgheron ( 172546 ) on Sunday November 21, 2004 @12:27PM (#10881079) Homepage Journal
    Apparently the author is entirely unfamiliar with the concept of creating a branch to isolate disruptive changes to a system.

    Odd-numbered kernels are not released as stable; only even-numbered ones are. The scheme in Linux development is odd = unstable, even = stable.

    I won't be surprised to see something from OSDL calling this article a piece of crap by tomorrow.

    GJC
  • by Anonymous Coward on Sunday November 21, 2004 @12:31PM (#10881096)
    WoodstockJeff - "Not so under Windows, and windows will only load the first version of a named DLL it finds, and hang onto it until you reboot"

    Untrue - there have been SEVERAL rulesets that allow side-by-side loading of DLLs in Windows ever since Windows 2000, in fact.

    The ruleset for library loading under Windows 2000/XP/2003 (assuming the developer/OEM of the software didn't "hardcode" his library path define or LoadLibrary API call with a path to a file) is as follows:

    "DLL HELL" may occur on Win32 OS when 2 of SAME named "Dynamic Link Libraries" (.dll extension, executable with no loader header that cant self initialize) exist on a system and are accessible to a program. Since same name it can cause a program to "grab hold" of wrong version build to init for call functions it uses from it!

    Can be problem that causes crash in programs because program expects function it calls to return integer return from a function but latest build of DLL of same name sends back pointer data instead!

    Microsoft overcame a great deal of "DLL Hell" using ActiveX controls (OLE Servers, which have .DLL extensions also), that have .OCX extensions too. These are "marshalled" (loaded) not by name, as is done with older .DLL file types using the LoadLibrary call, but via a thing called a "GUID" (Globally Unique Identifier).

    This is a 128-bit UNIQUE generated number in the registry that summons the CORRECT build the calling program requires by internally checking the .ocx or .dll file being called for its internal version information (using the functions this program uses on Version checking no doubt, std. Win32 API call method).

    Microsoft has also put in "Side-by-Side" loading in their newer Operating Systems (2000/XP/2003) which can load the older type DLLs technically by name but into RAM separately, where they can be accessed separately by programs (the program that calls and finds the one it loads) so no "collisions" occur.

    This is a GOOD move, but still can be problematic if the program finds the wrong version build of the .DLL it is calling, first

    Order of seeks used by Win32 Portable .exe files for finding DLLs to load -

    NT based Os by default, use different approaches for 32-bit vs. 16-bit apps:

    1.) For 32-bit apps, NT/2000/XP/2003 search for implicitly loaded DLLs at:

    a. .exe location folder
    b. Current folder
    c. %SystemRoot%\SYSTEM32 folder
    d. %SystemRoot% folder
    e. %Path% environment variable

    * BUT if a DLL is listed under KnownDLLs here in the registry:

    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager

    as a REG_SZ entry whose value name is the DLL name without the extension and whose data is the DLL name with the .DLL extension, then the search order becomes:

    aa. %SystemRoot%\SYSTEM32.
    bb. .exe file folder.
    cc. Current folder.
    dd. %SystemRoot% folder.
    ee. %Path%.

    KnownDLLs are mapped at boot time. Renaming or moving them during a session has no effect.

    You can alter this behavior by including the 8.3 DLL name in the ExcludeFromKnownDlls entry, a REG_MULTI_SZ value, one name per line in that listing.

    (This makes NT believe that the DLL is not listed in KnownDLLs.)

    2.) For 16-bit apps, Windows NT uses KnownDLLs for both implicitly and explicitly loaded DLLs. The value is at:

    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\WOW.

    In that key, KnownDLLs is a REG_SZ value that lists 8.3 DOS-formatted DLL names, separated by spaces. Without a KnownDLLs entry, WOW searches:

    a. The current directory.
    b. The %SystemRoot% directory.
    c. The %SystemRoot%\SYSTEM directory.
    d. The %SystemRoot%\SYSTEM32 directory.
    e. The .exe file directory.
    f. The directories in your Path.

    With
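    As a rough illustration of the point above (the library name foolib.dll, the export foo_compute, and the install path below are made up for this sketch): loading a DLL by an explicit, fully qualified path with LoadLibrary/GetProcAddress sidesteps the implicit search order entirely, so a program binds to a known build instead of whichever same-named DLL turns up first.

        /* Sketch: explicit DLL loading by full path (hypothetical names). */
        #include <windows.h>
        #include <stdio.h>

        typedef int (__cdecl *FOO_FUNC)(int);   /* expected export signature */

        int main(void)
        {
            /* A bare "foolib.dll" would walk the search order (exe folder,
             * current folder, SYSTEM32, ...); a full path pins the exact file. */
            HMODULE lib = LoadLibraryA("C:\\Program Files\\FooApp\\foolib.dll");
            if (lib == NULL) {
                fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError());
                return 1;
            }

            FOO_FUNC foo = (FOO_FUNC)GetProcAddress(lib, "foo_compute");
            if (foo == NULL) {
                fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError());
                FreeLibrary(lib);
                return 1;
            }

            printf("foo_compute(21) = %d\n", foo(21));
            FreeLibrary(lib);
            return 0;
        }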
  • by giampy ( 592646 ) on Sunday November 21, 2004 @12:39PM (#10881142) Homepage
    > That's deliberate...
    > ... and that's the reason why.
    > If it were easy to use binary drivers, ...

    Ohh, I see, so the driver APIs are changed not to improve them but for the sole purpose of making the use of binary drivers impossible!!

    It sounds an awful lot like MS changing some code for the sole purpose of making interoperability with other platforms very hard...

    And all this for what??

    > is considered a disadvantage by many.

    Ohh, I see, those unspecified "many" have to be right, right? Like all those people who believed the earth was flat some time ago...
    That's what's called a scientific approach!

    Mod me flamebait if you will, there's always a first, but I honestly think that it is exactly this hobbyist, unprofessional, and uncaring way of thinking that hinders the widespread adoption of Linux.

    Which I still believe would be a very good thing for many reasons, but maybe it's just me...

  • by dazk ( 665669 ) on Sunday November 21, 2004 @01:20PM (#10881349)
    Uhm, where exactly is widespread adoption of Linux really in danger? What is it you want? A mostly closed-source system? Use Solaris or whatever else if you want that. You might have to switch to AIX or HP-UX soon, since Solaris is about to be open-sourced.

    It definitely is a good thing that there are efforts to keep as many drivers as possible open source.
  • by Anonymous Coward on Sunday November 21, 2004 @01:32PM (#10881426)
    "Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHVs to release binary-only drivers."
    No, this is not what Linus says. What Linus says is that fixing the API for a certain version of the kernel reduces flexibility for the kernel programmers and diverts resources to API compatibility checking. Because this is only necessary for binary-only drivers, the kernel programmers choose not to do it, and driver programmers can ease their task by providing the driver source code instead.
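    For what it's worth, "providing the driver source code" concretely looks something like the minimal module skeleton below (purely illustrative, not a real driver): the source is rebuilt against the headers of whichever kernel it targets, so interface changes surface as compile-time fixes rather than being frozen into a binary ABI.

        /* hello_mod.c - minimal sketch of a source-distributed GPL module.
         * Built against the target kernel's headers (via the usual kbuild
         * makefile), so API changes become compile-time fixes, not a frozen
         * binary interface. */
        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/init.h>

        static int __init hello_init(void)
        {
                printk(KERN_INFO "hello_mod: loaded\n");
                return 0;
        }

        static void __exit hello_exit(void)
        {
                printk(KERN_INFO "hello_mod: unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);

        MODULE_LICENSE("GPL");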
  • by Anonymous Coward on Sunday November 21, 2004 @02:12PM (#10881593)
    That's why a bunch of guys are assembling a new project to embrace as many Linux distributions as possible, adding FreeBSD and Windows to the mix.

    Oops! The link: http://elektra.sourceforge.net/

    Please, have a look at it. Its perspective is smarter than it seems at first glance, and very promising as well.

  • Re:Why fork 2.6? (Score:3, Interesting)

    by Eil ( 82413 ) on Sunday November 21, 2004 @02:41PM (#10881758) Homepage Journal

    Forking another kernel tree will split the developers apart and slow down development of the 2.6 kernel.

    Ideally, actual development should have been all over with at 2.6.0. Patchlevels would only fix bugs, not introduce new capabilities and thus unstable code.

    Too bad it doesn't work that way with Linux. :(

    What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support.

    Well, those are the breaks with a monolithic kernel.

    I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak.

    I can't help but notice that what you're asking for is a microkernel. These feats would not be nearly so easy with a monolithic kernel. Also, you'd have all these factions bitching that they can't innovate since they don't have "commit" authority over other parts of the kernel that they need to change in order to add a new whiz-bang feature.

    One "driver" or piece of code to support some hardware would work an all forks. Then each fork would be kind of like a distribution of pieced together code.

    That is, until someone forks the underlying glue that binds the pieces together.
  • by Cyno ( 85911 ) on Sunday November 21, 2004 @03:46PM (#10882151) Journal
    but until Linus grows up and stabilizes the driver API

    How did this get modded insightful? Are you saying you know more about designing a kernel than Linus? Most hardware either has GPL drivers embedded in the kernel, which automatically get updated to new changes in the API, or no driver at all. For those binary-only modules, I don't see nVidia having any problems. Maybe the people making the binary-only drivers need to learn how to do their job. Ever think of that?

    Come on people, it's 2004 and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.

    Yes, it is too much to expect. It's only 2004. There are many pieces of hardware that still don't support Linux. And it has nothing to do with the driver API or how difficult it is or isn't to support. Most hardware would already be supported for free if the vendors had released the specs.

    Want to have your cake and eat it too? Well, we don't play that way. It's more like put up or shut up.
  • by lakeland ( 218447 ) <lakeland@acm.org> on Sunday November 21, 2004 @04:30PM (#10882388) Homepage
    Er... can you even install Office 2004 on Windows 98? And assuming you can, how many things will you break in doing so?

    RH6.2 is stable enough when running software designed for RH6.2. Try forcing the install of Fedora software on it and you'll have to be damn careful not to break things. You say yourself in your message that you're not upgrading because it works. Well, guess what, installing your fancy new package on it is damn close to the full upgrade you're trying to avoid.

    Essentially, 6.2 = stable, FC2 = fairly stable. Upgrading from 6.2 to FC2 may break things. Installing a FC2 package on 6.2 and telling the system to pull down everything it needs, well, that'll break things too.

    Curiously, the only distribution I know which will gracefully handle an extremely modern package amongst many ancient ones is Gentoo. If they ever manage to get 'Gentoo stable' to the same standard as Debian stable, this may be exactly what you're looking for.
  • by bob beta ( 778094 ) on Sunday November 21, 2004 @05:04PM (#10882599)
    You don't need to even download the software, apt does it for you.

    Whoah. You mean the OS actually has its own connection to the internet? I won't have to turn on the 56K modem that I can sometimes get as fast as a 19K connection through?

    Yep. You just made downloading that 3.4 Meg Install Shield EXE file seem like a burden...
  • by Anonymous Coward on Sunday November 21, 2004 @05:25PM (#10882732)
    > You can always fix the problem by recompiling stuff ... you just use a single command to recompile everything that was broken.

    What a load of shit. Look, if the API really has changed you can't fix it by recompiling. Someone is going to have to fix the programming in all the dependent apps, and then you have to download them again.

    The reason it seems like you can "fix" it with a recompile is that it wasn't really broken in the first place. Someone just decided that foolib-1.2.4 is better than foolib-1.2.3, probably for no good reason, and then you are on the upgrade treadmill.

    The only real way to avoid this is to fix baselines (Debian Stable, RedHat vX, Windows 2000, etc), something which would require cross-distro cooperation and therefore is probably impossible.
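    A tiny made-up example of the kind of break the parent is talking about (foolib and foo_lookup are hypothetical; compiling with -DFOOLIB_NEW simulates the library upgrade): when a function's contract changes, a recompile only surfaces the breakage, and the calling code itself still has to be rewritten.

        /* Sketch of an API change that a recompile alone cannot fix
         * (all names are hypothetical). */
        #include <stdio.h>

        #ifndef FOOLIB_NEW
        /* "foolib 1.2.3": returns a numeric status code */
        static int foo_lookup(const char *key) { return (key && *key) ? 0 : -1; }
        #else
        /* "foolib 1.3.0": same name, now returns the value itself */
        static char *foo_lookup(const char *key) { return (key && *key) ? "value" : NULL; }
        #endif

        int main(void)
        {
            /* Written against the 1.2.3 contract. With FOOLIB_NEW defined,
             * recompiling does not fix it: this line is now wrong and has to
             * be rewritten for the pointer-returning API. */
            int status = foo_lookup("bar");
            printf("status = %d\n", status);
            return 0;
        }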
  • by bob beta ( 778094 ) on Sunday November 21, 2004 @07:49PM (#10883708)
    Windows NT made that disastrous transition to 'video in the kernel' with NT 4.0, and it reduced stability compared to NT 3.51. Why are the hooks for AGP ports vendor-specific? Why is the AGP hardware driver not a well-tested checkbox option in the kernel? Why is any vendor code at all required? Is AGP I/O that quirky and vendor-keyed?

    I'm just asking. It sounds like it could be an issue.

  • by BiggerIsBetter ( 682164 ) on Sunday November 21, 2004 @09:25PM (#10884249)

    If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers.

    Firstly, I agree. BUT if we need to allow third-party vendors to ship binary drivers (and maybe we do, in this IP-crazy world), then a QNX [qnx.com] user-space driver model might be smarter?
