Linux Kernel to Fork?
Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision? "
Nothing weird in that. (Score:3, Interesting)
Business as usual.
Huh? (Score:2, Interesting)
But it will happen, and probably this year (or early next).
Is it just me or... (Score:1, Interesting)
Why fork 2.6? (Score:3, Interesting)
What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak. One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.
Pretty baseless article. (Score:5, Interesting)
There are plenty of forked kernel trees out there. Most continually merge in changes from Linus' tree, though.
A fork doesn't matter. What matters is what it represents. If there is enough popularity that the Linux community ends up using incompatible forks, then yes, we have a problem... but forking in no way necessarily leads to this.
As always, the available kernels in wide use will reflect what people actually want to use.
Re:About time.... (Score:4, Interesting)
And besides, we're approaching the time Linux kernels typically fork: a few versions into the series, the developers start to feel restricted by what they can't change in a stable kernel.
I just want to know how crap like this makes it to Slashdot. You'd think Taco would know better.
Re:From the article... (Score:1, Interesting)
Re:Uh-oh (Score:4, Interesting)
Tell me about it. When I try installing older programs I get compile errors because the libraries aren't backwards compatible, or
I think at some point everyone needs to get together and say OK. Everything from this point on will be compatible with everything from this point on. No more of this crap. One standard installation procedure for every distribution (but each distribution does things its own way). If RPMs are so horrible, then stop releasing everything as RPMs!
Re:Yes, of course it will. (Score:2, Interesting)
In fact, I have just looked up the article [slashdot.org], and it pretty much says just that. No 2.7 development branch plans! Did they just change their minds or what? Or are they genuinely doing a proper fork of the kernel? I am confused!
Re:Yes, of course it will. (Score:2, Interesting)
The kernel has been forking since its early days - consider the even numbered versions running concurrently.
The key is - will there be enough following and momentum behind each fork to push each fork into the mainstream as a focus and concentration point.
In my mind, for each fork to be successful, it requires some single reliable individual to hand hold things - focal person roles pretty much like those of Alan, Marcelo, and Andrew.
We must not forget that there are also special Linux kernels forked for varying purposes - like firewall, small devices, high availability, etc.
All said, fragmentation - an unhealthy kind of forking - is definitely not desirable.
Re:From the article... (Score:5, Interesting)
Yet, where I work, the applications have to be specifically recompiled for each of the three versions of the Linux distribution currently in use.
While it may be mainly the in-house distribution designers' fault, it is a real mess, and a major reason many of the engineers stay away from Linux.
Ours is not to wonder why. (Score:5, Interesting)
Strength of Linux (Score:2, Interesting)
-b0lt
Irresponsible (Score:3, Interesting)
News in disguise ... (Score:3, Interesting)
erm
"We all assume that the kernel is the kernel maintained by kernel.org and that Linux won't fork the way UNIX did... right? There's a great story at internetnews.com about the SuSE CTO taking issue with Red Hat backporting features of the 2.6 kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason: this work is not going to be supported by the open source community, because it's not interesting anymore; everyone else is working on 2.6." My read is that this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also gives a bit of background on SuSE's recent decision to GPL its setup tool YaST, which it hopes other distros will adopt too."
CC.
Re:New linux development process (Score:2, Interesting)
Well, I actually think it's "2.5 should not have become 2.6 until it didn't crash so frequently on my system". But I can understand that 2.6.0 had bugs, it was after all the first stable release, and so had a much wider audience than 2.5 did. The problem is that they've kept adding new features to it, and, like it or not, new code has bugs. What they need to do is put the cool new things in 2.7, and just concentrate on squashing bugs in 2.6. After about 3 releases of pure bugfix, no more features, the kernel tends to become stable. But as long as they're constantly adding new features to 2.6, it will be unstable.
It's called a "branch"!!! So much FUD for nothing (Score:4, Interesting)
Odd-numbered kernels are development releases and are never released as stable; only even-numbered ones are. The scheme in Linux development is odd = unstable, even = stable.
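To illustrate the numbering convention, here's a toy Python sketch (the function name is made up, not anything from the kernel tree):

```python
def kernel_branch(version):
    """Classify a 2.x-era Linux version string by the old parity rule:
    an even minor number (2.0, 2.2, 2.4, 2.6) marks a stable series,
    an odd minor number (2.1, 2.3, 2.5) a development series."""
    minor = int(version.split(".")[1])
    return "stable" if minor % 2 == 0 else "development"

print(kernel_branch("2.6.5"))   # stable
print(kernel_branch("2.5.75"))  # development
```

So "forking off 2.7" is just the normal next development branch, not some schism.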
I won't be surprised to see something from OSDL calling this article a piece of crap by tomorrow.
GJC
Re:From the article... (Score:5, Interesting)
Untrue - there are SEVERAL rulesets that allow side-by-side loading of DLLs in Windows, ever since Windows 2000 in fact.
The ruleset for library loading under Windows 2000/XP/2003 (assuming the developer/OEM of the software didn't "hardcode" a path in his library path define or LoadLibrary API call) is as follows:
"DLL HELL" can occur on a Win32 OS when two SAME-named "Dynamic Link Libraries" (.dll extension; executables with no loader header that can't self-initialize) exist on a system and are accessible to a program. Because the names match, a program can "grab hold" of the wrong version build when initializing it to call the functions it uses!
This can crash a program: the program expects a function it calls to return an integer, but the latest build of the same-named DLL sends back pointer data instead!
Microsoft overcame a great deal of "DLL Hell" using ActiveX controls (OLE servers, which are registered under a GUID). A GUID is a 128-bit UNIQUE generated number in the registry that summons the CORRECT build the calling program requires by internally checking the registry.
Microsoft has also put "side-by-side" loading into its newer operating systems (2000/XP/2003), which can load the older type of DLLs by name but into RAM separately, where each can be accessed separately by the program that calls and loads it, so no "collisions" occur.
This is a GOOD move, but it can still be problematic if the program finds the wrong version build of the DLL first.
The order of seeks used for Win32 Portable Executable files: NT-based OSes, by default, use different approaches for 32-bit vs. 16-bit apps:
1.) For 32-bit apps, NT/2000/XP/2003 search for implicitly loaded DLLs at:
a. The folder the application was loaded from
b. Current folder
c. %SystemRoot%\SYSTEM32 folder
d. %SystemRoot% folder
e. %Path% environment variable
* BUT if a DLL is listed under KnownDLLs here in the registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager
(as a REG_SZ entry whose value name is the DLL name without the extension and whose data is the DLL name with the extension), then the search order becomes:
aa. %SystemRoot%\SYSTEM32
bb. The folder the application was loaded from
cc. Current folder
dd. %SystemRoot% folder
ee. %Path%
KnownDLLs are mapped at boot time. Renaming or moving one during a session has no effect.
You can alter this behavior by including the 8.3 DLL name in the ExcludeFromKnownDlls entry, a REG_MULTI_SZ value, one DLL name per line.
(This makes NT believe that the DLL is not listed in KnownDLLs.)
2.) For 16-bit apps, Windows NT uses KnownDLLs for both implicitly and explicitly loaded DLLs. The value is at:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\WOW.
In that key, KnownDLLs is a REG_SZ value listing 8.3 DOS-formatted DLL names, separated by spaces. Without a KnownDLLs entry, WOW searches:
a. The current directory.
b. The %SystemRoot% directory.
c. The %SystemRoot%\SYSTEM directory.
d. The %SystemRoot%\SYSTEM32 directory.
e. The folder of the 16-bit executable.
f. The directories in your Path.
With a KnownDLLs entry, the listed DLLs are found in the system directories and that search is skipped.
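For the 32-bit implicit-load case above, the basic search (ignoring KnownDLLs) can be sketched as a simulation in Python - the function and parameter names are made up for illustration, not any real Windows API:

```python
import os

def find_dll(name, app_dir, cwd, system_root, path_dirs):
    """Simulate the legacy 32-bit implicit-load search order:
    app folder, current folder, SYSTEM32, %SystemRoot%, then %Path%.
    Returns the first matching file, i.e. the build that 'wins'."""
    search_order = [
        app_dir,                                # a. app's own folder
        cwd,                                    # b. current folder
        os.path.join(system_root, "SYSTEM32"),  # c. SYSTEM32
        system_root,                            # d. %SystemRoot%
    ] + list(path_dirs)                         # e. %Path% entries
    for d in search_order:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None  # implicit load would fail
```

Two same-named DLLs in different folders on that list is exactly the "DLL Hell" collision described above: whichever folder comes first wins, whether or not it's the build the program expects.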
Re:It is Linus's fault. (Score:2, Interesting)
>
> If it were easy to use binary drivers,
ohh i see, so the driver APIs are changed not to improve them but for the sole purpose of making the use of binary drivers impossible !!
it sounds an awful lot like MS changing some code for the sole purpose of making interoperability with other platforms very hard
and all this for what ??
> is considered a disadvantage by many.
ohh i see, those unspecified "many" have to be right, right ? like all those people who believed the earth was flat, some time ago
that's what's called a scientific approach !
Mod me flamebait if you will, there's always a first time, but I honestly think that it is exactly this hobbyist, unprofessional, uncaring way of thinking that hinders the widespread adoption of Linux.
Which I still believe would be a very good thing for many reasons, but maybe it's just me.
Re:It is Linus's fault. (Score:2, Interesting)
It definitely is a good thing that there are efforts to keep as many drivers as possible open source.
Re:It is Linus's fault. (Score:1, Interesting)
No, this is not what Linus says. What Linus says is that fixing the API for a certain version of the kernel reduces flexibility for the kernel programmers and would divert resources to API compatibility checking. Because this is only necessary for binary-only drivers, the kernel programmers chose not to do it; driver programmers can make their task easier by providing the driver source code.
Re:From the article... (Score:3, Interesting)
Oops! The link: http://elektra.sourceforge.net/
Please, have a look at it. Its perspective is smarter than it seems at first glance, and very promising as well.
Re:Why fork 2.6? (Score:3, Interesting)
Forking another kernel tree will split the developers apart and slow down the development of the 2.6 kernel.
Ideally, actual development should have been all over with at 2.6.0. Patchlevels would only fix bugs, not introduce new capabilities and thus unstable code.
Too bad it doesn't work that way with Linux.
What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support.
Well, those are the breaks with a monolithic kernel.
I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak.
I can't help but notice that what you're asking for is a microkernel. These feats would not be nearly so easy with a monolithic kernel. Also, you'd have all these factions bitching that they can't innovate since they don't have "commit" authority over other parts of the kernel that they need to change in order to add a new whiz-bang feature.
One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.
That is, until someone forks the underlying glue that binds the pieces together.
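The "pluggable pieces" idea from the parent could be caricatured in a few lines of Python (the names are entirely made up, nothing like the real kernel build system):

```python
class KernelBuild:
    """Toy registry: each team ships a subsystem independently, and a
    'fork' is just a different selection of registered pieces."""

    def __init__(self):
        self.subsystems = {}

    def plug(self, name, impl):
        # One team "owns" each name, so there's no duplicated work.
        self.subsystems[name] = impl

    def unplug(self, name):
        # Removing support is just dropping the piece.
        self.subsystems.pop(name, None)

fork_a = KernelBuild()
fork_a.plug("net", "networking team's stack")
fork_a.plug("video", "video team's drivers")
fork_a.unplug("video")
print(sorted(fork_a.subsystems))  # ['net']
```

Which makes the grandparent's point concrete: this only works as long as every fork agrees on the registry itself - fork the glue and the pieces stop being interchangeable.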
Re:It is Linus's fault. (Score:3, Interesting)
How did this get modded insightful? Are you saying you know more about designing a kernel than Linus? Most hardware either has GPL drivers embedded in the kernel which automatically get updated to new changes in the API, or no driver at all. For those binary-only models I don't see nVidia having any problems. Maybe the people making the binary-only drivers need to learn how to do their job. Ever think of that?
Come on people, it's 2004 and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.
Yes, it is too much to expect. It's only 2004. There are many pieces of hardware that still don't support Linux, and it has nothing to do with the driver API or how difficult it is or isn't to support. Most hardware would already be supported for free if the vendors had released the specs.
Want to have your cake and eat it too? Well, we don't play that way. It's more like put up or shut up.
Re:From the article... (Score:3, Interesting)
RH6.2 is stable enough when running software designed for RH6.2. Try forcing the install of Fedora software on it and you'll have to be damn careful not to break things. You say yourself in your message that you're not upgrading because it works. Well, guess what: installing your fancy new package on it is damn close to the full upgrade you're trying to avoid.
Essentially, 6.2 = stable, FC2 = fairly stable. Upgrading from 6.2 to FC2 may break things. Installing a FC2 package on 6.2 and telling the system to pull down everything it needs, well, that'll break things too.
Curiously, the only distribution I know of which will gracefully handle an extremely modern package amongst many ancient ones is Gentoo. If they ever manage to get 'Gentoo stable' to the same standard as Debian stable, this may be exactly what you're looking for.
Re:From the article... (Score:2, Interesting)
Whoah. You mean the OS actually has its own connection to the internet? I won't have to turn on the 56K modem that I can sometimes get as fast as a 19K connection through?
Yep. You just made downloading that 3.4 Meg Install Shield EXE file seem like a burden...
Re:It is Linus's fault. (Score:1, Interesting)
What a load of shit. Look, if the API really has changed, you can't fix it by recompiling. Someone is going to have to fix the programming in all the dependent apps, and then you have to download them again.
The reason it seems like you can "fix" it with a recompile is that it wasn't really broken in the first place. Someone just decided that foolib-1.2.4 is better than foolib-1.2.3, probably for no good reason, and then you are on the upgrade treadmill.
The only real way to avoid this is to fix baselines (Debian Stable, RedHat vX, Windows 2000, etc), something which would require cross-distro cooperation and therefore is probably impossible.
Re:From the article... (Score:2, Interesting)
I'm just asking. It sounds like it could be an issue.
Re:From the article... (Score:4, Interesting)
If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers.
Firstly, I agree. BUT if we need to allow third-party vendors to ship binary drivers (and maybe we do, in this IP-crazy world) then a QNX [qnx.com] user-space driver model might be smarter?