Linux Kernel to Fork?
Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
Our OS who art in CPU, UNIX be thy name. Thy programs run, thy syscalls done, In kernel as it is in user!
Re:Yes, of course it will. (Score:3, Informative)
InfoWorld [infoworld.com]
PC World [idg.com.au]
Not at all like Unix (Score:1, Informative)
As long as everything stays open source, this won't be a problem. When you get an application now, it is most likely a binary for a particular distro. I don't see that changing. You will still run urpmi or apt-get, and for most people things won't change. Really, how would forking create a different situation than the one we currently have with GNOME/KDE?
Unix in the '80s was not open source, and that makes all the difference.
Re:I'd Like to Run Linux -- Just No Time (Score:3, Informative)
Install SuSE, Red Hat, or Ubuntu: they are easier to install than Windows XP and come with tons of applications. They even come with excellent printed documentation in case you do need to look something up.
Even easier, buy a PC with Linux pre-installed: you just plug it in and it works.
Re:It is Linus's fault. (Score:4, Informative)
The first digit is the major version (the 1 in 1.x, the 2 in 2.x). The second digit is known as the minor version. From your examples, you appear to be asking for a consistent driver API across each minor version.
HTH, HAND.
Re:From the article... (Score:5, Informative)
Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".
Linux programs are more proactive about checking library versions. But you can install multiple versions, because the shared libraries usually have different names. Not so under Windows: Windows will only load the first version of a named DLL it finds, and hang onto it until you reboot. If that version suits your program, life is good; if not, well...
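Concretely, here's roughly what that looks like on disk (output trimmed to the interesting bit; libfoo and the two apps are made up for illustration, but the soname/symlink scheme is the standard one):

    $ ls -l /usr/lib/libfoo*
    libfoo.so.1 -> libfoo.so.1.2    # soname symlink for the old ABI
    libfoo.so.1.2                   # the actual 1.2 library
    libfoo.so.2 -> libfoo.so.2.0    # soname symlink for the new ABI
    libfoo.so.2.0                   # the actual 2.0 library

    $ ldd /usr/bin/oldapp | grep libfoo
            libfoo.so.1 => /usr/lib/libfoo.so.1 (0x40020000)
    $ ldd /usr/bin/newapp | grep libfoo
            libfoo.so.2 => /usr/lib/libfoo.so.2 (0x40021000)

Each binary records the soname it was linked against, so the loader picks the right version for each app and both copies coexist peacefully.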
Re:From the article... (Score:1, Informative)
And, BTW, have you ever noticed that pesky little message on your beloved Windows, "unable to locate foo.dll"? Same thing. Except when program bar needs foo version 1.2 and program baz needs foo version 1.4. Then you're fucked, because without renaming the library (i.e., having access to the source of the application) you can't have multiple versions of a library installed. On Linux, however, you'll link against libfoo.so.1.2 and libfoo.so.1.4, and your PACKAGE MANAGER will make symlinks to the correct lib.
Yes, I realize it can be annoying. But you know that JPEG problem in Windows, and the corresponding PNG problem in libpng? Well, on Linux it's a matter of installing a fixed libpng. On Windows it's a matter of recompiling all of those static applications you love so much, then downloading and reinstalling each and every one of them.
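To make that concrete, here's a sketch of the fix on a Debian-ish system (the package name and soname are illustrative; yours will differ):

    # every dynamically linked app on the box shares this one copy
    $ ldd /usr/bin/gimp | grep libpng
            libpng.so.3 => /usr/lib/libpng.so.3 (0x40019000)

    # upgrade the single shared library; every app that links against
    # it picks up the fix the next time it starts
    $ apt-get install libpng3

No recompiling, no hunting down per-application copies.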
Then again, you're an idiot replying to an idiot article. Does that make me an idiot for replying to you? Most likely.
Re:From the article... (Score:1, Informative)
Have you ever tried compiling software on Windows without experiencing dependency hell?
Re:From the article... (Score:3, Informative)
So if you switch from gcc 3.3.1 to 3.3.1-r1 or something and compile your new nvidia module with it, then you *also* need to recompile your kernel with it; otherwise the module won't load...
Really, this is the same for every kernel module, so I don't know what the big deal is with that nvidia module bitching all the time...
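You can watch that check happen yourself: the module carries a "vermagic" string (kernel version plus gcc version) that has to match the running kernel. Paths and version strings below are just an example:

    $ uname -r
    2.6.8-1-686
    $ modinfo -F vermagic /lib/modules/2.6.8-1-686/video/nvidia.ko
    2.6.8-1-686 preempt gcc-3.3

    # built against a different kernel/gcc combination? the load fails:
    $ insmod ./nvidia.ko
    insmod: error inserting './nvidia.ko': -1 Invalid module format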
Re:It is Linus's fault. (Score:4, Informative)
I completely agree and wish the kernel API were kept more stable. Which is saying a lot, as the Linux kernel API is currently way more stable than glibc, GCC, and most user-space libraries. Virtually all of my Linux troubleshooting time over the last few years has been caused by API versioning issues in glibc and/or GCC.
People should write in instead of calling "FUD" (Score:4, Informative)
Re: Is Linux about to fork?
Dear Kieren McCarthy,
I cannot believe this article:
http://www.techworld.com/opsys/news/index.cfm?New
The Linux kernel has historically alternated between stable
(even-numbered) sets: 2.0, 2.2, 2.4, 2.6, and odd-numbered development
sets. For this to be cast as a major disaster now that the next
development kernel is expected to be starting up is extremely odd. If
this is forking, it is forking only in the most pedantic sense, and yet
Paul Krill's article paints this as a major problem. This betrays a
simple lack of understanding of the Linux development process. The
article is therefore more confusing than informative.
Yours sincerely,
Re:Yes, of course it will. (Score:3, Informative)
If you look a level deeper -- i.e., read the article linked in the /. blurb -- you'll find that what they said was "2.7 will only be created when it becomes clear that there are sufficient patches which are truly disruptive enough to require it." Must be that this critical mass of patches is about to be reached.
Re:From the article... (Score:5, Informative)
You don't understand. A package manager is a piece of software that resolves dependencies, downloads packages (from the Internet or local media), and installs them for you. That is why they are called package managers. Using one, you never have to "chase down" packages; it's all automated. There are many of them: apt, yum, up2date, urpmi, emerge, etc.
Please get current instead of making a fool of yourself on the Web; this problem was solved a few years ago. Your favorite distro probably uses one, and you don't even know it. Which one is it, anyway, so I can give you the executive summary on its usage?
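For instance, on a Debian-based distro (package names vary from distro to distro):

    # refresh the package index, then install; apt works out the
    # dependency graph and fetches everything needed automatically
    $ apt-get update
    $ apt-get install mozilla-firefox

    # rough equivalents elsewhere:
    $ yum install mozilla-firefox     # Fedora/Red Hat
    $ urpmi mozilla-firefox           # Mandrake

One command, and the dependency chasing is the machine's problem, not yours.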
Re:Why fork 2.6? (Score:3, Informative)
I think there are some assumptions going around that because Linux is monolithic, it is also a mess of spaghetti code with no real structure. This is simply not true. My guess at why Linux undergoes a complete branching process rather than a per-subsystem split is that it is simply easier and safer. Packaging everything as modules would require plenty of cross-version testing before anything could be considered stable. Why bother? Who wants just a single chunk of new technology when you can have the whole thing?
Plus, a fully split-up design would make it far harder to make fast structural changes without huge amounts of negotiating between teams. As it stands, Torvalds can just do what he wants and everything goes right.
Re:From the article... (Score:5, Informative)
Just for fun, search for files named "mfc42.dll" on your disk (or any other common Windows DLL; I'm not very up-to-date on these). How many are there? Are they all up-to-date? Do any of them have security issues (a known buffer overflow, for example)? How much disk space do they use collectively?
You could distribute applications the same way on Linux, but people don't, because it would break the architecture of having your libraries centrally stored and managed. The Linux approach to library management is much superior, but it has the drawback of requiring that you use a dependency-aware package manager correctly. Apparently, you don't.
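If you want to see the contrast, try the Linux half of the experiment (paths here are illustrative):

    # exactly one real copy of the library on the whole system...
    $ find /usr -name 'libpng*' -type f
    /usr/lib/libpng.so.3.1.2.5

    # ...and the package manager knows exactly which package owns it
    $ dpkg -S /usr/lib/libpng.so.3.1.2.5
    libpng3: /usr/lib/libpng.so.3.1.2.5

One file to audit, one file to patch. Now go count your mfc42.dll copies again.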
They make it out like this is something NEW... (Score:3, Informative)
These guys are making it out like some majorly new thing is going to happen that's going to change everything. Did everyone suddenly forget how 2.4 forked to 2.5, which became 2.6? Give me a break.
Re:From the article... (Score:3, Informative)
For his enlightenment: apt (the Debian package manager) does all the "dependency-chasing" for you. You say "apt-get install kde" and it happens.
Re:From the article... (Score:2, Informative)
There. Was that so difficult?
Re:From the article... (Score:5, Informative)
apt will not put you into 'dependency hell' unless at least one of the following preconditions is met:
1) You are running debian/unstable
2) You are overriding warnings (e.g., with apt-get's --force-yes switch)
3) You are doing something stupid, as root
sincerely,
the truth
Re:From the article... (Score:2, Informative)
>> That's basically the Linux solution (short of recompiling everything).
If you've read the other comments, you'll know this is NOT the way Linux manages it! Linux distributions have binary packages that contain ONLY the application's code, and they depend on other library packages, which are shipped and updated separately. There is usually only one copy of a library package, shared by all the apps on the system.
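You can see those separate library packages right in the package metadata. For example, with apt (output abbreviated; the exact dependencies vary by release):

    $ apt-cache depends xpdf-reader
    xpdf-reader
      Depends: libc6
      Depends: libfreetype6
      Depends: libpaper1
      ...

The xpdf-reader package ships only the application; the libraries come from those other packages, and upgrading libfreetype6 once upgrades it for every app on the system that depends on it.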
Kernel Fork (Score:5, Informative)
At the Linux Kernel Development Summit back in July, the core developers announced they weren't creating a 2.7 development kernel any time soon (discussed here [linuxjournal.com] and here [slashdot.org]).
Developers liked the way things were going with Linus's use of BitKeeper, and at the time they didn't see the need to fork a 2.7.
Traditionally, before BitKeeper, kernel maintainers would send Linus 10-20 patches at once, then wait for him to release a snapshot to determine whether or not each patch had made it in. If not, they would try again. During the 2.5 development cycle, problems started over dropped patches, and that is when Linus decided to try BitKeeper.
According to kernel maintainer Greg Kroah-Hartman, BitKeeper has increased the amount of development and improved efficiency. From 2.5 to 2.6, they averaged 1.66 changes per hour over 680 days; from 2.6.0 to 2.6.7 they were at 2.2 patches per hour, thanks to wider testing of the patches that went into the tree. The new process is:
1) Linus releases a 2.6 kernel.
2) Maintainers flood Linus with patches that have been proven in the -mm tree.
3) After a few weeks, Linus releases an -rc kernel.
4) Everyone recovers from the load of changes and starts fixing any bugs found in the -rc kernel.
5) A few weeks later, the next 2.6 kernel is released and the cycle starts again.
Because this new process has proved to be pretty efficient and is keeping maintainers happy, it was predicted that no new 2.7 kernel would be forked any time soon unless a set of changes appeared big and intrusive enough that a 2.7 fork was needed. If that happens, Linus will apply the experimental patches to the new 2.7 tree, then continue to pull the ongoing 2.6 changes into the 2.7 kernel as it stabilizes. If it turns out that 2.7 is taking an incorrect direction, the 2.7 tree will be deleted and everyone will continue on 2.6. If 2.7 becomes stable, it will be merged back into 2.6 or declared 2.8.
In conclusion, there was no plan for a 2.7 any time soon, thanks to maintainers working well in the current setup, but this was not carved in stone. It might just be that big enough changes are now calling for a fork.
Libraries? (Score:1, Informative)
I was not aware that Linux had this install mechanism. It makes sense, though; it's kinda like the one time I compiled from CVS, where I had to get some code from another project's tree since they used code from it. (The first, and currently the last, compile I did.)
So these libraries are like DLLs, in that they are required to make the program run? What was this recompiling stuff I heard about?
Re:From the article... (Score:3, Informative)
>> Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".
This hasn't been true since Windows 98. See Windows File Protection [microsoft.com] on MSDN.
>> Linux programs are more proactive about checking library versions. But you can install multiple versions, because the shared libraries usually have different names. Not so under Windows: Windows will only load the first version of a named DLL it finds, and hang onto it until you reboot. If that version suits your program, life is good; if not, well...
This hasn't been true since Windows 3.1.
Re:It is Linus's fault. (Score:3, Informative)
We run the following stack:
Weblogic 7
JDK 1.3.x
Now, our Weblogic people, whom we had no control over, decided to upgrade to Weblogic 8. Weblogic 8 requires JDK 1.4. It won't work with 1.3, or at least I couldn't get it to work in my hours of trying. (Note: 1.3 to 1.4 is just a "minor" version change, not a major revision like 1.x to 2.x -- which, I might add, usually signals a significant break in API compatibility, so you probably should have known -- although with Sun's naming conventions, you never really know.)
Now, when you use our particular version of the collaboration software with JDK 1.4 and Weblogic 8, it breaks various parts of the application. The solution? "We don't know what the problem is, but it's fixed in our upgrade release." Now, we could just upgrade our suite of collaboration software, except it would break tons of stuff that we built on top of it, I assure you.
I had to dig through their source and track down what was broken and fix it myself. Keep in mind, this suite is the best in its realm, so with anyone else, you'd probably be worse off.
So don't tell me that Open Source has problems with version compatibility that commercial software just doesn't have. These systems are all produced by big corporations, and they just flat don't work between revisions. It's a struggle whatever you use.