Linux Kernel to Fork?
Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
"Nuclear war can ruin your whole compile." -- Karl Lehenbauer
From the article... (Score:5, Insightful)
> compiled specifically for it.
FUD FUD FUD. No. No no no. NO! Who writes this generic shit? There's no truth behind the above statement; it implies a problem that simply doesn't exist.
Re:From the article... (Score:5, Insightful)
Perhaps he is not talking about applications such as "Emacs" or "vim"? (Or, he just finished his crackpipe
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:5, Insightful)
Most other operating systems do this and it's about time Linux provided some standards for drivers.
As much as we hate it, we do need to support binary only drivers.
It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel, and for legal reasons can't release the source code.
-Aaron
Re:From the article... (Score:5, Insightful)
If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers. Many of the stability issues remaining with Windows today I believe are in fact driver issues.
Giving in to the hardware companies' (pointless) fear of losing so-called "intellectual property" by opening up their drivers would pass part of the control of the kernel from Linus & co. to countless programmers who may or may not have special interest in improving Linux specifically. The quality assurance that currently takes place for the free software drivers that get into the kernel is valuable.
Giving up on free/open source software at every turn where it is convenient would lead us to having an OS that is an assortment of non-free parts a bit like the current proprietary UNIXes. It might even lead to someone eventually getting into a position where they could charge for an essential part of the system thus rendering it non-free even in the beer sense.
For a kernel developer's take on this, read this post from Greg Kroah-Hartman's blog [kroah.com]:
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:4, Interesting)
If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers.
Firstly, I agree. BUT if we need to allow third-party vendors to ship binary drivers (and maybe we do, in this IP-crazy world), then a QNX [qnx.com]-style user-space driver model might be smarter?
Re:From the article... (Score:4, Insightful)
A better implementation would allow binary drivers, without any of these issues.
Many of the issues out there may well BE the binary drivers, but if Joe User goes out and buys a piece of hardware for his computer, plugs it in, and can just install a little driver program to make it work, then Joe User is happy.
When I, someone who has been using Unix since System III was common, who has been using computers for the last 25 years (I'm 28), who has done kernel hacking, worked on major products, and even earned quite a bit of income from my own projects back in the day, have to set aside an entire day just to get my USB webcam working in Linux (which I haven't done yet, because I am too busy to spend an entire day fucking with my computer), I --know-- that it's not going to be as simple as "plug it in, install the driver, recompile".
To Greg's points:
re: fixing bugs. Yes, you fix the API bugs, you fix the drivers you have control over, you bump the API version, and now drivers that can't work with that change refuse to load. I highly doubt that my Diamond Stealth 64 Windows '95 video driver will work if I try to load it into XP. It ain't gonna happen.
re: building better apis. Yes, you fix the API, bump the version number, and now drivers that can't work with the new version refuse to load.
re: CONFIG_SMP. There must be an API to deal with "core kernel structures". What the fuck is a driver doing with "core kernel structures"?
re: GCC alignment issues- GCC is obviously not the best compiler to do this with. GCC is quite obviously not the best at anything, except being compatible across a bazillion operating systems. That's ALL GCC is good for.
re: drivers outside the kernel tree give nothing back; Maybe they have nothing useful to give, either? WHY is it important that we know how everything the hardware manufacturer does works, if they are competent enough to make it work? YES, I agree that open source drivers and being available with the kernel are the BETTER option, but that doesn't mean that some people aren't going to want a different option.
re: deleting functions; See also API versioning. I think I've repeated that a few times now.
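To make that versioning point concrete, here is a toy sketch of "bump the API version, refuse to load stale drivers". This is not the kernel's actual mechanism (the real kernel uses vermagic strings and optional symbol CRCs); DRIVER_API_VERSION and struct toy_driver are invented purely for illustration.

/* Toy sketch of "bump the API version, reject mismatched drivers".
 * The names below are made up for illustration only. */
#include <stdio.h>

#define DRIVER_API_VERSION 3   /* bumped whenever the driver API changes */

struct toy_driver {
    int         api_version;   /* version the driver was built against */
    const char *name;
};

static int load_driver(const struct toy_driver *drv)
{
    if (drv->api_version != DRIVER_API_VERSION) {
        fprintf(stderr, "%s: built for API %d, kernel provides %d, refusing to load\n",
                drv->name, drv->api_version, DRIVER_API_VERSION);
        return -1;
    }
    printf("%s: loaded\n", drv->name);
    return 0;
}

int main(void)
{
    struct toy_driver old_cam = { .api_version = 2, .name = "usbcam" };
    struct toy_driver new_net = { .api_version = 3, .name = "toynet" };

    load_driver(&old_cam);   /* rejected: API mismatch */
    load_driver(&new_net);   /* loads fine */
    return 0;
}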
Why do you hate Linux? Why do you not want it to succeed?
Or do you enjoy spending an entire day or longer just making some USB gadget work?
Serious question: (Score:5, Insightful)
"It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel..."
Since this is a webcam, I am assuming it is more of a personal/desktop/workstation type role. With that in mind, is there any compelling reason that you must upgrade to the latest and greatest kernel, as opposed to sticking with a previous kernel that works along with the webcam driver that worked with it?
I am under the impression that a lot of users upgrade to or acquire the latest and greatest software; that in and of itself is not a bad thing, but it is not always the smart thing. I'm referring to an "if it ain't broke, don't fix it" line of thinking.
Can you or someone else tell me what part of this issue I seem to be completely missing?
Re:Serious question: (Score:3, Insightful)
-Leigh
Re:Why upgrade... (Score:3, Insightful)
Under Fedora (on the other hand), the NTFS driver (fully open, and PART OF the kernel) is not a default-included module (Fedora is not alone in this distinction) - so the module must be rebuilt (or wait for a new RPM, and download that). It's not the fault of 'Linux', per se, but the kernel developers coul
Re:From the article... (Score:3, Informative)
So if you switch from gcc 3.3.1 to 3.3.1-r1 or something and compile your new nvidia module with it, then you *also* need to recompile your kernel; otherwise the module won't load...
Really, this is the same for every kernel module, so I don't know what the big deal is with that nvidia module bitc
Re:From the article... (Score:3, Insightful)
Only a very few of the top kernel developers are actually paid to do what they do. The rest of the developers are the countless real folks with other things to do, who submit patches (many of which actually end up in the kernel after a few bounces back and forth with a lead).
From the perspective of these folks, the kernel does exist for them to code.
I think what you are forgetting is that nobody can lock the
Re:From the article... (Score:5, Informative)
Re:From the article... (Score:5, Insightful)
Ours is not to wonder why. (Score:5, Interesting)
Re:Ours is not to wonder why. (Score:4, Insightful)
Re:From the article... (Score:5, Interesting)
Yet, where I work, the applications have to be specifically recompiled for each of the three versions of the Linux distribution currently in use.
While it may be mainly the in-house distribution designers' fault, it is a real mess, and a major reason for many of the engineers staying away from Linux.
Re:From the article... (Score:3, Interesting)
Oops! The link: http://elektra.sourceforge.net/
Please, have a look at it. Its perspective is smarter than it seems at first glance, and very promising as well.
This is a problem with glibc (Score:3, Informative)
Re:From the article... (Score:5, Informative)
Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".
Linux programs are more proactive about checking library versions. But you can install multiple versions, because the shared libraries usually have different names. Not so under Windows: Windows will only load the first version of a named DLL it finds, and hang onto it until you reboot. If that version fits your program, life is good; if not, well...
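To illustrate the "different names" point: on Linux the version is part of the shared library's file name (the soname), so two versions can be loaded side by side. A minimal sketch, assuming two hypothetical libraries libfoo.so.1 and libfoo.so.2 are installed; link with -ldl.

/* Sketch: two versions of a library coexist on Linux because the
 * version is part of the file name (soname). libfoo.so.1 and
 * libfoo.so.2 are hypothetical names used only for illustration. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *v1 = dlopen("libfoo.so.1", RTLD_NOW);  /* old ABI */
    void *v2 = dlopen("libfoo.so.2", RTLD_NOW);  /* new ABI */

    printf("v1 %s, v2 %s\n",
           v1 ? "loaded" : "missing", v2 ? "loaded" : "missing");

    if (v1) dlclose(v1);
    if (v2) dlclose(v2);
    return 0;
}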
Re:From the article... (Score:5, Interesting)
Untrue - there are SEVERAL rulesets that allow side-by-side loading of DLL's in Windows ever since Windows 2000 in fact.
The ruleset for library loading under Windows 2000/XP/2003 (assuming the developer/OEM of the software didn't "hardcode" a library path in a define or in the LoadLibrary API call with a path to a file) is as follows:
"DLL HELL" may occur on Win32 OS when 2 of SAME named "Dynamic Link Libraries" (.dll extension, executable with no loader header that cant self initialize) exist on a system and are accessible to a program. Since same name it can cause a program to "grab hold" of wrong version build to init for call functions it uses from it!
That can crash programs: the program expects a function it calls to return an integer, but the latest build of the same-named DLL sends back pointer data instead!
Microsoft overcame a great deal of "DLL Hell" using ActiveX controls (OLE Servers), which have a GUID/CLSID:
This is a 128-bit UNIQUE generated number in the registry that summons the CORRECT build the calling program requires by internally checking the
Microsoft has also put "Side-by-Side" loading into their newer operating systems (2000/XP/2003), which can load the older-type DLLs by name but into RAM separately, where each program accesses the copy it loaded, so no "collisions" occur.
This is a GOOD move, but it can still be problematic if the program finds the wrong version/build of the
Order of seeks used by the Win32 Portable Executable (PE) loader:
NT-based OSes, by default, use different approaches for 32-bit vs. 16-bit apps:
1.) For 32-bit apps, NT/2000/XP/2003 search for implicitly loaded DLLs at:
a. The folder the executable was loaded from
b. Current folder
c. %SystemRoot%\SYSTEM32 folder
d. %SystemRoot% folder
e. %Path% environment variable
* BUT if a DLL is listed under KnownDLLs here in the registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager
as a REG_SZ entry whose value name is the DLL name without the extension and whose data value is the DLL name with the extension, then the search order becomes:
aa. %SystemRoot%\SYSTEM32.
bb. The folder the executable was loaded from.
cc. Current folder.
dd. %SystemRoot% folder.
ee. %Path%.
KnownDLLs are mapped at boot time. Renaming or moving them during a session has no effect.
You can alter this behavior by including the 8.3 DLL name in the ExcludeFromKnownDlls entry, a REG_MULTI_SZ value, one DLL name per line in that listing.
(This makes NT believe that the DLL is not listed in KnownDLLs.)
2.) For 16-bit apps, Windows NT uses KnownDLLs for both implicitly and explicitly loaded DLLs. The value is at:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\WOW.
Here in that key, KnownDLLs is a REG_SZ value that lists 8.3 DOS-formatted DLL names separated by spaces. Without a KnownDLLs entry, WOW searches:
a. The current directory.
b. The %SystemRoot% directory.
c. The %SystemRoot%\SYSTEM directory.
d. The %SystemRoot%\SYSTEM32 directory.
e. The
f. The directories in your Path.
With
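If you want to sidestep that whole search-order dance, the usual trick is not to rely on it at all and hand LoadLibrary a full path. A minimal sketch; the path C:\MyApp\bozo.dll and the exported function name DoWork are made up for illustration.

/* Sketch: bypass the Win32 DLL search order by giving LoadLibrary an
 * explicit path. The DLL path and the function name "DoWork" are
 * invented examples. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HMODULE lib = LoadLibraryA("C:\\MyApp\\bozo.dll");  /* exact file, no search */
    if (!lib) {
        printf("load failed: %lu\n", GetLastError());
        return 1;
    }

    FARPROC fn = GetProcAddress(lib, "DoWork");
    if (fn)
        printf("resolved DoWork at %p\n", (void *)fn);

    FreeLibrary(lib);
    return 0;
}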
Re:From the article... (Score:4, Insightful)
But how exactly does that collide with the grandparent's point?
You have stated yourself that any installer is free to use any of the quirks you've described (in short: rely on the registry in the hope it's not messed up yet again, overwrite DLLs that other programs may be using, or waste disk space and memory by dumping yet another copy of bozo.dll to be loaded at runtime).
So it's only a matter of time until you run into a piece of software that picks the route that breaks your system.
Re:From the article... (Score:3, Informative)
Because installers for Windows programs silently replace DLLs with the versions they require... which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".
This hasn't been true since Windows 98. See Windows File Protection [microsoft.com] on MSDN.
Linux programs are more proactive about checking library versions.
Re:From the article... (Score:5, Informative)
Just for fun, search for files named "mfc42.dll" on your disk (or any other common Windows DLL; I'm not very up-to-date on these). How many are there? Are all of them up-to-date? Do any of them have security issues (a known buffer overflow, for example)? How much disk space do they use collectively?
You could distribute applications the same way on Linux, but people don't, because it would break the architecture of having your libraries centrally stored and managed. The Linux approach to library management is much superior, but it has the drawback of requiring that you use a dependency-aware package manager correctly. Apparently, you don't.
Re:From the article... (Score:3, Insightful)
So then I'm at the mercy of the distro people? I have to wait for them to support the app (if they ever do); I have to wait for them for new versions, long after the creator has released it. I thought free software was supposed to be decentralized.
The people who are motivated to create the app (the author) should be the one releasing packages, not some third party. Think what it would be li
Re:From the article... (Score:5, Funny)
ever used apt?
Re:From the article... (Score:3, Informative)
For his enlightenment: apt (the debian package manager) does all the "dependency-chasing" for you. You say "apt-get install kde" and it happens.
Re:From the article... (Score:5, Informative)
apt will not put you into 'dependency hell' unless at least one of the following preconditions is met:
1) You are running debian/unstable
2) You are overriding warnings (using the --force switch)
3) You are doing something stupid, as root
sincerely,
the truth
apt is VERY useful... (Score:3, Informative)
Re:From the article... (Score:5, Informative)
You don't understand. A package manager is a piece of software that resolves dependencies, downloads packages (from the Internet or local media) and installs them for you. That is why they are called package managers. Using one, you never have to "chase down" packages; it's all automated. There are many of them: apt, yum, up2date, urpm, emerge, etc.
Please get current instead of making a fool of yourself on the Web; this problem was solved a few years ago. Your favorite distro probably uses one, and you don't even know it. Which one is it, anyway, so I can give you the executive summary on its usage?
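For what it's worth, "resolving dependencies" is conceptually just a depth-first walk over a dependency graph before anything gets downloaded. A toy sketch of the idea follows; the package names and dependency table are invented, and real tools like apt or yum additionally handle versions, conflicts, downloads and already-installed packages.

/* Toy dependency resolver: install a package after its dependencies,
 * skipping anything already installed. The repo contents are invented. */
#include <stdio.h>
#include <string.h>

struct pkg { const char *name; const char *deps[4]; };

static const struct pkg repo[] = {
    { "kde",    { "qt", "libpng", NULL } },
    { "qt",     { "libpng", NULL } },
    { "libpng", { NULL } },
};

static int installed_count = 0;
static const char *installed[16];

static int is_installed(const char *name)
{
    for (int i = 0; i < installed_count; i++)
        if (strcmp(installed[i], name) == 0)
            return 1;
    return 0;
}

static void install(const char *name)
{
    if (is_installed(name))
        return;
    for (size_t i = 0; i < sizeof(repo) / sizeof(repo[0]); i++) {
        if (strcmp(repo[i].name, name) == 0) {
            for (int d = 0; repo[i].deps[d]; d++)
                install(repo[i].deps[d]);   /* dependencies first */
            break;
        }
    }
    printf("installing %s\n", name);
    installed[installed_count++] = name;
}

int main(void)
{
    install("kde");   /* prints libpng, qt, kde in dependency order */
    return 0;
}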
Re:From the article... (Score:5, Insightful)
You aren't going through bullshit. How is 'apt-get install foo' or 'yum install foo' or 'emerge foo' going through bullshit? It's one command! Do you want something easier? Must the OS read your mind and install the package for you?
These "200 other barely-related packages" are called dependencies. Pakcage managers don't just start downloading other packages willy-nilly. It installs those packages that your new package is dependant on. Some package managers can also download packages marked as suggested or recommended, but that is easily changed via a config option, menu choice, or dialog box.
Re:From the article... (Score:5, Insightful)
Going through this "bullshit" is actually easier than installing software in Windows. Assuming you use an apt-based distro, just type apt-get install foo. You don't even need to download the software; apt does it for you. The only interaction it requires is a confirmation if your package has dependencies. A minute or two later (depending on the size of the software and the speed of your connection), the magic happens: the software is installed! No chasing software on the Web, no downloading, almost no interaction (don't you find clicking Next, Next, Next stupid by now?). It's the best thing since sliced bread, yet you fail to see it. Again, which distro do you use, so I can give you clear instructions on how to use your package manager properly?
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:5, Insightful)
One of my biggest gripes, was how when you try to install the latest version of Foo, it requires the latest version of Bar, which in turn requires newer versions of X and Y and so on.
If you are using a more recent distro, this is far less of a problem, but the moment you move back to something older that cannot be updated as far as required... you end up with problems.
Specifically, I was trying to get some things working under Red Hat 6.2, a 5 year old distro. Many called me dumb for even trying such a thing, which I find quite entertaining considering how many still use 6.2 in server back ends, not unlike how many still use NT4, because it works.
Speaking of NT4, I found it far easier to backport a Windows-based app written for XP or 2k back to NT4, jumping back 5-9 years in terms of age, than it is to go from Fedora Core 2 to Red Hat 6.2, a jump of only 5.
This is why I so love Windows: consistent targets (within reason), where the number of system updates is finite and can be controlled.
As for this so-called 'DLL hell' people like to bad-mouth Windows for... I can't say I've ever had that issue myself... however, I did find it worse than hell trying to figure out how to run two different versions of glibc on a system without recompiling every single application requiring one or the other... Windows has many simple solutions for a problem like that.
Re:From the article... (Score:3, Insightful)
If you want to target Red Hat 6.2, target Red Hat 6.2. If you want to have it both ways and depend on something that's much newer (and thus has lots of dependencies that have to be updated) that's your choice. You can't have it both ways - you want to target an old OS but you want to use the newest libraries to save yourself so
Re:From the article... (Score:3, Interesting)
RH6.2 is stable enough when running software designed for RH6.2. Try forcing the install of Fedora software on it and you'll have to be damn careful not to break things. You say yourself in your message that you're not upgrading because it works. Well, guess what, installing your fancy new package on it is damn close to the full upgrade you're trying to avoid.
Essentially, 6.2 = stable,
Re:From the article... (Score:3, Insightful)
Re:kernel modules are not applications (Score:3, Insightful)
Yes, you're right, my mistake.
strong bias toward requiring people and organizations to release their software in source form
And this is the childishness of Linux. My Linux system has an nvidia TNT2 card because that's what I had around when I put the system together. Now I have two choices of drivers: nvidia's official one, or the barely-working nv driver in X. Now, if I felt like being a childish zealot, the nv driver would be a no
Nothing weird in that. (Score:3, Interesting)
Business as usual.
Huh? (Score:2, Interesting)
But it will happen, and probably this year (or early next).
About time.... (Score:5, Insightful)
Re:About time.... (Score:4, Interesting)
And, besides, we're approaching the time Linux kernels typically fork: a few versions into the series, the developers start to feel restricted by what they can't change in a stable kernel.
I just want to know how crap like this makes it to Slashdot. You'd think Taco would know better.
Re:About time.... (Score:3, Funny)
You must be new here.
Yes, of course it will. (Score:4, Insightful)
Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.
Re:Yes, of course it will. (Score:3, Informative)
InfoWorld [infoworld.com]
PC World [idg.com.au]
Re:Yes, of course it will. (Score:3, Informative)
If you look a level deeper -- ie read the article linked in the /. blurb -- you'll find that what they said was "2.7 will only be created when it becomes clear that there are sufficient patches which are truly disruptive enough to require it." Must be that this critical mass of patches is about to be reached.
Utter bunk (Score:5, Informative)
Why fork 2.6? (Score:3, Interesting)
What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all," so to speak. One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.
Re:Why fork 2.6? (Score:3, Informative)
I think there are some assumptions going around that because linux is monolithic, it is also a mess of spaghett
Re:Why fork 2.6? (Score:3, Interesting)
Forking another kernel tree will split the developers apart and slow down the development of the 2.6 kernel.
Ideally, actual development should have been all over with at 2.6.0. Patchlevels would only fix bugs, not introduce new capabilities and thus unstable code.
Too bad it doesn't work that way with Linux.
What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support.
W
New linux development process (Score:5, Insightful)
In fact, out of all the news articles out there about linux 2.7, it seems (not that this surprises me) that slashdot went out of its way to pick one laden with the most possible negative FUD and the least possible useful information about what really is news with 2.7. A much better writeup can be found at LWN [lwn.net]. In summary, the present situation is:
Idiot. (Score:5, Insightful)
i don't get it (Score:3, Insightful)
A couple of months ago there was a general upheaval over the fact that Torvalds et al. had decided not to fork a development tree off of 2.6.8, but rather do feature development in the main kernel tree. The message of the article (brushing aside the compiling-applications-for-each-kernel FUD) seems to be that they have changed their minds and will fork off an unstable kernel branch anyway.
What am I missing?
Pretty baseless article. (Score:5, Interesting)
There are plenty of forked kernel trees out there. Most continually merge in changes from Linus' tree, though.
A fork doesn't matter. What matters is what it represents. If there is enough popularity that the Linux community ends up using incompatible forks, then yes, we have a problem... but forking in no way necessarily leads to this.
As always, the available kernels in wide use will reflect what people actually want to use.
Beginning of FreeLinux, OpenLinux and NetLinux? (Score:2, Funny)
Is this the beginning of FreeLinux, OpenLinux and NetLinux?
What about SCOLinux or MSLinux?
Run, Chicken Little, Run! (Score:5, Funny)
Then where would we be?
My favorite quote from the article..... (Score:2)
I'm sorry but that's utter bullshite[sic]. I've never had to recompile applications because I upgraded the kernel...... have you?
--buddy
Letter to Editor... (Score:5, Informative)
Re:Letter to Editor... (Score:3, Insightful)
Christ.
I'm not making fun of you. What you said was completely accurate, but when you're dealing with clueless people, you need to speak simply and plainly. "Holy penguin pee?" C'mon.
Quick example:
To Whomever:
Your most recent article regarding the upcoming linux fork may be confusing to your readers. The current version of Linux is 2.6. As new enhancements and bug fixes are
It is Linus's fault. (Score:4, Insightful)
There needs to be a consistent driver API across each major version of the kernel.
A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.
The current situation is completely ridiculous. Anything which requires talking to the kernel (mainly drivers, but there are other things) needs either driver source code (watch your Windows people laugh at you when you tell them that) or half a dozen different modules compiled for the most popular Linux distributions. These days, that usually means you're going to get a RHEL version, and possibly nothing else. What happens when you're competent enough to maintain Fedora or Debian, but you don't have driver binaries? (Yeah I know, White Box or Scientific, but that's not the point.)
In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.
Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHVs to release binary-only drivers. So what? They're going to release binary-only drivers anyway. Others will simply avoid supporting Linux at all. LSB is going to make distributing userland software for Linux a lot easier, but until Linus grows up and stabilizes the driver API, anything which requires talking to the kernel is still stuck in the bad old days of the 1980s-1990s. Come on people, it's 2004, and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.
Re:It is Linus's fault. (Score:4, Insightful)
That's deliberate...
In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.
Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system. You have chosen the second option, so deal with it.
Look at it this way (Score:3, Insightful)
break them late.
If all 2.6.x kernels supported a driver, you'd just accept that driver... until the 2.8.0 kernel comes out. Then what? The vendor doesn't care; they got their money. They either want to sell you new hardware, or they've gone out of business. So you'd then expect Linus to add some serious bloat for supporting a driver ABI translation layer to let you run ancient drivers on modern kernels.
Then what if you upgrade to a 64-bit processor? You want Linux
Re:It is Linus's fault. (Score:4, Informative)
The first digit is the major version; aka 1.x, 2.x. The second digit is known as the minor version. From your examples, you appear to be asking for a consistent driver API across each minor version.
HTH, HAND.
Re:It is Linus's fault. (Score:3, Insightful)
Re:It is Linus's fault. (Score:4, Informative)
I completely agree and wish the kernel API were kept more stable. Which is saying a lot, as the Linux kernel API is currently way more stable than glibc, GCC, and most user-space libraries. Virtually all of my Linux trouble-shooting time over the last few years has been caused by API versioning issues in glibc and/or GCC.
Re:It is Linus's fault. (Score:3, Insightful)
I think you may be missing the point of OSS. These things (breaks to backwards compatibility) aren't really as much of a problem on Linux as they would be on Windows because virtually all of the code in question is available in source form. You can always fix the problem by recompiling stuff. On Windows if an operating system API changes you have to wait for whoever made all of your software to fix and recompile it and then redownload/repurchase it. This is part of why Microsoft is unable to fix many longst
Re:It is Linus's fault. (Score:3, Informative)
Weblogic 7
JDK 1.3.x
Now, our Weblogic people, whom we had no control over, decided to upgrade to Weblogic 8. Weblogic 8 requires JDK 1.4. It won't work with 1.3, or at least, I couldn't get it to work in my hours of trying (Note, 1.3 to 1.4 is just a "minor
Re:It is Linus's fault. (Score:3, Interesting)
How did this get modded insightful? Are you saying you know more about designing a kernel than Linus? Most hardware either has GPL drivers embedded in the kernel which automatically get updated to new changes in the API, or no driver at all. For those binary-only models I don't see nVidia having any problems. Maybe the people making the binary-only drivers need to learn how to do their job. Ever think of that?
Come on people, it's 2004 and it's n
Re:It is Linus's fault. (Score:3, Insightful)
Why? You tell a nice little story about running 4 different binary only drivers. You represent a very, very niche market.
Of anyone I've ever read or heard of, you are absolutely unique. I've run and installed Linux on hundreds if not thousands of computers. The only binary only driver I've ever had to install is the nVidia one. Even that is merely just because I wanted better performance. I could easily use the m
Irresponsible (Score:3, Interesting)
kernel panic (Score:3, Informative)
News in disguise ... (Score:3, Interesting)
erm
"We all assume that the kernel is the kernel that is maintained by kernel.org and that Linux won't fork the way UNIX did..right? There's a great story at internetnews.com about the SuSe CTO taking issue with Red Hat backporting features of the 2.6 Kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason, this work is not going to be supported by the open source community because it's not interesting anymore because everyone else is working on 2.6." My read on this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also give a bit of background on SuSe's recent decision to GPL their setup tool YAST, which they hope other distros will adopt too."
CC.
Is Mr. Krill some sort of AI? (Score:5, Funny)
Wow, this article is pure uneducated guesswork... (Score:5, Insightful)
Well, this was fun to read. This article is about as educated about the subject as the average donkey.
Uhm, what gave MS the edge in the 80s was cheap i386 (well, actually 8088) hardware, and a relatively cheap OS (MS-DOS). Unix servers cost an arm and a leg in those days, and many companies/people wanted a pc as cheap as possible. Buying an i386 in those days meant running DOS, and the "marketplace" standardized around MS-DOS.
Utter bull. Upgrade kernels as much as you like; it won't break software unless you change major/minor numbers, perhaps. The same thing will happen on Windows if you start running stuff built for Win2k on Win95. But this is rather a matter of features in the kernel, not compilation against the kernel.
And the big news is? This happens every couple of years, with stable versions having even minor version numbers and unstable versions having odd minor version numbers. This helps admins and users to effectively KNOW which versions are good for everyday use, and which versions are experimental and for developers.
Well, imagine a Beowulf cluster... How long have those patches existed? There's several ways to build a cluster as long as you patch your kernel.
And why on earth would they want to do that? Linux is on the right track, so why bother with an entire rewrite of good functional code with good design.
It's also focused on multimedia (xmms, mplayer, xine), webservers (apache), mailservers (sendmail, qmail, postfix)... I'd rather have people say that open source has focused on internet servers than on the stuff it needs to make an OS run and word processors. This is like saying that an oven is primarily used for making toast, while actually it also bakes cake, pizza and whatever you toss inside.
I'm sorry, this kind of article belongs in the trash bin. Either the journalist doesn't know what he's writing about, or he's being paid to know nothing about the subject. One of the things that keeps surprising me in business IT journalism is the lack of knowledge these people have about the subjects they're writing about.
Re:Wow, this article is pure uneducated guesswork. (Score:3, Insightful)
I'm not sure he's ever actually followed kernel development before.
For all those wanting to know whats going on without reading the linux-kernel mailing list, just run over to Kernel Traffic [kerneltraffic.org] -- a summary of the week's happenings on the list.
FUD (Score:3, Insightful)
The fact that patches exist, large or small, is what keeps the main kernel working. So for special implementations, patched kernels exist and everyone is cool with that. I have yet to see a patch that isn't from the main kernel, and I don't foresee a situation necessitating that it not be.
I think we should look into the motivation of this article that cites no specific information or sources. It's pure speculation.
So Some Want a Spoon.... (Score:4, Funny)
Forking is a better evolution process, as forking is only part of the process. The other part is re-integration of the new and wonderful things resulting from forking.
Kernel forking (Score:5, Funny)
But I can see things deteriorating rapidly: someone will want vfork for kernels, someone else will implement kernel-to-kernel pipes, someone else will make vfork obsolete, someone will complain about kernels not getting SIGCHLDs from their child kernels, etc.
What? No, of course I didn't read the fsck'n article
People should write in instead of calling "FUD" (Score:4, Informative)
Re: Is Linux about to fork?
Dear Kieren McCarthy,
I cannot believe this article:
http://www.techworld.com/opsys/news/index.cfm?New
The Linux kernel has historically alternated between stable (even-numbered) sets: 2.0, 2.2, 2.4, 2.6, and odd-numbered development sets. For this to be cast as a major disaster now that the next development kernel is expected to be starting up is extremely odd. If this is forking, it is forking only in the most pedantic sense, and yet Paul Krill's article paints this as a major problem. This portrays a simple lack of understanding of the Linux development process. The article is therefore more confusing than informative.
Yours sincerely,
It's called a "branch"!!! So much FUD for nothing (Score:4, Interesting)
Odd numbered kernels do not get released, only even numbered ones. The scheme in Linux development is odd = unstable, even = stable.
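Put another way, the stable/unstable rule is literally just the parity of the second number. A trivial illustration (the version numbers below are just examples):

/* The historical Linux numbering rule: even minor = stable series,
 * odd minor = development series. */
#include <stdio.h>

static const char *series(int minor)
{
    return (minor % 2 == 0) ? "stable" : "development";
}

int main(void)
{
    int minors[] = { 4, 5, 6, 7 };
    for (int i = 0; i < 4; i++)
        printf("2.%d is a %s series\n", minors[i], series(minors[i]));
    return 0;
}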
I won't be surprised to see something from OSDL calling this article a piece of crap by tomorrow.
GJC
They make it out like this is something NEW... (Score:3, Informative)
These guys are making it out like some majorly new thing's going to happen that's going to change everything. Did everyone suddenly forget about how 2.4 forked to 2.5 which became 2.6? Give me a break.
The Reporter Doesn't Seem to Understand OSS (Score:3, Insightful)
"Top contributors to the Linux kernel have been Red Hat and SuSE, he said. Also contributing have been IBM, SGI, HP, and Intel."
Usually, when talking about the Kernel, it's valid to at least note some individuals, such as, say, Linus.
Kernel Fork (Score:5, Informative)
In the Linux Kernel Development Summit back in July, the core developers announced they weren't creating a 2.7 development kernel any time soon (discussed here [linuxjournal.com] and here [slashdot.org]).
Developers liked the way things were going with the new BitKeeper in use by Linus and at the time, they didn't see the need to fork a 2.7.
Traditionally before BitKeeper, kernel maintainers would send Linus 10-20 patches at once, then wait for him to release a snapshot to determine whether or not the patch made it in. If not, they would try again. During the 2.5 development cycle, problems started over dropped patches and that is when Linus decided to try BitKeeper.
According to kernel maintainer Greg Kroah-Hartman, BitKeeper has increased the amount of development and improved efficiency. From 2.5 to 2.6, they were doing 1.66 changes per hour for 680 days. From 2.6.0 to 2.6.7 they were at 2.2 patches per hour, thanks to the ability to test a wider range of the patches that went into the tree. The new process is: 1) Linus releases a 2.6 kernel. 2) Maintainers flood Linus with patches that have been proven in the -mm tree. 3) After a few weeks, Linus releases an -rc kernel. 4) Everyone recovers from the load of changes and starts to fix any bugs found in the -rc kernel. 5) A few weeks later, the next 2.6 kernel is released and the cycle starts again.
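Taking those figures at face value, a quick back-of-the-envelope calculation shows the scale involved. This assumes the quoted rates are round-the-clock averages, which is my assumption, not something stated above.

/* Back-of-the-envelope check of the rates quoted above, assuming they
 * are round-the-clock averages (my assumption). */
#include <stdio.h>

int main(void)
{
    double rate_25 = 1.66;   /* changes per hour, 2.5 -> 2.6 */
    double days_25 = 680;
    double rate_26 = 2.2;    /* changes per hour, 2.6.0 -> 2.6.7 */

    printf("2.5 -> 2.6: roughly %.0f changes over %.0f days\n",
           rate_25 * 24 * days_25, days_25);
    printf("2.6.0 -> 2.6.7: roughly %.0f changes per day\n", rate_26 * 24);
    return 0;
}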
Because this new process has proved to be pretty efficient and is keeping maintainers happy, it was predicted that no new 2.7 kernel would be forked any time soon, unless a set of changes appeared that was big and intrusive enough to require a 2.7 fork. If that is the case, Linus will apply the experimental patches to the new 2.7 tree, then continue to pull all of the ongoing 2.6 changes into the 2.7 kernel as the version stabilizes. If it turns out that the 2.7 kernel is taking an incorrect direction, 2.7 will be deleted and everyone will continue on 2.6. If 2.7 becomes stable, it will be merged back into 2.6 or be declared 2.8.
In conclusion, there was no plan for a 2.7 any time soon thanks to maintainers working well in the current setup but this was not carved in stone. It might just be that big enough changes are calling for a fork.
No, the kernel is not forking (Score:4, Informative)
Somebody Should Point Out (Score:3, Informative)
Groklaw comes through in the clutch (Score:4, Informative)
Re:Uh-oh (Score:5, Insightful)
Secondly, Linux (the kernel) already "forks" every time a new development version is opened, i.e. 2.1, 2.3, 2.5, etc. All this is saying is that 2.7 is about to open.
"Fork" is not a dirty word.
Re:Uh-oh (Score:4, Interesting)
Tell me about it. When I try installing older programs I get compile errors because the libraries aren't backwards compatible, or
I think at some point everyone needs to get together and say OK. Everything from this point on will be compatible with everything from this point on. No more of this crap. One standard installation procedure for every distribution (but each distribution does things its own way). If RPMs are so horrible, then stop releasing everything as RPMs!
Re:Uh-oh (Score:4, Insightful)
Someone decided that this is "bad" (which finally opened the market for DOS/Windows), which I still don't fully get. If the software/system is still usable to me, I keep on using it (I'm still running my trusty old Atari in the studio for average MIDI sequencing). If I need to get a more powerful machine and/or the software will only be supported on this new machine -- how is this any different from today's Windows/Office situation?
With each new Windows the user interface changes (think of 3.11->95; XP anyone?), new data formats which are not backward compatible are introduced (.doc), and all they ensure is that you can load your old documents and please, please use the new formats as quickly as possible to make a lot of people buy the latest release...
If your Linux application breaks because it requires some stone-age whatever library, then just install it. For instance, people are used to carrying a shitload of same-but-of-different-version DLLs on Windows systems and don't seem to object to it.
With the wide acceptance of RPMs we also accepted the breaks-if-lib-version-of-the-day-is-not-present kind of behavior... (The next logical step would be including required libraries in the RPMs, just as every Windows program comes with all required DLLs.)
Re:Uh-oh (Score:3, Insightful)
You see there's a flaw in your logic.
Not fixing a bug to allow some bad code that uses said bug to run is just plain ignorant.
Re:I'd Like to Run Linux -- Just No Time (Score:2)
What you don't explain to my satisfaction is what accepting large patch sets into the Linux kernel has to do with easy Linux configuration.
Changes to the Linux kernel rarely require the user, or even the sysadmin, to learn anything.
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
erm.. when did you last try installing linux, and which distro did you use?
I have recently installed ubuntu and fedora 3 on hardware ranging from a fairly old PII 400 with matrox gfx and scsi to an amd64 3000 with radeon 9200 gfx and serial ata, to an ibm thinkpad r40e.
All of these installed with almost no effort and Just Worked. (apart from power management on the laptop which took about 3
Re: Run Windows? -- Just No Time (Score:3, Funny)
I imagine a future where I can buy a copy of Windows and it would work just like Linux. If this could be a reality today, I would maybe consider Windows for som
Re:I'd Like to Run Linux -- Just No Time (Score:5, Insightful)
Dude - just stick with Winblows. You have no time to "know linux", as you put it, so just stick with what you know. You can post on Slashdot either way.
Please, developers, don't dumb Linux apps/distros down so much that it looks and feels like Windows.
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
Because some people are overzealous in their free-software speeches to the masses. Linux users have a bad rep because of a few bad elements.
Everyone should use what they want to use (at home at least). You like MacOS? Be my guest. Windows? Go right ahead. Linux? Hell yeah
Re:I'd Like to Run Linux -- Just No Time (Score:3, Informative)
Install SuSE, RedHat, or Ubuntu: they are easier to install than Windows XP and come with tons of applications. They even come with excellent printed documentation in case you do need to look something up.
Even easier, buy a PC with Linux pre-installed:
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
I think Linux gets the blame, but you wouldn't expect Microsoft to write drivers for your camera.
Case in point: I bought an HP scanner/copier/printer about a week ago, and it took about two hours of constant reboots, driver conflict errors, and other problems to get it to work correctly. The end resu