Linus Torvalds: Backporting Is A Good Thing 232
darthcamaro writes "Looks like we don't need to speculate on what Linus' opinion is on backporting. Internetnews.com is running a story this morning that includes Linus' comments on the issue which was a /. topic yesterday.
When asked by e-mail to comment for internetnews.com, Torvalds wrote:
'I think it makes sense from a company standpoint to basically "cherry-pick" stuff from the development version that they feel is important to their customers. And in that sense I think the back-porting is actually a very good thing.'"
Personally, I find it underrated (Score:2, Interesting)
It's widely overlooked by pros!
Quit idolizing Linus Torvalds (Score:5, Insightful)
Re:Quit idolizing Linus Torvalds (Score:5, Insightful)
Re:Quit idolizing Linus Torvalds (Score:3, Interesting)
Eh. And then there's the mutants like me who reject all authority and don't get the celebrity thing.
Re:Quit idolizing Linus Torvalds (Score:2, Insightful)
I don't know which group is more annoying.
Re:Quit idolizing Linus Torvalds (Score:4, Funny)
I bet you're not Catholic.
Re:Quit idolizing Linus Torvalds (Score:2, Funny)
Surely you jest!
Every time Linus farts, 1001 Linux fan-boys are there to analyze the substance of said statement...
Re:True story... (Score:2)
If some guy was stalking me in an IHOP, following me into the bathroom, making an effort to sit in the next stall, and paying that much attention to my bathroom activities, my main motivation would be to leave the bathroom quickly too. If he forgot to flush, so be it, some freak was following him around. Stop stalking the man.
If you're such a great Windows lover, go stalk Bill Gates instead. But, you may not see him in an IHOP.
So does this become the party line? (Score:5, Interesting)
Re:So does this become the party line? (Score:2)
Re:So does this become the party line? (Score:5, Informative)
Hell no. Somewhat tangentially, I was having this discussion the other day with someone:
A machine I work on had been upgraded to 2.4.21-pre5, and I was a bit pissy because anything < -pre6 has the ptrace priv escalation flaw.
It turned out that he was using some kind of kerazy Debian kernel with the fix backported. Short of tracking him down and asking, I had no way to know this, because: I wasn't allowed to test it and see if the exploit worked (I don't know PPC shellcode anyway); the upgrader had not left his source tree or a changelog handy; the kernel didn't have any indicative flags in its name; and he hadn't installed it from a package.
Now, of course, you should be able to do anything you like, which includes cherry picking features into old releases, but in my opinion, this can create a lot of confusion. It'd be really embarrassing if the software you wrote only worked on your customised kernel and you didn't know it had been customised.
Version numbers allow us to identify the patch level and feature set of a piece of software and we use them to specify minimum requirements for packages. I think at the very least, if you're going to backport stuff, change the version number somehow ( private fork ) - your patched software and the original can no longer be treated as the same entity.
Ok, er, rant off. My point is that people not in favour of backports usually have some kind of reason for it, even if it's a crappy one like mine, and you'd need to convince me that my reasoning is bad before I'd drop the point.
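To make the rant concrete: from userland there is no general way to see whether a fix has been cherry-picked into a running kernel. A quick sketch of the little you can inspect (the version string shown is just an example):

```shell
# What userland can tell you about the running kernel: a version string
# and build info, but never a list of backported patches.
uname -r                          # e.g. 2.4.21-pre5, with or without the fix
cat /proc/version 2>/dev/null || true   # compiler and build date; still no patch list
```

Unless the builder changed the version string, a backported fix is invisible here.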
Re:So does this become the party line? (Score:5, Funny)
I was charged with upgrading a kernel, remotely, over the weekend, at a customer site. I did so, and I even remembered to ask first if there was anything special I should consider before going through with the task. No, just use the old configuration file, upgrade and let her rip.
Ok, while I was kinda nervous about doing this, I felt balls-ey enough to do it anyways. I took the proper precautions. I reconfigured lilo to boot off the copied-off old kernel by typing in "emergency" at the lilo prompt. Worst-case scenario, I could call in, ask the local operator to walk over to the machine, hit Ctrl-Alt-Del, type "emergency" at the prompt, and all would be well. Remember the words "worst case scenario".
It happened. All went well during compilation, and I went ahead and hit "shutdown -r now" at the root prompt over my ssh connection. The connection was subsequently reset by peer. Ok, I expected that. I'll go grab a beer and wait for the ping to start responding again.
I waited, waited... um, okay it's still not responding over the internet. Okay, where's that number... um, where did I put that number?
You can see where this goes from here.
Two hours later, I had no way of reaching the operator. The number I had in hand disappeared somewhere, and I had no idea where it went. To this day, I have no idea where I put that little slip of paper. Did it get folded into the infinite nooks that existed in my old, torn up wallet? Did it go to the same place where half of a good number of pairs of socks have disappeared to over the years? Where, where, where, where, where?
Fortunately, all ended well. They had our number at least, and I apologized, gave them the emergency procedure, and everything was working again. Hooray for the forces of good!
To this day, my heart still skips a beat whenever I reboot a server remotely.
------
P.S. as it turned out, I hadn't been told that the kernel module for the network card in use wasn't officially supported by the official Linux kernel at the time, and needed to be downloaded separately and recompiled along with the new kernel. The machine did boot successfully. It just did so without network support. D'oh!
Re:So does this become the party line? (Score:4, Interesting)
Well, with our own machines. We have a standing rule: the person that hoses a kernel upgrade is the one that gets to drive down to the colo and fix it. Needless to say, we practice on the machines that are in the nearest colo first, before we do the distant remote ones. No one wants to foot the bill of flying from Florida to New York to fix a mistake they made.
(fingers crossed) we've never hosed a machine too terribly far away. A few times we've forgotten to put in the network driver for particular cards, especially on odd-ball machines.
On a late-night "we have to have this server up *NOW*" install, I built out the server, threw the kernel together (on the console), and drove the machine to the colo. I plugged it in and turned it on, assuming everything was right, only to get home (usually at about 2am) and find it wasn't on the network. An hour and another kernel compile later, it was working.
Pretty much, our distant remote machines are very redundant, so if we hose one, it's not a big deal. If we hose 5, well someone is in for a plane ride.
It's usually worth taking the plane ride anyways, there's usually something non-urgent that's waiting to be done that can be done while we're there.
We did have an urgent one-task plane ride once. One of the facilities we're in had a brown-out. When the power came back, our connectivity didn't. The switch was unhappy. The colo's site tech tried resetting it, but that did nothing for us, so someone (me) took a plane ride carrying a switch and laptop. 20 minutes in the colo, 2 hours in taxis, and 8 hours on planes before I got home. That was a long night. Exhausted, I did get to have breakfast in the McDonalds in Times Square though (stopped by to see someone before they went to work).
Re:So does this become the party line? (Score:2)
In an ideal world a remote kernel install would go something like this:
1) Compile & install new kernel image.
2) Reboot fails
3) Remotely power cycle.
4) Lilo/Grub detects failed boot attempt and loads known good kernel
5) Re-compile & install kernel properly this time
6) Reboot
A procedure like that would probably save countless plane flights/car trips.
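Steps 2-4 map roughly onto GRUB legacy's "fallback booting" feature; here is a minimal menu.lst sketch, with made-up labels and kernel paths, assuming GRUB legacy semantics:

```text
# /boot/grub/menu.lst (GRUB legacy) -- fallback booting sketch
default saved        # boot whichever entry was last saved as default
fallback 1           # if the default entry fails, try entry 1

title   Trial kernel (entry 0)
        root (hd0,0)
        kernel /boot/vmlinuz-new root=/dev/hda1
        savedefault fallback     # arm the fallback entry as the next default

title   Known-good kernel (entry 1)
        root (hd0,0)
        kernel /boot/vmlinuz-old root=/dev/hda1
        savedefault              # restore this entry as the default
```

If entry 0 wedges and you remotely power cycle, the saved default is already the known-good entry; once the trial kernel proves itself, grub-set-default 0 makes it permanent.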
Re:So does this become the party line? (Score:2)
4) Lilo/Grub detects failed boot attempt and loads known good kernel
That's a good idea for a Lilo/Grub feature, possibly with some sort of watchdog. The watchdog would need to be quite a comprehensive one to deal with network problems, so perhaps just stick with remote powercycle.
The closest now is a combination of remote serial console and remote powercycle.
Re:So does this become the party line? (Score:2)
Something I would like is an "only next boot" option in LILO. Maybe there is one, but I've never seen it. Do something like this:
shutdown -r now --try-image=NewKernel
If it works, you change lilo.conf to use this new kernel, if it d
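For what it's worth, LILO does ship something close to this: the -R option sets a boot entry for the next reboot only, after which LILO reverts to its configured default. A sketch (the label name is made up):

```shell
# One-shot boot into a trial kernel; if it wedges, the next (power-cycled)
# boot falls back to the regular default entry automatically.
lilo -R NewKernel      # "NewKernel" must be an existing label in lilo.conf
shutdown -r now
# If the machine comes back up healthy, make the change permanent in lilo.conf.
```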
Re:So does this become the party line? (Score:4, Funny)
You recompile your Windows NT kernel?
Re:So does this become the party line? (Score:2)
Shut up and get back to work. I don't care if there's a time difference between our offices, you're suppose to be working, not reading
Re:So does this become the party line? (Score:3, Informative)
Re:So does this become the party line? (Score:5, Informative)
I *also* set up a cron job to reboot the machine every 20 minutes or so, so if something happens like it comes up without networking, it'll reboot back into the old kernel in 20 min. If it comes up, I can kill the cron job and remove the entry for the old kernel.
Saved my life more than once. Particularly on those pesky cheap co-lo boxes where you have to pay someone to reboot it for you.
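The safety net described above boils down to a single temporary crontab entry; a sketch (path assumed):

```text
# Temporary crontab entry while testing a new kernel: if the box comes up
# deaf (no network, no ssh), it reboots itself back into the old default
# kernel within 20 minutes.
*/20 * * * *  /sbin/shutdown -r now
```

Once you can ssh back in on the new kernel, remove the entry with crontab -e and retire the old boot entry at leisure.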
Re:So does this become the party line? (Score:5, Insightful)
The whole mess would have been avoided if he had set the EXTRAVERSION variable in the kernel's Makefile to something meaningful (i.e. make the kernel version 2.4.21-pre5_custom_04apr04) and posted his specific notes on that kernel someplace where all can find them (I can personally recommend an internal Wiki for this - it works wonders).
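For reference, the version tag lives at the top of the 2.4 kernel Makefile; a sketch with a hypothetical local suffix:

```makefile
# Top of linux/Makefile (2.4.x); only EXTRAVERSION is edited locally
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 21
EXTRAVERSION = -pre5_ptracefix_04apr04
```

uname -r on the resulting kernel then reports 2.4.21-pre5_ptracefix_04apr04, so the backport is visible at a glance.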
Also, if you release software after testing it on only one kernel, methinks there are some testing procedures to be beefed up!
Don't knock backports for their own sake - knock those who misuse them. (Upside the back of the head, preferably.)
Re:So does this become the party line? (Score:2)
Re:So does this become the party line? (Score:3, Informative)
I view things the opposite way than you WRT security fixes.
Each time there is a security fix, they issue a new kernel version (2.4.x -> 2.4.(x+1)), but I think that there should be an additional number to represent security fixes, so that you can have a new version without the security flaw but with the same functionality (hence, fewer chances of having things break).
Ideally we should have the feature set and the implementation numbers be different.
The feature set would evolve with a new minor number for eac
Re:So does this become the party line? (Score:2)
Re:So does this become the party line? (Score:3, Insightful)
I keep pointing this out on Slashdot, and for some reason people keep missing it: What comes out of the Linux Kernel Developers is a development release. Just like any development release structure of sufficient size, they have several working branches and several stable branches, but that doesn't mean that what you get from a "stable branch" is a valid production release.
When a vendor releases a Linux system, I expect their kernel to be a valid production release.
Re:Who cares about Mr Thorwaldes? (Score:2, Insightful)
*Very new here.
*Very brave--this usually is posted as AC
*Very stupid (note that this is not exclusive of other options).
*A troll (also not exclusive).
*Sporting balls of steel the size of a semi truck.
*trying to be funny. I really hope this is what it is, because you are going to get flamed.
BTW, care to provide links and/or sources? (in case you aren't trying to be funny).
Re:Who cares about Mr Thorwaldes? (Score:2)
Don't worry, I wash my hands frequently. But years of thumbing through a book, and it'll show wear.
Re:Who cares about Mr Thorwaldes? (Score:2)
Nah, just set it to verbose.
Re:Who cares about Mr Thorwaldes? (Score:2, Insightful)
That proves nothing. Actually, it may speak negatively about your skills. I passed an MCSE at age 12, and I sucked at age 12. I was a huge newb who thought that hackers were bad people etc. Reading your post, so do you. Unfortunately, I doubt you have the convenient excuse of being 12.
These are hard numbers and 100% FACTS! There are several more where these came from.
Oh, boo-hoo. Not to
Re:Who cares about Mr Thorwaldes? (Score:2, Funny)
I smile every time I see that.
As long as it doesnt b0rk my boxen.. (Score:5, Interesting)
Having a list of what exactly is backported would be optimal; that way, when device X b0rks after 3 months of uptime, you know it's possibly related to the newest version of that rock "stable" kernel you put into production.
Re:As long as it doesnt b0rk my boxen.. (Score:5, Insightful)
The beauty of Open Source. (Score:5, Insightful)
People seem to think of forking as bad. I think of it as "market research" -- whichever distro has the "best" philosophy will get the most users and/or customers (not necessarily the same thing - hence "best" was in quotes).
Re:The beauty of Open Source. (Score:5, Interesting)
First of all, speaking as a professional software developer, forking is bad. Forking inevitably involves extra work integrating changes from branch to branch, and can be justified only by some technical or business need. Forking also multiplies testing requirements.
I think we're talking about unnecessary forking as bad. For example, if vendor R backports features A, B, C, and D, while vendor S backports features A, C, D, and E, and vendor T backports features A, B, and E, writing software that'll work on "Linux" can already become complicated. In my example, you can only count on feature A being present, despite the collective effort of distros to backport 5 features 11 times!
The Linux software market, particularly on the desktop, is small enough as it is. If the market demand for backporting comes mainly from the desktop, then it might be better to establish a common "desktop branch" somewhere between the development and stable branches.
Re:The beauty of Open Source. (Score:5, Informative)
Now the userland libraries on the other hand....
Checkpoint & Trend Micro come to mind. (Score:2)
Re:The beauty of Open Source. (Score:2)
See this is the problem, you're thinking about it as a professional software developer. You imagine a development team and a product, it's not like that.
Forking inevitably involves extra work integrating changes from branch to branch, and can be justified only by some technical or business need.
And what if I want to do something to satisfy my own intellectual curiosity? Is someone supposed to stop me from doing it "for t
Re:The beauty of Open Source. (Score:2)
Further, the patches are all available to anyone who wants to apply them. I frequently fold in a sort of regular patchset consisting of some of the vendor patches and my own. When I use the extra features, I know very well that I am introducing a dependency that a vanilla kernel won't meet.
Hopefully, developers consider those dependencies carefully and either provide for backwards compatibility (perhaps with reduced functionality) using ifdefs or know that what they are doing is some sort of niche and hav
Re:The beauty of Open Source. (Score:2)
Instead, speak as a free and open source developer. Forking is good, because it leads to development which otherwise would never have occurred. The "extra work" reconciling the two branches later is a real issue, but since there would be nothing to reconcile without a fork, it seems cl
Re:As long as it doesnt b0rk my boxen.. (Score:5, Insightful)
The problem is that Linux serves three major customers: developers, desktops, and servers. The developers are well-served by the odd-numbered development branch. The servers need a rock solid branch, but tend to have very little need to support new hardware, so they should be happy with the even-numbered branch. The desktops still need stability, but also have to work with new hardware. Since the kernel developers don't have a formal process for this demographic, it's up to the distro maintainers to backport changes from the cutting edge.
This is not a good thing, though. If each desktop Linux distro picks a slightly different subset of features to backport, desktop Linux can become even more fractured than the Gnome/KDE division. If they can manage to work together, it might be better to establish a new common branch between the two traditional ones.
Re:As long as it doesnt b0rk my boxen.. (Score:4, Interesting)
A bigger problem is that the even numbers from Linus really aren't "stable", in the commercial sense. The early versions aren't bug-free enough and the later versions change too much. Furthermore, Linus' timing isn't the same as RedHat's. Linus doesn't care about 5-year support contracts, so they can't use his tree.
Re:As long as it doesnt b0rk my boxen.. (Score:2)
Oh really?
http://www.ussg.iu.edu/hypermail/linux/kernel/0
This is for the *official* branch, not a fork but it has obviously been requested (and is happening).
Cheers
Stor
Re:As long as it doesnt b0rk my boxen.. (Score:3, Informative)
It really is a good thing (Score:5, Interesting)
Re:It really is a good thing (Score:2)
HAPPY 420!!!!
Re:It really is a good thing (Score:5, Insightful)
The Linux kernel is forked anyway (Score:5, Interesting)
But that's nothing new. The kernel has forks in it anyway. The PowerPC kernel, for instance, exists as its own set of patches to the main kernel tree. Linux can't be everything to everyone so this is an inevitable development.
I think that's the point of open-sourcing your code. If someone else can write a better (more appropriate) one, more power to them!
Backporting has proven... (Score:5, Interesting)
However, for my own personal systems, I don't favor backporting over a kernel upgrade.
SCO fixes (Score:4, Funny)
(ducks to avoid flying objects)
Re:SCO fixes (Score:2, Funny)
Re:SCO fixes (Score:5, Funny)
cd
echo "" >
patch -p1 <
Do this and you'll have a kernel free of any SCO code.
Backporting a Good Thing (TM) (Score:5, Insightful)
The practicality here is that not everyone needs to upgrade to the latest kernel. Some production systems are stable enough as is and don't need the upgrade. Some may even become unstable as they get upgraded. Thus if some features are needed from the newer versions, backporting allows people to utilize just the features they need.
All part of that Open Source GPL Free as in Freedom thing. Even for those who consider it a waste of time and effort, those are things that the GPL entitles anyone to put effort into. Those who are adamantly against such wasted manpower should probably consider visiting SourceForge for a coronary.
Suse (Score:5, Insightful)
The power of the GPL is that you can never truly fork the way Unix was forked. If Suse wanted to be compatible with Red Hat's kernel, they could easily cherry-pick the changes necessary, and redistribute them themselves.
All very interesting coming from a company that had a proprietary installer. As far as I know RedHat has shipped everything open source for a very long time now.
Yup (Score:3, Informative)
Yea, RedHat ships everything GPL (or compatible) with the exception of their artwork. I installed Fedora last week for the first time (had been running mdk 9) and it's great. It's stable, runs great, highly configurable, etc. And, it seems to me to be among the "freer" of the distros.
I was SOOO irritated at RedHat stripping mp3 support at first, until I read why they did it. I gladly bit the bullet (and downloaded the
Re:Yup (Score:4, Informative)
try rpm -qi redhat-artwork and you'll see the following:
Name: redhat-artwork
License: GPL
Description: redhat-artwork contains the themes and icons that make up the Red Hat default look and feel.
Re:Suse (Score:3, Insightful)
They don't, not at all. Somehow I suspect that Novell CTO either:
1. Said something that makes more sense in context
2. Was speaking too generally and regrets what he said
Actually he probably regrets it either way.
Re:Suse (Score:4, Insightful)
Re:Suse (Score:2)
When he said "Unix(tm) was forked" he probably didn't mean BSD, which is not Unix(tm) anymore; he probably meant the Unix wars of the 80s, where a myriad of firms made commercial SysV Unixes that were all incompatible with each other. This fragmentation persists in the tribal memory and makes Unix hackers wary of any forks.
(The BSDs do just fine even with forks, I'm runn
Microsoft does it too sometimes (Score:5, Interesting)
Unlike what Linus advocates though, Microsoft doesn't do that routinely and users have to bitch and moan pretty bad to get what they need.
Welcome to Open Source (Score:5, Interesting)
You don't want them? FINE. Download and build a vanilla kernel at any time. It only takes a few minutes. Talk about a tempest in a teapot....
Re:Welcome to Open Source (Score:2)
Well, now that I've posted, I won't have that problem. This time.
BackPorting is a bad thing in general, but ... (Score:5, Interesting)
I believe Linus touched on this point pretty eloquently.
The basic issue that I believe is the root of the problem is that at the end of the day, the majority of Linux users and developers are generally in synch and moving along at a brisk pace, while the backported and modified kernels are effectively not supported except by the specific vendor that created the fork. This basically will always either lock the customer in or make it more difficult to integrate new features if the customer wishes to switch vendors. This is like turning forks into a mini Windows.
Just my $.02
Re:BackPorting is a bad thing in general, but ... (Score:2)
Re:BackPorting is a bad thing in general, but ... (Score:5, Insightful)
I know that it's a hypothetical situation, but I see it every day at work. The vendor that we are using has built their software and applications in such a way that we cannot migrate any of our applications off of Microsoft platforms because of very specific tie-ins to SQL Server, IIS, and Windows 2000.
The data could move just fine, but all the business logic would be toast.
I just can see this kind of thing happening with a forked and backported kernel. I don't think it is anywhere near as likely, but something to consider.
Re:BackPorting is a bad thing in general, but ... (Score:2)
Of course, if you are depending on a closed source application, then you may be out of luck, and stuck at the current kernel version forever. That can be the price of a frozen application. (OTOH, I'm still running Alpha Centuari, and the last time I checked CivCTP still worked on my Debian unstable. So it ain't necessarily so.)
You have, however, pointed to one of the reasons that I have
Re:BackPorting is a bad thing in general, but ... (Score:3, Insightful)
Let me explain. We're running a DB2/WAS installation. We bought all the hardware from IBM down to the IBM branded FC cards and FC switches. We then purchased several RHAS2.1 licenses for this installation.
Why? Enterprise. Pure and simple. We need immediate support from IBM and they have a very specific list of "supported configurations". Deviate and they won't touch you.
RedHat backporting fixes has on
Wow, four (Score:5, Funny)
"I believe"
"root of the problem"
"at the end of the day"
At the beginning of one sentence, you used four of the most overused means of beginning a sentence that I know of - impressive!
Re:Wow, four (Score:4, Funny)
I've been consumed by the corporate lingo machine. Comes from talking to the CEO too much.
Re:Wow, four (Score:3, Interesting)
i deal with several companies in the states, and email from their CTO and CEO's are always peppered with 'moving forward' and 'move forward'.
it drives me insane! like when Darl McBride kept telling open source folks shit like, 'yes, i know you're all concerned with whether or not our IP is in the kernel, but let's just move forward.'
how freakin asinine is that?
I have to disagree on a few grounds (Score:5, Insightful)
Often times I've had to administer an older RedHat linux machine that may be running a version two or more years out of date. A vulnerability comes up in a service that hasn't been patched in God knows when, and I have to fix the hole. The security advisory says version a.b.c is vulnerable and that I should upgrade to a.b.d or a.e.X. So I log onto that machine and check to see what version it's running and I see:
a.b.c-g
So is a.b.c-g vulnerable or not? Did RedHat back-port something from the a.e.X branch that fixes this? Now I have to dig through some RedHat mailing lists, which I may not be subscribed to, to find out. Now I know for a fact that when I see an a.b.c-h version for download from RedHat's site, I need to upgrade.
But what if it's the other way around?
What if I hear about a vulnerability in version a.e.X of that same software, but that the a.b.X version is safe. Did the vendor back-port some vulnerable bit of code from a.e.X into their a.b.c-g binaries? How am I to know?
Back-porting things like this makes it hell on a sysadmin who then has to subscribe to lots of different mailing lists, particularly if you're running different distributions.
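In practice, the authoritative answer to "was this fix backported?" usually lives in the package changelog rather than the version string. A sketch of the idea; the changelog excerpt below is made up to stand in for real rpm output:

```shell
# Real-world query would be:   rpm -q --changelog kernel | grep -i ptrace
# Here, a made-up changelog excerpt stands in for actual rpm output:
changelog='* Wed Apr 02 2003 Example Packager <pkg@example.com>
- backport ptrace privilege escalation fix (CAN-2003-0127)'
if echo "$changelog" | grep -qi 'ptrace'; then
    echo "fix appears to be backported"
else
    echo "no mention of the fix; assume vulnerable"
fi
```

This only works when the vendor keeps changelogs honest, which is exactly the complaint above.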
Re:I have to disagree on a few grounds (Score:5, Insightful)
That's what the errata pages are for. One quick stop at redhat.com/errata will answer all your questions.
What if I hear about a vulnerability in version a.e.X of that same software, but that the a.b.X version is safe. Did the vendor back-port some vulnerable bit of code from a.e.X into their a.b.c-g binaries? How am I to know?
Again, errata pages
Back-porting things like this makes it hell on a sysadmin who then has to subscribe to lots of different mailing lists, particularly if you're running different distributions.
Let's just think about Apache as an example. Say a bug comes out in Apache 1.3.26, and there's a fix in 1.3.29. Now let's say that you also bought an Apache mod a la Chilisoft to handle ASP, but it only works with 1.3.26. Would you feel good about RH updating to 1.3.29, instead of moving over those 2 or 3 lines that fix some buffer overflow in some
In addition there are open source modules. Imagine a problem with Apache 1.3.26, so RH puts out a fix for 1.3.29; in addition you'd have to release errata for PHP + all its modules, mod_ssl, mod_perl, mod_python, and more...
Backporting is the best way to run a stable and secure system. Micro changes to known good subsystems. In fact if you notice, Debian Stable is secure and stable because of the backporting of fixes and those releases last for decades.
Re:I have to disagree on a few grounds (Score:2)
Debian is just so
Re:I have to disagree on a few grounds (Score:2)
Well, let's see.... At the very end of Red Hat's Errata Page [redhat.com] you will see the following text:
Advisories for unsupported products
Errata that have been previously released for unsupported and End of Life Products are also available.
In that text, there is a link to this URL:
http://www.redhat.com/security/archives.htm [redhat.com]
Re:I have to disagree on a few grounds (Score:3, Informative)
Re:I have to disagree on a few grounds (Score:5, Interesting)
I typically do just that, but it isn't always as easy as it should be. RPM-based distributions (of which RedHat is one, by definition) tend to have obscure, hard-to-trace dependencies in their packages. Compiling from known good source downloaded from the software project's FTP site isn't always the best solution, particularly if you've let other system updates lapse.
Case in point. I came across a RedHat machine running a vulnerable version of OpenSSH. It was no longer being supported by RedHat, so I downloaded the latest release of OpenSSH Portable. The configure script complained that zlib was old and possibly insecure. This means I had to go in and compile a new zlib, and then make sure everything worked properly when linked with the new zlib. But now, my entire RPM tree is completely hosed. I might as well not even have RPM, since nearly every damn thing relies on zlib.
In checking RedHat's FTP sites, they had apparently also back-ported security fixes to the older version of zlib (IIRC), which of course meant OpenSSH would have still complained when I re-compiled, but I could be modestly sure it wouldn't be vulnerable, or could I?
Of course, practices like that eventually force you to upgrade your machine to a new version at some point in time, or hose the RPM database by compiling all new updates and their dependencies from source.
Thank God and Patrick for Slackware, where these problems are few and far between, and typically MUCH easier to resolve.
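Going back to the zlib episode: before replacing a library behind RPM's back, it is at least possible to survey the blast radius first. A sketch; the package and binary names are illustrative:

```shell
# Who depends on the library you are about to replace out-of-band?
rpm -q --whatrequires zlib           # packages requiring the zlib package
rpm -q --whatprovides 'libz.so.1'    # which package owns the soname
ldd /usr/sbin/sshd | grep libz       # what a given binary actually links against
```

It doesn't unhose the RPM database, but it tells you how big the hole will be before you dig it.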
Linus is the human voice of the kernel (Score:4, Insightful)
Re:Linus is the human voice of the kernel (Score:2)
One could say that, since he GPLed that creation, he waived the right to be an "Authoritative" voice. Nothing stops me (except for my refusal to touch GPLed code) or Redhat/Slackware/Joe Hacker from implementing something that Linus is dead set against, and he can't do a thing about it.
Re:Linus is the human voice of the kernel (Score:2)
Not quite. Having GPLed it and as others have contributed to it he no longer holds the entire copyright for linux kernel, that's true. But he owns the trademark for the name linux. So he is in every possible way still the a
Yeah, well... (Score:3, Funny)
Suckers! (Score:5, Funny)
TRFA - instead of going for the big headline (Score:5, Informative)
His final comments are in fact: "So you win some, you lose some, so far I suspect it's been mostly positive."
Here are some extracts from the article that illustrate this in a more even handed light:
"And even Torvalds' support of the practice comes with some caveats. "There are parts of it that worry me logistically," Torvalds wrote in the e-mail to internetnews.com. "What usually ends up happening is that the back-ported patches aren't being very cleanly maintained, and that ends up making it harder for people to do a good job of maintaining a coherent base for the stable kernel." "
"Although kernel 'coherency' is a victim of backported features, according to Torvalds, its impact is not long lived. "That lack of 'coherency' makes long-term maintenance harder (and is probably why the SuSE people aren't thrilled, because it also makes it harder to keep different trees reasonably well in sync)," Torvalds continued."
""But as long as the long-term goal ends up to drop the old stable kernel in favour of the development kernel anyway, the pain is likely to be fairly temporary.""
Bruce Perens also contributes some fairly even handed comments:
"However, Bruce Perens, a former Debian Project Leader and author of the Open Source Definition, wasn't as quick to compliment Red Hat.
"In a public post, Perens wrote, "I have a large customer who refuses to run Red Hat's kernel even when they run Red Hat's distribution. And it's just for the reason that [SUSE] talks about. The kernel is so far diverged from the main thread of Linux that it's a dead-end, and there's no hope of getting it supported from anyone but Red Hat. I don't know if they meant it as a lock-in play, but it works out that way. And my customer doesn't have patience for Red Hat's support.""
"Despite his comments, Perens told internetnews.com he didn't think the issue was that big a deal and hoped the community wouldn't over-react."
Seems everybody agrees now... (Score:5, Insightful)
The more standardized the installed Linux kernels around the world are, the easier it is for application developers to develop and test for all Linux platforms. Why do you think we don't have an Oracle certification for Debian? Because the Debian vanilla kernel is different enough from the RedHat kernel that all their testing is invalidated. Also, remember that there is not even a standardized way to test whether a certain feature is available in an installed kernel.
I think Linus Torvalds himself is always underestimating the importance of his vanilla kernel. His claim is always that it is not very important for a patch to be "in", as everyone who needs it can apply it himself. But as a matter of fact, it doesn't make sense to make an application dependent on a kernel feature, unless this feature is part of the vanilla kernel. Or unless you are willing to develop for "RedHat only", at which point the /. crowd will certainly cry foul.
The other point is, of course, that many forks imply a diversion of kernel development resources. For the record, one of the reasons Andrew Morton has given for accepting the 4G/4G patch into -mm is that he is aware that distributions will need it anyway, and he doesn't want to have distribution kernels diverge from vanilla as quickly as in 2.4. (Actually, now that objrmap is in -mm, it might not be necessary any more.)
GPL gives you choice (Score:5, Interesting)
This is great (Score:3, Interesting)
Go Linus!
Obligatory (Score:3, Funny)
You must be new here...
Forking is almost always ugly (Score:5, Insightful)
When we ported to a new version of unix, we had scripts that would compile test programs for each of 100s of known features that differentiated these unii (plural of unix?). Results of the test programs would auto-create the config program.
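That configure dance can be sketched in a few lines of portable shell; the feature probed (snprintf) and the output file name are invented for illustration:

```shell
# Miniature autoconf-style feature probe: try to compile and run a test
# program, then record the verdict in a generated header.
cat > conftest.c <<'EOF'
#include <stdio.h>
int main(void) { char b[8]; snprintf(b, sizeof b, "ok"); return 0; }
EOF
if cc -o conftest conftest.c 2>/dev/null && ./conftest; then
    echo "#define HAVE_SNPRINTF 1" > conf_feature.h
else
    echo "/* #undef HAVE_SNPRINTF */" > conf_feature.h
fi
rm -f conftest conftest.c
```

Multiply this by hundreds of features and dozens of unices and the nightmare described above follows naturally.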
It was a nightmare, one that I have not had to deal with as much in the Windows world. (re-reads sentence, sighs, puts on flame suit). It was one of the early strengths highlighted by the MS marketing dept ("There is only one windows, but hundreds of unixes").
I was hoping Linux wouldn't go down that path. Just the thought of YAST vs RPM etc gives me the willies. Forks can only lead the distros further apart.
Re:Forking is almost always ugly (Score:2)
There's a handful of words ending in -ice that were backformed from plurals in -ices, the correct singular of which was -ix. Therefore, a generic word for 'Unix' could be 'Unice' (Youness). (Unfortunately I can't remember the examples I was given.)
Re:Forking is almost always ugly (Score:2, Funny)
Re:Forking is almost always ugly (Score:2)
Re:Forking is almost always ugly (Score:3, Funny)
LOL man
What's wrong with it ?
Just Like This (Score:4, Insightful)
Source of Bruce Perens Comment (Score:4, Interesting)
>> However, Bruce Perens, a former Debian Project Leader and author of the Open Source Definition, wasn't as quick to compliment Red Hat.
In a public post, Perens wrote, "I have a large customer who... <<
The public post mentioned was actually this Slashdot comment here [slashdot.org].
Work vs. Home (Score:5, Interesting)
We don't want a similar situation for Linux users, where they don't upgrade because of the possible hassle. Backporting eases upgrading while you still get access to new features.
At home it's a whole different matter for those of us who love to tinker in our free time. I use Gentoo for that very reason. I want the latest and greatest at home, but damnit, not at work.
MFC (Score:3, Informative)
FreeBSD has been back-porting stuff from their development branch (CURRENT) into their STABLE branch (which is where FreeBSD releases are forked from) for years. They even have their own TLA for it, "MFC" == Merged From Current. Makes STABLE... well,... stable. Very stable. And secure.
here's why I do and don't (Score:2)
at work
1) on our cluster, because every redhat kernel we've run had some problem, either w/ performance or stability. I'm sure if I took the time to compile it w/ all the correct
No. (Score:5, Funny)
Perhaps one day people will be able to understand his thoughts and passions but, sadly, today isn't that day.
Re:why the fuck should we care (Score:3, Funny)
Actually no, his skills are much below the "hello world level". Pretty much right under the libc6 layer in fact.
You, on the other hand, seem like you couldn't even pass a urine test...
Re:So says God! (Score:2)
So I believe he'd give a nod of approval.
Re:So says God! (Score:3, Insightful)
Re:So says God! (Score:2)