
Linus Torvalds: Backporting Is A Good Thing

darthcamaro writes "Looks like we don't need to speculate on what Linus' opinion is on backporting. Internetnews.com is running a story this morning that includes Linus' comments on the issue, which was a /. topic yesterday. When asked by e-mail to comment for internetnews.com, Torvalds wrote: 'I think it makes sense from a company standpoint to basically "cherry-pick" stuff from the development version that they feel is important to their customers. And in that sense I think the back-porting is actually a very good thing.'"
This discussion has been archived. No new comments can be posted.
  • I'm glad someone prominent like Linus Torvalds is voicing support for this.

    It's widely overlooked by the pros!
    • by Anonymous Coward on Tuesday April 20, 2004 @09:56PM (#8924630)
      Come on guys, stop looking for what Linus has to say to make up your mind; it's ridiculous. Although I think he is right most of the time, many Linux users and developers seem to take his word for some Sacred Truth, and that's annoying! Striving for an alternative OS while letting yourselves be shepherded by some high-tech guru is quite paradoxical...

      • by xoran99 ( 745620 ) on Tuesday April 20, 2004 @10:11PM (#8924758)
        People seek leadership. It's simply a part of human nature. I can understand where people might develop a "What Linus says goes" mentality when he's already done so much.
        • People seek leadership. It's simply a part of human nature.

          Eh. And then there are the mutants like me who reject all authority and don't get the celebrity thing.

          • by Anonymous Coward
            You're not a mutant, you're just in the group that rebels against everything popular. There are millions out there that are just like you.

            I don't know which group is more annoying.

      • by Laser Lou ( 230648 ) on Tuesday April 20, 2004 @10:28PM (#8924871)
        Come on guys, stop looking for what Linus has to say to make up your mind; it's ridiculous. Although I think he is right most of the time, many Linux users and developers seem to take his word for some Sacred Truth, and that's annoying! Striving for an alternative OS while letting yourselves be shepherded by some high-tech guru is quite paradoxical...

        I bet you're not Catholic.


      • Surely you jest!

        Every time Linus farts, 1001 Linux fan-boys are there to analyze the substance of said statement...

  • by Anonymous Coward on Tuesday April 20, 2004 @09:50PM (#8924572)
    A lot of people stated they didn't like the idea of backporting. How many of you have changed your minds now that Linus has voiced his support?
    • Which discussion were you reading? Seriously, I read the previous discussion, and it seemed to me that most people appreciated backporting.
    • by YOU LIKEWISE FAIL IT ( 651184 ) on Tuesday April 20, 2004 @11:15PM (#8925152) Homepage Journal

      Hell no. Somewhat tangentially, I was having this discussion the other day with someone:

      A machine I work on had been upgraded to 2.4.21-pre5, and I was a bit pissy because anything < -pre6 has the ptrace priv escalation flaw.

      It turned out that he was using some kind of kerazy Debian kernel with the fix backported. Until I eventually found him and asked, I had no way to know this, because: I wasn't allowed to test it to see if it worked (I don't know PPC shellcode anyway), the upgrader hadn't left his source tree or a changelog handy, the kernel didn't have any indicative flags in its name, and he hadn't installed it from a package.

      Now, of course, you should be able to do anything you like, which includes cherry-picking features into old releases, but in my opinion this can create a lot of confusion. It'd be really embarrassing if the software you wrote only worked on your customised kernel and you didn't know it had been customised.

      Version numbers allow us to identify the patch level and feature set of a piece of software and we use them to specify minimum requirements for packages. I think at the very least, if you're going to backport stuff, change the version number somehow ( private fork ) - your patched software and the original can no longer be treated as the same entity.

      Ok, er, rant off. My point is that people not in favour of backports usually have some kind of reason for it, even if it's a crappy one like mine, and you'd need to convince me that my reasoning is bad before I'd drop the point.

      • by Surazal ( 729 ) on Tuesday April 20, 2004 @11:31PM (#8925243) Homepage Journal
        Something like this happened to me once. It was a comical chain of events, and admittedly the most embarrassing moment of my early career in things Unix-related.

        I was charged with upgrading a kernel, remotely, over the weekend, at a customer site. I did so, and I even remembered to ask first if there was anything special I should consider before going through with the task. No, just use the old configuration file, upgrade and let her rip.

        Ok, while I was kinda nervous about doing this, I felt ballsy enough to do it anyways. I took the proper precautions: I reconfigured lilo so the copied-aside old kernel could be booted by typing "emergency" at the lilo prompt. Worst-case scenario, I could call in, ask the local operator to walk over to the machine, hit Ctrl-Alt-Del, type "emergency" at the prompt, and all would be well. Remember the words "worst-case scenario".

        It happened. All went well during compilation, and I went ahead and hit "shutdown -r now" at the root prompt over my ssh connection. The connection was subsequently reset by peer. Ok, I expected that. I'll go grab a beer and wait for the ping to start responding again.

        I waited, waited... um, okay it's still not responding over the internet. Okay, where's that number... um, where did I put that number?

        You can see where this goes from here.

        Two hours later, I had no way of reaching the operator. The number I had in hand disappeared somewhere, and I had no idea where it went. To this day, I have no idea where I put that little slip of paper. Did it get folded into the infinite nooks that existed in my old, torn up wallet? Did it go to the same place where half of a good number of pairs of socks have disappeared to over the years? Where, where, where, where, where?

        Fortunately, all ended well. They had our number at least, and I apologized, gave them the emergency procedure, and everything was working again. Hooray for the forces of good!

        To this day, my heart still skips a beat whenever I reboot a server remotely.

        ------

        P.S. As it turned out, I wasn't told that the kernel module for the network card being used wasn't officially supported by the official Linux kernel at the time, and needed to be downloaded separately and recompiled along with the new kernel. It did boot successfully. It just did so without network support. D'oh!
        • by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Wednesday April 21, 2004 @02:51AM (#8926267) Homepage Journal
          Been there, done that. :)

          Well, with our own machines. We have a standing rule: the person who hoses a kernel upgrade is the one who gets to drive down to the colo and fix it. Needless to say, we practice on the machines that are in the nearest colo first, before we do the distant remote ones. No one wants to foot the bill of flying from Florida to New York to fix a mistake they made. :)

          (fingers crossed) we've never hosed a machine too terribly far away. A few times we've forgotten to put in the network driver for particular cards, especially on odd-ball machines.

          On a late-night "we have to have this server up *NOW*" install, I built out the server, threw the kernel together (on the console), and drove the machine to the colo. I plugged it in and turned it on, assuming everything was right, only to get home (usually at about 2am) and find it wasn't on the network. An hour and another kernel compile later, it was working. :)

          Pretty much, our distant remote machines are very redundant, so if we hose one, it's not a big deal. If we hose 5, well someone is in for a plane ride.

          It's usually worth taking the plane ride anyways, there's usually something non-urgent that's waiting to be done that can be done while we're there.

          We did have an urgent one-task plane ride once. One of the facilities we're in had a brown-out. When the power came back, our connectivity didn't. The switch was unhappy. The colo's site tech tried resetting it, but that did nothing for us, so someone (me) took a plane ride carrying a switch and a laptop. 20 minutes in the colo, 2 hours in taxis, and 8 hours on planes before I got home. That was a long night. Exhausted, I did get to have breakfast in the McDonald's in Times Square though (stopped by to see someone before they went to work).

          • Wow. You just described our situation to a T.

            In an ideal world a remote kernel install would go something like this:

            1) Compile & install new kernel image.
            2) Reboot fails.
            3) Remotely power cycle.
            4) Lilo/Grub detects failed boot attempt and loads known good kernel
            5) Re-compile & install kernel properly this time
            6) Reboot

            A procedure like that would probably save countless plane flights and car trips; a sketch of how close you can already get follows.
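
            For what it's worth, something close to step 4 is already expressible with GRUB legacy's fallback support, with the kernel's panic= option standing in for step 3 when the failure is an outright panic. A minimal sketch - the device paths and kernel names here are illustrative, not from any post in this thread:

                default saved
                fallback 1
                timeout 5

                title New kernel (test)
                    root (hd0,0)
                    kernel /vmlinuz-new ro root=/dev/hda1 panic=30
                    savedefault fallback

                title Known-good kernel
                    root (hd0,0)
                    kernel /vmlinuz-good ro root=/dev/hda1
                    savedefault

            Booting the test entry saves the fallback entry as the next default, so a panicking kernel reboots (after 30 seconds) into the known-good one. The hole is a kernel that boots but loses the network: GRUB never learns that anything failed, which is where the watchdog idea below comes in.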
            • 4) Lilo/Grub detects failed boot attempt and loads known good kernel

              That's a good idea for a Lilo/Grub feature, possibly with some sort of watchdog. The watchdog would need to be quite a comprehensive one to deal with network problems, so perhaps just stick with remote powercycle.

              The closest now is a combination of remote serial console and remote powercycle.
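
              One building block for that watchdog did exist at the time: the kernel's software watchdog, which reboots the machine if userspace stops feeding /dev/watchdog. A sketch using the standard softdog module and watchdog daemon (the margin value is illustrative):

                  modprobe softdog soft_margin=60   # reboot if /dev/watchdog goes unfed for 60s
                  watchdog                          # userspace daemon that keeps feeding it

              As noted, though, this only catches hangs and lockups; a kernel that comes up healthy except for networking will keep feeding the watchdog happily, so it complements rather than replaces the remote powercycle.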

            • That does sound like a good idea. I wonder how they'd implement it? What's the difference between rebooting because someone upgraded the kernel and failed, and rebooting because someone smacked the machine with their APC MasterSwitch after someone did something stupid?

              Something I would like is an "only next boot" option in Lilo. Maybe there is one, but I've never seen it. Do something like this:

              shutdown -r now --try-image=NewKernel

              If it works, you change lilo.conf to use this new kernel, if it d
        • I'm fairly sure that lilo has some sort of lilo -R command or something that will try rebooting once with a new config, and if it fails with the new config it falls back to the old. If you had another computer hooked up to the UPS of the machine you're upgrading, you could just kill the power to the machine you were upgrading with a new kernel and then start over.
          • by moon-monster ( 712361 ) on Wednesday April 21, 2004 @05:56AM (#8926861) Homepage Journal
            Yep. lilo -R configname makes it reboot into 'configname' on the next reboot only. I do this every time I upgrade the kernel on a remote machine.

            I *also* set up a cron job to reboot the machine every 20 minutes or so, so if something happens like it comes up without networking, it'll reboot back into the old kernel in 20 min. If it comes up, I can kill the cron job and remove the entry for the old kernel.

            Saved my life more than once. Particularly on those pesky cheap co-lo boxes where you have to pay someone to reboot it for you.
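
            Pieced together, that safety net looks something like this - a sketch, with the label, cron file, and timing made up for illustration:

                lilo -R newkernel                    # arm the new entry for the next reboot only
                cat > /etc/cron.d/deadman <<'EOF'
                */20 * * * * root /sbin/reboot
                EOF
                shutdown -r now

            If the box comes back reachable, delete /etc/cron.d/deadman and make the new entry the permanent default; if it comes up deaf to the network, cron reboots it within 20 minutes and lilo, having already consumed the -R entry, boots the old default kernel.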
      • by psychosis ( 2579 ) on Tuesday April 20, 2004 @11:47PM (#8925337)
        You're 100% justified in your frustration with the case you detailed, but the fault lies with your kernel developer/upgrader's kernel compile process.
        The whole mess would have been avoided if he had set the EXTRAVERSION variable in the kernel's Makefile to something meaningful (e.g. make the kernel version 2.4.21-pre5_custom_04apr04) and posted his specific notes on that kernel someplace where all can find them (I can personally recommend an internal Wiki for this - it works wonders).
        Also, if you release software after testing it on only one kernel, methinks there are some testing procedures to be beefed up!

        Don't knock backports for their own sake - knock those who misuse them. (Upside the back of the head, preferably.)
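
        For illustration, in a 2.4-era source tree the version string is assembled from four variables at the top of the kernel's top-level Makefile, so the whole fix is a one-line edit (the tag below just reuses the example above):

            VERSION = 2
            PATCHLEVEL = 4
            SUBLEVEL = 21
            EXTRAVERSION = -pre5_custom_04apr04

        uname -r on the resulting kernel then reports 2.4.21-pre5_custom_04apr04 - exactly the breadcrumb the parent poster was missing.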
      • It seems like the main problem with backporting is the lack of documentation of the backport rather than the backport itself.

      • I view things the opposite way than you WRT security fixes.

        Each time there is a security fix they issue a new kernel version 2.4.x -> 2.4.(x+1) but I think that there should be an additional number to represent security fixes so that you can have a new version without the security flaw but with the same functionality (hence, less chances to have things break).

        Ideally we should have the feature set and the implementation numbers be different.

        The feature set would evolve with a new minor number for eac
      • Really, any patch that gets put into a kernel which gets distributed in binary form should modify the version string to indicate that it's there. I personally think the security fixes should never change the main version number, but should be distributed in the form of a patch (or a set of patches) that applies to every kernel version that had the issue and that adds something to the version string to indicate that it fixed it. Security fixes should be released already backported to previous versions, which
    • Development-stable vs. production-stable.

      I keep pointing this out on Slashdot, and for some reason people keep missing it: what comes out of the Linux kernel developers is a development release. Just like any development release structure of sufficient size, they have several working branches and several stable branches, but that doesn't mean that what you get from a "stable branch" is a valid production release.

      When a vendor releases a Linux system, I expect their kernel to be a valid production release.
  • by SCSi ( 17797 ) <(moc.tpedav) (ta) (suvroc)> on Tuesday April 20, 2004 @09:50PM (#8924573) Homepage
    Then more power to them. My fear is always that development/new stuff backported to a "stable" kernel is going to cause system instability and weird stuff.

    Having a list of exactly what is backported would be optimal; that way, when device X b0rks after 3 months of uptime, you know it's possibly related to the newest version of that rock-"stable" kernel you put into production.
    • by Lehk228 ( 705449 ) on Tuesday April 20, 2004 @10:01PM (#8924674) Journal
      Well, if you switch kernels and a device fails, you should at least suspect the new kernel.
    • by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Tuesday April 20, 2004 @10:02PM (#8924682)
      We don't need a consensus. If RedHat has the means to support backports and the customers who want them, more power to them. If Debian Stable picks only the security patches and has an audience who likes that, awesome.

      People seem to think of forking as bad. I think of it as "market research" -- whichever distro has the "best" philosophy will get the most users and/or customers (not necessarily the same thing - hence "best" is in quotes).

      • by GlassHeart ( 579618 ) on Tuesday April 20, 2004 @10:33PM (#8924905) Journal
        People seem to think of forking as bad.

        First of all, speaking as a professional software developer, forking is bad. Forking inevitably involves extra work integrating changes from branch to branch, and can be justified only by some technical or business need. Forking also multiplies testing requirements.

        I think we're talking about unnecessary forking as bad. For example, if vendor R backports features A, B, C, and D, while vendor S backports features A, C, D, and E, and vendor T backports features A, B, and E, writing software that'll work on "Linux" can already become complicated. In this example, you can only count on feature A being present, despite the collective effort of the distros to backport 5 features 11 times!

        The Linux software market, particularly on the desktop, is small enough as it is. If the market demand for backporting comes mainly from the desktop, then it might be better to establish a common "desktop branch" somewhere between the development and stable branches.

        • by dmaxwell ( 43234 ) on Tuesday April 20, 2004 @11:30PM (#8925237)
          The kernel doesn't make much difference as far as binary compatibility goes. Very few binaries directly interact with the kernel. Going from a 2.4 to 2.6 kernel didn't cause a single piece of software I use to quit functioning. Neither did going from 2.2 to 2.4. I once dropped a RedHat kernel onto a Mandrake machine. Everything worked.

          Now the userland libraries on the other hand....
        • First of all, speaking as a professional software developer, forking is bad.

          See, this is the problem: you're thinking about it as a professional software developer. You imagine a development team and a product; it's not like that.

          Forking inevitably involves extra work integrating changes from branch to branch, and can be justified only by some technical or business need.

          And what if I want to do something to satisfy my own intellectual curiosity? Is someone supposed to stop me from doing it "for t
          • Further, the patches are all available to anyone who wants to apply them. I frequently fold in a sort of regular patchset consisting of some of the vendor patches and my own. When I use the extra features, I know very well that I am introducing a dependency that a vanilla kernel won't meet.

            Hopefully, developers consider those dependencies carefully and either provide for backwards compatibility (perhaps with reduced functionality) using ifdefs or know that what they are doing is some sort of niche and hav

        • First of all, speaking as a professional software developer, forking is bad. Forking inevitably involves extra work integrating changes from branch to branch, and can be justified only by some technical or business need.

          Instead, speak as a free and open source developer. Forking is good, because it leads to development which otherwise would never have occurred. The "extra work" reconciling the two branches later is a real issue, but since there would be nothing to reconcile without a fork, it seems cl

    • by GlassHeart ( 579618 ) on Tuesday April 20, 2004 @10:16PM (#8924789) Journal
      My fear is always that development/new stuff backported to a "stable" kernel is going to cause system unstability and weird stuff.

      The problem is that Linux serves three major customers: developers, desktops, and servers. The developers are well-served by the odd-numbered development branch. The servers need a rock solid branch, but tend to have very little need to support new hardware, so they should be happy with the even-numbered branch. The desktops still need stability, but also have to work with new hardware. Since the kernel developers don't have a formal process for this demographic, it's up to the distro maintainers to backport changes from the cutting edge.

      This is not a good thing, though. If each desktop Linux distro picks a slightly different subset of features to backport, desktop Linux can become even more fractured than the Gnome/KDE division. If they can manage to work together, it might be better to establish a new common branch between the two traditional ones.

      • by Anonymous Coward on Tuesday April 20, 2004 @10:42PM (#8924962)
        That's incomplete, because the original story was about RHEL, which is for heavily loaded servers (and has the new threading and other 2.6 features backported). I don't think anyone really cares about backporting drivers (e.g. SATA).

        A bigger problem is that the even numbers from Linus really aren't "stable", in the commercial sense. The early versions aren't bug-free enough and the later versions change too much. Furthermore, Linus' timing isn't the same as RedHat's. Linus doesn't care about 5-year support contracts, so they can't use his tree.
        • I don't think anyone really cares about backporting drivers (eg SATA).

          Oh really?

          http://www.ussg.iu.edu/hypermail/linux/kernel/0404.2/0019.html :)

          This is for the *official* branch, not a fork, but it has obviously been requested (and is happening).

          Cheers
          Stor
      • Kernel patches generally won't cause software incompatibility. These "fractures" are generally caused by system libraries, most notably the glibc. New versions of the GCC can also be a major source of headaches.
  • by Rosco P. Coltrane ( 209368 ) on Tuesday April 20, 2004 @09:51PM (#8924575)
    When 2.4 wasn't stable, I was glad to take advantage of USB with my 2.2 kernels using the 2.2.16 USB backport (no longer available from linux-usb.org apparently).
  • by gevmage ( 213603 ) on Tuesday April 20, 2004 @09:51PM (#8924582) Homepage
    My understanding of people's main complaint about the backporting that companies were doing was that it forks kernel development.

    But that's nothing new. The kernel has forks in it anyway. The PowerPC kernel, for instance, exists as its own set of patches to the main kernel tree. Linux can't be everything to everyone so this is an inevitable development.

    I think that's the point of open-sourcing your code. If someone else can write a better (more appropriate) one, more power to them!
  • by chipster ( 661352 ) on Tuesday April 20, 2004 @09:56PM (#8924624)
    ...to be very valuable at my company - especially for the servers we have that are connected to a SAN with Emulex HBAs. Without backporting, we'd spend lots of time hacking the kernels ourselves - which is fun and all - but not when project owners want their environments built yesterday.

    However, for my own personal systems, I don't favor backporting over a kernel upgrade.

  • SCO fixes (Score:4, Funny)

    by the MaD HuNGaRIaN ( 311517 ) on Tuesday April 20, 2004 @09:57PM (#8924634)
    Then perhaps someone should back-port the fixes that remove the SCO code.

    (ducks to avoid flying objects)

  • by Eberlin ( 570874 ) on Tuesday April 20, 2004 @09:58PM (#8924643) Homepage
    The argument against backporting is that a lot of wasted time/effort goes to something that could've been taken care of by upgrading to the latest/greatest kernel.

    The practicality here is that not everyone needs to upgrade to the latest kernel. Some production systems are stable enough as is and don't need the upgrade. Some may even become unstable as they get upgraded. Thus if some features are needed from the newer versions, backporting allows people to utilize just the features they need.

    All part of that Open Source GPL Free as in Freedom thing. Even for those who consider it a waste of time and effort, those are things that the GPL entitles anyone to put effort into. Those who are adamantly against such wasted manpower should probably consider visiting SourceForge for a coronary.
  • Suse (Score:5, Insightful)

    by augustz ( 18082 ) on Tuesday April 20, 2004 @09:58PM (#8924647)
    Very few vendors ship a TOTALLY plain kernel. I'm not sure why Suse makes such a big deal of theirs (if they even do ship a clean one - hard to believe).

    The power of the GPL is that you can never truly fork the way Unix was forked. If Suse wanted to be compatible with Red Hat's kernel, they could easily cherry-pick the changes necessary and redistribute them themselves.

    All very interesting coming from a company that had a proprietary installer. As far as I know, RedHat has shipped everything open source for a very long time now.

    • Yup (Score:3, Informative)

      As far as I know RedHat has shipped everything open source for a very long time now.

      Yea, RedHat ships everything GPL (or compatible) with the exception of their artwork. I installed Fedora last week for the first time (had been running mdk 9) and it's great. It's stable, runs great, highly configurable, etc. And, it seems to me to be among the "freer" of the distros.

      I was SOOO irritated at RedHat stripping mp3 support at first, until I read why they did it. I gladly bit the bullet (and downloaded the
      • Re:Yup (Score:4, Informative)

        by DA-MAN ( 17442 ) on Tuesday April 20, 2004 @11:42PM (#8925306) Homepage
        Yea, RedHat ships everything GPL (or compatible) with the exception of their artwork.

        try rpm -qi redhat-artwork and you'll see the following:

        Name: redhat-artwork
        License: GPL
        Description: redhat-artwork contains the themes and icons that make up the Red Hat default look and feel.
    • Re:Suse (Score:3, Insightful)

      by justins ( 80659 )
      Very few vendors ship a TOTALLY plain kernel. I'm not sure why Suse makes such a big deal of theirs (if they even do ship a clean one - hard to believe).

      They don't, not at all. Somehow I suspect that the Novell CTO either:
      1. Said something that makes more sense in context
      2. Was speaking too generally and regrets what he said

      Actually he probably regrets it either way. :)
  • by Rosco P. Coltrane ( 209368 ) on Tuesday April 20, 2004 @09:59PM (#8924661)
    Microsoft, too, sometimes cares to backport things. For example, IPP support from XP was backported to Windows 95 [microsoft.com] and Windows 98 [microsoft.com] after many requests from companies like Brother and from users.

    Unlike what Linus advocates though, Microsoft doesn't do that routinely and users have to bitch and moan pretty bad to get what they need.
  • by Anonymous Coward on Tuesday April 20, 2004 @10:04PM (#8924692)
    What are people bitching about? It's OPEN SOURCE. Redhat has made a business decision to backport functionality/fixes to an older kernel. They feel their customers need those fixes/features and they're supporting their customers. They're also making those fixes/features available to anyone else who wants to download them.

    You don't want them? FINE. Download and build a vanilla kernel at any time. It only takes a few minutes. Talk about a tempest in a teapot....

  • by CrackHappy ( 625183 ) on Tuesday April 20, 2004 @10:06PM (#8924710) Journal
    Backporting can be good in specific instances.

    I believe Linus touched on this point pretty eloquently.

    The basic issue that I believe is the root of the problem is that, at the end of the day, the majority of Linux users and developers are generally in sync and moving along at a brisk pace, while the backported and modified kernels are effectively not supported except by the specific vendor that created the fork. This will basically always either lock the customer in or make it more difficult to integrate new features if the customer wishes to switch vendors. This is like turning forks into a mini Windows.

    Just my $.02

    • Couldn't the customer just upgrade to the newer kernel and then use any vendor? If a customer needs a feature (and has the money or man-power), the customer is going to get it one way or another.
      • by CrackHappy ( 625183 ) on Tuesday April 20, 2004 @10:16PM (#8924794) Journal
        That's true, but at what expense? Let's say the vendor that a customer is using goes out of business and has done some significant backporting and customization of their kernel. Some of the vendor's applications depend upon this and thus would need some modification to make it work with a vanilla kernel. At that point, there could be significant cost to the customer.

        I know that it's a hypothetical situation, but I see it every day at work. The vendor that we are using has built their software and applications in such a way that we cannot migrate any of our applications off of Microsoft platforms because of very specific tie-ins to SQL Server, IIS, and Windows 2000.

        The data could move just fine, but all the business logic would be toast.

        I can just see this kind of thing happening with a forked and backported kernel. I don't think it is anywhere near as likely, but it's something to consider.
        • Well, you wait a year, and then move on to the next new stable kernel. That will have all the backported fixes in it.

          Of course, if you are depending on a closed-source application, then you may be out of luck, and stuck at the current kernel version forever. That can be the price of a frozen application. (OTOH, I'm still running Alpha Centauri, and the last time I checked, CivCTP still worked on my Debian unstable. So it ain't necessarily so.)

          You have, however, pointed to one of the reasons that I have
      • You, like most of Slashdot it seems, have obviously never heard the two words "supported configuration".

        Let me explain. We're running a DB2/WAS installation. We bought all the hardware from IBM down to the IBM branded FC cards and FC switches. We then purchased several RHAS2.1 licenses for this installation.

        Why? Enterprise. Pure and simple. We need immediate support from IBM and they have a very specific list of "supported configurations". Deviate and they won't touch you.

        RedHat backporting fixes has on
    • Wow, four (Score:5, Funny)

      by Anonymous Coward on Tuesday April 20, 2004 @10:16PM (#8924792)
      "The basic issue"
      "I believe"
      "root of the problem"
      "at the end of the day"

      At the beginning of one sentence, you used four of the most overused means of beginning a sentence that I know of - impressive!
      • by CrackHappy ( 625183 ) on Tuesday April 20, 2004 @10:21PM (#8924823) Journal
        Oh gawd, you're right.

        I've been consumed by the corporate lingo machine. Comes from talking to the CEO too much.
        • Re:Wow, four (Score:3, Interesting)

          by golgotha007 ( 62687 )
          well, if that's the case, then you need to throw in a few 'move forwards' in there.

          i deal with several companies in the states, and email from their CTOs and CEOs is always peppered with 'moving forward' and 'move forward'.

          it drives me insane! like when Darl McBride kept telling open source folks shit like, 'yes, i know you're all concerned with whether or not our IP is in the kernel, but let's just move forward.'

          how freakin' asinine is that?
  • by Alan Hicks ( 660661 ) on Tuesday April 20, 2004 @10:07PM (#8924723) Homepage
    While I don't believe that back-porting security fixes, or even new features is a major danger to forking an open source project (be it the kernel or something else doesn't matter), I do find it a danger as a sysadmin.

    Oftentimes I've had to administer an older RedHat Linux machine that may be running a version two or more years out of date. A vulnerability comes up in a service that hasn't been patched since God knows when, and I have to fix the hole. The security advisory says version a.b.c is vulnerable and that I should upgrade to a.b.d or a.e.X. So I log onto that machine, check what version it's running, and I see:

    a.b.c-g

    So is a.b.c-g vulnerable or not? Did RedHat back-port something from the a.e.X branch that fixes this? Now I have to dig through some RedHat mailing lists, which I may not be subscribed to, to find out. At least I know for a fact that when I see an a.b.c-h version for download from RedHat's site, I need to upgrade.

    But what if it's the other way around?

    What if I hear about a vulnerability in version a.e.X of that same software, but that the a.b.X version is safe. Did the vendor back-port some vulnerable bit of code from a.e.X into their a.b.c-g binaries? How am I to know?

    Back-porting things like this makes it hell on a sysadmin who then has to subscribe to lots of different mailing lists, particularly if you're running different distributions.
    • by DA-MAN ( 17442 ) on Tuesday April 20, 2004 @11:56PM (#8925376) Homepage
      So is a.b.c-g vulnerable or not? Did RedHat back-port something from the a.e.X branch that fixes this? Now I have to dig through some RedHat mailing lists, which I may not be subscribed to, to find out. At least I know for a fact that when I see an a.b.c-h version for download from RedHat's site, I need to upgrade

      That's what the errata pages are for. One quick stop at redhat.com/errata will answer all your questions.

      What if I hear about a vulnerability in version a.e.X of that same software, but that the a.b.X version is safe. Did the vendor back-port some vulnerable bit of code from a.e.X into their a.b.c-g binaries? How am I to know?

      Again, errata pages

      Back-porting things like this makes it hell on a sysadmin who then has to subscribe to lots of different mailing lists, particularly if you're running different distributions.

      Let's just think about Apache as an example. Say a bug comes out in Apache 1.3.26, and there's a fix in 1.3.29. Now let's say that you also bought an Apache mod a la Chilisoft to handle ASP, but it only works with 1.3.26. Would you feel good about RH updating to 1.3.29, instead of moving over those 2 or 3 lines that fix some buffer overflow in some .c file in the older version?

      In addition, there are open source modules. Imagine a problem with Apache 1.3.26, so RH puts out 1.3.29 as the fix; in addition, you'd have to release errata for PHP plus all its modules, mod_ssl, mod_perl, mod_python, and more...

      Backporting is the best way to run a stable and secure system: micro changes to known-good subsystems. In fact, if you notice, Debian Stable is secure and stable because of the backporting of fixes, and those releases last for ages.
      • Red Hat apparently takes the errata pages down for distros that are no longer supported. This is a real PITA and is just unforgivable. I dealt with this the other day, trying to figure out whether a RH 7.3 box was vulnerable or not. The only errata I could find were for RH9 and the enterprise series. If you can tell me where the errata pages are for older versions, I'll shit a brick, 'cause they don't seem to be indexed by the search engine, and if they're there, they're buried somewhere.

        Debian is just so
    • try "rpm -q --changelog |more" If you know what is vuln you will see the changes there.
  • by darthcamaro ( 735685 ) * on Tuesday April 20, 2004 @10:16PM (#8924788)
    Way too many voices from anonymous cowards in this discussion dissing Linus. Linus is the voice of the Linux kernel. Period. Sure, many, many others contribute, but it's his original creation. He holds the rights to the name Linux, so he has the 'EARNED' right to the authoritative voice. Nuff said.
      He holds the rights to the name Linux, so he has the 'EARNED' right to the authoritative voice. Nuff said.

      One could say that, since he GPLed that creation, he waived the right to be an "Authoritative" voice. Nothing stops me (except for my refusal to touch GPLed code) or Redhat/Slackware/Joe Hacker from implementing something that Linus is dead set against, and he can't do a thing about it.
      • One could say that, since he GPLed that creation, he waived the right to be an "Authoritative" voice. Nothing stops me (except for my refusal to touch GPLed code) or Redhat/Slackware/Joe Hacker from implementing something that Linus is dead set against, and he can't do a thing about it.

        Not quite. Having GPLed it, and as others have contributed to it, he no longer holds the entire copyright for the Linux kernel; that's true. But he owns the trademark for the name Linux. So he is in every possible way still the a
  • by big_groo ( 237634 ) <groovisNO@SPAMgmail.com> on Tuesday April 20, 2004 @10:17PM (#8924800) Homepage
    Slashdot [slashdot.org] seems to agree with Perens...
  • Suckers! (Score:5, Funny)

    by Anonymous Coward on Tuesday April 20, 2004 @10:26PM (#8924854)
    I love it when all the Linux drones bitch and moan as they follow Torvalds down the primrose path. Now us Mac users, for instance, think diff...hold on...Steve's doing another keynote...be right back...
  • by Magickcat ( 768797 ) on Tuesday April 20, 2004 @10:26PM (#8924858)
    Linus' opinion appears to be much more balanced than your selected excerpts and comments portray. The article is quite even-handed, and you appear to have completely misrepresented or perhaps misunderstood the complex ideas in it.

    His final comments are in fact:
    "So you win some, you lose some, so far I suspect it's been mostly positive."

    Here are some extracts from the article that illustrate this in a more even-handed light:
    "And even Torvalds' support of the practice comes with some caveats. "There are parts of it that worry me logistically," Torvalds wrote in the e-mail to internetnews.com. "What usually ends up happening is that the back-ported patches aren't being very cleanly maintained, and that ends up making it harder for people to do a good job of maintaining a coherent base for the stable kernel." "

    "Although kernel 'coherency' is a victim of backported features, according to Torvalds, its impact is not long lived. "That lack of 'coherency' makes long-term maintenance harder (and is probably why the SuSE people aren't thrilled, because it also makes it harder to keep different trees reasonably well in sync)," Torvalds continued."

    ""But as long as the long-term goal ends up to drop the old stable kernel in favour of the development kernel anyway, the pain is likely to be fairly temporary.""

    Bruce Perens also contributes some fairly even-handed comments:
    "However, Bruce Perens, a former Debian Project Leader and author of the Open Source Definition, wasn't as quick to compliment Red Hat.

    "In a public post, Perens wrote, "I have a large customer who refuses to run Red Hat's kernel even when they run Red Hat's distribution. And it's just for the reason that [SUSE] talks about. The kernel is so far diverged from the main thread of Linux that it's a dead-end, and there's no hope of getting it supported from anyone but Red Hat. I don't know if they meant it as a lock-in play, but it works out that way. And my customer doesn't have patience for Red Hat's support.""

    "Despite his comments, Perens told internetnews.com he didn't think the issue was that big a deal and hoped the community wouldn't over-react."
  • by greppling ( 601175 ) on Tuesday April 20, 2004 @10:31PM (#8924897)
    ...so while I am not completely against lots of forking, it seems worthwile to reexplain the problems with it:

    The more standardized the installed Linux kernels around the world are, the easier it is for application developers to develop and test for all Linux platforms. Why do you think we don't have an Oracle certification for Debian? Because the Debian vanilla kernel is different enough from the RedHat kernel that all their testing is invalidated. Also, remember that there is not even a standardized way to test whether a certain feature is available in an installed kernel.

    I think Linus Torvalds himself consistently underestimates the importance of his vanilla kernel. His claim is always that it is not very important for a patch to be "in", as everyone who needs it can apply it himself. But as a matter of fact, it doesn't make sense to make an application dependent on a kernel feature unless that feature is part of the vanilla kernel - or unless you are willing to develop for "RedHat only", at which point the /. crowd will certainly cry foul.

    The other point is, of course, that many forks imply a diversion of kernel development resources. For the record, one of the reasons Andrew Morton has given for accepting the 4G/4G patch into -mm is that he is aware that distributions will need it anyway, and he doesn't want to have distribution kernels diverge from vanilla as quickly as in 2.4. (Actually, now that objrmap is in -mm, it might not be necessary any more.)

  • GPL gives you choice (Score:5, Interesting)

    by richard_za ( 236823 ) on Tuesday April 20, 2004 @10:39PM (#8924946) Homepage Journal
    GPL gives the right to fork/backport the code, nobody is forcing you to use a forked/backported kernel. If your current installation is stable and you only need that feature - what is stopping you?
  • This is great (Score:3, Interesting)

    by ErichTheWebGuy ( 745925 ) on Tuesday April 20, 2004 @10:56PM (#8925043) Homepage
    Personally, I prefer backporting. I see no reason to upgrade my installations of kernel 2.4.x, etc. when the system runs just fine. It adds a lot of value to Linux if (at the very least) patches are backported for 2 or 3 major revisions. Look at the outcry when our pals in Redmond said no more Win98 support. That only underscores the need for backporting and supporting software for an extended period after the last "official" release.

    Go Linus!
  • Obligatory (Score:3, Funny)

    by cpu_fusion ( 705735 ) on Tuesday April 20, 2004 @10:58PM (#8925055)
    Looks like we don't need to speculate [..]

    You must be new here...

  • by spagetti_code ( 773137 ) on Tuesday April 20, 2004 @11:26PM (#8925217)
    I worked on a Unix product in the late '80s and early '90s. We supported 35 different variants/versions of Unix. Each one had a set of #defines throughout the code dealing with slight variations in libraries, in tools, in compilers, and so on.

    When we ported to a new version of unix, we had scripts that would compile test programs for each of hundreds of known features that differentiated these unii (plural of unix?). The results of the test programs would auto-create the config program.

    It was a nightmare, one that I have not had to deal with as much in the Windows world. (re-reads sentence, sighs, puts on flame suit). It was one of the early strengths highlighted by the MS marketing dept ("There is only one windows, but hundreds of unixes").

    I was hoping Linux wouldn't go down that path. Just the thought of YAST vs RPM etc gives me the willies. Forks can only lead the distros further apart.
    • The plural of Latinate English words ending in -ix is either -ixes (native) or -ices (pronounced 'iseez'), e.g. matrix -> matrixes/matrices, appendix -> appendixes/appendices. I would therefore suggest Unices if you refuse to touch Unixes.

      There's a handful of words ending in -ice that were backformed from plurals in -ices, the correct singular of which was -ix. Therefore, a generic word for 'Unix' could be 'Unice' (Youness). (Unfortunately I can't remember the examples I was given.)
      • by Anonymous Coward
        God I hate your stupid sig
    • > When we ported to a new version of unix, we had scripts that would compile test programs for each of hundreds of known features that differentiated these unii (plural of unix?). The results of the test programs would auto-create the config program

      LOL man :) You just described GNU autoconf.
      What's wrong with it?
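
      For anyone who hasn't looked inside a configure script, the core trick is exactly the one described above: compile a tiny probe program and record whether it worked. A sketch of one such probe (the feature and file names are illustrative):

          cat > conftest.c <<'EOF'
          #include <sys/statvfs.h>
          int main(void) { struct statvfs sv; return statvfs("/", &sv); }
          EOF
          if cc -o conftest conftest.c 2>/dev/null; then
              echo '#define HAVE_STATVFS 1' >> config.h   # feature present
          fi
          rm -f conftest conftest.c

      autoconf just generates hundreds of these probes from shared m4 macros, so nobody has to maintain the test scripts by hand per platform.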
  • Just Like This (Score:4, Insightful)

    by SomeOtherGuy ( 179082 ) on Tuesday April 20, 2004 @11:59PM (#8925393) Journal
    The freedom and power to backport, sideport, crossport, etc. is the reason why the Linux kernel is now running on everything from toasters and parking meters to rocket ships and space stations. How can that be a bad thing? Millions of devices are running on this stuff... how cool is that?
  • by tkittel ( 619119 ) on Wednesday April 21, 2004 @02:43AM (#8926230)
    From the article:

    >> However, Bruce Perens, a former Debian Project Leader and author of the Open Source Definition, wasn't as quick to compliment Red Hat.

    In a public post, Perens wrote, "I have a large customer who... <<

    The public post mentioned was actually this Slashdot comment here [slashdot.org].

  • Work vs. Home (Score:5, Interesting)

    by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Wednesday April 21, 2004 @02:52AM (#8926272) Homepage Journal
    I think many of those who complain about backporting don't have to manage as many servers as I do. Once I have things installed and sparkling, I don't want to be forced to upgrade. I want my servers to last some time, to keep my workload down. Constant upgrading and installation takes away valuable time that I don't have. I suspect RedHat backports for precisely those reasons: to keep the upgrade treadmill at bay. Look at how many people still use NT; the last statistics I saw put it at something like 60% still on NT. I presume upgrading those servers would demand much work and labour from the admins.

    We don't want a similar situation for Linux users, where they don't upgrade because of possible hassle. Backporting eases upgrading while you still get access to new features.

    At home it's a whole different matter for those of us who love to tinker in our free time. I use Gentoo for that very reason. I want the latest and greatest at home, but damnit, not at work.
  • MFC (Score:3, Informative)

    by Anonymous Coward on Wednesday April 21, 2004 @05:28AM (#8926791)
    Torvalds wrote: 'I think it makes sense from a company standpoint to basically "cherry-pick" stuff from the development version that they feel is important to their customers.

    FreeBSD has been back-porting stuff from their development branch (CURRENT) into their STABLE branch (which is where FreeBSD releases are forked from) for years. They even have their own TLA for it, "MFC" == Merged From Current. Makes STABLE... well,... stable. Very stable. And secure.
  • ( note : we're a redhat/fedora shop )

    at work :

    1) on our cluster, because every redhat kernel we've run had some problem, either w/ performance or stability. I'm sure if I took the time to compile it w/ all the correct .config options, stripped out a lot of the crap we don't need, etc., it would probably run just fine - but w/out the functionality that redhat's patches give you. So I take vanilla kernels, patch in what I need ( it's a lot easier to add patches and figure out what breaks, than to remove p
