2.4, The Kernel and Forking

darthcamaro writes "We all assume that the kernel is the kernel maintained by kernel.org and that Linux won't fork the way UNIX did... right? There's a great story at internetnews.com about the SuSE CTO taking issue with Red Hat backporting features of the 2.6 kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason, this work is not going to be supported by the open source community because it's not interesting anymore because everyone else is working on 2.6." My read is that this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also gives a bit of background on SuSE's recent decision to GPL its setup tool YaST, which it hopes other distros will adopt too."
  • by MartinG ( 52587 ) on Monday April 19, 2004 @12:32PM (#8905643) Homepage Journal
    News at 11.
    • by fr2asbury ( 462941 ) * on Monday April 19, 2004 @12:35PM (#8905705)
      You're moderated as funny, but it makes the point I was going to make. The vendors routinely do not ship with a vanilla kernel. I do not believe that RedHat/Fedora is not alone in shipping with a heavily patched and customized kernel. It's hardly forking; it's just the way they package up the kernel.
      • by mahdi13 ( 660205 ) <icarus.lnx@gmail.com> on Monday April 19, 2004 @12:47PM (#8905913) Journal
        You can say their kernels are patched, but from what I've seen they are more customized than patched. Most of their patches do not apply very well to other vendors' kernels or even systems. Ever try to install a SUSE kernel on a Red Hat install? Sure, they are both RPM-based, but their systems are truly unique and you will get many boot errors at the least.

        I am not against vendors making custom kernels at all, it's really a good idea. They make kernels that are designed for a specific purpose, Red Hat aims more for the server support/performance and SUSE has been focusing more on the Desktop install. There are optimizations done for servers that would be silly, or even degrading, for a desktop.

        I agree that this is not a matter of 'forking' the kernel, but packaging.
        • Ever try to install a SUSE kernel on a Red Hat install? Maybe I'm ignorant, but why would anyone try this? Just to see if it could be done? I liken this to installing a Ford motor in a Chevy car or vice versa... you may get the motor to fit, but the transmission won't bolt up without some modifications or an adapter kit... and in the end, what do you have? Is there a good reason for wanting to have a SUSE kernel on an RH install?
      • by CrankyFool ( 680025 ) on Monday April 19, 2004 @12:56PM (#8906053)
        Double negatives do not sometimes fail to not make your point clearer.

        For example, in this comment you seem to suggest that you believe RH/Fedora is alone in shipping with a heavily patched and customized kernel.

  • by Unregistered ( 584479 ) on Monday April 19, 2004 @12:32PM (#8905656)
    There are tons of kernel patchsets out there. Some (ck-sources, for example) include 2.6 code as well.
    • by dominator ( 61418 ) on Monday April 19, 2004 @12:55PM (#8906017) Homepage
      Definitely. I wonder if he's aware that the latest SuSe 2.4.x kernel has no fewer than 2400 patches, many of which were backported from the 2.6 series...
      • Of course he is. I really don't think he even cares about the practice. He's just trying to drum up more business for SuSE and take some away from Red Hat. Nothing more than capitalism at work.

        Move along, nothing to see here.
        • by Pharmboy ( 216950 ) on Monday April 19, 2004 @09:19PM (#8911930) Journal
          As a fellow capitalist (20 years in marketing), I can verify your statement. This applies to durable products in my case, but could apply to software as well.

          If the competition has it and you don't, it's because it is not reliable enough, will cause potential problems, is not fully compatible, or affects performance/comfort/durability in a negative way.

          If you have it and the competition doesn't, it is because they are technologically behind, outdated, incapable of incorporating change, or they just don't care about you.

          If you both have it, yours is better tested, proven, the correct version, or better documented.

          If they got it before you did, it was because you care enough about your customers to fully test it to prevent any potential problems. If you got it before they did, it is because you have better facilities/personnel for testing, so you can get it to market faster.

          Steel is stronger than plastic, unless mine is plastic. Then plastic is lighter than steel, and stronger, pound for pound. Bigger is better, unless mine is smaller. Then we use more modern parts, instead of old technology, so ours is smaller.

          Any feature my product has or doesn't have, I can give you a very good explanation that will demonstrate why we are better for having it / not having it. No matter the circumstances, we did it on purpose, and we did it because we care more than the evil/incompetent/small competition. If you give me at least 30 minutes, I will also produce graphs and charts that clearly demonstrate this point.

          As to what the magical "it" I keep referring to is, it doesn't matter. Whatever "it" is, we have a reason for having / not having "it" and why we implemented it first / last. (Please refer to the image for obvious proof.)

          You don't have to be evil to be in Marketing, but it really does help ;)
  • Since when? (Score:5, Insightful)

    by Progman3K ( 515744 ) on Monday April 19, 2004 @12:34PM (#8905695)
    Since when is it OK to develop a new kernel and abandon one that many users are still betting on?

    2.4 can have new things added to it; there's no law that says it can't.

    And if the 2.4 maintainers have found some good additions, well, all the better for users of 2.4
    • Re:Since when? (Score:5, Insightful)

      by n1ywb ( 555767 ) on Monday April 19, 2004 @12:48PM (#8905929) Homepage Journal
      Features from new kernels have always been backported to old kernels. Backporting is nothing new and it's often a Good Thing(tm). Lots of stuff from 2.3/2.4 has been backported to 2.2, and lots of stuff from 2.5/2.6 has been backported to 2.4, and hopefully more good stuff will be backported so that the people that for whatever reason won't or can't upgrade to 2.6 will not be left out in the cold.

      I don't know much about RedHat's backporting efforts specifically, although some people seem to think they've done a cob job of it. Perhaps that's the point the SUSE guy was trying to make? Not so much chiding RedHat for backporting, but for doing a crappy job of it.
  • Who cares? (Score:5, Insightful)

    by rehabdoll ( 221029 ) on Monday April 19, 2004 @12:35PM (#8905716) Homepage
    Why is this really interesting? Open Source / Free software is designed for forking. Why don't they just call it the "RedHat kernel" or something?
  • Yesterday's news? (Score:5, Insightful)

    by ivan256 ( 17499 ) * on Monday April 19, 2004 @12:35PM (#8905717)
    RedHat has been backporting patches forever. That doesn't make it a fork any more than the actual kernel forks. Look at the LinuxPPC tree for an example of a real fork. Look at rtLinux, uClinux, and all the other actual kernel forks before crying wolf.

    Kernel forks don't kill the kernel.
    • Re:Yesterday's news? (Score:3, Informative)

      by n1ywb ( 555767 )
      uCLinux has been merged back into 2.6.
      • Re:Yesterday's news? (Score:5, Interesting)

        by ivan256 ( 17499 ) * on Monday April 19, 2004 @12:54PM (#8906013)
        LinuxPPC is merged back in periodically too. Hence the reason that forks of Linux don't have the effect forks of Unix did. They're not all hiding their work from each other, and they're all allowed and willing to take the good from another fork and incorporate it into their own trees. Even if they don't, users are free to if they wish. Forking can be healthy in a free software environment.
  • Oh, please (Score:5, Insightful)

    by JoeBuck ( 7947 ) on Monday April 19, 2004 @12:35PM (#8905719) Homepage

    Things have always been this way. None of the major distributors ship a pure Linus kernel, including SUSE. Everyone includes patches. Backporting 2.6 features helps everyone because it subjects those features to more testing, meaning that 2.6 will be better as a result.

    Red Hat has more kernel hackers than anyone else, which means that they have the ability to support kernels with more hacks. So what SUSE is really saying is "How dare Red Hat use its competitive advantage?"

    Finally, it's not true that "everyone else is working on 2.6". People in the "open source community" are still maintaining 2.2, remember. Future 2.4 releases may well include some of the backported stuff developed by Red Hat and others.


    • Backporting 2.6 features helps everyone because it subjects those features to more testing, meaning that 2.6 will be better as a result.


      But is this testing in a different context or environment -- i.e., of a patch or feature in 2.4 instead of 2.6 -- useful? More precisely, is such testing as useful as the testing of the patch or feature in the environment for which it was designed, i.e., 2.6?

      • by wfberg ( 24378 ) on Monday April 19, 2004 @12:50PM (#8905954)
        But is this testing in a different context or environment -- i.e., of a patch or feature in 2.4 instead of 2.6 -- useful? More precisely, is such testing as useful as the testing of the patch or feature in the environment for which it was designed, i.e., 2.6?

        I'd think it's more useful to test it in 2.4 as well as 2.6 rather than only testing in 2.6. Sure, it's more work (work that RedHat is willing to do) but it may turn up bugs in conditions that do not occur in 2.6 yet (or not reproducibly, etc.)
    • Re:Oh, please (Score:5, Informative)

      by ananke ( 8417 ) on Monday April 19, 2004 @12:43PM (#8905866)
      "None of the major distributors ship a pure Linus kernel"

      Actually, Slackware does ship a vanilla kernel.
    • Re:Oh, please (Score:5, Informative)

      by ivan256 ( 17499 ) * on Monday April 19, 2004 @01:03PM (#8906150)
      Backporting 2.6 features helps everyone because it subjects those features to more testing, meaning that 2.6 will be better as a result.

      Unlikely. Testing of features that have been hacked back into an older kernel won't provide representative results. You'll only find the most glaring of bugs through that kind of testing, and the hope typically is you find those before you put them into production anyway.

      The real effect of backporting features is that it scares off third-party developers. Companies that want Linux drivers for their devices have to pick a version to work with. RedHat's backports are notorious for changing things in the driver interfaces. That means a vendor, who may not be informed as to the dynamics of the kernel development process, may choose to support only RedHat's version of the kernel, to specifically not support RedHat's version, or, worst and most likely, to not support Linux at all.

      I've done consulting and contracting for all three types of companies, as well as one who tried to support both RedHat's tree and Linus' tree from the same code base, and believe me when I say that it's a mess. Let's just hope that somewhere along the way RedHat decides to pick a versioning scheme that makes it easy to tell their features are in there at compile time, and starts providing change logs so you can figure out what they've done. As of right now their stuff is a nightmare.
    • Gentoo has vanilla 2.4 and 2.6 kernels.
  • by Anonymous Coward on Monday April 19, 2004 @12:36PM (#8905722)
    Before 2.6 existed, their 2.4.x kernels looked WAY more like 2.5.x kernels. I always thought this was dangerous, as what they were effectively doing was dressing up "alpha" 2.5.x code as "stable" 2.4.x code and letting it run riot on people's production servers.
    • What is the point in modding up an Anonymous Coward with no proof to back up his or her claim? Red Hat does not put out beta code and call it production. In fact, RH has 6 of the top 10 Linux kernel developers working for them. They know what they are doing. I personally don't care if RH puts 16,000 patches in their kernel and calls it Red Hat's super duper kernel 99.7. What matters is that RHEL is stable, and in my experience, RHEL is damn stable.
  • by Anonymous Coward on Monday April 19, 2004 @12:36PM (#8905729)
    ... What happens when the Linux kernel starts spooning? We will never see him again, because he will be spending all his time with his new girlfriend. That is until she kicks him to the curb, and he comes crawling back looking for his old friends again.

    You know you have all seen this happen a million times before.
  • by aussersterne ( 212916 ) on Monday April 19, 2004 @12:36PM (#8905732) Homepage
    Red Hat's applying a few patches.

    I use Red Hat's distribution.

    I don't, however, use their kernel; instead, I use a kernel.org kernel that I compile myself.

    The fact that this isn't just possible, but is easily (i.e. drop-in) possible, indicates that There Is No Problem Here.

    The kernel is binary compatible. The .config files are compatible (i.e. make oldconfig). The config/menuconfig/xconfig interfaces are the same. Red Hat's kernels track kernel.org version numbers, but just apply extra patches.

    This is not a "fork" of the kernel in any meaningful way.
    • by justins ( 80659 ) on Monday April 19, 2004 @02:22PM (#8907132) Homepage Journal
      The kernel is binary compatible.

      Whatever gave you that idea? Redhat has created kernels in the past with threading features that nobody else had. Software using those features would not run on a kernel without those nonstandard patches. That's binary incompatibility.

      Redhat has a history of doing stuff like this, as with their GCC 2.96 fiasco.
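      If you want to check at runtime which threading implementation a given glibc-based system is actually exposing, glibc can report it. Below is a minimal userspace sketch using the standard confstr() call; treating an "NPTL" answer as evidence of the backported kernel support is my assumption, not something the comment above states.

      /* Minimal sketch: ask glibc which thread library it is using.
       * A Red Hat 2.4 kernel with the NPTL backport plus a matching glibc
       * typically reports something like "NPTL 0.60"; a stock 2.4 setup
       * with LinuxThreads reports something like "linuxthreads-0.10". */
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
              char buf[64];
              size_t n = confstr(_CS_GNU_LIBPTHREAD_VERSION, buf, sizeof(buf));

              if (n == 0 || n > sizeof(buf)) {
                      fprintf(stderr, "could not determine thread library\n");
                      return 1;
              }
              printf("thread library: %s\n", buf);
              return 0;
      }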
      • GCC 2.96 may not have been really GCC, but its g++ was much more of a C++ compiler than 2.95's. All they did was branch from the development tree. Things that didn't compile with it didn't compile because of broken code, not because of a broken compiler. Furthermore, as far as compiled C++ binaries go, due to the magic of versioned libraries, C++ binaries compiled with all recent "real" gcc releases worked fine.

        I don't see how it's anyone else's business what compiler redhat uses. I know some project maintainer
  • by realdpk ( 116490 ) on Monday April 19, 2004 @12:37PM (#8905749) Homepage Journal
    Redhat is supporting a kernel they've used for some time now, by backporting patches. What's the big deal? *Lots* of people are going to be running 2.4.x for a long time, and having vendor support still available is great. We should be supportive of Redhat here.

    The worst thing they could do is drop support for 2.4.x entirely and mandate everyone upgrades to 2.6.x. Why make such a major change to something that works?
  • by Erik_ ( 183203 ) on Monday April 19, 2004 @12:39PM (#8905773)
    I *might* agree with the CTO of SUSE if Red Hat backported features but didn't support them. Yet that is not the case. Red Hat promises five years of support for their Enterprise Linux releases, and I'm willing to pay for such support. For my company's systems, I don't need to stay on the edge of new features, tools and other improvements. I NEED a stable operating system, requiring low change management (except for security issues).
  • .. and we like it. (Score:4, Insightful)

    by Outland Traveller ( 12138 ) on Monday April 19, 2004 @12:39PM (#8905777)
    Redhat backporting features into 2.4 for their own customers is a win for everyone and yet another victory for open source. Case closed.
  • GMAFB!!! (Score:4, Insightful)

    by tm2b ( 42473 ) on Monday April 19, 2004 @12:40PM (#8905806) Journal
    This is insane. What is the GPL about if not the freedom for an individual or business to make changes to the kernel and distribute those changes? If Linus wanted to maintain a single point of control, which is what this guy is indirectly advocating, he would have used a different license.

    This is a very dangerous attitude from a company that is supposed to be steeped in the GPL. "Work it our way or don't work it" is not an attitude that helps the open source movement. "Let a thousand flowers bloom" should be the theme.

    Sounds to me like SuSe is upset that they will have to either duplicate this work or use Red Hat's work in order to stay competitive.
    • Re:GMAFB!!! (Score:3, Insightful)

      by trashme ( 670522 )
      You seem to be taking this argument too far.
      "Work it our way or don't work it"
      I did not pick up that sort of attitude from the article. I gathered that his message was simply this: it would be better for the entire community if Red Hat used the 2.6 kernel so that the Linux community's resources can be spent moving forward with the new kernel.
      You may or may not agree with that, but don't go stretching his argument to an extreme. That's just false.
    • The article didn't argue that nobody had the right to fork anything or that the GPL wasn't about freedom.

      He merely said it wasn't a good idea to be backporting. Freedom also includes having opinions on the choices people make.

      I love when someone criticizes something, and people jump on it claiming, "but they have the RIGHT to do that!" Nobody was saying they didn't have the right--they were just criticizing the choice they made with that right. Free opinion, man.
  • The fear (Score:5, Interesting)

    by Effugas ( 2378 ) on Monday April 19, 2004 @12:41PM (#8905829) Homepage
    The fear is that a version of Oracle will come out that depends on 2.6-ish kernel features but doesn't actually work on 2.6 proper (i.e. it has dependencies on 2.4-era semantics). At that point, the only way to run Oracle -- no matter your toolchain -- is to use the Redhat kernel.

    --Dan
    • Re:The fear (Score:5, Informative)

      by GoofyBoy ( 44399 ) on Monday April 19, 2004 @12:46PM (#8905893) Journal
      If you are serious about Oracle + Linux, then you will run it under RedHat.

      When it's something like Oracle, you choose the application, then the OS to match.
      • Re:The fear (Score:5, Informative)

        by leandrod ( 17766 ) <l@dutras . o rg> on Monday April 19, 2004 @01:05PM (#8906167) Homepage Journal
        If you are serious about Oracle + Linux, then you will run it under RedHat.

        Not true. UnitedLinux and SuSE are also certified. In fact Oracle is compiled not on Red Hat, but on SuSE.

      • Re:The fear (Score:3, Interesting)

        by bigjnsa500 ( 575392 )
        If you are serious about Oracle + Linux, then you will run it under RedHat.

        Really? Why? Works fine with Slackware here.

        • Re:The fear (Score:3, Informative)

          by GoofyBoy ( 44399 )
          Because if you are using Oracle, you are generally doing something serious, which means that product support is critical. If Oracle does not support your Oracle/OS combination that means you don't get support.

          And for the record, here are the Linux distributions which Oracle will support:

          http://otn.oracle.com/tech/linux/htdocs/linux_techsupp_faq.html#Linux_Distributions
  • by bc90021 ( 43730 ) * <bc90021 AT bc90021 DOT net> on Monday April 19, 2004 @12:42PM (#8905843) Homepage
    # diff base.c base.c.original
    1417c1417
    < for (p = current->real_parent; p != &init_task; p = p->real_parent)
    ---
    > for (p = current->p_opptr; p != &init_task; p = p->p_opptr)

    It seems that RedHat's testing methods weren't so good, and they neglected to see that certain things had had their names changed. Since they didn't test their kernel, it made it difficult to track down that particular error when trying to recompile the kernel.
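    For out-of-tree code that has to build against both a stock 2.4 tree and Red Hat's patched one, a common way around that particular rename is a small compatibility macro rather than editing every loop by hand. This is only an illustrative sketch; the RED_HAT_LINUX_KERNEL define is borrowed from another comment in this thread, and the define your target tree actually provides may differ.

    /* Compatibility shim (hypothetical): Red Hat's 2.4 tree renamed
     * task_struct->p_opptr to ->real_parent (as in 2.6), so hide the
     * difference behind one macro instead of #ifdef-ing every call site. */
    #ifdef RED_HAT_LINUX_KERNEL
    #define TASK_PARENT(t) ((t)->real_parent)
    #else
    #define TASK_PARENT(t) ((t)->p_opptr)
    #endif

    /* Usage:
     *   for (p = TASK_PARENT(current); p != &init_task; p = TASK_PARENT(p))
     *           ...
     */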
  • USB (Score:5, Interesting)

    by CaptainZapp ( 182233 ) * on Monday April 19, 2004 @12:45PM (#8905890) Homepage
    Wasn't it the friendly folks at SuSE that implemented a backport of USB from 2.3 to 2.2 at the time (and some USB devices really did work)?

    Was that different or are they the most recent victims of marketing doublespeak?

  • by Godeke ( 32895 ) * on Monday April 19, 2004 @12:46PM (#8905900)
    This just goes to show that SUSE is relying on a full-steam-ahead adoption of any new version rather than a more carefully planned transition between versions. I still run 2.4 (conversion is set for a couple of months from now) and appreciate backported stable features. Providing the latest and greatest is a good thing, I guess, if you are in this individually or as a hobby, but I'm not interested in upgrading until a product matures and I have regression tested everything. SUSE seems to not understand that, which would disqualify it for me as an enterprise vendor.
  • yeah .. and.. (Score:5, Informative)

    by josepha48 ( 13953 ) on Monday April 19, 2004 @12:48PM (#8905926) Journal
    Redhat has done their own thing as far as kernels go for the past I don't know how many years.

    Their 2.4 kernel has support for lmsensors, which is not in the default 2.4. They have support for more drivers too. So what. Redhat will support these features if they put them in their kernel. They have to, especially since their new business model is selling the Redhat OS for a pretty penny.

    I would think that Fedora would just make their system 2.6 and 2.4 compatible when Fedora Core 2 comes out.

    I've had issues with Redhat doing things like this in the past, but you can still use the default kernel with Redhat; you just have to know what you are doing.

    SuSE has their own kernel too. They are just upset because they didn't think of it first. Some people will not want to upgrade to 2.6 because of its newness, but they will want the features. If these can be ported back, and supported by Redhat, then what is the big deal? It's open source, people, and as long as Redhat gives the source code away they are well within their rights under the GPL. Remember, the GPL says something about "use and modify as long as you give the source...". They always have done this and always will. So what!

  • Not a fork (Score:3, Insightful)

    by kundor ( 757951 ) <kundor@mem[ ].fsf.org ['ber' in gap]> on Monday April 19, 2004 @12:49PM (#8905940) Homepage
    Maintaining a separate patchset is normal, accepted, and not considered forking. They'll still be just applying these patches to the mainline tree, not severing development. Forking would be if they took the codebase and began changing it with a different set of developers, not just adding some code to each release of the kernel. EVERY distro does that. Perhaps not Slackware or Debian, I don't know, but Gentoo, Mandrake, Gobo all have heavily patched kernels.

    I'd see this mostly as SuSe posturing.

  • by jdavidb ( 449077 ) on Monday April 19, 2004 @12:49PM (#8905941) Homepage Journal

    We all assume that the kernel is the kernel that is maintained by kernel.org and that Linux won't fork the way UNIX did..right?

    First of all, some of us assume "the kernel" is /kernel/genunix or something else, because we're working on Solaris or something. (There's one unspoken assumption on your part: that we're all Linux users.) Secondly, I don't assume the kernel will never fork. Forking has often been very productive for Free Software programs, and the right to fork is one of the most valuable incentives for development. The kernel has forked all the time (remember the -ac tree from Alan Cox? how about uClinux?), and that's a good thing.

    So your explicit assumption, that "we" "all" believe the kernel will never fork, is wrong, as are your implicit assumptions that we all use Linux and that forking is a bad thing. Thus I'm not sure what the big deal is.

  • by miguel ( 7116 ) on Monday April 19, 2004 @12:51PM (#8905970) Homepage
    As a happy user of a 2.4 kernel with backported features from 2.6, I love the fact that Red Hat went the extra mile to provide this feature.

    We have been using NPTL extensively in the Mono debugger. Without it, it would be much harder to write the debugger for Mono.

    Miguel.
    • Ximian/SuSE (Score:3, Insightful)

      by noda132 ( 531521 )

      Is it just me, or does Novell really have a problem with the images of these two companies? It seems to me they're trying to give the impression that Ximian and SuSE are in competition....

      First that weird article about adopting QT across the board, now this. And I'm sure I'm forgetting some other such issues too. It gives me the impression that SuSE people and Ximian people have never even had a conversation with each other.

      • Re:Ximian/SuSE (Score:4, Interesting)

        by MenTaLguY ( 5483 ) on Monday April 19, 2004 @01:41PM (#8906660) Homepage
        I think Slashdot holds a lot of responsibility in this case for publishing unverified sources like the Qt article (and others).

        I might say that Slashdot also bears a lot of responsibility for publishing a summary that miscasts the SuSE CTO's argument -- he's more concerned about an extreme level of backporting (and discouraging adoption of 2.6 by letting people stay on 2.4 with backported features) than about backporting in general. SuSE backports stuff too.

        Not sure if I agree with him or not, but that's a separate issue.
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday April 19, 2004 @12:53PM (#8905989) Homepage Journal
    I have a large customer who refuses to run Red Hat's kernel even when they run Red Hat's distribution. And it's just for the reason that SuSE talks about. The kernel is so far diverged from the main thread of Linux that it's a dead end, and there's no hope of getting it supported from anyone but Red Hat. I don't know if they meant it as a lock-in play, but it works out that way. And my customer doesn't have patience for Red Hat's support.

    If you have a problem and you bring it to the kernel hacker who made the subsystem you're using, it's really very difficult for them to support Red Hat's thread. Generally they just say to look to the vanilla 2.6 kernel.

    Bruce

    • This does have advantages. In particular, when a backport is done, small bugs tend to get found. Basically, if redhat takes the time to backport and they find bugs, who cares.

      That does not mean that I will be running redhat any time soon.
  • by mjh ( 57755 ) <(moc.nalcnroh) (ta) (kram)> on Monday April 19, 2004 @12:53PM (#8905998) Homepage Journal
    RedHat is not alone in backporting changes to current software into a previous version. Debian does this too, albeit not with the kernel. Security patches come out all of the time for current software. But Debian may have a version of that software in its stable tree that isn't current but is still vulnerable and requires that patch. The Debian folks simply backport the patch and release an update.

    This is one of the things that makes Debian's stable tree live up to its name. It isn't a bug in open source, it's a feature. Now, of course, this puts additional pressure on Debian to ensure that their stable branch continues to work as expected, considering that the stable software is patched in a way that's unique to Debian. But if they want to do that, good for them. It's up to their users to decide if this is a good practice. And historically, it's been an excellent practice.

    Is SuSe saying that they don't do this? Are they saying that if you're using a piece of software that they distribute that's slightly older than current and a patch comes out for current, that they won't patch the old software? If so, that leaves SuSe customers with a horrible choice:

    1. Upgrade to the most recent software and possibly change features that you rely on, or
    2. Live with the vulnerability

    I wouldn't think that'd be good for business.

  • Kernel API changes (Score:5, Informative)

    by _Eric ( 25017 ) on Monday April 19, 2004 @12:57PM (#8906062)
    As a kernel module developer, I saw that those backports included API changes in the kernel. The API seen from a module is not the same in RH's kernel and the vanilla one (with the same version number). This is not something that cannot be overcome, but code gets bloated by constructs like this:
    #if LINUX_VERSION_CODE > KERNEL_VERSION(2,4,18) && RED_HAT_LINUX_KERNEL
    if (remap_page_range (vma, start, offset, len, vma_page_prot)) {
            return -EAGAIN;
    }
    #else
    if (remap_page_range (start, offset, len, vma_page_prot)) {
            return -EAGAIN;
    }
    #endif
    And it got even worse with RHEL3.
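    One way to keep that clutter out of individual call sites is to hide the signature difference behind a small wrapper. This is only a sketch under the same assumptions as the snippet above (the RED_HAT_LINUX_KERNEL define and the backported five-argument remap_page_range); it is not something Red Hat ships:

    #include <linux/version.h>
    #include <linux/mm.h>

    /* compat_remap_page_range: one place to hold the version test instead
     * of sprinkling it through every mmap handler. */
    static inline int
    compat_remap_page_range(struct vm_area_struct *vma, unsigned long start,
                            unsigned long offset, unsigned long len,
                            pgprot_t prot)
    {
    #if LINUX_VERSION_CODE > KERNEL_VERSION(2,4,18) && RED_HAT_LINUX_KERNEL
            return remap_page_range(vma, start, offset, len, prot);
    #else
            return remap_page_range(start, offset, len, prot);
    #endif
    }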
  • Stability (Score:5, Insightful)

    by crlf ( 131465 ) on Monday April 19, 2004 @12:59PM (#8906094)
    Both Red Hat and SuSE have been backporting fixes into older kernel versions; shipping 'older' versions of kernels is primarily due to stability requirements.

    Distributions elect to use a given kernel version every once in a while. By not keeping up to date with the latest kernel.org tree, they gain the advantage that their codebase is much slower moving and they are less likely to have new bugs introduced from outside sources. Doing so also gives them the ability to accrue intimate knowledge of the inner workings of that specific kernel revision.

    As distributions support a kernel, new bugs, vulnerabilities, hardware incompatibilities, and scalability issues arise. By selectively culling those single bits and pieces and patching their supported kernel, they are able to easily test the fixes without the larger risk of regressing in other areas.

    At first, this practice may appear to make the distributions look 'unfriendly' towards the opensource development nature of the Linux kernel, however this is far from the truth. As issues arise in the distro-supported kernel, fixes are also created which are later pushed upstream to the Linux kernel proper (as long as they aren't considered gross hacks that is).

    In essence, distributions settling on supporting specific kernel versions and patching them is very much in the open source spirit. OSS has the advantage that you may use any code drop you want, and if you fix something, the neighborly thing to do is to share the fix (which, under certain licenses, is enforced by law under some conditions).
  • by Anonymous Coward on Monday April 19, 2004 @01:01PM (#8906126)
    I work for the Dept of Defense. 3 years after I say we should go Linux, the shop has abandoned Windoze for our production (web, jakarta, Oracle) sites. Before we were running OpenBSD for firewalls and such but I was FT then and we could get away with patching and recompiling stuff.

    Now that I'm off-site and PT the responsible thing was to use a package system that was commercially supported. Enter Redhat. We run v2.1AS and v3ES/WS.

    This backporting stuff in kernel-land is nothing. It's WAY WORSE when it's userland stuff, e.g. Apache. RedHat releases an update for a security bug fixed in 1.3.29, but they don't actually upgrade to .29. They backport the changes to .26 and leave all their package information, banners, the whole kit and caboodle the SAME! Just a very minor build number gets cranked. Not even the changelog bothers to specify what CVE was addressed or that it's even a security update. Ditto OpenSSL, mod_ssl and everything else. The *ONLY* way I have of confirming the security patch is there is to download the SRPM and diff it against .29 and see if it's there.

    Naturally all the security check software is looking at banners and falls all over itself giving me warnings about vulnerable software when I know it's all patched. It makes a lot of work for me when our network minders run probes against our boxes and come up with all the errors and they run screaming to the dept heads with "hundreds of vulnerabilities!" and I have to go PROVE my boxes are up to date.

    THANKS a FREAKIN' LOT Redhat!!! How come the rest of your enterprise customers haven't tarred and feathered you over this STUPID practice? Track the damn source revisions, would you? It's one thing to want to provide "stability" but point releases are just that, fixes for broken features or security updates. The damn package should be clearly labeled 1.3.29 everywhere. It's one thing to force customers to go from 1.3 family to say 1.4 family (yes, I know, doesn't exist) and I can appreciate not being put down that path, but the current setup is just a disaster.

    According to my machines I'm running OpenSSL 0.9.6b, though the code is actually 0.9.6m.
  • by deanj ( 519759 ) on Monday April 19, 2004 @01:02PM (#8906141)
    OK, so I sit here and read many postings about why OSS Java would be a "good thing", and then I run across something like this.

    I have to say, the uproar over this doesn't make any of the "oh, it'll be fine" arguments that pro-OSS Java folks have been throwing around sound all that great.

    I mean, if the Linux kernel itself has this happening to it, what chance does Java have of preventing it, if it goes OSS?
  • inevitable (Score:3, Interesting)

    by ignavusincognitus ( 750099 ) on Monday April 19, 2004 @01:03PM (#8906149)
    Let's face it. If Linux is really to be widely adopted, the big players will push their own features. Someone like Red Hat or SuSE wants a unified look and feel, with no interoperability problems. Think about having the printer config tool from one open-source project and the print daemon from another. If they want them to work together, they have to exert a lot of control over the individual components.

    This is something SuSE does as well. And so will IBM -- just wait until a patch they write for mainframes isn't accepted by Linus for some reason.

  • by marz007 ( 72932 ) on Monday April 19, 2004 @01:05PM (#8906164) Homepage
    Folks,

    2.4 kernels are still being widely used to do real work in real-world applications. Just because the bleeding edge is well into 2.6 doesn't mean the rest of us, who have better things to do than compile kernels on a nightly basis, need to upgrade. A lot of applications require stability: long periods of time during which you can't make major changes, so as not to upset the development or even production environment.

    RedHat is just trying to keep their Enterprise customers happy and patched with security fixes and some minor feature enhancements. Like it or not, they are a real company and have to make real $$, which means they have to listen to their customers who pay that $$$. The customers can't or won't upgrade to the new 2.6 kernels right away; they need to bring it in-house, test, and redo their programs that are running production databases, programs, etc.

    Hell, RedHat 8.0 to RedHat 9.0 is painful enough for most folks. Now going to RedHat Enterprise or SUSE or Mandrake, etc.? That's painful -- read: expensive in time and money.

    Get over yourselves. I can compile custom kernels, but frankly I have a lot better things to do with my time. RedHat knows this, and they're helping their customers do the job of actually getting business done.

    I'm thinking of starting the process of going to a 2.6 based distro probably sometime in the Fall. This means it probably won't be in any production server until after New Years at the earliest.

    -=TekMage
  • by -tji ( 139690 ) on Monday April 19, 2004 @01:11PM (#8906251) Journal
    Enterprise customers are generally very careful about making significant upgrades to their servers. Security patches and application fixes are expected, but a new kernel throws them into a huge process of integration, compatibility, and stability testing that they don't want to be forced into. The same thing applies to application vendors.

    So, RedHat backports desired pieces from the 2.6 kernel, so they can give their customers a more manageable update process.

    While fast-paced updates are great from the hobbyist perspective, enterprise customers have a whole different set of priorities. This is one of the big things they touted for the RH Enterprise Edition: it is supposed to have a more manageable update process, sticking with the same core kernel for longer periods of time to ease support and management.
  • This is a good thing (Score:3, Informative)

    by Mike McTernan ( 260224 ) on Monday April 19, 2004 @01:25PM (#8906449)

    RedHat explain it here [redhat.com], and as a paying user of RHES 3.0 in an enterprise environment, I think this is a good approach for them to have. The features they have left out feel to me to be the riskier-sounding things that aren't essential, like the new I/O subsystem and scheduler tuning, while the things they have taken seem to be more applicable to the apps they may expect users to run, e.g. the O(1) scheduler, the native POSIX thread library, and Huge TLBFS.

    Interestingly on their page they also list 2.6 as not having Hyperthreading support, while their 2.4 does.

  • by jaylee7877 ( 665673 ) on Monday April 19, 2004 @01:26PM (#8906457) Homepage
    RedHat backports 2.6 features (actually they'd be 2.5 features) to provide the most powerful kernel that they can support (i.e. make it run stable). If RedHat was planning on taking 2.4 and moving in a different direction that would be a fork and it would be a problem. But RedHat has already announced that RHEL 4 will use the 2.6 kernel. Any vendor who builds an app that depends on backport patches and won't run on 2.4 or 2.6 vanilla is just plain stupid. Yeah, it can be done, heck you can lock yourself into pretty much any platform you want as a developer, but why? RedHat has made it clear that 2.6 is the future. That's good enough for me
  • FUD (Score:5, Insightful)

    by RichiP ( 18379 ) on Monday April 19, 2004 @01:33PM (#8906557) Homepage
    I'm surprised that someone from an open-source-supporting company would sling FUD like that. This statement sounds like something an old-school business practitioner would say to sell their product and discredit their competition.

    First of all, forking is not a bad thing per se. In fact, it sometimes leads to better code. In this case, Red Hat is not doing anything divisive. They're merely maintaining their old code.

    As for interfering with standardization, RedHat has done nothing but push for standardizing on the latest stable code to come out. They pushed gcc 3 back when people were bullish about it. They pushed for kernel 2.4 when people were saying nothing's wrong with 2.2. Even now in their Fedora product, they're pushing 2.6 early in the game.

    If anything, they're bringing the 2.4 crowd slowly into the 2.6 world by backporting features.

    Who is this CTO of SuSE? Sounds old-school to me.

    That said, I also noticed that there were no quotes in the article from Juergen Geck. I've become wary of news articles that try to capitalize on sensationalizing news stories. Perhaps this is just the author's interpretation, eh?
  • by blueZ3 ( 744446 ) on Monday April 19, 2004 @01:45PM (#8906719) Homepage
    We've used forking RedHat here for quite a while, but things just keep getting more and more forked up. If this doesn't forking stop soon, we're going to switch to some other, less forked-up distro.
  • by jkixonia ( 647795 ) on Monday April 19, 2004 @02:51PM (#8907469)
    One cannot call it a fork unless the upstream kernel will not contain the backported functionality... Redhat claims they verify that the backports are accepted upstream before they backport. However, managing this could be complicated.
