
Novell Changes Enterprise Linux Kernel Mid-Stream

darthcamaro writes "Enterprise Linux kernels, from Red Hat or Novell, don't change version numbers inside of a release, right? While that has been the case for the last decade of Red Hat and Novell releases, Novell is breaking the mold with SUSE Linux Enterprise 11 Service Pack 1. Instead of backporting new kernel features to the kernel they originally shipped with — which maintains software and hardware vendor certification — they've re-based their Linux kernel version altogether. '"There were some things that led us to update the kernel itself, which is something that we normally don't do: neither SLES 9 nor SLES 10 got a kernel update," Markus Rex, director of open platform solutions at Novell, told InternetNews.com. "But in this particular case, after deep discussion with our ISV and hardware vendors that gave us certifications, we felt a kernel update was the appropriate step to take."'"


  • That would be more consistent.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      12 is going to have a major update to the GUI, and some other obvious changes to make it more marketable. On the back end they have several teams working on projects with Xen, SAP... and they don't want to deviate from the timeline that they have presented.

  • It's not like it's a small jump either -- it's from 2.6.27 to 2.6.32, which means a whole boatload of changes, some of which can really mess up a production environment (IME, anyone?)

    • by cynyr ( 703126 )
      Why are you updating that package on production servers then? Maybe on the test/dev box.
      • And what do you do about security updates, and the occasional dependency nightmare when YaST throws in SP1 as a dependency?

        TFA:"The biggest thing is that, as a server operating system, we have to make sure that we run on the appropriate server chips," Rex said. "So the key decision factor for us was that we wanted to make sure we supported the newest hardware to the maximum capabilities."

        The first part sounds cocky to me. The second is understandable as long as hardware vendors play ball. The whole thing smells fishy to me.

          And what do you do about security updates, and the occasional dependency nightmare when YaST throws in SP1 as a dependency?

          The easiest way to fix that is by upgrading SLES servers to Red Hat. Does Novell still use 3 separate and incompatible update mechanisms?

            The easiest way to fix that is by upgrading SLES servers to Red Hat. Does Novell still use 3 separate and incompatible update mechanisms?

            Exactly. I always recommend RHEL for the enterprise. Personally I really don't care for Novell, especially with all their Red Hat bashing of late. Novell support sucks, and the community forum is not much better.

            To get back to your question, afaik it is only zypper now, which is also used in the openSUSE world. That obsoletes rug/zmd (formerly Red Carpet) and the zypper + YaST frontend combination that made 9 and 10 such a mess.

        • by spun ( 1352 )

          Are you seriously using 11 on production servers? We've yet to upgrade to SP3 of 10. 11 breaks a lot of things. Where did HA go? Replaced by some proprietary package. Will VMware tools work with the 11 kernel? Nope, sorry. Does it add anything in the way of security or stability? Ah, not so much.

            I'm not, but some customers of mine are. What I was told, it had to do with licensing between SLES and the OES version of SLES.

          • Re: (Score:3, Interesting)

            We're using SLES 11 on production servers with no problems. We decided to jump straight to 11 after getting burned by staying on 9.4 too long; nobody in the world supports it any more.

            As far as VMware Tools support on SLES 11 goes, it is supported according to VMware's official documentation:

            http://www.vmware.com/pdf/osp_install_guide.pdf [vmware.com] (start on page 18)

            • by spun ( 1352 )

              We upgraded all our SLES 9 to SLES 10 a few years ago and have been pretty happy with it.

              As for the tools, ah, have you tried them? The tools they provide for 11 are the new, mostly untested and unfinished open source tools. They don't work right for me: vmxnet3 is broken, and the tools come up as 'unmanaged' in vSphere Center. Time sync seems broken too, but that may be a different problem. In any case, they seem like open beta quality, not production ready. Don't get me wrong, I'm glad VMware is open sourcing the tools.

              • by jschrod ( 172610 )
                The broken time sync is a VMware problem that's independent of the respective distribution. A lot of my customers have it with RHEL as well.

                Interestingly, newer VMware Workstation products don't have it any more, while the server products still do. Lately, I've been waiting for other virtualization products to catch up so I can evaluate a switch.

                • by spun ( 1352 )

                  The broken time sync is a kernel timekeeping issue; you can change the kernel tick rate (HZ) to something around 100 instead of 250 or 1000. This allows the simulated clock hardware to keep up. The newer kernels use different timekeeping methods and should not have the same problem.

                  • by jschrod ( 172610 )
                    Changing the HZ setting doesn't always help; sometimes the clock source (hpet vs. pit) is an issue as well. Sometimes none of the methods listed in VMware's tech note about timekeeping issues helps.
                    • by spun ( 1352 )

                      Yup, I've had luck with pmtmr. But if your HZ setting is 1000, as it is with some older SLES and Red Hat kernels, nothing will work. We used to have some servers that were running ntpdate every minute as a cron job :(
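
                      For reference, the knobs discussed in this subthread are kernel boot parameters. A minimal sketch of what that looks like on a 2.6-era guest (paths and versions are illustrative, and note that divider= is a Red Hat-specific patch, not mainline):

                          # kernel command line in /boot/grub/menu.lst (values illustrative)
                          # pick a clock source the hypervisor emulates accurately:
                          kernel /vmlinuz-2.6.18 root=/dev/sda1 clocksource=acpi_pm
                          # on RHEL kernels built with HZ=1000, divider=10 cuts the
                          # effective tick rate to 100 without a rebuild:
                          kernel /vmlinuz-2.6.18 root=/dev/sda1 divider=10 clocksource=acpi_pm

                      On kernels too old for clocksource=, the older clock=pmtmr syntax selects the same ACPI power-management timer the grandparent mentions.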

          • by Vairon ( 17314 )

            What did 11 break? It's broken nothing for me.

            SLES 11 HAE is not some proprietary package. It's 100% OSS. It's made up of:
            Pacemaker (http://www.clusterlabs.org/)
            OpenAIS (http://www.openais.org/doku.php)
            DRBD (http://www.drbd.org/)
            OCFS2 (http://oss.oracle.com/projects/ocfs2/)

            You can get it here: http://download.novell.com/Download?buildid=jC1wpkedb7A~ [novell.com]

            VMware tools DO work with the SLES 11 kernel. You can get them here:
            http://packages.vmware.com/tools/esx/4.0latest/sles11/index.html [vmware.com]

            As for security, it adds SELinux support.

            • by spun ( 1352 )

              Hmph. Thanks for making me look like an amateur, buddy. :P

              I meant that the HA in 11 is not drop in compatible with 10. I thought it was a proprietary package, glad it isn't.

              And the vmware tools don't work 'out of the box'; like you point out, you have to go download them. And have you looked at them? Tried using them? Those are the new open source vmware tools; they will compile, yes, but they have not worked correctly with vSphere Center, at least not for me. The tools still come up as 'unmanaged'.

              • Try going Red Hat: maintenance/update licenses are cheaper, and your eDirectory will run on that also. It also runs on Solaris. Plus you have a lot fewer headaches Linux-wise with Red Hat. Do yourself a favor and look into it; SLES isn't all that. Although if you're using their proprietary stuff then you have vendor lock-in; their NSS filesystems come to mind, since you mentioned NetWare.

                • by spun ( 1352 )

                  As we have a Novell NetWare contract (which isn't going away), we get about 60 Linux licenses for free, so, no, not going to Red Hat. Not that I think it's any better anyway. I like SuSE. I'd prefer something Debian-based if I were going to switch.

          • Are you seriously using 11 on production servers? We've yet to upgrade to SP3 of 10. 11 breaks a lot of things. Where did HA go? Replaced by some proprietary package.

            HA was not replaced by a proprietary package at all. It was moved out of the default SLES install into an "add-on product" called SLES 11 HAE (HA Extension). It's still based on 100% open source components. Of course, it's an enterprise release and subscriptions are licensed separately. That's really the big difference: it's not included with SLES any longer. I believe this is the same model RHEL/RHCS uses. For what it's worth, the HAE is actually one of the best improvements in SLES 11, now based on OpenAIS and Pacemaker.

            • by spun ( 1352 )

              We are on version 4, and I can state with certainty the tools will not compile and work correctly. While the new open source VMware tools will work slightly better, they still do not function correctly.

              As we are a state agency, our upgrade timetables are a little slow. We're testing SP3 now. We had to upgrade a test server to VMware 4.1 to get the tools to work; the downloadable open source tools package for SP3 does not work as advertised, and the tools available on vSphere 4 do not work at all.

        • TFA:"The biggest thing is that, as a server operating system, we have to make sure that we run on the appropriate server chips," Rex said. "So the key decision factor for us was that we wanted to make sure we supported the newest hardware to the maximum capabilities."

          To the maximum capabilities

          That sounds suspiciously like, "we could backport to support all the hardware that RHEL supports, but that's hard, expensive work we can skip if we just drop in a newer kernel. Yeah, so we're passing on our costs to you."

          • I used to work with Novell's ancient, backported-to-death 2.6.8 kernels. Thousands of patches applied over a period of more than five years. Supposedly rock-solid, enterprise-grade, robust, deployment-ready, yadda yadda kernels. Except they weren't. They oopsed much more frequently than the 2.6.20 kernels that were bleeding edge at the time. And good luck trying to debug the crashes: because there were two totally different development lines, the Novell one and the mainline one, it was impossible to know if the bug even existed upstream.

            • Good info, thanks. Was there a lack of testing that led to the instability? No offence, but were there poor coding practices or a lack of skill among some contributors?

              I won't argue against the superiority of the open model for a second, but many customers want a buck-stops-here solution. I'll readily grant that commercial distro support doesn't always get you that either (sometimes I pick up work filling in those gaps).

  • by Josh Triplett ( 874994 ) on Thursday May 20, 2010 @12:47PM (#32282266) Homepage

    Enterprise distributions avoid kernel version upgrades for two distinct reasons: perceived stability and fixed API/ABI for third-party modules. In this case, upgrading from 2.6.27 to 2.6.32 may well improve stability, particularly since many other distributions plan to ship 2.6.32 in their next release as well. As always, any upgrade can lead to the occasional regression that enterprise customers hate, but hey, paid support means they'll get a fix. So, that just leaves API/ABI issues, hence the discussions with ISVs and such. Third-party modules keep becoming less and less important with each new kernel version, and I can readily believe that the pain of dealing with API/ABI issues no longer outweighs the benefits of new hardware support and features provided by 2.6.32.
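
    To make the third-party module problem concrete: an out-of-tree binary module is stamped with the exact kernel version it was built for, and the module loader rejects mismatches. A generic illustration (the paths and version strings here are made up):

        $ uname -r
        2.6.27.19-5-default
        $ /sbin/modinfo -F vermagic /path/to/vendor_module.ko
        2.6.27.19-5-default SMP mod_unload modversions

    If the vermagic string (and the modversions symbol checksums) don't match the running kernel, the module won't load, which is why a jump from 2.6.27 to 2.6.32 forces every ISV shipping binary modules to requalify.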

    • by cynyr ( 703126 ) on Thursday May 20, 2010 @01:05PM (#32282548)
      Third parties should be working to get their code included in the kernel, or should just deal with the changes. This has been said many times by the kernel developers.
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Kernel developers should use and support a stable ABI. This has been said many times by everyone else.

      • Re: (Score:2, Insightful)

        by Sycraft-fu ( 314770 )

        The kernel devs can say that all they like. 3rd parties are then free to say "Fuck you," and either not support Linux or do it poorly.

        Maybe there's something to be said for not being jerks about it and trying to meet people halfway. Try to give the hardware companies what they want; maybe they support your OS more.

    • by Kjella ( 173770 )

      Enterprise distributions avoid kernel version upgrades for two distinct reasons: perceived stability and fixed API/ABI for third-party modules.

      No: if it fixes 2% of your servers and breaks 1%, it's more stable. The reason they don't is that you don't break something that works. Oh sure, everybody will complain a little over things that have never worked, but the real anger comes when you get "I upgraded to $foo and now my sound/wireless/suspend/whatever is BROKEN!!!". That is pretty much the whole difference between service packs and an upgrade: if you have to start worrying about your server breaking between upgrades, then it's time to switch distros.

    • by simpz ( 978228 )

      Enterprise distributions avoid kernel version upgrades for two distinct reasons: perceived stability and fixed API/ABI for third-party modules.

      Not true, and almost everyone has missed the point. Enterprise distributions avoid kernel upgrades for the stability of the application ABI/API, not the kernel module ABI.
      That is all RH guarantees; you will be rebuilding the NVIDIA driver with every minor RHEL kernel change, for example.

      Enterprises want a stable base for their applications with no surprises.
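
      The standard way vendors cope with that rebuild treadmill is DKMS, which recompiles a registered module automatically whenever a new kernel is installed. A minimal dkms.conf sketch (the module name and version here are hypothetical):

          PACKAGE_NAME="examplemod"
          PACKAGE_VERSION="1.0"
          BUILT_MODULE_NAME[0]="examplemod"
          DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
          AUTOINSTALL="yes"

      With the source dropped in /usr/src/examplemod-1.0/, 'dkms add examplemod/1.0' followed by 'dkms install examplemod/1.0' builds it against the running kernel, and AUTOINSTALL takes care of future kernel updates.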

  • outrageous! (Score:5, Funny)

    by nomadic ( 141991 ) <nomadicworld@ g m a i l . com> on Thursday May 20, 2010 @12:48PM (#32282268) Homepage
    This is unconscionable and fills me with deep, passionate rage. I can't believe a company followed a nonstandard numbering convention for one of their releases. That's the most evil thing I've ever heard and it heralds the downfall of modern society.
    • Re: (Score:3, Insightful)

      What is unconscionable and fills me with deep, passionate rage is when the guys who run a distro think it's cute to name their releases after animals.

      Nothing's more annoying than checking a forum only to have to google a bunch of stupid animal names to see which version I have installed.
      • Re: (Score:2, Flamebait)

        https://wiki.ubuntu.com/Releases [ubuntu.com]

        bookmark this. now you never have to go through the agony of googling a bunch of stupid animal names again. think of what you can do with all that free time.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          http://debian.org

          bookmark this. now you never have to go through the agony of googling a bunch of stupid animal names again. think of what you can do with all that free time.

          Fixed.

          • Re: (Score:1, Flamebait)

            Now, I've used both Debian and Ubuntu quite a bit, and don't want to get into which is better, but when it comes to naming conventions you can't really point to either one as being good or professional. Debian uses Toy Story characters, for Christ's sake. And yes, I know they're 'codenames' not release names, but the GP's point is unaffected by this; when you're in Debian forums people constantly refer to the codename. The only reason that Debian names are any easier to remember is because they don't change as often.

          • So cartoon characters are an improvement, are they?

            Hell, I use Debian but would much prefer it if they just stuck to numbers.

        • by nomadic ( 141991 ) <nomadicworld@ g m a i l . com> on Thursday May 20, 2010 @04:37PM (#32285872) Homepage
          I think the animal adjectives should have to reflect the actual Ubuntu release, like Unstable Urchin, or Dependency-Breaking Duck.
  • by bernywork ( 57298 ) <bstapletonNO@SPAMgmail.com> on Thursday May 20, 2010 @12:49PM (#32282294) Journal

    The biggest reason why everyone froze their kernels for a major release and backported was to create what was effectively a driver binary interface. So if a hardware vendor (yes, I'm looking at you, Nvidia) wanted to create a binary driver release for a codebase, that driver release would work for the whole period of support for that codebase. This is because Linux doesn't have a stable driver ABI.

    So getting back to what Novell have done here... It's a hard one. I guess if they spoke to their ISVs and the ISVs said they don't care, then it doesn't matter. If HP / Dell / Lenovo don't care either then, again, that's no problem.

    I guess there is always going to be someone out there who hasn't qualified their drivers with Novell for SLES 11, just does self-certification, and was expecting their release (which was tested against the earlier kernel) to keep working, and who now has to upgrade their driver because the Linux ABI is a rapidly moving target. On the other hand, a lot of people rely on Novell / Red Hat etc. for driver support and don't go back to Dell / HP / Lenovo; if there are problems, they point fingers at their Linux vendor first.

    Time will tell if this was a good idea or not. Personally, I'm not against it; more hardware support out of the box is better for me. If I run into a problem, I can run the older hardware on an older kernel and just upgrade RPMs on those older systems. I guess that's what versioning in builds is for, really, isn't it?

    Berny
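
    For anyone wondering why there is no driver binary interface to freeze in the first place: out-of-tree modules are compiled against the headers of one specific kernel through kbuild, roughly like this (a generic sketch; the module name is hypothetical):

        # Makefile for an out-of-tree module
        obj-m := mydriver.o

        all:
        	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
        # (the recipe line above must start with a tab)

    Every kernel update means at least a rebuild against the new headers, and sometimes source changes when internal interfaces move, which is exactly the certification burden Berny describes.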

    • Re: (Score:3, Interesting)

      by talcite ( 1258586 )

      I think the biggest surprise here is the update to Xen 4.

      Xen 4.0 has barely been out for 3 months and they're moving to it for SLES? Insanity. There's barely been any time to shake out the bugs. What production environment would want to risk everything from downtime to data loss by using practically untested code?

      That said, Xen 4.0 has some really nice features with regards to the Remus checkpointing project. It essentially provides instant failover (with persistent network connections) on commodity hardware.

      • Yup, the move to Xen 4 is pretty interesting. Most other distros are going towards KVM, and I tell you, it's not completely trivial to run Xen on Ubuntu 10.04, at least for stable production systems.

        I know... KVM should be better and all that, but it still needs the hardware virtualization support. This makes reusing older HW tricky.

  • That's funny. I ran a SLES shop for 3 years and for the entire time, I had a request in to Novell to explain the minor numbering for their kernels in an attempt to keep EMC happy with updates. They put it off saying that they were trying to find a Linux Engineer there to explain. For 3 years!! I do not miss Novell. Not one bit.
    • That's funny. I ran a SLES shop for 3 years and for the entire time, I had a request in to Novell to explain the minor numbering for their kernels in an attempt to keep EMC happy with updates. They put it off saying that they were trying to find a Linux Engineer there to explain. For 3 years!!

      I do not miss Novell. Not one bit.

      Man, that sounds really familiar. I could see Oracle throwing a fit over them changing the kernel release. They freak out when you only give a server 8GB of swap space!

  • Any idea if they will repack the 'hotplug' or '_netdev' (like RHEL) handling in fstab for SP1? This was a major showstopper in SLES 11 when wanting to automount iSCSI LUNs at boot time (example entries below).
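
    For context, these are fstab mount options that defer mounting until the network (and thus the iSCSI session) is up. Entries are illustrative only; the device path is made up:

        # RHEL style: init scripts skip this until networking/iscsi have started
        /dev/sdb1   /mnt/lun0   ext3   _netdev   0 0
        # SLES style: mark the device as one that may appear late
        /dev/sdb1   /mnt/lun0   ext3   hotplug   0 0

    Without one of these, the boot-time mount pass can hit the LUN before the iSCSI session exists and halt the boot.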

  • by Anonymous Coward

    This really SUX!
    This means upgrading HA clusters with OCFS2, or any servers with 3rd-party SAN/EMC/PowerPath modules, will just FAIL.

    And of course Novell support will say they have nothing to do with any 3rd-party modules: "it's your problem now."

    If I want to worry about all these things I'll just use free Ubuntu and not a paid "enterprise" version.

    just my $0.01

    • Out of curiosity, why will it fail? Isn't OCFS2 part of the mainline kernel now?

    • How is that different from Windows Server service packs?

      • ^^^ +10 sad but true

      • by simpz ( 978228 )

        It's very different from a Windows service pack. MS doesn't change the API/ABI that applications use. This will. Maybe they think they can keep this problem small, but it violates the enterprise computing no-surprises rule.

        There's a reason RH is the biggest Linux vendor for corporations. They guarantee not to change the application ABI/API, and that is vital if you are running an internal mission-critical bespoke app; otherwise the ground suddenly shifts under you.

        Maybe it won't cause an issue for most people, or even anyone, but it breaks that guarantee.

    • a) My understanding is that newer versions of PowerPath will detect kernel updates and recompile/reinstall themselves. Don't know for sure; not a big fan of PP.
      b) Yeah, how come Novell doesn't support EMC's software? What creeps.
      c) OCFS2 will be fine; it's in the kernel.
      d) Best solution: drop PowerPath, use native Linux MPIO, and never have to worry about that kind of shit again (sketch below). The performance gains that PowerPath provides are marginal in most scenarios; afaik, it really comes down to "true load balancing."
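
      A minimal native MPIO setup on these distros is the multipath-tools / device-mapper-multipath package plus a small /etc/multipath.conf, something like (illustrative; check your array vendor's recommended settings):

          defaults {
              user_friendly_names yes
          }
          blacklist {
              devnode "^sda$"   # local boot disk (hypothetical)
          }

      Then start multipathd and run 'multipath -ll' to verify that all paths to each LUN are grouped and active.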

  • How can you backport minor updates? Backporting minor revisions and bug fixes is the same as patching ... which is the same as upgrading your kernel.

    I can see that backporting made sense when you had kernel 2.4 and 2.6 had neat incompatible features that you wanted to use. I would guess you are making something less stable than just downloading the kernel itself if you have your own custom patches that have had limited testing. Why not use what everyone else is using... just a few revisions behind?

    • by dag ( 2586 )

      Tell that to the Ubuntu guy who had filesystem problems and required Red Hat's assistance to get them fixed. Ubuntu was impacted, Fedora was impacted, Red Hat's kernel was not. Why? Because they were not running the latest and greatest, but a kernel that has been selectively patched and improved by kernel developers who know exactly what they are doing. It can't get better than that...

      There are more systems running RHEL and CentOS kernels than there are systems running the latest, not least because the total install base is so much larger.

  • by Anonymous Coward

    At least renumbering the kernel is more honest than backporting the fuck out of a kernel that's 3 or 4 years old and leaving the version number the same.

    • Re: (Score:2, Insightful)

      by dag ( 2586 )

      What does this have to do with honesty?

      Red Hat is backporting the stuff their customers are demanding _and_ what they feel confident to support in a production environment _without_ breaking existing setups. That's the goal.

      You don't do that by updating your complete kernel for one or two features you like to have. That would be insane.

      Red Hat never promised you that 2.6.18-192.el5 has any resemblance or compatibility with the original vanilla 2.6.18. That would make your kernel ancient and not fit for newer hardware.

      • Red Hat never promised you that 2.6.18-192.el5 has any resemblance or compatibility with the original vanilla 2.6.18. That would make your kernel ancient and not fit for newer hardware.

        Well, you get ABI compatibility, but the AC's poor little mind apparently can't handle the idea of branches.

        Oh, hey, thanks for the Drupal book review, I was just thinking about figuring that out myself, this will help.

        • The problem with backporting is that it introduces a lot of hidden potential for a major mess, compared to a straightforward version upgrade. I can trust the Red Hat guys to do backports right, because they have some of the most prominent Linux kernel hackers on the payroll. But this is the exception rather than the rule. I don't know much about SUSE's resources in that department, but I sure as hell don't want Ubuntu to backport or patch upstream stuff; more often than not, that kind of thing is precisely why Ubuntu breaks.

          • If it helps any, Novell has been around for a very long time, and does employ lots of very smart people. Considering the fact that they've based their modern business almost exclusively on Linux, I have a high degree of confidence in their competence. Ubuntu (Canonical) is another matter entirely, depending on how you define competence of course...
            • If it helps any, Novell has been around for a very long time, and does employ lots of very smart people. Considering the fact that they've based their modern business almost exclusively on Linux, I have a high degree of confidence in their competence. Ubuntu (Canonical) is another matter entirely, depending on how you define competence of course...

              Correct, and they've been losing money almost as long as Red Hat has been a public corporation! After firsthand experience working with Novell, I have absolutely no confidence in their competence. They do employ some very smart people (I'm not sure about "a lot"... I've only met two or three I would consider smart), but I have no idea what they're doing; it does not seem like they work on creating any of Novell's products. Novell needs to go and pimp themselves out to the highest bidder while the company still has some value.

          • Excellent point. It feels like there's an obvious solution to the problem that I'm not seeing.

        • by Junta ( 36770 )

          'ABI' compatibility in the kernel is not preserved. The kernel header files changed, so that not only did drivers need a recompile, they needed recoding.

            'ABI' compatibility in the kernel is not preserved. The kernel header files changed, so that not only did drivers need a recompile, they needed recoding.

            Among RHEL major-number releases? Sorry, I lost the context.

            • by simpz ( 978228 )

              On RHEL only the application API/ABI is guaranteed between kernel versions. You still have to rebuild 3rd-party drivers (e.g. NVIDIA) on any kernel update. But the guaranteed application API/ABI is what businesses really pay for with RHEL, i.e. no surprises.

              Not sure how Novell hopes to achieve this now.

              • by Junta ( 36770 )

                I don't think the kernel factors into the application ABI/API overmuch. I have binaries from very old kernels that run with no issue.

              • by dag ( 2586 )

                Not true; there is a whitelisted kABI of interfaces that are guaranteed not to change. If I recall correctly, even the nvidia driver worked fine going from a RHEL 5.4 kernel to a RHEL 5.5 kernel. So it's not guaranteed that all drivers keep working on any 2.6.18 kernel, but the large majority simply do.

                Visit the ELRepo project page (http://elrepo.org/) or read the following document to learn how it works:

                http://dup.et.redhat.com/presentations/DriverUpdateProgramTechnical.pdf
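
                You can check this yourself for any given module: list the versioned kernel symbols it needs and compare them against the kABI whitelist. A sketch with example output (the checksums shown are illustrative):

                    $ modprobe --dump-modversions /path/to/driver.ko
                    0x8235805b	memcpy
                    0xdb223a3a	printk

                If every symbol the module imports stays on the whitelist, the same binary keeps loading across the 2.6.18-* errata kernels.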

      • by Junta ( 36770 )

        I have to agree with the sentiment of the above, even if 'honesty' isn't the right word. They backport stuff from 2.6.32 to 2.6.18. They break the kernel module interfaces (drivers have had to change their source code to follow the 5.x series). The resultant thing tends to exhibit some of the worst of both worlds.

        • by dag ( 2586 )

          As one of the members of the ELRepo project (http://elrepo.org/), I would like you to take a look at the collection of drivers (kernel modules) that the project has backported to the RHEL5 2.6.18 kernel. In total more than 400 drivers have been ported and a large majority of these drivers work for every 2.6.18 kernel that was released (from 2.6.18-8.el5 until 2.6.18-199.el5), thanks to the kABI whitelist. Including exotic stuff like nvidia or video4linux.

            So I fail to see the worst of both worlds. But then again...

          • by Junta ( 36770 )

              Are you saying that a kernel module written against 2.6.18-8.el5, without any knowledge of what would happen up to -199, would compile without issue? Because I call BS. I have dealt with RHEL 5.2 -> 5.3 and other such transitions and had to get new vendor source code because some include files changed in the kernel. Could that new source code compile against the old tree? Sure, but it's easy to make the code have the right #ifdefs after the fact.

              I'm not saying ignoring the new drivers is good, but 'pretending' that the interfaces never change doesn't match my experience.

            • by dag ( 2586 )

                You could have saved yourself the embarrassment if you had visited the ELRepo website (http://elrepo.org/) and tested it for yourself, or you could have googled for kABI or kABI-tracking modules. (Welcome to 2010!)

                The large majority of the drivers compiled against one kernel do indeed work for *all* 2.6.18 kernels. Only a few of them (those that use symbols outside the kABI whitelist) have to be recompiled against a new major release kernel if an interface did change.

                http://dup.et.redhat.com/presentations/DriverUpdateProgramTechnical.pdf

  • This is what Debian did with 'Etch'n'Half', in the 4.0 'Etch' stable distribution, going from 2.6.18 to 2.6.24, which was alarming because, in their words:

    * "Debian does not guarantee that all hardware that is supported by the default etch 2.6.18 kernel is also supported by the 2.6.24 kernel, nor that all software included in etch will work correctly with the newer kernel.
    * Migrating from the 2.6.18 etch kernel to the 2.6.24 "etch-and-a-half" kernel

    • by JonJ ( 907502 )

      This is what Debian did with 'Etch'n'Half', in the 4.0 'Etch' stable distribution, going from 2.6.18 to 2.6.24, which was alarming because, in their words:

      Not really; those packages were completely optional. I'm guessing the first question when you try to get support from Novell is "Have you tried upgrading to the latest version?" So it's not the same; it's actually a rather poor comparison.

  • by internet-redstar ( 552612 ) on Friday May 21, 2010 @04:08AM (#32290674) Homepage
    Old Linux guys like me remember the time when they introduced this in 'enterprise kernels'. At that time it made sense, because in the 2.4 series there were good and... well, _bad_ kernels. Some may argue that was still the case in the early 2.6 tree. But that has been a long time in the past...

    The current situation is that the backporting policy basically sucks _bigtime_.
    It means that new hardware isn't supported out of the box by the 'enterprise distros', and that installing Ubuntu with a new kernel is a no-brainer. It also means that, especially in the case of Red Hat, the kernel is so heavily patched that it can lead to stability problems and introduce 'unusual problems' relative to the vanilla kernel.

    Backporting things to an old kernel and overly patching the vanilla kernel is basically saying: 'we know better than the kernel developers'. And, sorry, that simply isn't true!

    As someone being heavily involved in Linux Enterprise support since 1998, and thus shaping it too, I can only hope that this is a sign of better things to come and an abandonment of the outdated, stupid and un-enterprise policy which only makes Linux look bad.
