
Bitten By the Red Hat Perl Bug

snydeq writes "Smart coders always optimize the slowest thing. But what if 'the slowest thing' is the code supplied by your vendor? That was exactly the situation Vipul Ved Prakash discovered when he tinkered with a company Linux box on which Perl code was running at least 100 times slower than expected. The code, he found, was running on CentOS Linux, using Perl packages built by Red Hat. So Prakash got rid of the Perl executable that came with CentOS, compiled a new one from stock, and the bug disappeared. 'What's more disturbing,' McAllister writes, 'is that this Red Hat Perl performance issue is a known bug,' first documented in 2006 on Red Hat's own Bugzilla database. Folks affected by the current bug have two options: sit tight, or compile the Perl interpreter from source — effectively waiving your support contract. If a Linux vendor can't provide comprehensive maintenance and support for the open source software projects you depend on, McAllister asks, who ever will?"
  • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Friday August 29, 2008 @11:39AM (#24795595) Homepage

    Installing your own perl under /usr/local, leaving the system one alone under /usr, that waives your support contract?

    Seems unlikely, and if actually true, remarkably stupid.

    (However, messing with the perl under /usr, that would be a mistake. It could easily break other things that depended on that specific version ...)
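A rough sketch of the approach the parent describes, for the curious: the commented lines are Perl's standard source-install procedure (from the tarball's INSTALL file), with /usr/local as the prefix so /usr/bin/perl is left alone. The runnable part uses throwaway stub scripts as stand-ins for the two interpreters, purely to illustrate how PATH ordering decides which one gets picked up; the paths are hypothetical.

```shell
# Building a second perl under /usr/local, leaving the vendor one alone:
#   sh Configure -des -Dprefix=/usr/local
#   make && make test && make install
# Stand-in stubs to show the PATH-precedence effect:
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/bin" "$tmp/usr/local/bin"
printf '#!/bin/sh\necho vendor-perl\n' > "$tmp/usr/bin/perl"
printf '#!/bin/sh\necho local-perl\n'  > "$tmp/usr/local/bin/perl"
chmod +x "$tmp/usr/bin/perl" "$tmp/usr/local/bin/perl"
PATH="$tmp/usr/local/bin:$tmp/usr/bin:$PATH"
command -v perl   # resolves to the /usr/local stub
perl              # prints "local-perl"
```

Vendor packages that hard-code #!/usr/bin/perl in their shebang lines keep using the system interpreter, which is why a parallel install like this doesn't break anything the distro depends on.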

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Friday August 29, 2008 @11:39AM (#24795601) Journal

    Who uses vendor Perl? It's like GCJ; if you don't really need it, it's good enough, but if you really need it, you download the real thing. And like java, it's easy to have multiple versions of Perl on your system.

    I guess that's snarky, but seriously. These guys were running a fancy production package on the crap perl install that comes with Fedora? They needed performance (and chose perl?) and they didn't look first at compiling perl from source? It doesn't take long at all, and the benefits are substantial, even aside from not having this bug.

  • by stjobe ( 78285 ) on Friday August 29, 2008 @11:41AM (#24795631) Homepage

    Yeah, well, good on Mr. Prakash I guess. Good thing he had the option of rebuilding from source; I can think of a few other operating systems and applications where that simply isn't an option.

    So, score one for open source I guess, headline be damned.

  • by SirGarlon ( 845873 ) on Friday August 29, 2008 @11:41AM (#24795637)
    Just because Red Hat made one high-profile mistake, doesn't mean their support service is without value. Jump to conclusions much?
  • by timster ( 32400 ) on Friday August 29, 2008 @11:48AM (#24795767)

    Well, I'm anything but a hardcore Perl hacker -- just use it to pragmatically list some rubbish now and then -- and I've never even heard of compiling your own Perl.

    In truth, it's NOT like GCJ in the least. GCJ is a relatively immature JVM built from an entirely different codebase than the Sun JVM. "Vendor" Perl and "real" Perl ought to be substantially the same thing.

    Just like all the foundation-level vendor tools, I would expect Perl to be built correctly on any official distro release. I shouldn't need to build my own GCC, my own Python, my own X, or my own Perl.

  • Article is a troll (Score:5, Insightful)

    by wrook ( 134116 ) on Friday August 29, 2008 @11:50AM (#24795801) Homepage

    CentOS is *not* an OS that Red Hat provides support for. So, in terms of support, you get what you pay for. The bug is fixable by recompiling Perl? Great. Submit the fix to the maintainers. End of story.

    But, supposing that you *did* pay for support and you ran into this problem... It's a known bug with low priority. So get them to fix it. You're paying for support. Hold your vendor to their promises.

    And if they don't fix it, find another vendor. That's the beauty of open source. If you need support and your current supplier sucks, you can find another.

    But it's completely disingenuous to complain that recompiling your Perl binary will void your support contract *when you have no such contract*.

  • by Richard_at_work ( 517087 ) <richardprice@gm a i l . com> on Friday August 29, 2008 @11:55AM (#24795875)
    No, it doesn't waive your support contract, but it does mean you will be relying on a subsystem that is not supported by the vendor - which validates the 'effectively' modifier in the original statement.
  • by Dolda2000 ( 759023 ) <fredrik@dol d a 2 0 0 0 . c om> on Friday August 29, 2008 @11:58AM (#24795915) Homepage
    Even if it is true, the nice thing with a free operating system is that one can at least fix the bug oneself, support contracts voided or not. Try doing the same if there's a problem with Exchange or IIS.
  • by Anonymous Coward on Friday August 29, 2008 @11:58AM (#24795917)

    And recompiling doesn't invalidate his support contract; as a CentOS user he doesn't have one.

    The summary is bullshit.

  • Re:yum (Score:2, Insightful)

    by foobat ( 954034 ) on Friday August 29, 2008 @11:58AM (#24795937)

    the latest version supplied by redhat that is. Which is what the problem is all about.

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Friday August 29, 2008 @12:03PM (#24796027) Journal

    Well, I am harder core than the average schmo where Perl is concerned, so for me it's a requirement...The vendor version is always inferior. Most forums will tell you the same thing.

    But like I said, if you don't really need it, it's fine. I doubt the average user would ever run into this problem.

  • by KaZen ( 25922 ) on Friday August 29, 2008 @12:06PM (#24796089) Homepage

    Except that if they were paying Red Hat for support, they would have been given the hotfix when they called in to diagnose the problem.

    It seems a little odd to complain about Red Hat support, when in fact they weren't paying Red Hat for support.

  • by SuperBanana ( 662181 ) on Friday August 29, 2008 @12:18PM (#24796307)
    Every time Redhat releases a hotfix, CentOS grabs the source and compiles it. They use the exact same toolchain to compile the exact same source. The only difference between a redhat package and a CentOS package is that CentOS has replaced "Redhat" everywhere, because Redhat started using trademark law to keep them from doing what the GPL entitled them to do (it got so bad that at one point, Redhat was threatening CentOS over even mentioning Redhat on their website.)

    Let's keep our eye on the ball, here: this is a known bug, in Redhat's bug tracker, since 2006. Fixes have been available since 2007, and only just now did Redhat get around to fixing the problem. The question remains: what good is Redhat over CentOS (the only difference being logos and a support contract) if they ignore a major performance bug for two years?

  • My distro is PCLinuxOS and the latest available kernel is 2.6.22.something. I've found myself having to compile a lot of packages from source because they haven't been added to the repos.

    And I've heard of similar problems in Gentoo.

    Maybe it's part of the deficient development cycle of some apps? i.e. no stable versions, and keep fixing bugs and adding features (and bugs) at the same time.

  • ...why? (Score:3, Insightful)

    by XanC ( 644172 ) on Friday August 29, 2008 @12:39PM (#24796699)

    Can you be specific about how the vendor compiling Perl is inherently worse than anyone else doing so?

    This is what distributions are for: to package and/or compile software so that users don't have to. What makes Perl so special that it's suddenly "inferior" when handled that way?

  • by Christian Smith ( 3497 ) on Friday August 29, 2008 @12:49PM (#24796889) Homepage

    ... so it's easy to see how someone might be lazy and just use whatever the vendor supplies.

    Wow, I wouldn't want you managing my servers.

    Home compiled software can easily be a source of security holes, as tracking what you have compiled versus what is patched using vendor updates adds significant management overhead and another point of failure. As an example, a popular open source project had its website compromised because the maintainer used a self compiled version of CVS, and forgot about it. Had the maintainer used a vendor CVS, the security hole would have been patched by the vendor update.

    Lazy is good. If you can do the job and be lazy, that is a win-win.

    I wonder what went wrong with the RH release?

    The RH bugzilla ticket indicates this as the initial issue.

    So it appears RH have not applied this fix, perhaps because the patch is included with a more cutting edge perl than is considered safe for inclusion with RHEL. Certainly, it looks like it was fixed in perl 5.9, but that may be an experimental branch more akin to the old 2.[135] linux kernels (and no vendor would have touched them in an enterprise targeted distribution.)

  • by jd ( 1658 ) <imipak.yahoo@com> on Friday August 29, 2008 @12:50PM (#24796903) Homepage Journal

    In general, that could be said of (almost) every single software package of any substance. If you want it to run well, you have got to roll your own. Well, almost. In theory, there's nothing to stop a vendor with a decent server farm gathering your system info then compiling the RPMs for you for that specific system, and/or having a set of stock ISOs rolled for, say, the five most-popular systems.

    A central build has the advantage over something like Gentoo in that vendors can usually afford better horsepower and can auto-tune the options per application better than the average Joe could ever hand-tune them. It's also more supportable, as the vendor then has the exact information for each package, as they would have had they rolled the RPMs from their own default configurations, which a totally user-defined setup would deprive them of.

    There's also the dependency hell and the namespace clashes that "standard" distros suffer from all the time. I have never YET come across a distribution that supplies stock binaries which actually work together. First rule, guys, is that if package A is rebuilt, ALL packages in which A is directly or indirectly a dependency should automatically be rebuilt. It should not be possible, using JUST the stable packages (DEB or RPM), to get into a situation where packages barf or cannot resolve their own dependency requirements.

    (I won't even get into the issue of broken packages in the stable updates, where you cannot complete an install or a deinstall because the flippin' scripts don't work and don't cleanly handle the case where something breaks, effectively barfing over the disk and the package database.)

    The more I use Linux, the more I am convinced that the distros out there are operating on a flawed assumption: that people will use the distro with a relatively narrow aim on relatively generic systems. They work great when that assumption holds true, but fail catastrophically outside those bounds.

  • by Jimmy King ( 828214 ) on Friday August 29, 2008 @01:18PM (#24797385) Homepage Journal

    Man, I wish I would have been able to see this post 8 months ago. I fought with this at work for many weeks. We had a CMS that some contractors developed which used DBIx::Class heavily. They developed it on a SuSE box and had no issues. Then it was deployed on Red Hat (yeah, yeah. Not my choice to have the dev environment different than the live one. I've brought it up several times in the past.) It ran like total crap on the RH machines. Then the contractors left and the project was passed to me, where I had to profile the code and then do a ton of searching to find the bug. I tried several of the reported fixes that were documented at the time and nothing resolved the performance issues.

  • by /ASCII ( 86998 ) on Friday August 29, 2008 @01:19PM (#24797387) Homepage

    The company I work for does support for any Linux distribution, custom compiled packages, whatever. If the customer uses non-standard packages and oddball solutions, it often takes more time to solve their problems, but since we work by the hour, that's their problem.

    I find it hard to believe that businesses such as ours are unusual.

  • by DittoBox ( 978894 ) on Friday August 29, 2008 @01:22PM (#24797437) Homepage

    Interesting. The entire reason we like Open Source is that we can change the code, fix bugs, and make our lives better without being explicitly tied to a vendor. But when I sign up with a vendor to provide support for things like this, I'm not able to fix those problems and, worse, neither is the company I signed up with.

    What's the point then?

  • by ppanon ( 16583 ) on Friday August 29, 2008 @01:39PM (#24797713) Homepage Journal
    A bunch of administrative scripts in Redhat are written in perl and python. Don't know if it's still the case, but a few years ago, it used to be that if you upgraded to the version from the next major release, some of them would break. So you could live with the current state of things, or you could rev the O/S. It's called stability through configuration management. Sometimes it's a bit frustrating, but in 99% of cases, it's a win.
  • by rgviza ( 1303161 ) on Friday August 29, 2008 @01:46PM (#24797817)

    > and I've never even heard of compiling your own Perl.

    I've been doing this for a very long time, in addition to compiling my own php, apache etc.

    If it's exposed, I compile it. If a 0day hits, I don't have the luxury of leaving the fate of my production security in some vendor's hands.

    At the negligence trial, where I'm being prosecuted because my box got pwned, the plaintiff's attorney is going to ask why I didn't fix it if I could. I can, so I do.

    If you have to move from packages to compiled stuff in the middle of an incident, it takes a lot longer than it does if you already operate this way.

    The last SSL worm beat me over the head with the importance of this.

    While compiling doesn't make you more secure, it sure as hell allows you to become secure faster *when* something happens and the vendors drag their feet. It beats unplugging your servers until a fix is available, which was my other option when this happened.


  • by Kjella ( 173770 ) on Friday August 29, 2008 @02:09PM (#24798137) Homepage

    Well, what do you expect? The whole thing is about a CentOS user complaining about how Red Hat is doing a poor job supporting RHEL. What do you know, if he and others actually paid Red Hat for support, maybe they'd have the money to actually do support. Everybody wants something for free, but then you have to deal with what you're getting. Under the GPL, he and CentOS are free to leech off what Red Hat is doing, but they're basically bitching that their free support isn't good enough. You think I should go on the LKML and bitch to Linus about fixing my bug, because Ubuntu uses his code and I'm using Ubuntu? It would make about as much sense.

    As for your other points, I do work with some other enterprise software, and anyone who claims to have gotten support for something they've tweaked themselves has either never tried it, not mentioned the tweaks, or been extremely lucky. Anything outside the supported environment, any custom modifications you've done, and they'll tell you to revert to the supported version and reproduce, and they won't even touch it until you do. I can't say I have a broad experience with support contracts, but my impression is that this is general practice, and that if you want them to support your custom modifications you'll be paying through the nose for real enterprise support. My guess is they had something like regular business support? If so, I'm not surprised...

    As for updates, expect that they will ask you to reproduce on the latest version; they almost always do. One solution that may be possible is to upgrade a test system and show the error is still there; I have done that sometimes in the past. One of the issues I imagine Red Hat has is the broad number of packages - in my case I'm talking about a single product with a few major service pack versions; if you get continuous patches like a distro does, I imagine it's difficult to support every combination. But again, I'm sure they'd do it if they got paid enough. That said, if you are running an enterprise distro with only security hotfixes, they're usually the kind I'd rather apply than not...

  • by jc42 ( 318812 ) on Friday August 29, 2008 @02:11PM (#24798169) Homepage Journal

    There isn't anything wrong with preferring fast incorrect results over slow correct results, but most people probably want slow and correct to be the default if given the choice.

    Well, I'd be a bit careful about making such general statements. There is evidence that people aren't generally that intelligent.

    I remember back in the 1970s, when I was at a large university that shall remain unnamed, a bunch of CS people did a detailed study of the Fortran code that accounted for fully half the runs on the campus's central mainframe (which shall also remain unnamed). They found that fully half the runs produced at least some incorrect output due to undetected integer overflows. The hardware gave interrupts for floating-point overflows, but for integers it just set a flag bit, and you needed to test that flag to catch overflows. The compiler had an option to generate such tests, but it was off by default. The vendor said they did this because they had found that most customers preferred faster code.

    The local gang didn't believe this, so they did a bit of a survey. They asked lots of users of the Fortran code whether they would prefer their programs to catch all arithmetic errors if this meant that the code ran slower, or if they would prefer faster code that sometimes didn't catch errors. Roughly 90% of the people they asked this said that they'd want the faster code. Later on, I ran across references to similar tests at other schools, with similar results.

    Personally, I was shocked by this. This mainframe was used to do the computing for most of the scientific work on campus, and scientific computing was almost entirely done in Fortran. So half their data runs had undetected incorrect output. They now knew this, and they still preferred the faster speed to correct output.

    Somehow, I suspect that this situation hasn't changed. I've dug into various programming languages since then, to learn how they handle this and other potential sources of erroneous results. Most current languages still ignore things like overflow flags by default. Some have no way of enabling the tests of such flags.

    Yes, I know lots of ways of explicitly testing for such errors myself. I've done it a lot, because I know I can't rely on others to enable the builtin tests (when they exist) when they recompile the code. But when looking at other people's code, I almost never see anything that will detect overflows. When you're N levels deep in function calls, you usually have no way of verifying the possible range of the current function's args, so there's no way of proving that an overflow can't happen.

    Sometimes I'm amazed that our systems run as well as they do, given this sort of nonchalant attitude towards known sources of hardware errors. And I do a lot of paranoid, defensive programming, even though I know that my employers probably don't want it because it slows down the software.
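The silent wraparound jc42 describes is easy to see even in shell arithmetic, which (like that Fortran compiler's default mode) does fixed-width signed arithmetic with no overflow trap. The sketch below assumes a 64-bit two's-complement shell; the `checked_add` function is a hypothetical example of the kind of explicit test he says he writes by hand: if both operands share a sign and the sum's sign flips, the true result did not fit.

```shell
# Shell arithmetic silently wraps on 64-bit signed overflow:
echo $(( 9223372036854775807 + 1 ))   # wraps to -9223372036854775808

# A hand-rolled overflow check of the kind described above.
checked_add() {
    a=$1; b=$2
    s=$(( a + b ))
    # Two positives can only wrap to a negative result...
    if [ "$a" -gt 0 ] && [ "$b" -gt 0 ] && [ "$s" -lt 0 ]; then
        echo "overflow" >&2; return 1
    fi
    # ...and two negatives can only wrap to a non-negative one.
    if [ "$a" -lt 0 ] && [ "$b" -lt 0 ] && [ "$s" -ge 0 ]; then
        echo "overflow" >&2; return 1
    fi
    echo "$s"
}
checked_add 2 3                            # prints 5
checked_add 9223372036854775807 1 || true  # reports overflow on stderr
```

The point of the thread survives the change of language: the default is fast and unchecked, and the check only exists if the programmer remembers to write it.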

  • by doomicon ( 5310 ) on Friday August 29, 2008 @03:52PM (#24799637) Homepage Journal


    Bug since 2006 in an "Enterprise" Operating System. So they haven't been able to get some spare body to rebuild perl?

    Maybe I'm a bit jaded as I have had extensive dealings with Redhat support. I find it funny that for the longest time Redhat pushed their certifications so damn hard, but don't require their own support technicians to know anything beyond scripted responses.

  • by Karellen ( 104380 ) on Friday August 29, 2008 @04:57PM (#24801093) Homepage

    Reminds me of something I heard a wise hacker say once, when someone tried to convince him that their new version of some code was better than his, because it ran in 10% of the time his did but produced (slightly) wrong results in a few cases...

    "If it doesn't have to produce correct results, I can make my version use no memory and run in zero time."

  • by chromatic ( 9471 ) on Friday August 29, 2008 @06:16PM (#24802699) Homepage

    There's no rule that patches applied on bugzilla are to be reported upstream! ... Upstream devs can register to receive any and all bugzilla tickets for their products, if they choose to do so.

    Wow, how generous. They distribute software that I write to their users under the same name as my version while potentially applying patches that I may never see unless I go looking for them. Can you see how a bad patch like this might give users the wrong impression about the software I wrote, or how having to check every potential distributor of my software might not be the most enjoyable use of my time?

    And, lo and behold, the bug will be fixed:

    A bug which was never present in a stable release of Perl -- the only reason it was present in the Red Hat version of Perl is because Red Hat took a patch from a development version of Perl and kept applying it even to stable releases of Perl when it was no longer appropriate.

  • A bug that the official vendor doesn't want to fix, so the customer goes back to the original source, compiles it, and fixes the problem himself...

    When's the last time you saw this sort of 'complaint' about a Microsoft product? It may start the same, but the ending is very, very different.

  • by chromatic ( 9471 ) on Saturday August 30, 2008 @02:54PM (#24812489) Homepage

    You seem to want others to redistribute only your pristine sources.

    No, I want distributors to talk to upstream. If Red Hat had asked "Hey, do you know about this bug? Is there a fix? Can we backport a patch to the old version of Perl we distribute?" they could have avoided this problem.

    Is that really too much to ask?
