
Ulrich Drepper On The LSB

Sam Lowry writes "In a recent post on his LiveJournal, Ulrich Drepper criticizes the LSB standard and urges distributions to drop it." It's an interesting piece; Ulrich raises some good points.
  • who? (Score:4, Insightful)

    by mmkkbb ( 816035 ) on Monday September 19, 2005 @10:13AM (#13595541) Homepage Journal
    Who is Ulrich Drepper, and why should I care about what he says on his LiveJournal?
    • Re:who? (Score:4, Informative)

      by Anonymous Coward on Monday September 19, 2005 @10:16AM (#13595564)
      Ulrich Drepper is the guy who currently leads Glibc development, which makes him an important hacker-type person who should hopefully know his stuff.

      He also has an ego that could drag Theo de Raadt's ego into a dark alley and beat it senseless. He is an asshole.

      How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc, I don't know.
      • Re:who? (Score:5, Informative)

        by Nadir ( 805 ) on Monday September 19, 2005 @10:23AM (#13595614) Homepage
        > How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc, I don't know.

        Probably because the LSB was created so that commercial binaries can run on any LSB-compatible distro. A key part of this is symbol versioning in Glibc. As Ulrich is the maintainer of Glibc, and as he works for Red Hat, which has to guarantee LSB certification, I guess he's entitled to talk about the LSB.
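
        To make the symbol-versioning point concrete, here is a minimal sketch (an editor's illustration, not from the thread) of how a vendor binary can pin a call to an older glibc symbol version with the GNU .symver directive. The GLIBC_2.2.5 tag is an assumption (the x86-64 baseline); check what your target libc actually exports with `objdump -T /lib/libc.so.6`, and compile with -fno-builtin-memcpy so gcc emits a real call.

            /* Sketch: bind memcpy to an old glibc symbol version so the
             * binary also loads on distributions shipping that older glibc.
             * The directive must appear before the first use of memcpy in
             * this translation unit. */
            #include <string.h>

            __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

            int main(void)
            {
                char dst[16];
                memcpy(dst, "versioned", 10);   /* binds to GLIBC_2.2.5 */
                return 0;
            }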
      • Re:who? (Score:4, Informative)

        by Otter ( 3800 ) on Monday September 19, 2005 @10:25AM (#13595631) Journal
        How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc, I don't know.

        As I understood that somewhat incoherent rant, his complaints are actually about the LSB test suite, not the spec itself, and specifically about linker- and threading-related bugs in the suite.

        • Re:who? (Score:3, Interesting)

          by dirkx ( 540136 )

          his complaints are actually about the LSB test suite

          And simply working together to fix that test suite may be a more pragmatic way of fixing things. The LSB organisation has been open to feedback, is fixing things, and is, like all such organisations, resource constrained. Exactly the sort of thing open source volunteers are so excellent at helping overcome. Especially those employed by companies, like Ulrich's employer, who really want LSB certification.

          And Linux desperately needs LSB. At the very le

          • Re:who? (Score:3, Interesting)

            by Nevyn ( 5505 ) *

            Right now we are seeing binary package installers check for stdint.h so they can guess that the right version of some library is in a different place - just so it works across RHES, Fedora, SuSE and Debian. Not nice.

            Riiight, you have apps that are checking for an include file in a specific location, which is _also_ provided by dietlibc ... and I can guarantee they can't use that.

            Instead of, say, just running /lib/libc.so.6 and comparing the version (which, of course, isn't ideal and it could even
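
            For what the version-comparison alternative might look like, a hedged sketch against glibc's own gnu_get_libc_version() (a real but glibc-only API; the 2.3 threshold below is purely illustrative):

                /* Sketch: ask the running glibc for its version instead of
                 * probing for header files like stdint.h. glibc-specific. */
                #include <stdio.h>
                #include <gnu/libc-version.h>   /* gnu_get_libc_version() */

                int main(void)
                {
                    const char *ver = gnu_get_libc_version();   /* e.g. "2.3.5" */
                    int major = 0, minor = 0;
                    sscanf(ver, "%d.%d", &major, &minor);
                    /* Illustrative threshold only, not from the thread. */
                    if (major < 2 || (major == 2 && minor < 3)) {
                        fprintf(stderr, "glibc %s too old, need >= 2.3\n", ver);
                        return 1;
                    }
                    printf("glibc %s OK\n", ver);
                    return 0;
                }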

        • Re:who? (Score:5, Interesting)

          by tolkienfan ( 892463 ) on Monday September 19, 2005 @01:36PM (#13597181) Journal
          The problem is actually quite simple.

          If the test suite is broken, then the LSB guarantees are worthless.

      • Re:who? (Score:5, Informative)

        by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Monday September 19, 2005 @10:26AM (#13595641) Homepage Journal
        How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc, I don't know.

        AFAIK, GLIBC is one of the components required for LSB compliance.

        And he's right: the LSB was a poorly thought-out attempt to make all distributions compatible with Red Hat rather than an attempt to come up with a common ground for all distros. For example, why oh why is RPM support required for LSB compliance? It doesn't affect the execution of software on the system, and only serves to create a mess for distros that use another packaging system.

        Far more frustrating than that, however, is the fact that the LSB only covers the very core of the system. The APIs that 90% of programs rely on are not even mentioned in the LSB spec. Rather, the spec simply states that a few very basic libraries must exist, then goes on to detail the signatures of those libraries' functions. Not particularly useful unless you're Sun Microsystems looking for a way to convince people that you're compatible with Linux.
        • Re:who? (Score:3, Interesting)

          by Ed Avis ( 5917 )
          I think the intention was that vendors could ship a binary-only package and have it work on any LSB-compliant distribution. Hence the need to specify a package format that should work everywhere, and hence RPM. It's a nice thought, but it might have been easier to ignore packaging and just specify that tar and gzip commands shall be available.
          • A better approach. (Score:3, Interesting)

            by khasim ( 1285 )
            #1. Define the format of the package that LSB apps will be shipped in.

            #2. Define the functionality needed by the package management system to install, update/upgrade, remove those packages.

            #3. Let the various distributions add that functionality to their own systems IN ADDITION to the functionality they already have.

            Never define an app as the "standard".

            Always define the functionality so anyone can write an app to that standard.
        • Re:who? (Score:3, Informative)

          by dvdeug ( 5033 )
          For example, why oh why is RPM support required for LSB compliance?

          So there is some standard way of packaging a program for all LSB distros. Joey Hess's Alien can turn an LSB RPM into just about any package format you need.

          LSB only covers the very core of the system.

          Right; that very core has taken years to standardize.

          The APIs that 90% of programs rely on are not even mentioned in the LSB spec.

          What programs? It's designed to be sufficient for commercial binaries, which historically statically link everythin
      • Re:who? (Score:5, Insightful)

        by Anonymous Coward on Monday September 19, 2005 @10:28AM (#13595665)
        How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc, I don't know.

        I take that right back. I'd forgotten that the LSB goes as far as defining the ABI, which is clearly the realm of Glibc and something which Ulrich is more than qualified to comment on.

        I've always thought that the biggest problem with LSB was that it didn't go nearly far enough, which means that distributors and users can't all use the same binary and we end up with these ABI issues that Ulrich complains about.

        From what Ulrich says, the idea of the LSB is good but the implementation is deeply flawed. The standards board is separated from the implementors, who are separated from the testers, and communication and understanding between the groups is poor. Which is a shame, but the LSB has always struck me as a bit of a lame duck.
        • Re:who? (Score:3, Insightful)

          by nietsch ( 112711 )
          How do you conclude that the implementation is poor? The implementation is the stuff the various distros put in. What the LSB is, is a spec and a test suite. His rant is about parts of the test suite, which appear to be written by less skilled people. I agree that that is a bad decision; the tests should at least be written by someone skilled in the software under test, preferably the software's developers themselves. But test code is not the deliverable code; it may contain bugs, just like any other piece of softw
      • Re:who? (Score:5, Interesting)

        by Tet ( 2721 ) <slashdot AT astradyne DOT co DOT uk> on Monday September 19, 2005 @10:33AM (#13595701) Homepage Journal
        How he is considered qualified to talk about the LSB when it doesn't have much of anything to do with Glibc

        The LSB has nothing to do with glibc? Really? Strange. I always thought the LSB was designed to ensure binary compatibility between distributions, and hence has quite a lot to do with glibc.

        Personally, I still think the LSB has some value, but Uli's concerns are valid. IMHO, they seem to point to problems with the current LSB test suite that should be fixed, rather than leading to the conclusion that the whole concept is broken, though. In its current form, there is little value to be had in LSB compliance, true. But it needn't always be that way. A decision needs to be made to either fix the LSB or abandon it altogether. Uli prefers the latter approach. I favour the former.

      • Then I wish he'd put an XML parser into glibc so that no-one has an excuse for not using XML for configuration files and for data export / import.
        • by Tet ( 2721 ) <slashdot AT astradyne DOT co DOT uk> on Monday September 19, 2005 @11:27AM (#13596165) Homepage Journal
          I wish he'd put an XML parser into glibc so that no-one has an excuse for not using XML for configuration files and for data export / import.

          Were there one available, I would still be unlikely to use it. The fact remains that after you've seen through all the marketing hype, XML remains inappropriate for many tasks, and configuration files are right at the top of the list. You only have to look at Jabber or Tomcat to see some perfect examples of that.

          • by antientropic ( 447787 ) on Monday September 19, 2005 @03:14PM (#13597866)

            The fact remains that after you've seen through all the marketing hype, XML remains inappropriate for many tasks, and configuration files are right at the top of the list.

            In fact, it's the opposite: XML makes a lot of sense for configuration files. For instance, suppose that you need to write a script that automatically adds a line to /etc/X11/xorg.conf or a similar configuration file. If a file like that is in XML, this is trivial: you can write an XSL transformation or use any of a billion tools to apply the change in a correct way. But if it's in some ad-hoc file format (as it is right now), you either have to write a parser and unparser (which would have been unnecessary if it had been in XML; and how do you know for sure that your code is entirely correct?) or use some hacky combination of sed/grep/etc. to perform the change (which is, alas, the "Unix way"). The latter will of course fail unpredictably in lots of cases. E.g., are you handling those sections correctly? Comments? What if the line was already present? And so on.

            Of course, XML is a horribly bulky format. But who cares? It's not like configuration files will take up a lot of disk space either way. The important thing is to have a universal standard format that can be easily manipulated using standard tools so that you don't have to implement parsers and printers all the time or approximate them using broken sed/grep hacks.
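
            As a hedged illustration of that last point (libxml2 is one such standard tool; the file and element names here are hypothetical):

                /* Sketch using libxml2: append one <option> element to an XML
                 * config file without writing a custom parser/unparser.
                 * File and element names are hypothetical.
                 * Build: gcc sketch.c $(xml2-config --cflags --libs) */
                #include <stdio.h>
                #include <libxml/parser.h>
                #include <libxml/tree.h>

                int main(void)
                {
                    xmlDocPtr doc = xmlReadFile("app.conf.xml", NULL, 0);
                    if (doc == NULL) {
                        fprintf(stderr, "cannot parse app.conf.xml\n");
                        return 1;
                    }
                    xmlNodePtr root = xmlDocGetRootElement(doc);
                    /* Append <option name="DefaultDepth">24</option>. */
                    xmlNodePtr opt = xmlNewChild(root, NULL, BAD_CAST "option",
                                                 BAD_CAST "24");
                    xmlNewProp(opt, BAD_CAST "name", BAD_CAST "DefaultDepth");
                    xmlSaveFormatFile("app.conf.xml", doc, 1);  /* 1 = indent */
                    xmlFreeDoc(doc);
                    xmlCleanupParser();
                    return 0;
                }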

      • Re:who? (Score:3, Insightful)

        by Cyno ( 85911 )
        Glibc is the GNU C Library. Being a library, or a file of any kind, it is subject to some of the LSB rules, such as the location of libraries within a filesystem.

        Some of these rules are *nix standards and make sense in an old-fashioned, traditional way. Or they make sense in that we need a standard place to find these files across different systems, in order to assume some sort of compatibility across platforms.

        But they don't always offer the best solution. Sometimes they have unnecessary rul
    • Re:who? (Score:3, Funny)

      by sxltrex ( 198448 )
      I think he's the drummer for Metallica.
    • Even if you don't use Linux and use Free/Open/NetBSD or Mac OS X, you should think highly of this man - he's pretty damn sharp, to say the least.
      • by the morgawr ( 670303 ) on Monday September 19, 2005 @12:16PM (#13596599) Homepage Journal
        glibc developer, actually

        And while he happens to be right in this case, I don't think very highly of him. He's clearly very bright, but the poster above who said that Ulrich had a bigger ego than Theo was spot on. Too often, he lets his ego and NIH syndrome get in the way.

        For example, glibc is the only major C library that doesn't support the new buffer-protected string functions originally written by OpenBSD (at least last time I checked). These functions are faster, safer, and easier to use than the POSIX ones, and are supported not just on the BSDs but on almost every commercial UNIX. Source compatibility alone would dictate including them.

        Drepper, however, has repeatedly refused to include them because they work and they make it too easy to not code buffer overflows (no, this is not a joke). According to Drepper, programmers should be good/smart enough not to mess up something as simple as a string buffer, so including a de facto standard that makes it easy to get it right is inappropriate. WTF?
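
        For readers who haven't met them, a hedged sketch of the OpenBSD-style strlcpy semantics under discussion (renamed my_strlcpy; an illustration of the interface, not the canonical OpenBSD code):

            /* Sketch of OpenBSD-style strlcpy semantics: always
             * NUL-terminates (when size > 0) and returns the length it
             * tried to create, so the caller can detect truncation. */
            #include <stdio.h>
            #include <string.h>

            static size_t my_strlcpy(char *dst, const char *src, size_t size)
            {
                size_t srclen = strlen(src);
                if (size > 0) {
                    size_t n = (srclen >= size) ? size - 1 : srclen;
                    memcpy(dst, src, n);
                    dst[n] = '\0';
                }
                return srclen;    /* >= size means the copy was truncated */
            }

            int main(void)
            {
                char buf[8];
                if (my_strlcpy(buf, "this is too long", sizeof(buf)) >= sizeof(buf))
                    fprintf(stderr, "truncated to \"%s\"\n", buf);
                return 0;
            }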

        • by Nevyn ( 5505 ) * on Monday September 19, 2005 @03:18PM (#13597903) Homepage Journal
          And while he happens to be right in this case, I don't think very highly of him.

          [...]
          Drepper however has repeatedly refused to include them (strlcpy/strlcat) because they work and they make it too easy to not code buffer overflows (no this is not a joke).

          While Ulrich has his faults, the above is completely false. The reason they weren't accepted into glibc was IIRC:
          1) They are non-standard and did not have a usable standard-like definition apart from the implementation, and had no tests (Solaris implemented them slightly differently, for example, and Input Validation in C and C++ [oreillynet.com] from O'Reilly also screwed it up -- and that was written by people selling a Secure Coding in C book).
          2) It doesn't solve the problem better than asprintf(), which had been around for years (although also non-standard), as you still have problems with truncation [and.org] (and both APIs have the problem of requiring the programmer to correctly pass around the metadata about the string -- i.e. its size/length).
          3) Given the above, and the fact that the implementation is "free", anyone wanting to use them can just include the source in their apps and rely on autoconf (and they'll also be guaranteed to have the "correct" implementation).
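
          For contrast, a hedged sketch of the asprintf() alternative from point 2: the buffer is allocated to fit, so truncation never arises, though the call can fail on out-of-memory. asprintf() is a GNU/BSD extension, hence _GNU_SOURCE.

              #define _GNU_SOURCE     /* asprintf() is a GNU/BSD extension */
              #include <stdio.h>
              #include <stdlib.h>

              int main(void)
              {
                  char *msg = NULL;
                  /* Allocates exactly enough space; no truncation possible. */
                  if (asprintf(&msg, "user=%s uid=%d", "drepper", 1000) < 0)
                      return 1;       /* allocation failed */
                  puts(msg);
                  free(msg);
                  return 0;
              }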

          • Sorry about that, let's try again:

            First, what I said above is true: at the time, Ulrich said specifically that strlcat and strlcpy weren't necessary because programmers could just check their code for the common mistakes the strl* functions are intended to solve.

            1) It is true that they are not in the POSIX, ANSI, ISO, or Single UNIX standards, but neither is a ton of the other stuff in glibc. However, they are supported on almost every non-GNU libc -- making them a de facto standard. Many open source apps use

    • who cares? (Score:4, Insightful)

      by banana fiend ( 611664 ) on Monday September 19, 2005 @10:27AM (#13595657)
      RTFA

      It could have been written by Bill Gates or my mom.

      Why does the author have to be so important if the facts are laid out and verifiable? You don't have to agree with his analysis or his conclusions, but the facts should stand or fall regardless of the author.

      • Re:who cares? (Score:3, Insightful)

        Unfortunately this is no longer true. I tried for many years to adopt a "judge by content, not by source" policy, but have realised it's just hopeless idealism.

        There has always been spin and FUD, but these days it has developed into a very organised, very slick phenomenon. This means that you need to give increasing weight to background motivations to pierce the veil.
        • Re:who cares? (Score:4, Insightful)

          by banana fiend ( 611664 ) on Monday September 19, 2005 @11:30AM (#13596185)
          it's just hopeless idealism.

          ummmm... at some point someone has to produce content to gain credibility. You say that FUD has become slick? Just because someone produces a slick info shot doesn't mean you shouldn't STILL be checking the facts.

          I think we're probably on the same side here, but you don't need anything to "pierce the veil" except verifiable references.

          Which this guy has. You can go to the Bugzilla database that he talks about and discover for yourself whether most of the bugs submitted are indeed bugs that show the tests are broken.

          • Re:who cares? (Score:3, Interesting)

            Citation and statistics obviously help. But they aren't the be-all and end-all.

            Example 1: Interest Rate Prediction Markets
            Traders make judgements based on lots of market data - consumer confidence surveys, growth predictions, oil/stock/property price trends, major company results, etc. When you have a particular outlook it is almost always easy to come up with LOTS of figures to support that position. These are all verifiable statistics! To objectively ensure you are taking into account a representative sam
    • Re:who? (Score:2, Insightful)

      by schon ( 31600 )
      More to the point, why should you care about the opinions of someone who would write this:

      My advise: but the losses. To some extend, I think, the claims a scaled back meanwhile, if I understood Art correctly.

      To quote Lisa Simpson, "I know those words, but that sentence makes no sense to me!"

      Can someone.... anyone convert that into English?
      • My advice: cut the losses {and run}. To some extent, I think, the claims are scaled back. Meanwhile, if I understood Art correctly...
        • But that still doesn't make any sense...

          Perhaps he meant "to some extent, the claims should be scaled back"? If they are already scaled back, then what's he ranting about?
      • but => cut
        extend => extent
        a => have been
        meanwhile => recently
  • Ulrich Who? (Score:3, Insightful)

    by LithiumX ( 717017 ) on Monday September 19, 2005 @10:14AM (#13595552)
    Just curious as to who this guy is...

    ...and realizing that in today's net-driven society, all it can take is for people to quote you, and others automatically assume you're important. I have no idea who this guy is, and I'm already assuming he's someone since /. quoted him in an article.
  • Ulrich Drepper... (Score:4, Informative)

    by MaestroSartori ( 146297 ) on Monday September 19, 2005 @10:14AM (#13595556) Homepage
    ...seems to be maintainer of the GNU C library, and works for Red Hat. At least, that's what Google says. Should I know who he is??? :/
  • by Anonymous Coward on Monday September 19, 2005 @10:17AM (#13595575)
    Some other random dude says this isn't true over on his MySpace!
  • by Anonymous Coward on Monday September 19, 2005 @10:25AM (#13595637)
    I've been using Linux for many years, and the problem of obtaining software packages drives me to the end of my nerves. Every single time I try to get a package that isn't something extremely common like Apache, I run into major, major problems. Honestly, I don't care how the problem gets fixed. Distribute a binary with everything compiled in for all I care. Distributions distribute every package known to man anyway. :)

    Something needs to be done. Even with the source, half the time I have to make all sorts of include changes. What is so hard about providing a common build and install process? If you get Apache, OpenOffice, and Mozilla to adopt a convention, everything else will follow. Why not have something like Apache Ant that simply installs either to a user directory or to a common directory and links to every user directory? Then provide a nice GUI on top of it, where it will either compile if the source is there and then install, or just install otherwise? How hard could that be? Forget this ./configure nonsense. It sucks.

    Regardless, this is a perfect example where sometimes it really does make sense to have "management" provide leadership by imposing structure. Ideally, they would be serving and representing the interests of users and helping to overcome the disinterest of joe programmer, who doesn't do the psychologically difficult work of catering to someone other than himself. The "scratch an itch" metaphor breaks down when other people don't know how to "scratch" themselves and need the help of a division of labor to serve their needs. Before you say that they should learn how to "scratch", consider that as a community, society, and economy we all scratch each other's itches in an incredibly diverse number of ways. This comes about from intentionally trying to fulfill a demand. In the case of the Linux stack of Free/Open Source software, the developers have not taken responsibility for how their product is consumed.
    • Even with the source, half the time I have to make all sorts of include changes.

      Then the program is broken. Report the bug. The autoconf/automake scripts should take care of all that.

      It sounds as if you'd like autopackage [autopackage.org].

    • by Mr. Underbridge ( 666784 ) on Monday September 19, 2005 @10:56AM (#13595887)
      I've been using Linux for many years, and the problem of obtaining software packages drives me to the end of my nerves. Every single time I try to get a package that isn't something extremely common like Apache, I run into major, major problems.

      No kidding. You'll find some decent-looking project, and it's no big deal, the developers just require this neat toolkit that they consider standard, and all the l33t distros have it, just not the old ones like Red Hat, Slackware, and SuSE. Of course, the most recent build is from two years ago, because after a year of development all the kids got egos and couldn't stand each other.

      Of course, then you find out that the neat toolkit they use depends on an old version of Python, and naturally it's built to do a hard-coded check for a specific version of Python in the configure script - not the current one, of course. And naturally the references to the old version of Python are strung throughout the config file. And as it turns out, if you fix all the references in the config, that will break the calls somehow. So you can either install yet another version of Python, or forget about this neat little program.

      I really prefer compiling from source, but it's getting to the point where it's just not worth the crap.

    • I've been using Linux for many years, and the problem of obtaining software packages drives me to the end of my nerves.

      After ditching some old Linux installs a few months ago (a Gentoo system that had gotten hopelessly snarled up and a YDL with a broken RPM database) I tried out a few different options. Conclusion: the most important thing in Linux is a good package archive. The other 10,000 Linux annoyances mostly need to be solved once. The package stuff is just going to keep on biting and biting at you.

    • by aussersterne ( 212916 ) on Monday September 19, 2005 @11:03AM (#13595957) Homepage
      I think the (possibly regrettable, I don't know) answer to this is that Linux users need to choose: they can have an easy-to-use distribution that is a near monopoly in the Linux world (which is WHY it will then solve problems like the one you describe), or they can have a hundred different distributions.

      Right now, so long as you pick one of the "big three" (Debian, Red Hat/Fedora, SuSE), you will have very little package/software install trouble.

      Most companies that release Linux software offer the following downloads (as do most OSS software websites for individual products):

      1. .tar.gz to compile from source (gets you right into the dependency hell you want to avoid)
      2. RPM for Red Hat/Fedora
      3. RPM for SuSE
      4. DEB for Debian

      I have been in the Red Hat family since Red Hat 5 or so, and I can tell you that beginning with Red Hat 8 things started to get really easy; by the time the Fedoras had come around, I spent nearly zero time compiling my own software or chasing package dependencies. Tools like yum/apt even make it so that you don't have to FIND a download site and double-click an icon, you just type in a command that says "I WANT IT!"

      But even for commercial software like Flash or Java, it's cake: I just install the package. The reason is that the package is DESIGNED FOR MY OPERATING SYSTEM.

      Sorry, but most of the other Linux operating systems (Slackware, Mandrake, Yoper, Xandros, whatever) are too small for packagers to target them, and that's generally what results in package hell--you are trying to use a package that assumes the components installed by default in another operating system. So even if they are both RPMs, installing a Red Hat/Fedora RPM on Mandrake will cause you trouble. Even once you get the packages all installed, the configuration and support files are likely to be located in all the wrong places.

      And yes, generally the packages ARE clearly labeled. So I guess my answer is the one people hate to hear, but if you're going to ask the question about "package hell" then you're going to get this answer: switch to a bigger distro (best case is probably Red Hat/Fedora) and the problem will generally go away.
    • "Something needs to be done. Even with the source, half the time I have to make all sorts of include changes."

      I will probably get modded flamebait, but I agree.

    I just went through the process of adding Bugzilla [bugzilla.org] to my installation of Fedora Core 3 [redhat.com]. I run Fedora because that is the default Linux installed by my provider and anything else would more than double my costs. I just checked the LSB Certified Distribution List [opengroup.org], and sure enough Fedora is not on it. I tried upgrading my system using Yum [duke.edu], but th

  • Anyone know where this link [linuxbase.org] went? I get "document contains no data".

    From the fine article:

    This applies also to the code which is written by the presumed professionals paid by the OpenGroup to write tests. Want an example? Look at this [linuxbase.org]. This is no isolated incident, I've found this kind of problems on many occasions.
  • by Speare ( 84249 )
    "It's an interesting piece; the reason are thought-out well."

    I'll grant I'm not familiar with all the politics and the specific methodology by which a Linux distro tests or achieves LSB compliance, but this blog entry sounds a lot like whining. Ulrich whines that it's hard, that the audit raises many bugs, that it's tedious, that other distros "somehow" achieve their compliance but he's not sure how, that the audit process itself has bugs, and that the LSB group must be pushing this agenda down people's

    • A philosopher would have spent all his time trying to find a definition of "Base" in the LSB. I believe what you meant was that he should put forward some hypotheses and bolster those.

      And why shouldn't he be allowed to whine if, as he claims, the bugs are in the *tests*? If he's the maintainer of glibc, I'd assume he knows more about this domain than the average hacker.
    • by ArsenneLupin ( 766289 ) on Monday September 19, 2005 @11:29AM (#13596172)
      Errm, actually there is a single point in the piece: there are *huge* bugs in the test cases.

      All other points raised are shown to be consequences of this.

      The specific example he cited is a rather enormous bug (a thread which is detached can, by definition, not be joined: "detaching" a thread means telling the system that you are not interested in its exit status... and join()ing is reading the exit status).

      (This doesn't mean that other examples are as clear cut. It could still be that most tests do actually show genuine glibc bugs, and that he just picked up the right example to bolster his point.)
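
      A hedged sketch of that rule (an editor's illustration, not the actual LSB test; build with -lpthread): a thread's exit status is claimed exactly once, by join or by detach, never both.

          #include <pthread.h>
          #include <stdio.h>

          static void *worker(void *arg)
          {
              (void)arg;
              return NULL;
          }

          int main(void)
          {
              pthread_t t;
              if (pthread_create(&t, NULL, worker, NULL) != 0)
                  return 1;

              /* Pick exactly one of the following, never both:
               *   pthread_detach(t);     "I will never ask for its status"
               *   pthread_join(t, NULL); "give me its exit status"        */
              pthread_join(t, NULL);

              /* The buggy test effectively did:
               *     pthread_detach(t);
               *     pthread_join(t, NULL);   <-- undefined behavior
               * POSIX leaves that undefined, so its verdict means nothing. */
              puts("joined exactly once; never join a detached thread");
              return 0;
          }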

      that the audit raises many bugs

      ... in the test cases ...

      that other distros "somehow" achieve their compliance but he's not sure how

      I'd say, if Ulrich is right about the test cases, the situation should be fixed by removing/rewriting the dodgy test cases altogether. Deliberately running distros with non-standard shared libraries or on dog-slow hardware to make them pass the tests is pointless. If that is indeed how "somehow" some distros manage to pass the tests, Ulrich is indeed right on the mark that it would make the test suite completely meaningless. You are not certifying a distribution; you are certifying a distribution tweaked to run the tests...

      Better fix the suite, and run the distro under "normal" conditions (i.e. the same as normal users would do).

    • by iabervon ( 1971 ) on Monday September 19, 2005 @12:00PM (#13596449) Homepage Journal
      He's not whining that it's hard. He's whining that it's impossible, because the tests match neither the standards nor common practice. He's whining that distros must be somehow faking compliance, because they ship *his software*, which doesn't "pass" the buggy tests.

      His argument is: no set of Linux software could pass the LSB suite by actually consistently giving the desired results, because there's no libc that consistently gives those results (when run on sufficiently fast hardware to expose the bugs in the tests, for example); yet distros do claim to pass the suite; therefore, the LSB is not ensuring compatibility, because it certifies things that don't work by its rules.

      Furthermore, he argues that programs that don't work tend not to work because they rely on undefined behavior. Certifying that the environment behaves in accordance with the standard doesn't help, because the software developer's environment and the user's environment may do different things in some cases, while both comply with the standard. Unless the programs are tested for doing non-standard things, they won't necessarily work. And the undefined behavior is undefined for a reason: you can't improve the system without changing it (especially when the thing not defined is which takes longer: executing a certain function or waiting .001 seconds). And the same cases are particularly hard to test programs' assumptions about.

      The sections that you dismiss as whining are actually providing examples, which is important in engineering (or science). There are theoretical flaws in any process; it is always important to know whether those situations ever actually occur. If he didn't have an example of a program relying on undefined behavior which should vary between systems, one could say that nobody would actually write code like that and think that it worked; but it turns out that people actually do write such code, and these people happen to include the people writing LSB tests, which is why they're flawed tests.
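
      As a hedged, invented illustration of the timing-dependent test-bug class described above (not an actual LSB test; build with -lpthread): the verdict below depends on scheduling and hardware speed, not on conformance.

          /* A deliberately broken "conformance" test: it assumes the worker
           * thread has run within 1ms. The verdict depends on scheduling,
           * not on whether the implementation conforms to anything. */
          #include <pthread.h>
          #include <stdio.h>
          #include <unistd.h>

          static volatile int ready = 0;

          static void *worker(void *arg)
          {
              (void)arg;
              ready = 1;
              return NULL;
          }

          int main(void)
          {
              pthread_t t;
              pthread_create(&t, NULL, worker, NULL);
              usleep(1000);   /* "surely 1ms is enough" -- nothing guarantees it */
              puts(ready ? "PASS" : "FAIL (scheduling artifact, not a bug)");
              pthread_join(t, NULL);
              return 0;
          }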
  • I don't even need to read the article to agree that the LSB is bad. It's not the idea of the LSB that is bad, either. Having a standard base to build software against would be great for Linux software developers. There would be a slow-moving target that is easy to hit. It's also a way to guarantee that software you write will work on any distro that follows the standard. And distros not following the standard would at least know what it is and make accommodations to get that software to run without giving the develop
  • for these [slashdot.org] two [slashdot.org] trolls, which are posted on every article about Linux. And yet some clueless moderator mods them up despite the fact that they are both wrong and off-topic.
  • by furry_wookie ( 8361 ) on Monday September 19, 2005 @11:10AM (#13596018)
    The idea of a common set of standards for lots of stuff obviously has many potential benefits for Linux.

    The problem with the LSB is that it does not do much. What is needed is not a standard for "thou shalt have this version of libc in this directory"; instead, a standards body needs to come up with "this is the way you will perform your system initialization", "this is how you will set and store your IP networking configuration", etc. This would make YOUR skills transferable from distro to distro, and would allow the community to come up with BEST OF BREED solutions for things like system configuration tools.

    Having 1000 different distros do this stuff in 1000 different ways is WORSE THAN not being able to run Oracle on a particular distro without a little tweaking.

  • by ajs318 ( 655362 ) <sd_resp2@@@earthshod...co...uk> on Monday September 19, 2005 @11:16AM (#13596077)
    Let's forget once and for all about binary compatibility. Bury it. Because it does not really benefit most people. There is one very well-known operating system which implements as near to full binary compatibility as you can get -- and it's generally regarded as a disaster.

    What matters is source compatibility. And right now GNU/Linux has that in spades. Not just GNU/Linux, but the BSDs, Mac OS X, Solaris and even Windows have it. If the source code is properly written, and properly packaged, then it will compile on any machine that is up to the job of running it. If you make any really drastic changes -- the standard C library for instance -- you might well have to recompile some applications. Is that a major hardship? I don't think so. Back when we changed from round-pin 5 and 15 amp plugs to rectangular-pin 13 amp plugs, people had to have their houses rewired. When we went from artificial gas to natural gas, people had to have their cookers and heaters modified. When Channel Five launched, many VCRs needed their RF output shifted. These were all necessary changes for the better {ironically enough, we probably will be going back to artificial gas in future ..... but the new stuff probably will be more like the natural stuff so nothing will need to be changed}.

    Binary compatibility was never more than a nasty hack, fudged in for the benefit of those who want to lock up the source code of their software. These people are pure evil. By not sharing their code with you, they are just one very tiny step removed from stealing from you. It had the beneficial {at least, it was beneficial when processors were slow and disk space small} side effect that you did not have to spend CPU time and disk space compiling applications locally; but now that disk space and processor power are cheap, the benefits of pre-compiled applications are diminished substantially.

    There's even a good argument to be made in favour of deliberately introducing binary incompatibility. If programs compiled on my computer would only ever be able to run on my computer, and any program compiled on anyone else's computer would never be able to run on mine, then there would be no such thing as viruses or buffer overrun vulnerabilities. {Unfortunately, this raises the question of how to ever get any computer up and running}.
    • You, sir, are a Class A moron who has no idea what he's talking about.

      Binary compatibility is EXTREMELY important to Linux if you want acceptance on the same level as Windows or OS X.

      If you make any really drastic changes -- the standard C library for instance -- you might well have to recompile some applications. Is that a major hardship? I don't think so.

      This is laughable. That I even have to compile an app is laughable.

      End users do not want to compile an application. They do not want to debug it, figure out
  • by Lodragandraoidh ( 639696 ) on Monday September 19, 2005 @12:50PM (#13596870) Journal
    The LFSS (Linux File System Standard) [pathname.com] is the main standard I am really concerned about; if developers and OS distributions would stick to that, it would solve a great deal of the problems I see when installing applications.

    The LSB is overrated imho.
  • standards testing (Score:3, Informative)

    by suitti ( 447395 ) on Monday September 19, 2005 @04:43PM (#13598672) Homepage
    About 15 years ago, I performed POSIX testing for Interactive Unix - an x86 System Vr3 system. I haven't done anything with the LSB, but basically nothing in Ulrich's article surprised me. Mostly, I'd go further. And yet, my conclusions differ somewhat.

    The testing process was to run a test, and when it failed, try to figure out if the problem was in the test suite or the tested code. Simple enough.

    The tests certainly at some point worked.

    No. That wasn't the case. I found myself fixing obvious bugs in the test suite, then attempting to use the fixed version against the target. It was often clear that the test suite could never have worked.

    Some distributions still somehow manage to pass the test suits of a new version of the spec. And all this without the people reporting any problems and requesting waiving the test.

    We'd report the bugs, with suggested fixes, but we could not wait for fixes to come back and retest. We had to plow forward. We claimed compliance when we had a test we thought tested the assertions and passed it. We never asked for a waiver.

    Another nice things we came across during the LSBv3 testing are numerous timing problems.

    Been there. Done that, though I didn't have to find some slow machine. What is the value of such a certification? What assurance does this give you? Is "don't use fast SMP machines" an acceptable answer in any universe, especially when it comes to thread tests?

    If you have need of slow machines, I can provide approximately 25 working 486/33's. I'd put this on his blog, but he doesn't allow comments. I thought this was strange, because I use livejournal [livejournal.com] primarily as a place where people can comment. However, he talks about his choice there, too. To each their own.

    It is not possible to achieve the goal of 100% binary compatibility...

    All good points. And it's worse than that. Yet the exercise was valuable. For us, it uncovered many bugs in SVr3. Many. This was ultimately a good thing for our customers.

    We were also a Unix porting house. We fixed lots of bugs in our prior ports of Unix. We offered our fixes to AT&T for free. They declined. We had to apply our fixes to each port - without the benefit of CVS. And we had thousands of patches. And all this for a basically stable system. It was around then that I became convinced of the incredible inefficiency of proprietary software. This would never happen to gcc.

    My advise: but the losses.

    I read this as "My advice: cut the losses." Oddly, many versions of this misspelling pass my spell checker. Ulrich needs an editor. Perhaps I'll volunteer. Perhaps he can check my work. Will you be a swap editor for me? I'll check your work, you check mine.

    So, I agree that the test suite does a horrible job of assuring customers that their old software will still run, or will run on compatible platforms. I agree that the last bug will not be found. However, that is not an excuse to give up the search.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...