Linux Foundation Promises LSB4

gbjbaanb writes "Ever thought it was difficult to write software for Linux? For multiple distros? InternetNews reports that the LSB is making a push for its next release (due out later this year), which should make all that much easier. Although the LSB has not lived up to expectations, this time around Linux has a higher profile and ISVs are more interested, and the hope is to persuade them to develop applications that will run on any LSB-compliant Linux distribution. If it gets adopted, LSB 4 could bring a new wave of multidistribution Linux application development. 'It is critically important for Linux to have an easy way for software developers to write to distro "N," whether it's Red Hat, Ubuntu or Novell,' [said Jim Zemlin, executive director of the Linux Foundation.] 'The reason you need that is because we don't want what happened to Unix to happen to Linux in terms of fragmentation.' The LSB defines a core set of APIs and libraries, so ISVs can develop and port applications that will work on LSB-certified Linux distributions."
This discussion has been archived. No new comments can be posted.

  • by RandoX ( 828285 ) on Friday August 01, 2008 @01:59PM (#24437207)

    The Linux Standard Base, or LSB, is a joint project by several Linux distributions under the organizational structure of the Linux Foundation to standardize the internal structure of Linux-based operating systems. The LSB is based on the POSIX specification, the Single UNIX Specification, and several other open standards, but extends them in certain areas.

    http://en.wikipedia.org/wiki/Linux_Standard_Base [wikipedia.org]

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Thank you. fscking acronyms... If you're gonna use them, at least define them once up front, kinda like a variable. If I see one more article using SMB or SMS without hinting at which of the numerous meanings they mean, I'll puke.
    • by Jason Earl ( 1894 ) on Friday August 01, 2008 @05:31PM (#24441193) Homepage Journal

      The Linux Standard Base is essentially a farce. The Wikipedia article linked to above gives a pretty good overview of why, but the primary reason is that developers don't want a set of tests that they can run against their application to see if it is portable. They want a binary distribution that they can actually install their software on and test against. Originally that's precisely what the LSB was supposed to be. It was going to be a small installable distribution based on Debian.

      At the time Caldera thought that would be problematic, and so the current incarnation of the LSB was born. Not that anyone uses it, as it is a complete waste of time.

      • Re: (Score:3, Insightful)

        "Not that anyone uses it, as it is a complete waste of time."

        I don't know if it's a waste of time. I don't even know what the original motivations for the LSB were.

        But I do know what it is meant for now: it is an attempt by some distribution vendors to make it easier to cooperate with proprietary software vendors. Not that I think this is a good or bad thing for them to do, but it's not a goal I'm interested in, so I don't give a damn about the LSB.

        Making better/easier portable configure-like tools? Sure.
        Mak

        • Re: (Score:3, Informative)

          by Yfrwlf ( 998822 )
          A binary base for proprietary software vendors to throw their software wherever they like, with half-assed start/stop scripts and no integration with the native package management tools of my distribution of choice?

          Uh, that's exactly what the LSB is trying to put a stop to. Currently, software installation sucks [ianmurdock.com] because it isn't integrated with the native package manager. One of the reasons for forming a cohesive, extensible packaging API is so that any package can communicate with t
  • Web devs, Python devs, etc. likely don't find it that difficult.
    • by Splab ( 574204 ) on Friday August 01, 2008 @02:42PM (#24437917)

      Wrong.

      Any given distro has to choose which modules each program supports; this means that even as a PHP programmer you have no guarantee your software will work with the default PHP installation on a specific distro.
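The same trap exists in Python, where distros also split the interpreter's module set into optional subpackages. A minimal defensive sketch (the module names are just examples, not a statement about any particular distro):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if this interpreter's install ships the given module."""
    return importlib.util.find_spec(name) is not None

# A program that needs an optional extension has to probe for it, because
# different distros bundle different module sets in their default install.
if has_module("sqlite3"):
    import sqlite3  # safe: this build includes it
else:
    sqlite3 = None  # degrade gracefully, or tell the user what to install
```

Some distros really have shipped parts of the standard library in separate packages, so probing beats assuming.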

  • "The reason you need that is because we don't want what happened to Unix to happen to Linux in terms of fragmentation." says Jim Zemlin, executive director of the Linux Foundation.

    He needs to read the GPL and understand how it differs from the various PROPRIETARY licenses that caused the *nix fragmentation.

    • by Grey_14 ( 570901 ) on Friday August 01, 2008 @02:16PM (#24437495) Homepage

      Maybe you mean something different, but I'm not sure how your statement relates to this issue. AFAIK the LSB is about standardizing directory layouts and configuration files, and while under the GPL any Linux distro CAN be made to follow those guidelines, almost none of them DO, so the difference between nonstandardized Linux systems and nonstandardized UNIX systems is a philosophical one, not a practical one.

      (Although on Linux it's a fair bit easier to remedy)

      • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday August 01, 2008 @02:21PM (#24437599)

        Maybe you mean something different, but I'm not sure how your statement relates to this issue.

        It relates to his statement that I quoted.

        "The reason you need that is because we don't want what happened to Unix to happen to Linux in terms of fragmentation." says Jim Zemlin, executive director of the Linux Foundation.

        That shows how clueless he is regarding the history of *nix.

        It was the various PROPRIETARY licenses that caused the fragmentation because an improvement made by HP had to be specifically licensed by Sun to be included in Solaris.

        But with the GPL, the improvements made in one fork are available to ALL forks.

        Therefore, the fragmentation will not happen because if a feature is worth it, it will be ported to the other forks. Without the need to coordinate licenses with HP or Sun or anyone else.

        The GPL rocks.

        • by renoX ( 11677 )

          >the fragmentation will not happen

          Will not?? It has already happened! If you have software that is certified on distribution X, it may or may not run on distribution Y: you have no guarantee; the fragmentation is already here.

          Features may be ported, but not necessarily in a compatible way: witness how easily the rpm tools have fragmented recently. OK, there is now an effort to reunite them, but this example shows that licensing compatibility is by no means sufficient to ensure binary compatibility, w

          • Will not?? It has already happened! If you have software that is certified on distribution X, it may or may not run on distribution Y: you have no guarantee; the fragmentation is already here.

            And an example of that would be ... ?

            And what is this "guarantee" that you're talking about?

            Features may be ported, but not necessarily in a compatible way: witness how easily the rpm tools have fragmented recently. OK, there is now an effort to reunite them, but this example shows that licensing compatibility is by

            • Re: (Score:3, Insightful)

              by Poltras ( 680608 )
              Softimage's XSI, VMware, and much other software that can't be recompiled, tested, and supported on many distros, or that needs libs that are guaranteed on a platform. You think everything can come from a repository and that every bug in every distribution can be supported, tested, and fixed consistently?
              • Softimage's XSI, VMware, and much other software that can't be recompiled, tested, and supported on many distros, or that needs libs that are guaranteed on a platform.

                I don't know about you, but I have run VMware Server and Workstation on Red Hat, SuSE and Ubuntu.

                Yes, I have. I'm still running VMware Server on my Ubuntu box (Hardy Heron). It works. I have to run a script every time I upgrade the kernel, but that is all.

                I have also run Apache, Samba, BIND and many others on different distributions. Without an

                • by Poltras ( 680608 )

                  Apache, Samba and BIND use autoconf and are recompiled for every distro. They also have packages and are part of the repositories. That's not the case with many proprietary vendors...

                  I agree VMware works on most major distributions (it recompiles the kernel module, which it shouldn't have to), although I'm sure it won't work on YDL or every shady distro out there.

                  Also, IIRC, they have specific distros for their technical support services. Guess why...

                  • Apache, Samba and BIND use autoconf and are recompiled for every distro. They also have packages and are part of the repositories. That's not the case with many proprietary vendors...

                    So? Fragmentation would be when they did NOT work.

                    Since they DO work, that is not fragmentation.

                    I agree VMware works on most major distributions (it recompiles the kernel module, which it shouldn't have to), although I'm sure it won't work on YDL or every shady distro out there.

                    Yeah ... so it does work.

                    Again, fragmentation would be wh

  • Distribution (Score:5, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Friday August 01, 2008 @02:08PM (#24437351)

    The quote in the summary reads:

    'It is critically important for Linux to have an easy way for software developers to write to distro "N," whether it's Red Hat, Ubuntu or Novell.'

    Personally (as a Linux on the desktop user), I'm a lot more concerned about easily acquiring and installing software than about whether it has problems with my distro. For the most part I can get software to run, but it can be a huge pain in the butt. I wish the LSB would focus on extending and standardizing package formats and creating advanced standards for package managers to simplify that part of my workflow. I never wonder "will this run on Ubuntu" so much as "which package format is this in, and how hard is it going to be to compile and update."

    • Re:Distribution (Score:5, Informative)

      by dlgeek ( 1065796 ) on Friday August 01, 2008 @02:15PM (#24437487)
      You should wonder whether it will run.

      Debian and Ubuntu use exactly the same packaging format (.deb). Try taking a debian package from a few years back and installing it on your system. Chances are, it won't work due to library incompatibilities.

      Now you could probably rebuild it for your system, but depending on what it is, it may or may not work.

      When you say "how hard is it going to be to compile and update"... that's exactly what the LSB is working on. It'll be trivially easy to compile a program written against the LSB specs on any LSB-compliant distro.
      • Try taking a debian package from a few years back and installing it on your system. Chances are, it won't work due to library incompatibilities.

        No, it probably wouldn't, but open source rarely takes steps backward, so it would be easy to just do sudo apt-get install *insert name of the deb package*. If it's proprietary, though, you're out of luck, but that's one of the (many) problems with non-free software.

    • by myrdos2 ( 989497 )
      I haven't looked at LSB 4, but in LSB 3 you'd make a standard .RPM package and it would supposedly install on any LSB-compliant Linux distribution.

      See here for more: http://www.linuxfoundation.org/en/Developers/LSB_Tutorial#Porting_your_code_to_the_LSB [linuxfoundation.org]

      "LSB-conforming systems promise to be able to install an LSB-compliant RPM. However, you need not limit yourself to that format, with the caveat that the packaging technology you choose must work on an LSB-compliant system. For example, a shell script wit
    • But, did you note the mention of "ISV" ?

      When last did you install Netbackup, Symantec Critical Server Protect, Veritas, Sun JES (and all its pieces) on Ubuntu? You haven't? Yes, this means that you aren't the target customer for the people who need problems solved.

      • Yes, this means that you aren't the target customer for the people who need problems solved.

        The point I was making is that it is hard for developers to easily get their software into a single format that all their customers can easily use. The fact that in my examples I'm not the target for the LSB does not matter. I'm the customer of their target, and when it is a pain for me to get and install software because the developer had to choose to target either Debian or Red Hat, or waste time trying to target both with different procedures, well, that is the exact problem they're supposedly working on solving.

  • LSB4 is all very well, but if RHEL does not follow (does anybody really think they will?) it will not amount to a hill of beans.

    • Exactly, Red Hat will be complaining that it doesn't use RPM, Ubuntu will grumble that they just made Apt and Deb simple enough for everyone to use, Gentoo will complain that it isn't fast enough...
    • It won't work because they'll (again) try to solve problems that don't need solving and (again) ignore the one thing that would actually make a difference.

      They'll probably throw yet another abstraction layer on top of init, one more for the package manager(s), and maybe a big tarball of symbolic links to top off the complexity party.

      But you won't hear anyone saying "Hey! What if we all agreed to use the same tool chain for a bit?"

  • by HighOrbit ( 631451 ) on Friday August 01, 2008 @02:24PM (#24437643)
    All the distributions use the same basic set of GNU tools (like GCC, binutils, bash) and common programs like the perl binary. So why not have all the contemporaneous (i.e. released in the same time-frame) distros use the same tools? Shuttleworth was basically advocating an extended version of this (although he phrased it in terms of a coordinated release cycle [slashdot.org]) to be policy across several distros and to include higher-level applications like GNOME, KDE, and OO (besides the low-level stuff like binutils).

    As I've said before [slashdot.org], software vendors like Oracle would love this because it would simplify their support.

    Now if only the LSB would stop the cluttering of /usr/bin with non-system programs and put user-installed apps in /usr/local or /opt where they belong. ;)
    • by X0563511 ( 793323 ) on Friday August 01, 2008 @02:37PM (#24437841) Homepage Journal

      Well, from what I've seen, /usr/local and /opt were reserved for the local sysadmin to manage, and the package management system generally stayed away from that. This meant that custom software and distro packages didn't have file conflicts.

      Now, I like the way that works, a lot. But I don't have any objection to further partitioning of that scheme.

    • Now if only LSB would stop the cluttering of /usr/bin with non-system programs and put user install apps in /usr/local or /opt where they belong

      Won't happen until Linux distributions work out what is a system program and what isn't. In *BSD and most other UNIX derivatives it's obvious. In a Linux distribution everything is a third-party component including the kernel, so where do you draw the line?

      • by Fweeky ( 41046 )

        They've already made that distinction; they all have a core set of packages which are vital to a working system, e.g. what you get when you debootstrap a Debian/Ubuntu install, which is enough for a usable shell, basic networking, and installing other apps.

  • by Doc Ruby ( 173196 ) on Friday August 01, 2008 @02:26PM (#24437665) Homepage Journal

    Other than eliminating conflicting directory structures, the most important standard for Linux distros to completely unify would be a single API to data protocols and MIME types. Like the one FreeDesktop.org has managed to sync (in principle) between GNOME and KDE Desktops, but for all distros (including servers).

    A registry of which app to hand off a URL to given its protocol part, to retrieve the data. A registry of which app to hand off the data to once it's retrieved. Different data handler lists for displaying, editing or executing (the usual Linux RWX modes) the content, depending on the use case triggering the registry access. The registries could include prioritized lists of different apps, depending on user selection or settable default preference. And of course any single app could be registered to either registry, in any mode it will function properly.

    Then the OS is performing its main task of connecting processes to the hardware and to each other. In a very simple and clear architecture. That every single app can use, without having to anticipate how the other apps will agree with it.

    If LSB4 can pull that off, using the existing attempts as a starting point, it won't just make a unified Linux target for developers across distros. It will make LSB4 itself more quickly and completely adopted, because its benefits will be so compelling.
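The two registries described above can be sketched in a few lines. Everything here (the handler names, the table layout) is hypothetical illustration, not any real FreeDesktop.org or LSB API:

```python
# A two-stage handler registry: one table maps a URL's scheme to the app
# that retrieves the data, the other maps the MIME type of the retrieved
# data to prioritized handler lists per mode (display/edit/...).
from urllib.parse import urlparse

protocol_registry = {            # scheme -> app that fetches the data
    "http": "fetch-http",
    "ftp": "fetch-ftp",
    "file": "fetch-local",
}

mime_registry = {                # MIME type -> handlers by mode, in priority order
    "image/png": {"display": ["image-viewer"], "edit": ["image-editor"]},
    "text/html": {"display": ["browser"], "edit": ["text-editor"]},
}

def resolve(url: str, mime: str, mode: str = "display"):
    """Pick the retriever for a URL and the top-priority handler for its data."""
    scheme = urlparse(url).scheme
    retriever = protocol_registry.get(scheme)
    handlers = mime_registry.get(mime, {}).get(mode, [])
    return retriever, handlers[0] if handlers else None
```

An app would call `resolve("http://example.org/logo.png", "image/png")` and hand off to whatever comes back, without ever hard-coding which viewer the user prefers.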

    • OK, I see plenty of layers of abstraction there, but I don't see a problem that needs solving. Are you trying to make it so that if I click on a PNG it gets opened with the application that I set in one of my registry configuration wizards?

      • Yes. And that if you click on any URL, or in any other way tell an app you want a network/internet/filesystem object with a URL, that the app will get the data, by relying on whatever standard processing you have assigned (or the OS assigns by default).

        Yes, this should all seem familiar from Firefox or your other browser - it was basically introduced with Netscape. But it should be an OS feature. So that any app can access it in a truly standard fashion. That's why it was started in FreeDesktop.org. But eve

  • by BELG ( 4429 ) on Friday August 01, 2008 @02:33PM (#24437761)

    We don't want it to happen?

    It already did.

    Distribution compatibility and package management is a big problem for most, if not all developers, and has been for a very long time.

    • Distribution compatibility and package management is a big problem for most, if not all developers, and has been for a very long time.

      And it will be, for quite some time to come. Linux doesn't have a stable ABI, so it's very hard to deploy [blogspot.com] to. I'm pretty sure even the LSB's goals don't reach far enough to solve this fundamental problem.

      • Linux does have a stable ABI for userland programs. Linux is a kernel. It only interfaces with userland programs via system calls. These are backwards-compatible right back to Linux 0.1.

        Linux distributions may or may not have a stable ABI. If you use C++, they tend to use the GCC ABI, which changes periodically for C++. Libraries also might not have a stable ABI, but if you write an app that depends on one, you have to check whether it makes ABI guarantees, and consider static linking if it doesn't.
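A small illustration of the distinction, on a POSIX/glibc system: the pid you get is the same whether you go through the language runtime or call the C library directly, because both bottom out in the same stable getpid(2) kernel interface. The kernel side is the stable part; library-level ABIs are where cross-distro breakage actually happens.

```python
import ctypes
import os

# Load the C library already linked into this process (POSIX-only trick).
libc = ctypes.CDLL(None)

# Both paths end in the same kernel syscall, so they must agree. Library
# ABIs (glibc symbol versions, the C++ ABI) sit above this stable layer.
print(libc.getpid() == os.getpid())
```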

    • Distribution compatibility and package management is a big problem for most, if not all developers, and has been for a very long time.

      Debian [lwn.net] focused on and solved this problem with their FHS (the whole lwn discussion [lwn.net] on LSB4 is here), and takes packaging and interoperability seriously (they also take distribution seriously, but other distros do that too). But IMHO, Debian represents the amount of rigor, effort, and time it takes to get these non-glamorous 'administrative' things right. In particular, a commitment to 'must pass defined installation/filesystem/interoperability test suite' over 'rpm -i seems to drop stuff in place ok' is h

    • Don't write your code to rely on v2.3b of some obscure library that other obscure programs may need a different version of, and that may not even exist on some systems. Stick with core libs like libc and you can't go wrong. Or, even better, just distribute a statically compiled binary.

  • Do we want LSB? (Score:2, Insightful)

    by maestroX ( 1061960 )
    Formalizing the basis of a Linux system seems awkward to me. It simply evolves; the LSB is following.

    I've never had any need for a standardized linux environment except when I had to run Civ3 using libc5. The kernel never really freezes AFAIK.

    The beauty of Linux is that progress continues; just switch distros. If you need something comfy and reliable, use Debian.

  • by harlows_monkeys ( 106428 ) on Friday August 01, 2008 @03:25PM (#24438875) Homepage
    Every project needs a code name. For this, I propose "Bullwinkle", and their slogan can be "This time for sure!".
  • +1 as a developer (Score:2, Insightful)

    by ge0ffrey ( 1337221 )
    As a developer I've considered this one of the (if not the) most important issues in Linux. I am happy to hear it's finally getting the attention it needs. Many applications (especially games) will only be released for Linux (and work out of the box without tweaking) once there is a decent way to build one release for any (LSB-compliant) Linux distro. I myself build Java applications (on Ubuntu) that work perfectly fine on Linux, but because of this problem, I simply don't bother building a release package f
  • They don't want what happened to UNIX happen to Linux?

    "But, Doctor Evil, that already happened."

    The horses have escaped and had children.

  • Instead of trying to make all the distributions the same, why don't they make a library that abstracts away the differences?

    Example: if my program needs to link to an SSL library (such as OpenSSL), version 2.3 or newer, I should call a function findLibrary("ssl", 2, 3), which would return the path to the needed .so file, or null if it is not installed. There could then be a function to ask the OS to install the needed library if it is missing.
    Each Linux distribution would then implement the library in its own way, so that the Red Hat version might forward the call to rpm, while the Debian version would query the dpkg database instead.

    And instead of the infinite debate on /opt vs /usr/local, the program could just call getPathForUserInstalledSoftware(), and getDefaultCompilerPath() instead of the current autoconf hack.

    Then a Linux standard base would just be a specification of the needed functions in LinuxStandardBaseLibrary.

    And we would never have to use the autoconf hack. (The library might of course also be implemented on Solaris, and maybe even Cygwin/Windows.)
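A rough sketch of what that findLibrary could look like. The search paths and the `libNAME.so.MAJOR.MINOR` naming convention are assumptions for illustration, not any real LSB or distro interface:

```python
# Hypothetical findLibrary("ssl", 2, 3): return the newest installed
# libssl.so.MAJOR.MINOR satisfying a minimum version, or None.
import os
import re

LIB_DIRS = ["/usr/lib", "/usr/lib64", "/lib", "/usr/local/lib"]  # assumed paths

def parse_version(filename, name):
    """Extract (major, minor) from e.g. 'libssl.so.2.3'; None if no match."""
    m = re.fullmatch(rf"lib{re.escape(name)}\.so\.(\d+)\.(\d+)", filename)
    return (int(m.group(1)), int(m.group(2))) if m else None

def pick_best(filenames, name, major, minor):
    """Return the highest-versioned matching filename meeting the minimum."""
    best = None
    for fn in filenames:
        v = parse_version(fn, name)
        if v is not None and v >= (major, minor):
            if best is None or v > best[0]:
                best = (v, fn)
    return best[1] if best else None

def find_library(name, major, minor):
    """Scan the assumed library directories for a suitable shared object."""
    for d in LIB_DIRS:
        if os.path.isdir(d):
            hit = pick_best(os.listdir(d), name, major, minor)
            if hit:
                return os.path.join(d, hit)
    return None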

    • The trouble is, LSB would essentially turn into ICANN in that regard. How do you determine what qualifies a library for inclusion in LSB except by charging like we do for domain names? Of course, there could also be a vetting process, but one person decides that the vetting process is not working for them, and suddenly you have an alternate version of libSSL floating around that's required for a specific program, and before long the entire thing falls flat.

      We need things to be as similar as possible.

