Linux Software

Linux Standard Base 1.0 135

Peter Makholm writes: "Version 1.0 of the Linux Standard Base has finally been released. Now software vendors can simply say that they comply with the standard, and you should be able to use their software on any Linux, whether you use Debian, TurboLinux, or Open Linux. Check out the standard at linuxbase.org."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    There are times when distributions really shouldn't need to follow this standard. Example: embedded Linux. You don't necessarily want to follow a full desktop specification. Nor, for that matter, do you necessarily want to do so on all forms of desktop Linux.

    Certainly it is undeniable that the major distributions could do with some standardisation (although frankly, whilst rpms have caused me many problems throughout the years, I've never found that kde and gnome were particularly in conflict).
  • by Anonymous Coward
    While it would take more effort, I would argue that it would be better in the long run to define a separate specification for embedded systems. The embedded systems motto, "perform one function and perform it well" (I know, that motto is usually applied to UNIX shell tools, but it fits here as well =), means that an embedded-friendly LSB would be much too lean. If we start removing functionality from the LSB until it's trim enough to work in an embedded setup, then we are also doing a disservice to the server/workstation/desktop ISVs who would have to continually put in extra effort to get their applications up to spec. We're trying to entice these people to support GNU/Linux, remember?
  • by Anonymous Coward
    In order to succeed, the LSB doesn't have to be adhered to by ALL Linux distros.

    For example, a linux-based PDA might not be expected to be able to run ALL off-the-shelf desktop applications. Developers targeting such a distro may need to know what they're doing a bit more.

    That's not to say that all distros shouldn't do their best to come as close as possible (within space constraints, etc) to the standard. That just makes sense in terms of wooing developers.

    Still, the LSB takes us that much closer to a place where you won't have to have RedHat training vs. Caldera training, etc. The same applies for ISVs, complaints about GUI standards notwithstanding.

    One of these days the KDE and GNOME camps will agree on shared standards for icons/menus/cut-paste/object models to the point that it'll be feasible for the LSB to address them. Either that or one of these camps will eventually 'win', also making it easy for the LSB to address these issues.

    So distros, get cracking. Let's see who's first to demonstrate compliance. RedHat, it would be great if it were you, since you're the only distro big enough to buck the LSB and get away with it. This is a great opportunity to demonstrate that you've no intention of manipulating the Linux market.

    And KDE/GNOME warriors: get over it. Start cooperating - seriously. Your toolkits can both succeed even if your desktops don't. The essence of cooperation is that each side loses a little, but we all gain!
  • by Anonymous Coward
    I don't want my experience dictated down to desktop choice. And rest assured that if one of KDE or Gnome became LSB, in effect it becomes the dictated choice through the weight of its popularity. If that's what I was after, Windows would do.

    For package management, as long as the libraries, config files, etc. install with LSB compliance, why couldn't DEB and RPM co-exist?

    Fully agree with you regarding printers, fonts, etc., but it's worth pointing out that the fundamental difference between this and desktops is similar to that between the driver's cockpit of a vehicle and its basic mechanics. We expect the latter to just work, but I don't want the interior standardized across all manufacturers and models.

  • by Anonymous Coward
    Bah, back in my day we didn't have these fancy schmancy package managers. We had to go through all the effort of typing:

    ./configure
    make
    make install

    You see, the kids, they use the Windows operating system, which gives them the brain damage. With their clicking and their typing and their e-mail and their browsing. So they don't know what Linux is all about!
  • Yeah, that's a concern. Of course it would be a bit tricky for a program to figure out which files were worth transmitting, but the point is valid.

    And even I have violated said principle by running installers as root ... for things like Corel Office.

    ---
  • I dunno, but when I buy hardware I make sure it has open source drivers. We have a choice, so make sure manufacturers are listening.
    ---
  • by Micah ( 278 ) on Saturday June 30, 2001 @09:12AM (#118006) Homepage Journal
    The application should not contain binary only software which it depends on running as root, as this makes security auditing harder or even impossible.


    Personally I try hard not to run closed source software as root. I'm glad to see this in the standard.

    ---
  • I don't think so... [linuxbase.org]. Ok, it's a minor mistake, but they call it a 1.0 release.
  • Amen. Vendors should not have any defined idea of where their files go; system administrators should have fine-grained control over where software packages end up. (The --prefix option in configure scripts is a godsend; every package should use something like this!)

    Good control over where software goes facilitates terrific schemes for software management, like the outstanding, time-tested /usr/site [jhu.edu] system, which permits extremely fine-grained control over what packages are installed, allows multiple architectures to be handled at once, and splits installs such that all of a package's files go under one logical place, so that the whole package can be removed with a simple rm -rf.

    This is one place where free software leapfrogs commercial, in its ability to handle nonstandard placement (after all, if the software doesn't like where you want to put it, just fix it so it does!)
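
    For illustration, a minimal sketch of that workflow (the /usr/site path and package name are placeholders):

    # Build an imaginary package into its own self-contained tree:
    ./configure --prefix=/usr/site/foo-1.0
    make
    make install

    # Every file now lives under one directory, so removing the
    # whole package really is a single command:
    rm -rf /usr/site/foo-1.0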

    Yes, but this version of the LSB is so limited that the developer is basically going to have to either ship every single library they link against (including glibc, if they want some of the new features found there), or say "Supports RedHat 7.? only."

    Not to mention the fact that there is no reference platform for the LSB, making it possible for a developer to create an application that they think is LSB compliant, but which still has bugs on LSB-compliant distributions (which may have different versions of the required libraries).

    Fortunately, since RedHat bases its distribution on open libraries, it is easy for the end user (or the distribution makers) to simply match RedHat's choice of libraries. It's a pain in the neck, but until the other distributions are willing to push for a competitive, usable LSB, RedHat will remain the de-facto standard.

    I am not a RedHat user and I wouldn't want to target the LSB, so why should the developers that are currently using RedHat as a target make the switch?

  • You apparently are confused as to what a reference platform is. A reference platform, in this case, would be a distribution with only the libraries that the LSB provides included. Then software that ran on the LSB reference platform would be guaranteed to run on any LSB compliant distribution. Caldera might be LSB compliant, but they aren't the LSB reference platform by any stretch of the imagination.

    The tools included with the LSB make it possible to automate checking what libraries your application links, but they are not nearly as straightforward to use as simply having a reference platform.

  • The folks working on the LSB could have easily made the reference platform first. In fact, early incarnations of the LSB were supposed to be binary reference platforms (based on Debian).

    They could have used a subset of Debian stable as the reference platform, and simply documented what was available in that subset, instead of creating an "imaginary" distribution (one that is not installable and therefore not tested in real life) and then perhaps creating a distribution based on it after the fact.

    Now they have got an old snapshot of several GNU/Linux libraries, and they hope to get developers to use those libraries instead of the fancier versions that come with any modern distribution.

    On the other hand, if a developer chooses to use the newer libraries bundled with RedHat, he will be able to successfully target the largest part of the Linux market (RedHat), and he gets the benefit of using the more modern technology. Besides, any non-RedHat users who wish to run his software can simply download the appropriate libraries. Linuxers have been doing this for some time now, and it really isn't that big a deal.

  • I really don't see how the naming convention outlined in the LSB helps RPM users. The major benefit of the Debian packaging system is that all of the packages comply with Debian's strict standards and are then tested together. In a nutshell, if someone builds a Debian package they can safely assume that zip-2.30-3 is the same on every Debian install, whereas RedHat and SuSE might have zip RPMs that install zip in drastically different, incompatible locations (and they may even be based on different software).

    Debian provides a safe, non-commercial base on which to build. RedHat, SuSE, Caldera, and the other RPM distributions are each basically separate entities, with no coordination at all, even at the most basic level. The LSB has tried to remedy this by making the packager put their name in the filename (as opposed to the SPEC file).

    The LSB tries to patch this up by providing standard instances of about 125 shell utilities and a couple dozen libraries. Big whoop. Only the simplest of applications will be able to get by on the libraries provided (and the libraries provided will soon be ridiculously old to boot).

    In other words, nearly any application is going to require a substantial number of non-LSB packages to run. And many pieces of software won't ever be available as LSB packages because they rely on newer features of the libraries in question.

    Just to give you an example of how crazy this is, let's imagine that Sun were to create a Gnome application that they wanted to distribute. Since Gnome isn't part of the spec, they would not only have to package their own software as an LSB package, but they would have to package all of Gnome as well (because they can't rely on vendor RPM packages). This would make their application very large, and they would basically guarantee that they would have to maintain their own packages for all of Gnome. These packages would probably be incompatible with the version you already had installed and with the versions that HP, IBM, and every other vendor was using for their LSB Gnome apps. After all, we have quite a bit of confusion right now with only a limited number of Linux distributors; if everyone who wanted to sell a Gnome application had their own version of Gnome, it would be even worse than the current mess.

    On the other hand, they could simply develop with RedHat version ?.? as their target and rely on RedHat to package Gnome for them. Anyone else wishing to run their application would have to have a RedHat-compatible version of Gnome. That sounds tricky, but it would almost certainly be available from your distribution vendor. Since most Linux libraries are quite backwards compatible, installation would probably be as easy as getting the newest version of Gnome and installing it.

  • by Jason Earl ( 1894 ) on Saturday June 30, 2001 @02:53PM (#118013) Homepage Journal

    Standards are good, but you wouldn't want to be stuck with these particular standards forever. The LSB talks about the libraries that are supposed to be included with a Linux distribution, and goes as far as to specifically state which versions they should be (although not which minor version). In other words, you get things like: ncurses 4 and 5 should be included, the tar is GNU tar version such-and-such (I don't know which version they specified), and the shell is bash version whatever.

    That's fine and dandy for now, but two years from now LSB version 1.0 is going to look pathetic. Developers aren't going to want to stick to it because the software available will be so much nicer. Heck, the software available now is nicer than what is specified in the LSB.

    Not to mention the fact that there isn't an LSB reference platform. The only way to make sure that your package is LSB compliant is to do a code audit. If commercial developers were willing to do this, they would already be making portable packages. The stuff listed in the LSB is not rocket science. In fact, every single distribution has had to solve all of the relevant problems. The LSB won't solve a thing.

    The original plan for the LSB was to build a reference platform. This platform would probably have been the Debian base platform plus some other basic necessities. This way the commercial developer could have actually tested his application against the reference, and all the other distributions would have had to do was make sure they included at least an optional set of libraries that was precisely like the libraries included in the LSB reference. That would have been useful, and it would have allowed the standard to migrate intelligently with time. Every time you got a major Debian stable rev (about once a year) the LSB would rev as well, and everyone would know ahead of time where the new standard was going (they would just have to participate in the Debian mailing lists).

    All is not lost, however. Linux still has a standard. It's a de-facto standard, but it is also an open standard, and so it will do. That standard is the freely available bits in RedHat Linux. It will probably tick off Caldera, SuSE, and Mandrake that they will have to continue to track what the folks at RedHat are up to, but it is their own fault for making the LSB so unpalatable. None of the commercial Linux vendors wanted to do the right thing and create a standard that was actually competitive with their own distributions, and so they created a standard that is so unpalatable to developers that it will never get used.

  • by Jason Earl ( 1894 ) on Saturday June 30, 2001 @12:11PM (#118014) Homepage Journal

    The LSB is useless enough as it is. Your plan would basically tell the developer, "This platform is about as friendly as a rabid Komodo dragon, feel free to pay no attention to our specifications."

    You need to remember the problem that the LSB is designed to solve. The LSB is designed to give commercial developers a reference platform that they can develop to and then be guaranteed that their software will run on every LSB compliant Linux distribution. Right now most commercial developers simply target RedHat, and then let the rest of us that don't use RedHat sort out how we are going to get the software to run on our platform o' choice. Sometimes, for various reasons, the commercial vendor will even admit to supporting several different distributions, but they don't like the work that this takes, especially considering the size of the market.

    So the LSB folks put together a set of minimum requirements for a Linux distribution, and quite frankly, my guess is that they are too minimal to really be of any use.

    You see, while you might be interested in the smallest Linux distribution possible, most people want the added features of GNU tar and bash, and you can bet that commercial developers are going to want a lot more than that. Unfortunately, since the LSB is not a distribution in itself, they almost certainly will get more than that. They will continue to do precisely what they are doing now. They will develop their software on RedHat, using RedHat's cutting-edge libraries, and when they are finished and want to see what it would take to make their software LSB compliant, they will realize that it would take a significant amount of work.

    The LSB is like a snapshot of GNU/Linux frozen in time. It's sort of like running Debian stable: it's chock-full of good stable software, but chances are the version that you really want to be running is not the one available. There are a lot of features that simply aren't available if you are only using the LSB libraries. If the LSB were a standalone distribution, then you could at least use it as a development platform. But since it's not, commercial developers will continue to do what they do now. They will target RedHat, and force the other distributions to follow RedHat's lead.

    Oh well, I personally use Debian, but I can't help but think that we could certainly do worse than using RedHat as a de-facto standard. At least they are committed to Free Software. The standard at least will be an open standard.

  • Reading through the LSB specification, I am disappointed that it prescribes the "fat" GNU implementations of standard Unix tools as a minimum requirement for compliance. Of course, I'm in love with the added functionality of the GNU tools; I just think it's wrong to make them "standard", i.e. a dependency.

    Examples:

    • The standard LSB shell is bash,
    • the standard LSB tar is GNU tar (with the -z option and others).
    Making them a standard requirement rules out more lightweight implementations, such as the ash shell, busybox, or the BSD tools. This in turn makes it impossible to build embedded Linux systems conforming to the LSB - be it PDA Linux or one-disk routers.

    Instead of targeting only server/workstation setups, I would have preferred the LSB to settle on low common denominators, like

    • the common POSIX-compliant functional subset of ash and bash,
    • the common subset of GNU tar, BSD tar, and busybox tar, etc., etc.

    This would still allow anyone to make bash the standard shell and GNU tar the standard tar in an LSB-compliant distribution, but it would require third-party software makers to take care that their shell scripts run on ash as well as on bash if they want the LSB compliance sticker for their product.
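
    To make that concrete, here is a rough sketch (the test and filenames are contrived) of the kind of change such a rule would demand of third-party install scripts:

    # A bash/GNU-tar idiom that may break under ash or BSD tar:
    if [ "$answer" == "yes" ]; then echo ok; fi
    tar zxf package.tgz

    # The POSIX-portable spelling, which runs under ash as well as bash:
    if [ "$answer" = "yes" ]; then echo ok; fi
    gunzip -c package.tgz | tar xf -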

    (P.S.: That said, I would love to see GNU/Linux distributions - above all, Debian - scale down to a basic ash/busybox setup, which would require [a] getting rid of bash/GNU-tool-specific syntax in the setup and configuration scripts, and [b] freeing all their package management from dependencies on scripting languages like Perl, using ash scripts + minimal sed + minimal awk for simple tasks and compiled C code for more complicated stuff.)

  • I would be happy if they forced people to put binaries in the right place:
    ones critical for system boot in /bin and /sbin, and the rest in /usr/bin, /usr/sbin, etc.
    And what about boot scripts?
    Where do they go?
    /etc/init.d/, not /etc/rc.d/init.d/!
  • FYI, /etc/init.d is the LSB-specified place to put init scripts. The FHS doesn't have anything to say about init.d.

    My personal argument would be that I might run a service that was installed in /opt or /usr/local, but it would make no sense to install the init script into /sbin for a server that is installed in /usr/local/sbin.

    Think about running a distro from CD-ROM that mounts /etc and /var as read-write RAM disks and mounts /usr/local from NFS. That way, most of the directories off the root dir don't need to be writable.
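
    As a rough sketch of how those pieces fit together (the daemon name and paths are made up), the script in /etc/init.d just points wherever the package actually landed:

    #!/bin/sh
    # /etc/init.d/food -- skeleton init script for an imaginary
    # daemon whose binary lives under /usr/local
    case "$1" in
      start)   /usr/local/sbin/food ;;
      stop)    killall food ;;
      restart) $0 stop; $0 start ;;
      *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
    esac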
  • > So the LSB folks put together a set of minimum requirements for a Linux distribution, and quite frankly, my guess is that they are too minimal to really be of any use.

    Can you be specific here? Maybe I'm not understanding something, but if software package foo depends on lib baz, the lsb rpm for foo should depend on baz. Simple.

    LSB doesn't need to include every lib anybody will want when writing software; it only needs to include the really basic stuff every distro has, so that distros can be standard about it. But if a package needs extra functionality, that's what package manager dependencies are there for. Does that mean LSB should be pared down to the most minimalist possible set of system requirements? No, that's not practical. The requirements need to reflect reality, and most systems just aren't that minimal.
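
    As a sketch of what that looks like in practice (the package and library names are hypothetical):

    # An LSB rpm for 'foo' that needs 'libbaz' just declares the
    # dependency; you can inspect it without installing:
    rpm -qp --requires foo-1.0-1.i386.rpm
    # rpm will then refuse to install foo until something that
    # provides libbaz is present on the system.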

    Personally, I'm very impressed with the LSB. Especially the naming conventions for rpms. My guess is that this could go a long way toward giving rpms some of the same reliability and dependability that Debian users are used to with debs.
  • by IGnatius T Foobar ( 4328 ) on Saturday June 30, 2001 @08:59AM (#118019) Homepage Journal
    Go ahead and mod this into the toilet if you want to, but it's a serious criticism and something I feel strongly about:

    The LSB is not enough to offer a single target for ISV's.

    It is missing two important things:
    • A standard package format (RPM or DEB)
    • A standard desktop framework (KDE or GNOME)
    Until the coordinators of the LSB get "ballsy" enough to actually dictate these things (and rest assured it will anger 50 percent of the Linux community), we still do not have a single platform target for app installations.

    If you look at the ISVs who have ventured into Linux so far, the single target is (and, I believe, will remain until these issues are resolved) Red Hat.

    When users install desktop apps, they expect the following things to happen:
    • The installer needs to be easily startable (ok, we might be doing ok there)
    • Icons and menu items are automatically added to the desktop
    • Resources such as printers, fonts, etc. need to be connected to automatically
    • If updated system libraries or components are required, find them and offer the opportunity to install them
    The LSB is a good start, but it's not a comprehensive binary target. I believe that you can't make everyone happy -- some truly serious decisions such as package manager and desktop framework need to be made.
    --
  • Read the fucking documentation, perhaps? Some slackware users have a clue, and aren't the sort of people who would install random-irc-opengl-mp3-plugin.rpm as root without knowing exactly what is going on. Anyway, a .tgz can check dependencies if it wants, by looking in /var/log/packages and bailing out of its install scripts.
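
    A crude sketch of such a check, as it might appear in a package's install script (the zlib dependency is just an example):

    #!/bin/sh
    # doinst.sh fragment: Slackware records each installed package
    # as a file under /var/log/packages, so a dependency check can
    # be a simple filename glob.
    if ! ls /var/log/packages/zlib-* >/dev/null 2>&1; then
        echo "This package needs zlib installed first; aborting." >&2
        exit 1
    fi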
  • thanks. I just find it annoying that people on /. tend to only lean towards Debian. The poster should have mentioned all the contributors or at least placed a link to the list.

    The world does not revolve around Debian.

    Just my worthless .02
  • by JoeBuck ( 7947 ) on Saturday June 30, 2001 @11:22AM (#118022) Homepage

    The intent of the LSB is to document what an application developer can rely on. If you want to make the LSB allow multiple versions of the basic tools, it means that application developers have to do something like autoconf to get their package installed. Can't use "tar zxf foo.tgz", since Florian doesn't want GNU tar. Can't use anything but the most basic Posix commands, because again Florian doesn't want to use them.

    If you want to say that the LSB should only specify Posix-compatible commands, why do you need the LSB? Just use Posix.

    There may be a need for another specification for a cut-down Linux, something like the difference between hosted and freestanding implementations of ANSI C. But in this case, the standards folks are standardizing based on what is in place. Every GNU/Linux provider makes bash and GNU tar available. People with special needs may want to install alternatives, but in this era of boxes with a Pentium 3, 20 GB of disk, and 128 MB of RAM for $800 or less, it makes no sense to make the lives of application developers harder to satisfy a few dissidents who want to save a few kilobytes.

    Also, remember that the LSB is only a minimum standard. Developers who want to be more robust, and portable to BSD and Unix as well as Linux, will continue to be more rigorous in assuming only Posix features.

  • A standard shouldn't become a standard because a lot of people use it.

    For both good and bad reasons, this simply isn't true.

    We buy gas by the gallon and drive our cars by the mile. (At least in the US.) How many inches in a mile? How many teaspoons in a gallon? Why aren't we using the metric system?

    Windows. Could someone explain to me the technical advantages of windows? I'm not sure there have ever been that many. Yet it is the standard desktop operating system.

    Language. English has become the standard language for much of academia and beyond. If logic dictated language usage, we'd probably be speaking Esperanto.

    The word "standard" means doing what everyone else is doing.

  • Which would make .deb the standard, at least in my experience (no twenty-seven-hours-of-which-package-might-that-stupid-library-i-need-be-found-in -> 'better' dependencies).

    Your problems would not at all be solved by everyone using .deb, because a lot of them are probably caused by binary incompatibility (for example, if you have to change a core component of your distribution to install a new package, the real problem has to do with binary incompatibility).

    The dependency system works fine, and RPM is nicer for packagers, since you don't have the same one-patch limit imposed by .deb.

  • Making them a standard requirement rules out more lightweight implementations, such as the ash shell and the busybox or the BSD tools.

    Lightweight isn't standard, and shouldn't be.

    Now, if you'd like to propose an LLSB, go right ahead; sounds like a dandy idea.

    -
  • That's a really stupid argument for not adopting a better standard of measurement, you should be ashamed of yourself.
  • by mwr ( 12650 ) on Saturday June 30, 2001 @01:45PM (#118027)
    The dependency system works fine, and RPM is nicer for packagers, since you don't have the same one-patch limit imposed by .deb.

    There's not a one-patch limit. There's a pristine tarball and a .diff.gz file, sure, but that doesn't mean there's only one patch involved. The .diff.gz shows the difference between the original source tree and the Debianized one.

    For example, in the Debianized Apache source tree, there's a whole directory of patches which are applied to the upstream sources during the build process:

    mwr@ch208h:~/apache-1.3.20/debian/patches$ ls
    ab_round_robin_support         phf_dot_log
    apxs_assumes_dso               regex_must_conform_to_posix_for_LFS_to_work
    debian_config_layout           server_subversioning
    debian_ps_is_brutal            suexec_of_death
    hurd_compile_fix_for_upstream  usr_bin_perl_owns_you
    mime_type_fix

    Each can be selectively applied if you edit the build/make scripts, too. Not every package uses this selective patch method, but the really complicated ones often do (X, libc, and Apache at least, and those are just the ones I've seen personally).

  • alien

    Converts a package from deb to rpm and back.
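
    Usage is about as simple as it sounds; a sketch with placeholder filenames:

    # Convert an rpm into a .deb and install the result:
    alien --to-deb foo-1.0-1.i386.rpm
    dpkg -i foo_*.deb

    # Or go the other way:
    alien --to-rpm bar_2.3-1_i386.deb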
  • The point of complex software development is to reuse code as much as possible, so you need standalone chunks of code that communicate with each other. CORBA (e.g., Gnome's Bonobo) or a quick-and-dirty hack like KParts is all about code modularity and code reuse. You can't just go with Xlib, gcc, and glibc - at least not for apps with millions of lines of code. And beyond object models, you need to standardize on things like the drag-and-drop implementation. Once you do all that, you've got a desktop environment, and so yes, you have to tie your app to the desktop.
  • Yes. My feeling is that there needs to be some easy way to set up "application users" (rather like chroot, but I'm told there are good reasons why chroot can't be used here).

    The idea is that a user would be able to set up sub-user accounts that have only partial access to the privileges of the user. I.e., these accounts would stand in the same relationship to the user account as the user stands to root.

    Then there needs to be some easy way to flip into the sub-account at application startup time, sort of like:
    start Mozilla -as "browsing -pw gecko"

    where browsing would be the particular sub-account, with a password of gecko. Browsing would have rights restricted to those of the sub-account owner, possibly subject to further restrictions. (Yes, you can save to my downloads folder, but you can't look at my documents folder.)
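
    The closest approximation available today is a full dedicated account plus su, which gets the isolation but none of the convenient sub-user delegation (a sketch; the account name and browser binary are placeholders):

    # One-time setup (as root): a throwaway account just for browsing.
    useradd -m browsing

    # Allow local connections to the X display, then launch the
    # browser under the restricted account:
    xhost +local:
    su browsing -c 'DISPLAY=:0 mozilla'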

    Caution: Now approaching the (technological) singularity.
  • Here's what seems like a good idea: A standard API for installing apps to any desktop environment. It doesn't matter what the desktop is, just that it complies. This way vendors can write a generic install script for "the linux desktop" and not have to worry about what environment a user has. I think this is something the KDE/GNOME developers can agree on, don't you?
  • by Medievalist ( 16032 ) on Saturday June 30, 2001 @11:37AM (#118032)
    The primary reason to prefer linux over traditional unices (for me, as a sysadmin and user) is the clean separation of configuration files from the binaries they control and the data those binaries use.
    By this I refer to the way one can simply back up the /etc directory structure and capture the entire configuration of most linux distros, without getting anything else. This is extremely useful. Similar tricks can be played with /var on nameservers and DHCP servers.
    Traditional unices (the most egregious example being that hideous train-wreck of a Unix, HP-UX) scatter configuration files and binaries willy-nilly across the file systems, every program having its own unique hidey-hole. People steeped in Unix lore become inured to this, and start to think it is desirable because they are used to it. [Reality check - BINARIES SHOULD NOT BE IN /etc AND rc FILES SHOULD NOT BE IN /sbin! If that isn't obvious to you, you need a long vacation.]
    The LSB specifies the use of FHS 2.2 [pathname.com], which seems to be a more elegant version of the old Linux file system standard. The FHS standard specs an /opt directory for the installation of major third-party applications - that is, the kind of applications that you dedicate a server to, like databases or digital data acquisition systems.
    The problem is, the majority of the application vendors ram their code in any old place they want, and then their apps don't run without those specific locations. Symbolic links are the best compromise you can usually get, without forking off your own source base, and sometimes even that won't work. Then, to make matters worse, they often require specific versions of various libraries - usually obsolete and/or insecure ones, in my experience.
    So, the major distributors may get off their asses and implement the LSB eventually, which will be a Good Thing [TM] and will mean finally getting real, total compliance with the FHS, but application servers will still be wonky as soon as a big app (like tina or datastage - blech!) is installed. The LSB will supposedly address this by marketplace adjustment: app vendors without clues will fail commercially. I personally am not convinced this will happen, seeing how Solaris and the patently inferior HP-UX still command market share today. Commercial needs require applications, which require systems, and not the other way 'round.
    Me, I'll be happy when the ancient cruft like /etc/exports (I have a link named /etc/nfs.conf on the few machines where I am forced to run insecure crap like NFS and NIS) falls by the wayside. Until then no *nix standard can be both widely used and internally self-consistent.
    --Charlie

    Eric Dennis (Spothead Lex Animata) says the secret to happiness is lowered expectations.

  • You'll see the list of contributors and RedHat is on there...

    ---

  • Not only should packages have the --prefix option, but it should also be mandatory. I've decided to make it mandatory in mine. Without --prefix they will not install (but they will do a compile). So there will not be a default installation location.

    The reason for this is that system administrators need to wake up and stop doing installs in a daze. They need to think about what they are doing, have a plan, and follow through with it.

    One idea I was thinking of is requiring that a config file for the package must exist in /etc (or a directory in /etc, if it's so complex a package that it requires many config files) before it will compile or install.

    One of the things I do find disgusting is packages that hard-code locations other than the standard ones into the executables. If a package needs to know where to find stuff that the system administrator has put somewhere else, it should get that info from a config file in /etc. That way, if the system administrator does need to move things around, there is a way to say where to find them without having to recompile.
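
    A sketch of how a configure script can enforce that: autoconf leaves $prefix set to the literal string NONE when the user didn't pass --prefix, so a configure.in fragment can simply refuse to continue.

    dnl configure.in fragment: make --prefix mandatory.
    dnl (autoconf sets $prefix to "NONE" when --prefix was not given)
    if test "x$prefix" = "xNONE"; then
        AC_MSG_ERROR([no default install location; rerun configure with an explicit --prefix])
    fi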

  • Just be glad it's not a Microsoft product, where you have to wait until Service Pack 5 for the third release of the product before it sucks less.

    --
  • Wow. You must not have even read his post. He _did_ say that the official apt sources have good debs.

    He also pointed out Debian's rpm skills.
  • There are valid packages that need root access

    There are valid packages which require non-standard hardware and non-standard function calls, as well. And they may not be LSB compliant. From my reading, all that means is that they're not (very) portable. No big deal - use it if you need to, but don't expect to be able to compile or run it on every system under the sun.

    Beside, if you don't trust app vendors whom are you going to trust?
    No one. You assume that every app has security holes that could potentially compromise your system. You do a cost-benefit analysis and see if the potential for compromise outweighs the benefit of running the software. (P.S. Nice use of the word "whom" - hardly anyone knows how to do that these days.)

    If it is an "open" source app do you really have the skills to wade thru tens of thousands lines of code in search of something that might be hidden in two lines or even bunch of static hex defines?
    If it's closed source, NO ONE can audit the code. If it's open source, someone can. If it's something that is mission critical, then I will probably assist the community in auditing the code. Suggesting that we all do our own total code audits sounds to me like going back to the wonderful medieval system, everyone producing just enough to cover his/her own needs. :)

    If something is seriously wrong you will know about it regardless if it is open or closed source application.
    With closed source, I won't hear about a problem in the code until the vendor chooses to tell me. I know there won't be a fix until the vendor chooses to provide me with one. Open source does not guarantee a solution to this sort of problem; it does provide for the existence of a solution to this problem.

    I believe that you are confusing two different situations - that of the individual and that of the large group. (I see this a lot and point it out whenever I can.) Generalities don't scale. Things that are generally true for one person are often not true for large groups of people. If you ask one person "Will you be able to audit the code" the answer will be "No" 99% of the time. But if you ask a group of a million people the same thing, the answer will be "Yes" 99% of the time. Think about it...

  • The so-called "standard" decided on RPM because the LSB committee is stacked with Redhat employees and sympathizers. The LSB is nothing more than an excuse for Redhat to say "we are the standard", reaffirming their stance as Microsoft Linux.

    Also, packaging is not a trivial issue. Sure, you can simply repackage some program you built, but if it was built on a Redhat 6.2 system it's going to have different library dependencies than a Mandrake 7 system, which has different library dependencies than a Debian 2.2 system, which has different library dependencies than a Slackware 8 system, which has different library dependencies than ...
    Also, both rpm and deb usually require recompilation as part of the package building process.

    --
    Matt
  • What makes a good Windows application good is the UI, and Linux needs a document like the following: (MS URL snipped)

    You mean something like this:

    http://developer.kde.org/documentation/standards/kde/style/basics/index.html

    This was found in the KDE developer Reference Guides section (http://developer.kde.org/documentation/library/index.html), which is distinct from their Standards section (which details industry standards and Gnome-KDE and WM-KDE interoperability standards), their excellent tutorials, the architecture guide, and the 540-page book, available online and in print, that details the KDE interface and programming guidelines. The online version has user annotations.

    I would imagine that Gnome and Apple OS X have a similar set of documents. I've been a subscriber to MSDN for years - they *do* have some good resources, but they don't have the only set of good resources. And so, to answer your question, yes: Linux desktop environments *do* have UI standards.

    --
    Evan

  • WTF? Come on. Desktop shit is just eye candy!
    Who cares? Gnome, KDE, wmaker, sawfish, etc. sit on TOP of the OS. We are talking about the OS base, are we not?
  • I don't know about Mandrake, but the /etc/rc.d/ structure has been out of Red Hat since 7.0...
  • Code audits are possible, you know. The OpenBSD project has done it [openbsd.org]. It wouldn't have been possible with binary-only software.
    --
  • >you should be able to use the software on any
    >Linux whether you uses Debian, TurboLinux or
    >Open Linux

    I must say I have serious doubts about that!

    Is the LSB sufficient to make that goal a reality?

    I am not knowledgeable enough about this subject! Anybody care to comment?
  • by jfunk ( 33224 ) <jfunk@roadrunner.nf.net> on Saturday June 30, 2001 @10:49AM (#118044) Homepage
    It doesn't mention SuSE either, who have been striving for compliance. Starting with 7.1 (I think), the distro has been compliant with whatever state the LSB was in. The next release (7.3 or 8.0) will, in all likelihood, be compliant with LSB 1.0.

    As for Red Hat, I don't know. They've been pretty divergent on a lot of things. They put init scripts in /etc, for example, and /etc is for configuration files only. In SuSE, init scripts are in /sbin/init.d, and there is a symlink in /etc if you install the 'eazy' package (that way, you have a simple choice).

    They also place commands in pre/post-(un)install scripts that are not available on all distributions.

    One big thing that freaks me out about the use of RPM is the naming in the 'provides' and 'requires' fields. One package may 'require', say, python-gtk, while only 'pygtk' is provided. The right software is there, but the naming is a PITA.
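
    To make the mismatch concrete (the package names are hypothetical):

    # What names does the installed package advertise?
    rpm -q --provides pygtk
    # What names does the application package ask for?
    rpm -qp --requires someapp-1.0-1.i386.rpm
    # If the first list says 'pygtk' and the second says
    # 'python-gtk', the install fails even though the right
    # software is already on the disk.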
  • I think the real problem with your sig is that it's inflammatory by its very nature. Implying that Win2K is an "upgrade" from Linux 2.4 is inherently going to draw negative criticism to you. Perhaps you just enjoy attention. Regardless, you cannot argue that Windows is a technically superior solution and at the same time reduce someone else's arguments to nothing because you're looking at it from a "desktop user's" perspective. "Desktop user" implies through its connotation someone who is not very technically apt and therefore has no basis to judge a product as such.

    As for Win2K running faster on a strictly desktop system, that's a pantload in my opinion. You seem to be going by your own observations, and my observations tell me that the only application that starts faster on Windows is IE, and that's because they tied it into the operating system. If you hate waiting for a browser so much when you use Linux, just keep one open all the time. With X it's possible to have multiple virtual desktops, making it easy to have a perma-webbrowser that isn't taking up space. And given that Unix apps in general are much better about memory management than Windows (though I do admit that Microsoft has been trying to fix that), it doesn't cause a problem to do that. Anyway, don't complain about the RAM-disk solution for Konqueror, because that's essentially exactly what Windows does, except they don't tell you that explicitly, and Linux would much rather give you the option not to have precious RAM space sucked up by an application that is permanently nearly running.

    Regardless, get rid of the inflammatory sig and you won't have people complaining to you. Keep it and you'll continue getting attention, I'm sure (though the quality of the attention will continue to decline).

    -Mike
  • by akmed ( 33761 ) on Saturday June 30, 2001 @09:08AM (#118046) Homepage
    If you read the 1.0 standard, RPM is the official package format.

    http://www.linuxbase.org/spec/gLSB/gLSB/swinstall.html#PKGFORMAT

    Not that I personally agree with RPM, but I've heard that the RPM format is getting more robust and might actually offer all the things needed to have a unified packaging format. It'll be interesting to see if Debian and Slackware (along with other, smaller non-RPM distros) go along with this, though.

    -Mike
  • There's a problem that few people realize about lightweight standards - they encourage fragmentation. For example, both IRIX and Solaris conform to the UNIX spec, but the UNIX spec is so lightweight that it doesn't provide everything a modern OS needs. The lighter-weight the standard, the lighter-weight the apps need to be to conform to that standard. IRIX and Solaris are as different as night and day - and I don't just mean binary compatibility. IRIX, Solaris, Tru64, hell, even MacOS X all conform to UNIX, but they all have to extend it so much just to get some beyond-basic functionality out of it (and none of those extensions are standard!)
  • The fact of the matter is, Mandrake, Redhat, and such far exceed Turbo Linux and Open Linux in popularity. There's no reason to mention those two, really. From what I've seen, people throw Linux out the window after using Turbo or Open, and if we're lucky, try something a little bit less like the excreta that comes from the rears of cattle.

    This is just my personal opinion, of course. I suppose some people might like using Turbo or Open... but when was the last time something was released for them? I seriously don't know of a single geek that uses either. It's quite sad, really, that someone's efforts get wasted in such a manner. (Maybe the people of smaller distros could get together and work on a larger one? Ala, OpenTurbo Linux?)

    Anyway, to keep on topic... This standards base thing is good, but what are we going to do about the differences between current distros? For instance, the discrepancy between Mandrake/Redhat and Debian, where the init scripts are in /etc/rc.d/init.d vs. /etc/init.d? Will the offending parties (I'm going to guess it's Mandrake/Redhat on this one, but I'm not sure) change what they're doing for the standard's sake, or will they keep doing it the way they have been, so as to not 'confuse the users' or something else silly?

    -------
    Caimlas

  • I can't say anything about Debian going the way of the RPM (which I don't personally like the idea of), but I can say that apt has greatly enabled RPMs to become a more flexible option, as demonstrated in Mandrake 8. It seems to me that RPMs generally (at least in the case of Mandrake) have fewer installation problems, even if they have fewer configuration options at install time than debs in Debian do.

    -------
    Caimlas

  • I don't want to get started on a Linux vs. Windows flame - right now I don't care. What makes a good Windows application good is the UI, and Linux needs a document like the following:

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwue/html/welcome.asp

    I know most Windows apps (even from Microsoft) don't follow this exactly, but having the document means that at least a new user has the chance of being able to sit down at a brand-new application and use it sensibly - without having to click all over the place.
  • What they need to do then is get it into the Linux Standards Base, start a program like "Made for Linux", or even get one of the vendors to start "Made for Red Hat Linux" style branding.

    Without something like this it's hard to force devs to actually follow the rules.
  • Also, it seems to me that the large number of active distros is direct evidence that people disagree on what the standard should be.

    Alternatives to RedHat (or KDE, Debian) exist because, first, developers disagree with RedHat/et al. enough that they're willing to devote time to develop alternatives. And then users agree with the developers enough to support it, advertise it, and enhance it.
    --

  • by interiot ( 50685 ) on Saturday June 30, 2001 @09:29AM (#118053) Homepage
    The "Standard" in LSB won't be accurate if there's still large contention among Linux users as to which implementation of feature X is best. IMHO, LSB's role is to standardize things which have already been long hashed out, where it's mostly obvious that there's one good winner. Standardizing things too early leads to a stifling of innovation (why do I feel dirty when I use that word now?).

    I don't think LSB's role should be to lock things down arbitrarily just to get a common platform. If large-scale forced standardization is your thing, go with that other OS [microsoft.com].
    --

  • I think SuSE tried to declare themselves LSB-compliant a few months ago, but because the standard wasn't finished, they got flamed by the other participants in the standards group.

    You can't comply with an unfinished standard. But I think the major distributions are close enough to be usable.

  • Debian does support RPM-packages. "apt-get install rpm" (or even better "apt-get install alien")

    As a Debian user, I think it would be far better if everyone would just comply with the Debian policy and use Debian as the base, but the world isn't so.

    The packaging format just isn't a battle worth fighting right now. And when the time comes, I'm sure the package format will be neither RPM nor deb as we know the formats today.

  • Yes, it's right that the blurb doesn't mention anything about RedHat. It also doesn't mention Progeny, Storm, Stampede, SLS, or many of the other existing distributions.

    If you look at linuxbase.org [linuxbase.org] there are a lot of contributors, including Caldera, Redhat, Suse, Mandrake, Metro Link, VA Linux, and even the Open Group.

  • Debian will probably never be "going the way of the RPM". Debian does support RPM as a distribution format for binary packages. I have never tried it, though.

    One possible future is a united packaging format building on the experiences of both Debian packages and RPM. The dpkg developers talk with the rpm developers now and then.

    Packaging formats are just not interesting. Look at RedHat, Suse, and Mandrake: they manage to make incompatible distributions even though they share the packaging format. For now RPM is chosen as the standard high-level distribution format; live with it.

    After a lot of lurking on the lsb-spec/lsb-discuss mailing lists, I can't say that I'm surprised about this discussion. But Debian has no problem with supporting RPM while still using another package format, and I can't see why Slackware and Stampede should have more problems.

  • Geez I wish I could mod this up! Very funny!
    --
  • As far as standards go, this is not terribly bad. Plus, it finally makes Linux start to resemble an OS, rather than a bunch of code people threw together. However, most of these "new" standards were standard anyway. It has stuff like X11, Xt, libz, ncurses, etc. The main problem with it is that it is too tame. While it standardizes some of the miscellaneous stuff, it totally ignores standards for GUI systems (other than X11, which was standard anyway). To be complete, the LSB needs to dictate a GUI standard. It doesn't have to pick either KDE or GNOME, but it could take a subset of the functionality of both and define a source-level API standard that both could implement. Thus, there could still be multiple implementations, but apps wouldn't be tied to a particular one. If this sounds familiar, it's because it is basically what POSIX tries to do. POSIX was a good idea, and a standard GUI API would be too.
  • Oh my god! He likes Windows 2000 better than Linux! He must be trolling! Let me give you a technical summary of why I like Win2K better:

    A) Better scheduling. Shorter process quanta and priority boosts for GUI processes result in a much smoother GUI.
    B) Better GUI: Not only more uniform, but tons faster than X.
    C) Better OpenGL: Linux still can't beat Win2K OpenGL performance at high resolutions.
    D) Better compilers: GCC might be nice and compliant, but Visual C++ just plain produces faster code. Plus, PE is an easier format to play with for OS design than ELF.
    E) Better IDE: KDevelop is almost on par with Visual Studio, but isn't quite there.
    F) Better desktop. KDE and GNOME may equal Win2K feature-wise, but I'm sick of waiting so damn long for Konqueror to start up.
    G) Better APIs. While Win32 may be pukalicious, DirectX is sweet, and in combination with OpenGL, there is nothing in Linux land (SDL, hah!) that can compete.

    But this is just me. People with different needs will like other things better. NTFS is pretty bad as JFSes go, so servers should go elsewhere. The process semantics are clearly wrong (as GUI apps get special boosts from the kernel) and, for those who run in console mode, are detrimental to their work. The CLI sucks ass (even with Cygwin), and it DOES crash more often; two weeks vs. two months. Still, I reboot into Linux every day, so what do I care? Linux is not the greatest OS ever made. Neither is Win2K. The greatest OS ever is Be... I mean, specific to the individual person. I might not be crazy about Linux, but that doesn't mean I'm trolling.
  • Would you believe me if I did?
  • What do you mean by "scheduling"? Unix cron kicks the crap out of Win32 any day. More flexible, and you can run it under different users, etc.
    >>>>>>>>>>>
    I meant process scheduling. As in choosing which process to run next. Win2K makes special cases for GUI apps. For example, when a process releases a semaphore, it automatically gets a (temporary) 1 point priority boost. However, if the process is in the desktop's foreground, it gets a 2 point boost. If a process wakes up due to I/O being completed, it will get a 1 point boost if the I/O is to the disk, but an 8 point boost if the I/O is to a soundcard. These types of "hacks" violate standard UNIX semantics, but tend to make desktop-type apps have better interactive performance. Linux will never do this because it wants to be fair to all processes.

    The Win32 GUI is smooth, but with renice, X/X apps can run the same way. I have Enlightenment at -10 nice ;)
    >>>>>>>>
    I've had X down at -10 for years. It still isn't as smooth as Windows.

    Win32 GUI:
    more uniformed, yes
    tons faster locally, yes

    tons faster remotely, no
    more customizable, no
    >>>>>>>>>>>
    Who cares about remote performance? I'm a desktop user. I SAID that Win2K isn't for everyone. However, for my purposes, it is better than Linux.

    There are trade-offs. The Win32 GUI wins some, X wins some.
    >>>>>>>.
    Win2K wins all the desktop bits... (except maybe customization, but XP should help that)

    *cough* isn't VC++ the compiler that allows `void main()` in C? ;)
    >>>>>>>>>>
    Again, I don't develop heavy-duty apps. For my OS design projects, Visual C++ is plenty standards compliant. Plus, it has lots of features GCC doesn't, like non-braindead ASM inlining (GCC wants to keep the backends and frontends separate) and keywords designed to help people writing hackish (i.e. kernel-type) code.

    Speed: VC++ wins on Win32 platforms
    Standards compliance: gcc wins
    Number of languages supported: gcc wins
    Number of OSes supported: gcc wins
    Number of processors supported: gcc wins
    Cost: gcc wins
    >>>>>>>>>
    Again, who cares about standards compliance, language support, OS support, or processor support? Maybe you do, but most desktop users don't.

    Have you checked out vim with color support? I can go from source code -> compile -> run faster with a keyboard than you can with your mouse.
    >>>>>>>.
    Does VIM automatically give pop-ups for function prototypes? Some of these C++ classes can be a pain to memorize.

    Put it in a ram disk.
    >>>>>>>>>>
    Oh, great solution. If you'd care to send me the extra RAM, I'll try it. Besides, isn't that what disk caching is for?

    GTK is MUCH MUCH cleaner than MFC. Unix does have OpenGL (*cough* didn't these come from Unix (SGI) and not MS?).
    >>>>>>>>>
    Yes, I said that Win32 (and by extension MFC) sucks. However, DirectX does not. As for OpenGL, I don't see your point. Who cares where it came from, the point is that Windows supports it better than Linux does.

    Go ahead and code up your app in DirectX... When you want to port it to MacOS*, BeOS, Unix, etc., go ahead and have a fun time deturding all the MS crap from your application. Sure, you could pre-process it to holy hell, but the code base is going to be twice as large and twice as dirty. Use OpenGL and you don't have that problem...
    >>>>>>>>>>>
    DirectX and OpenGL are not directly comparable. There's lots of stuff that DirectX has that OpenGL does not. Sound, input, and MIDI APIs come to mind. Besides, why would you want to port it to UNIX? Everyone uses Windows anyway ;)

    Use the right tool for the job.
    >>>>>>>>>
    Look who's talking! I laid out my requirements, and Win2K is the right tool for the job! This f*cking thread started because you didn't like my .sig, and now you tell ME to use the right tool for the job? I know it sounds harsh, but it's a reality check for all you Linux grognards. You can't just blindly dismiss Windows as sucking and Linux as better. Windows does suck to some extent and, in general, Linux sucks much less. However, Linux just sucks in the wrong places for most desktop users, and for those users, Windows is the technically superior solution.
  • No, my point was that it was of no relevance what I said, since you probably wouldn't believe it either way. That said, I did buy Visual C++, and I got Win2K free from somebody who gets a dev kit.
  • I think the real problem with your sig is that it's inflammatory by its very nature. Implying that Win2K is an "upgrade" from Linux 2.4 is inherently going to draw negative criticism to you.
    >>>>>
    But somehow all the Micro$oft and Winblows .sigs never seem to draw any...

    Perhaps you just enjoy attention.
    >>>>>>>
    I just enjoy getting people like you teed off...

    Regardless, you cannot argue that Windows is a more technically superior solution and at the same time reduce someone else's arguments to nothing because you're looking at it from a "desktop user's" perspective.
    >>>>>>>
    Oh, but I can. If a 3D gamer goes from using a Matrox G400 to a GeForce2 it is an upgrade. If a Photoshop user makes the same transition, it is a downgrade (the G400 has better 2D quality). For me and many desktop users, Windows 2000 IS a technically better solution.

    Desktop user implies through its connotation someone who is not very technically apt and therefore has no basis to judge a product as such.
    >>>>>>>>
    Maybe from an elitist viewpoint. To me, a desktop user is someone who uses a desktop machine (as opposed to a server), be it for programming, graphics design, internet browsing, or email. Implying that doing such activities somehow makes someone less "technically apt" is just silly. I know many desktop users who know more about computers than a sysadmin, simply because the admin's job is so narrow in scope. Still, I also know admins who know more than desktop users. It's the person, not which end of the client/server connection he sits on.

    As for a Win2K running faster on a strictly desktop system, that's a pantload in my opinion. You seem to be going by your own observations and my observations tell me that the only application that starts faster on Windows is IE and that's because they tied it into the operating system.
    >>>>>>>>>
    It's from my observations after having used both Linux and Win2K (and NT4) for several years. And there are technical reasons why Windows "seems" faster from a desktop perspective, which I have enumerated. While Linux may very well be able to process more SETI packets in a day, Windows simply has better interactive performance.

    If you hate waiting for a browser so much when you use Linux just keep one open all the time.
    >>>>>>>>
    You're trying to finesse the issue. Why does the browser take so long to start? With KDE (all KDE2 apps) it is a technical problem. The linker has to bind all virtual function references at load time rather than lazily, on first use. Because KDE2 is so large, this linking takes a long time. However, Windows also uses a C++ library (MFC) and doesn't seem to have the same problem.
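
    If you want to measure the cost of eager binding yourself, here's a rough experiment (a sketch, assuming a glibc system; LD_BIND_NOW is the standard glibc loader variable, and konqueror is just an example binary):

        # Default lazy binding: function symbols are resolved on first call.
        time konqueror --version

        # Force all symbol resolution at load time, which approximates what
        # deep C++ vtable hierarchies make the dynamic linker do anyway:
        time LD_BIND_NOW=1 konqueror --version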

    And given that Unix apps in general are much better about memory management than Windows (though I do admit that Microsoft has been trying to fix that) it doesn't cause a problem to do that.
    >>>>>>>>
    No argument here.

    Anyway, don't complain about the RAMdisk solution for konqueror, because that's essentially exactly what Windows does, except they don't tell you that explicitly
    >>>>>
    I'm not sure if this point is factually correct. Most Windows apps start faster than their KDE2 or GNOME counterparts, so I doubt it is just that IE is preloaded. Of course, you'll tell me not to use KDE2, but then you'll have to fess up that Linux can't compete with MS feature-wise.

    and Linux would much rather give you the option of not having precious RAM space sucked up by an application that is nearly permanently running.
    >>>>>>>>>>>>>
    Ahem, X...

    Regardless, get rid of the inflammatory sig and you won't have people complaining to you. Keep it and you'll continue getting attention I'm sure (though the quality of the attention will continue to decline).
    >>>>>>>>>>
    Right. You dislike the .sig so I should get rid of it. I've got a little mission for you. If you can get all the Linux zealots to get rid of their anti-Windows .sigs, I'll get rid of mine.
  • by runswithd6s ( 65165 ) on Saturday June 30, 2001 @09:52AM (#118065) Homepage
    I am a Debian developer, but I have not involved myself with policy decisions as of yet. I can tell you my biased opinion on the subject, however. We've all heard the arguments before, why one packaging system is better than another, so I won't go into an evangelistic rant about why deb's are better. I will say that I doubt very highly that Debian will drop debs any time soon. To be compliant with LSB, Debian will continue to package rpm [debian.org] and alien [debian.org].

    That should be sufficient for allowing software vendors to feel comfortable packaging for Debian with rpms. Of course, the use of a packaging system does not alleviate the problems with dependency resolution and binary-to-library incompatibilities. Vendors will still need to be careful to recompile their rpms in the environment where they expect to deploy the software. One thing that is nice about LSB is that we will know where to find the software files when they ARE installed.
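
    For the curious, the alien conversion is a one-liner (a sketch; it assumes the alien package is installed, and vendor-app-1.0.i386.rpm is a made-up file name):

        # Convert a vendor RPM into a native .deb...
        alien --to-deb vendor-app-1.0.i386.rpm

        # ...or convert and install it in one step (as root):
        alien --install vendor-app-1.0.i386.rpm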

    --

  • by Dr_Claw ( 68208 ) on Saturday June 30, 2001 @09:28AM (#118066) Homepage Journal
    It is missing two important things:

    * A standard package format (RPM or DEB)

    Mmm, kind of - the important thing is that the same versions of files are installed in the same places... how they got there isn't so important. However, I agree - people distributing software for Linux don't want to have to package up binaries in several formats. Hence the frequent complaints from Debian users that $COMPANY has used RPM. This is certainly something I'd like to see addressed.

    * A standard desktop framework (KDE or GNOME)[snip]

    * Icons and menu items are automatically added to the desktop

    Argh. Standard framework - yes. Plain choice of KDE/GNOME - no. There is a reason some people use different window managers - they like the different feels of them. Personally I use Enlightenment, and I like the idea behind E17 of being a desktop shell - not trying to be a big collection of packages like KDE/GNOME. However, just because it's valid for several to exist, it doesn't mean they should be devoid of standards - far from it.

    Let me take the example above... icons on desktops (presumably shortcuts that launch applications). I really hate icons on my desktop - I use menus to launch apps instead. However, whichever camp you're in, the data is the same! So what we should be using is a standard way of storing this data, which your WM can turn into menus or desktop icons as you prefer. It's so stupid that KDE and GNOME each create their own menus. I then have my various internet-related apps split across the two menus - why?! It's little things like this that need to be improved. Along with standard ways for apps to interact (drag and drop, for example - yes, I know people are working on that one).
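
    Concretely, that standard could look something like the desktop entry files freedesktop.org has been drafting - one small file per application, which each WM renders as a menu item or a desktop icon as the user prefers (a sketch; the keys follow the draft spec, and the values are made up):

        [Desktop Entry]
        Name=Mozilla
        Comment=Browse the web
        Exec=mozilla
        Icon=mozilla.png
        Type=Application
        Terminal=false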

    That said, I look forward to the LSB being taken up and progressing to address these issues.

  • It's a handy tool that was written to be independent of packaging systems, happens to run and work well on debs, has been ported to RPM, and will likely support both in its next major release.

    Aside from the APT non-arguments, I've yet to see any major arguments that deb is superior to RPM. Please list them if you have them.
  • Plus, on a UNIX system you do not run programs by clicking pictures. You run them by typing the full path of their executable at the shell. There's nothing wrong with someone running them by clicking pictures, but that someone should have set it up that way themselves. Stop trying to make the system work differently than it was designed to.

    Indeed. Unix systems should never be attached to the internet, run languages other than B or FORTRAN, provide functions other than compilation, or perform any other end-user function aside from typesetting. None of this web server shite.

    Unix's trademark is modularity, not command line interfaces. Besides, command line interfaces are for people who don't know what they're doing. People who do magnetize little needles and write to their disks with a steady hand and a keen eye.

    It's all a layer of abstraction. Even your precious shell.
  • Not to mention the fact that there is no reference platform for the LSB.

    Caldera released a set of patches for their distribution which made it 100% standards compliant. Google is your friend.
  • You haven't mentioned specifics, but RPM also marks certain files as config files, backing up the existing configuration to .rpmsave or the new one to .rpmnew depending on what's appropriate.
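
    For reference, packagers opt into that behavior in the spec file's %files section (a sketch; the paths are made up):

        %files
        # %config: a locally edited file is backed up to .rpmsave and the
        # packaged default is installed in its place.
        %config /etc/vendor-app.conf
        # %config(noreplace): local edits are kept, and the packaged default
        # is written alongside as .rpmnew instead.
        %config(noreplace) /etc/vendor-app/extra.conf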
  • No. A reference platform is exactly that - a real world implementation of the spec. The limiting of included functionality is your own unique added definition. An LSB platform by your definition could not possibly exist as the system would not be functional without libraries the LSB does not specifically refer to.
  • ...ThinkGeek created a hybrid of this shirt [thinkgeek.com] and that one [thinkgeek.com] that says "First ALL YOUR BASE Post!" ?

    Clearly, there would be a market for it.
  • I wonder.. Did you pay for Win2k and Visual Studio? I promise I won't tell anyone! ;-)

    - Steeltoe
  • Yes, you didn't answer the question though ;*)

    - Steeltoe
  • I agree on the basics: dictating a unique desktop is not the way to go. A common standard for desktops, like the one they are trying at www.opendesktop.org (with both KDE and GNOME people involved), would be a Good Thing(tm), however.

    However, I disagree on a couple of points:

    • I don't want any installer automatically putting stuff in my menus.
      I do. And I'm very glad my distro (Debian) has found a standard way to work around the myriad of menu formats out there, thanks to the dedication of its developers. Surely, a common menu format would be greatly appreciated by them.
    • On a UNIX system you do not run programs by clicking pictures
      Ah, but Linux is not Unix (neither is GNU, as they say :). You would never think of putting Unix on an embedded device, would you?
      And the command line requires standards, too: if every package put its executables in a different place (instead of /usr/bin and family), you'd spend half your life adding paths to your PATH variable (just like in the old DOS days - see the sketch below). So why do you want people to waste time rearranging menus every time they install something?
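
      A sketch of that DOS-style mess, with made-up directories - exactly the sort of thing the filesystem layout part of the LSB spares us:

          # Every vendor picks its own prefix, and your login scripts pay for it:
          PATH="$PATH:/opt/netscape:/usr/local/mozilla/bin:/opt/vendor-app/bin"
          export PATH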
  • LSB

    Also Known As

    What Would RedHat Do?
  • ...Going to help linux. :)

    -------

    >>http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwue/html/welcome.asp

    Hmmm... when I go to that site I get a 'page not found' error. How is that going to help Linux?

  • Debian users seem pretty herd-ish to me... Derek
  • I believe it's a given that RPM and DEB are almost neck and neck in terms of technical merits. Debian advocates tend to deny this, and go in a loop about the whole thing, I know.

    apt-get arguments clearly do not apply here. RPM has the further and important advantage that it works on practically all Linux platforms now - even Slackware, in case you're wondering.
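
    On Slackware the usual trick is to convert the RPM into a native package first (a sketch; rpm2tgz ships with Slackware, and the file name is made up):

        # Repack the RPM payload as a plain Slackware tarball...
        rpm2tgz vendor-app-1.0.i386.rpm
        # ...and install it with the native tool:
        installpkg vendor-app-1.0.i386.tgz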

    Besides, what's your beef with Britney Spears?
  • The poor-quality debs referred to were debs from vendors, not the debs provided by Debian. Making a high-quality deb is a lot of work, and vendors often are not experienced enough with Debian, or don't do enough testing, to pull it off.
  • by Jagasian ( 129329 ) on Saturday June 30, 2001 @11:30AM (#118082)
    Honestly, what is a standard? I claim that Debian Linux is just as much a standard for Linux as the LSB is. Debian is not controlled by a company, and it even provides a reference implementation of the standard.

    I guess the question is, do Linux users base their decisions on the technical merit of a Distribution or do they make decisions based on the herd mentality?
  • by Jagasian ( 129329 ) on Saturday June 30, 2001 @11:27AM (#118083)
    Britney Spears is just so much more popular, so she is the best choice!

    Let's try to keep the discussion based around technical arguments.
  • Non-root installs may not be totally effective, but they're better than nothing. They may be able to steal your data, but as long as you don't run the package as root they shouldn't be able to add trojans to your system utilities, and that means that cleaning up your system should be a lot easier. And that's assuming the worst case of an actively malicious piece of software, rather than just a badly written one. If you're running a program with a serious security problem as root, your whole system can be compromised and (since it's binary-only) you won't be able to fix the problem short of pulling the software. If it's running as a non-privileged user then the damage from it being cracked will at least be somewhat contained.
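
    One cheap way to get that containment today, no LSB required (a sketch; the account and installer names are made up):

        # Give the binary-only package its own throwaway account, so a
        # malicious or buggy installer can't touch system binaries:
        useradd -m vendorapp
        su - vendorapp -c './install-vendorapp.sh --prefix=$HOME/app'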

  • by rgmoore ( 133276 ) <glandauer@charter.net> on Saturday June 30, 2001 @09:14AM (#118085) Homepage
    It is missing two important things:
    • A standard package format (RPM or DEB)
    • A standard desktop framework (KDE or GNOME)

    I think that this is slightly off the mark. The difference in packaging formats is, IMO, a comparatively trivial complaint. It should be comparatively straightforward for just about any software supplier to provide both DEBs and RPMs for their software. It's not even a matter of recompiling, just repackaging. This shouldn't be enough to slow things down much at all, and the number of projects that already do so is evidence of the fact. IMO the difference in package management needs to be resolved eventually, but I think that it's small enough that this is not the time for one side to ram it down the other side's throat.

    OTOH, the lack of a standard desktop environment has the potential to be more of a problem. It's not trivial to make a package that plays nice with both KDE and GNOME, not to mention all of the people who will want to skip desktop environments altogether and use just a lightweight window manager. IMO, though, the long-term solution is not going to be forcing people to choose one or the other. Instead, GNOME and KDE will develop to the point that they interoperate smoothly, and disk space will get cheap enough that people won't complain about needing to keep both on their boxes to run arbitrary software. You can already run KDE apps on a GNOME desktop and vice versa (so long as you have both installed), so I'm not sure whether this is a really serious flaw, anyway.

  • Actually that was Caldera - it was the PR guys declaring something they knew nothing about. Although last I checked, SuSE is the most standards-compliant distro.
  • I'm sorry but I can't agree with people who think that LSB needs a "standard desktop".

    The problem here is not that GNOME and KDE apps don't play well together, it's that GNOME and KDE apps exist period.

    GNOME and KDE are both great desktops, but they are not the only options. Most of the people who I know use WindowMaker (myself included), others I know use twm, others XFCE etc.

    So if you say all of a sudden that the standard is KDE then you have all of a sudden said that "KDE is the Linux desktop." This is plain wrong. Here's my reasoning:

    What do you need to develop a GUI program? An Xlib implementation, a C compiler and libc. If you want to make life easier, then use gtk+, qt, wxWindows or whatever. But you do not need KDE or GNOME. KDE and GNOME are sets of programs that add ease to using the OS. Why do we need to all of a sudden make so many applications that should not give a shit about what desktop you're using depend on a particular desktop?

    When it comes down to it, developers should not make applications depend on a desktop.

    Also, if you want to talk about icons and menu items and stuff like that, here's my opinion on that:

    On a system where you have the choice to choose whatever desktop you want and customize whatever you want, the user should have the sole responsibility of placing icons on his/her desktop. I don't want any installer automatically putting stuff in my menus. Plus, on a UNIX system you do not run programs by clicking pictures. You run them by typing the full path of their executable at the shell. There's nothing wrong with someone running them by clicking pictures, but that someone should have set it up that way themselves. Stop trying to make the system work differently than it was designed to.

    So anyway, for a very long time the idea of kthis and gthat has frustrated me. Please keep the concept far, far away from any standard.

    Thank you.

    --
    Garett Spencley

  • Yes, you should be able to use your software on any Linux distribution. But how long will it actually take for the distribution makers to accept and comply with this standard?

    Not very long at all, hopefully. If you look at the home page [linuxbase.org] for the Linux Base project, you'll see that their list of contributors includes all the big players in Linux, including hardware vendors like IBM and Compaq. Besides, it sure looks like a good excuse for a major revision number.

  • The origin is enshrouded by the very stuff of creation, it cannot be know by man, for it is old as time, strong as power itself, an enigma sought after by the willfull and determined, but never solved, and never tamed. It brings madness to some, death to others, eternity to few. None who seek shall be the same again, and those who find, shall never return. It is a descent into the nature of beauty, the thoughts of the universe, the greatest justice. Only the name, Katz, can be held within the mortal mind, what owns that name is too much for a single soul to know, the very base of human being is belong to Katz, it has begun war, it has set us up the bomb, it has moved all zig, and it has made great justice in so doing.

    The nature of Katz is beyond the simple humor of human civilization. Heed my advice and take off all zig, or you too may find that someone has set you up the bomb, and you too shall ask, "What happen?" Know this too be true, my son, for Katz, you see, is all around us. [brogdon.org]

  • by iomud ( 241310 ) on Saturday June 30, 2001 @09:24AM (#118099) Homepage Journal
    It seems pretty RPM-centric [linuxbase.org] to me. A standard shouldn't become a standard because a lot of people use it. It should become a standard based on its technical merit, objectively evaluated.

    I don't know how you can look at dpkg and apt and not be impressed. Maybe people are afraid it will affect their business models, because debian [debian.org] is doing for free what other folks [redhat.com] are charging for [redhat.com]. Is either perfect? No. But that should only serve to raise the bar for each package management system. Then again, splitting hairs doesn't do anyone any good; I just hope we can pick the best choice because it _is_ the best choice.
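
    For those who haven't seen it, the apt side of the argument in two commands (assuming a Debian box with a configured sources.list; mozilla is just an example package):

        # apt fetches the package *and* its entire dependency chain, in order:
        apt-get update
        apt-get install mozilla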

  • While I understand that RPM packages are more widely used than deb packages, I still think that debs are technically superior. I wish that the LSB had used that as its criterion in recommending a package format.
  • Hey! This LSB looks pretty anti-Debian and pro-Red Hat... Don't you think so?
  • Others have mentioned it, but it does have a package management format, which is RPM.

    The standard desktop framework is *much* more important, IMO.
    What is needed is not picking KDE or GNOME; what is needed is something at a lower level.
    Currently you get X, on top of that Xlib, and on top of that the WM & desktop environment.

    This is wrong, because the desktop framework is too high-level. What is needed is a standard desktop framework, on top of *which* the WM is then built.
    Besides getting ISVs to release more software for Linux, this would also help reduce the double-application syndrome, where an application is written for KDE and someone creates an identical application for GNOME.

    This way, you write for the standard desktop platform (SDP - not a bad name :-}), and it works on every desktop or WM that you have.


    --
    Two witches watch two watches.
  • It's not enough yet, I'm afraid; you need a little more, like a basic desktop environment.
    See http://slashdot.org/comments.pl?sid=01/06/30/1734253&cid=40 for the details.


    --
    Two witches watch two watches.
  • I think that this is slightly off the mark. The difference in packaging formats is, IMO, a comparatively trivial complaint. It should be comparatively straightforward for just about any software supplier to provide both DEBs and RPMs for their software. It's not even a matter of recompiling, just repackaging.

    I agree. Packaging is a trivial issue, and one poster already suggested the standard may have decided on RPM.

    OTOH, the lack of a standard desktop environment has the potential to be more of a problem.

    Then support freedesktop.org [freedesktop.org]. This group has already worked on and released a window manager standard for interop between desktops, and continues to work on the other issues. The window manager standard was very well received, I believe, with just about every group out there represented (I followed the mailing list).

    I believe if we support groups like freedesktop, the KDE/GNOME issue will slip more and more into the background, right where it belongs.

  • I've been wondering if using non-root installs is really totally effective, especially on personal machines. For example, what's to stop a user-mode trojan from reading all of my personal user-mode files and then transmitting my valuable data to a server in Elbonia via a web request?

    The kernel is untouched, but my security has still been breached.

  • by JBowz15 ( 451573 ) on Saturday June 30, 2001 @09:02AM (#118120)
    Everyone knows 1.0 releases suck. We should probably wait for at least standards version 1.1, when more of the bugs have been worked out.
  • What do they need a standard base for when none of the developers can even get to first base?
