Linux Software

Updates from the Free Standards Group

Daniel Quinlan writes "Today, the Free Standards Group released version 1.2 of the Linux Development Platform Specification and let loose with the public review of FHS 2.2-beta that will be used in the Linux Standard Base (and is already being used by distributions). Also of note, the Linux Standard Base has a new chairman, George Kraft IV, and the LSB specification is nearing completion. Really."
  • "640kB ought to be enough for everybody."

  • Now, then, the line of defense against needless diversity is to enforce a standard on the metadata, the names, not the values!

    I can hear the moans right now: "Ack! Registry!"

    The thing to realize is that Unix already uses such a database: It's called the filesystem and the value mappings are called links or symlinks.

    It's also a fricking mess, with almost every path either hardcoded or a compile-time option. This makes what should be one of the most basic sysadmin tasks, moving directories, impossible.

    And, sure, Windows is no better. But that's because MS discovered the nice side-effect that piracy is more difficult with hard-coded app paths. Other systems, such as MacOS, do a very nice job of making software relocatable.
  • Solaris wants to put stuff in /opt, & most Linux distributions prefer /usr/local (as do most GNU programs).

    I noticed that StarOffice made an /opt directory when installed on Linux. Personally, I like having some of those sundry programs in an /opt directory, and OS/server/command-line extras in /usr/local.

  • In a word, "what?" As in, what are people thinking when they write tirades *against* open standards bodies for computing technology?

    Here are some that have worked, and made your lives a whole lot better:

    RFCs [rfc.net]
    POSIX/IEEE [ieee.org]
    HTTP/HTML [w3.org]
    ASCII/ISO 8859 [bbsinc.com]
    ANSI C [dkuug.dk]

    And that's just to name a few that immediately came to mind. Note that some of them had corporate sponsorship, some are truly community reviewed, and some are a mixture. But standards are essential for ever moving *beyond* the technology of today. If we didn't have a standard C, then people would still be arguing over how to improve C, rather than creating new languages.

    Really, standards shouldn't evolve that much. And people shouldn't wait to get them perfect. Agree on something that mostly works, use it, and move on.
  • Such standards are set by the GUI environment, e.g. KDE or GNOME.

  • The only way this will work is if all vendors come together on this and make it happen. Why would they want to do that? There are so many flavors out there; if we start to standardize, the smaller "flavors" will eventually be out of business and we are back to capitalism at its finest.

    Linux vendors might want to standardize offerings because when it comes to Operating Systems, Linux itself is one of the "smaller flavors."

    Besides, standardization doesn't mean that everyone does things the same way; it means that configuration files and binaries can be expected to be found in the same places, using the same formats. This allows individual vendors to create and provide tools which are helpful to everyone. Personally, I'd prefer to see commercial vendors like Red Hat, Mandrake, and SuSE compete on grounds which don't include different methods for managing users and configuring software. If I wanted to switch vendors, I wouldn't want to have to learn how to do simple tasks all over again, just for the sake of competition.

    --Cycon

  • The reason you've never seen a logical argument is because, as best as I can tell, there isn't one. I have yet to see a solid argument for why one is inherently better than the other. Sure, apt-get handles installing packages much more nicely than rpm (the command-line tool, not the package format). But from what I understand (at least with the latest version of rpm, the package, found in RH7), there is nothing at all that prevents rpm from having an "rpm get <program>" mode that resolves all of the dependencies and downloads everything for you. Also, last I heard (this might be different), rpm supports signing packages where deb does not.

    Half of the arguments I've seen for why debs are better than rpms were because debs use a directory for package info, where rpm uses a single file.

    Does anyone have a good (based in fact not religion) argument for why one is better than the other?
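
    For what it's worth, a rough sketch of the commands behind the two camps (package names hypothetical); the day-to-day difference is mostly in the front-end tooling, not the on-disk format:

    # apt resolves and fetches dependencies for you:
    apt-get install somepackage
    # rpm can verify a package's signature, but plain installs leave the
    # dependency chasing to you (or to a front-end built on top of rpm):
    rpm --checksig somepackage-1.0-1.i386.rpm
    rpm -Uvh somepackage-1.0-1.i386.rpm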
  • >Maybe what we need is some generic tool for installing binaries (like Installshield) that can detect what it needs.

    Oh, you mean like apt? Not that you're going to be able to "apt-get install oracle" any time soon :)

    However, something like apt, which intelligently manages package dependencies (if the packagers and packaging system intelligently SET package dependencies), can see which versions of glibc, Perl, SDL, $WHATEVER exist on a system and determine what needs to happen in order to install a program.

    What would be interesting is if distributors of binary packages (not counting those included in distributions such as Debian) could have those packages attempt to use libraries other than the exact ones for which they were compiled, if those libraries stood a reasonable chance of working. For example, I have a symlink in my /lib directory because the people at mozilla.org compile mozilla for Linux against a slightly older libc++ than the one I have. If it could have detected that and said "Oh, that libc++ is the same major version, it'll probably work", and simply run with a warning or something, that might be nice. (It does work perfectly, BTW).
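
    A sketch of the kind of symlink workaround described above (library name and versions are hypothetical stand-ins):

    # pretend the library version the binary was linked against still exists
    ln -s /usr/lib/libfoo.so.6.2 /usr/lib/libfoo.so.6.1
    ldconfig    # refresh the runtime linker's cache so the new name is found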

    As for where oracle should put its stuff, it should probably use /opt.

    Sotto la panca, la capra crepa
  • As another poster pointed out, the window manager (KDE, GNOME, E, Win98) is responsible for standardising the interface (like alt-f4 being "destroy this window" in Windows).

    And as for KMail, it is, IMHO, being evil. For years, *NIX has used the "highlight is copy, middle button is paste" philosophy. Ctrl-C is "kill"!! Why did the developer of KMail decide that they had to emulate Windows? StarOffice is also bad in this respect.

    File open dialogs, OTOH, are totally the realm of the application developer, and in the Linux world, that means that everyone will probably write a dialog that works the way they want it to work. It would be interesting if the WM would provide hooks for something like that, where any app could call a standard "File Open" dialog; unfortunately, this would probably be different for every WM. Another case of one of the things that makes Free Software great (choice) working against it at the same time (it's easy to make everything shiny and smooth, if you're Apple and you control hardware + software tightly).

    Sotto la panca, la capra crepa
  • This is only a stopgap standard for use by the lazy. What needs to be done is to make Linux conform to the *current* standards already out there, instead of continuing to ensure that Linux apps will only run on Linux systems.

    Conform to the ISO/ANSI Standard C Library instead of glibc-2.2.

    Conform to POSIX instead of Linux-2.2.14.

    Conform to X11R6 instead of XFree86-3.3.5.

    A few pieces of software will need to be system specific, but the vast majority of Open Source code should be cleanly rebuildable on all Unix like operating systems, including *BSD, Solaris, HPUX, IRIX, etc.
  • Unfortunately, annoyances (hopefully temporary) are to be expected if people want to bring slightly-varying standards together to match a unified standard.

    At least you can use symlinks to get around the multiple file locations for now. It'll be annoying for a while, but as future distributions become more standards-compliant, the old symlinks can be phased out.

    Other problems might be encountered that simple links can't fix; those will be harder to solve in the meantime, such as varying formats for /etc files. Are there any lists of known problems/inconsistencies of this type?

  • However, a couple of things he suggested are not GUI. In particular, file associations. These are assumed to be "GUI" because they first appeared on the Mac and then on Windows. However, it should be pretty obvious that a CLI shell could be written that took any filename typed in, looked up the association, and ran the program. Thus these associations are no more "GUI" than $PATH is.

    A common utility program (and easily replaced) to pop up a file chooser, wait for the user to pick a file, and then exit, printing the result to stdout, would also be useful. It could greatly reduce the bloat of programs by eliminating a large chunk of the toolkit they need to link. Adding some standard programs to display a message, ask the user a question, etc., would allow even scripts to have a "GUI".
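
    A minimal sketch of how a script could use such a helper, assuming a hypothetical pickfile program that pops up a chooser, prints the chosen path to stdout, and exits non-zero on cancel:

    #!/bin/sh
    # pickfile is hypothetical; the calling script links no GUI toolkit at all
    file=`pickfile --title "Open which file?"` || exit 1
    exec myviewer "$file"    # myviewer is also just a stand-in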

  • "* /opt was designed for closed source Unices where there is a clear delineation between OS vendor and third parties."

    Yea, sure, you can say that... But Adobe stuff is closed source and ends up in /usr/local... And, what about the old cry "Red Hat is NOT Linux, it's a distribution of software that includes the Linux Kernel."

    Given that, shouldn't all of the packages that come on a CD distribution of Linux go into /usr/local, because they are NOT the BASE OS, they are added-on packages? (Yes, arguable, but only with the "but /usr/local shouldn't be touched during an OS upgrade" statement.)

    What is an OS upgrade? Should an OS upgrade really include anything that a distribution wants to put on as many CDs as they feel like shipping? Or should the OS be considered the kernel, plus the fundamental parts of the system most commonly required (init stuff, a shell, etc.)?

    If I am going to make ANYONE work harder in this situation, my choice is the distributors, not the users. If distributors have to figure out what to put into /usr/local, what to put into /opt, and what to put down in /, then so be it. As long as it's logical, and makes it easier for the end users.

    If the end result is that the users can truly see that /opt can be exported, and thus they only have to install those packages ONCE on ONE system on their network, but the stuff in /usr/local is only configured for that system (like apache, etc.)... And the really basic shit that every system needs to run, and stay running for maintenance and stability, is down in /... whooow baby, that would be awesome.

    /opt and its implementation are obsolete. But that doesn't need to be the case. It's something we can use constructively. KDE once went into /opt, based solely on the fact that part of it (Qt) was "non-free." And I'm more than positive I can poke around on some non-Linux *nixes (Solaris, IRIX, etc.) and find GNU stuff compiled and installed in /opt (their conventional place for add-on packages).

    IMHO /opt is due for a makeover, and now (when it's starting to be less used, and more confused) is the time to nail it down and make it usable.

    But it's probably not worth arguing, because I am looking at it solely from a system administration, user, and logical standpoint. The people who control the standards are looking at it from a "mediator between big OS vendors" standpoint. They only seek to find a workable middle ground, not a truly logical way of doing things... so it's all a discussion in vain.

  • And as for KMail, it is, IMHO, being evil. For years, *NIX has used the "highlight is copy, middle button is paste" philosophy. Ctrl-C is "kill"!! Why did the developer of KMail decide that they had to emulate Windows?

    Before you flip out, try selecting some text in KMail, going to another application, hitting the middle mouse button and seeing what happens. I'd also suggest that:

    • You're missing the distinction between copy/paste and a clipboard.
    • Your definition of "evil" could use some fine-tuning.

    Unsettling MOTD at my ISP.

  • This makes me wonder why no one's done something like GNOME-libs for Qt; it might be a little more work, but it's possible.
  • If you use glibc, linux and Xfree86 extensions in your program, then it will *only* run on Linux.

    Take a look at the FreeBSD ports and start counting how many applications *require* glibc installed just to compile the software. Obviously, there are scads of developers that are indeed using non-portable extensions.
  • by LordNimon ( 85072 ) on Tuesday March 13, 2001 @08:46AM (#366388)
    Something is wrong here. http://www.freestandards.org/ldps/ [freestandards.org] says 1.1 was released March 12, 2001 (yesterday). But if you read that announcement, you'll see the headline says March 12, 2000. Not only that, but the announcement text makes references to old distributions, e.g. Red Hat 6.2.

    Also, I'm confused as to which distributions actually use 1.1.
    --

  • by woody_jay ( 149371 ) on Tuesday March 13, 2001 @08:48AM (#366389)
    Even if we have an organization that is setting Linux standards, the fact that Linux is open source means no one has to listen. For example, let's say the Linux Standards Organization says RPM is the standard format that will be used for installation of software. Who has to listen? It's open source; if I want to tar and gzip my files to get them out there and force you to compile them yourself, then there is not one thing you can do about it.

    The only way this will work is if all vendors come together on this and make it happen. Why would they want to do that? There are so many flavors out there; if we start to standardize, the smaller "flavors" will eventually be out of business and we are back to capitalism at its finest.
  • by 3247 ( 161794 )

    Ever tried to install a RedHat RPM on a SuSE system?

    In the worst case, you can't install it due to unsatisfied dependencies because the package names differ...

  • I am a Christian, born and bred. This story does not offend me.

    In fact, reading it makes it quite clear that the author does not actually think that, but is rather mocking, or trying to influence and educate, those who do.

    I.e., the person who doesn't "want enough of God to make me love a black man" would never actually say that. Such a person doesn't think God's love would push him that way. ("Extreme racists think God agrees with them" -- me)
  • --INSERT SARCASM-- Again, the self-proclaimed gurus of standards frantically address the problems of fragmentation. But, as usual, it's a fierce fight to find a middle ground between various distributions (if you wish, including Linux as a now heavy and already fragmented player along with the other standby *nix systems).

    While all the troops lie out in the trenches of the subdivisions of /etc/rc* and /var/godknowswhy, the real working system administrators couldn't honestly care less; they already know the intricate deviations of each *nix they use.

    Lost in the trenches, the architects of the new FHS fail to see the growing problem of network integration, and the lack of logic in day-to-day maintenance.

    If they had some balls, they would see that the battle should be for the greater good, and as such, ALL of the FHS should make a major shift in philosophy toward making *nix systems more friendly.

    Most already agree that the bin, etc, var, sbin, home, lib structure works pretty well. So, now use it!

    IMHO, what needs to be done is:

    • Make /opt the place for EXPORTABLE stuff. Subdivide /opt into the basic components: /opt/bin, /opt/sbin, /opt/home, /opt/lib, /opt/etc, etc. Then all the exportable home dirs are in one place, and all the binaries that can be run on a similar architecture go in the exportable-by-design /opt dir. (Exportable is not /mnt; /mnt is only for TEMPORARY stuff.)
    • Make /usr/local REALLY LOCAL. Define its structure like / and /opt, with /usr/local/etc, /usr/local/var, /usr/local/bin, etc. Put all the system-specific stuff that isn't really part of the base OS (like apache and its configuration files) in this LOCAL directory. Users who only run on that ONE system have a /usr/local/home.
    • Make only the NEEDED stuff that is truly the BASE of the OS go into /, like the kernel, a shell or two (with the extra shells like zsh and such in /opt/bin), and basic init stuff (and I don't care if it's SysV or BSD personally, as long as I know where to go to find it).

    Just basically make some sense of it all! I know NO distribution (Solaris, Tru64, Linux, BSDs, none) will be compliant YET. But if they are making the standards, they need to have the balls to say "this makes sense, we need to do it, even if it takes a few years before everyone starts using it." In the long run, it is good for *nix. Everyone is just WAY too focused on short-term middle grounds... Why not focus on laying down a foundation that will not be continually growing more complex and illogical?

    This is so typical of standards committees, put a band-aid on a harlequin quilt, and say it's all matching and compliant now. It's time to throw that quilt in the washer with a package of clothing dye, and REALLY make it all match.

    Thus, finally, my annual *nix sux rant is complete, commence the flaming.

  • For example, let's say the Linux Standards Organization says RPM is the standard format that will be used for installation of software. Who has to listen?

    You forget that social norms, while limiting society, actually free the individual citizen. Think about it. It used to be universally accepted in the Western world that a man opening the door for a lady was a sign of politeness. The women's liberation movement in the 70s did much to destroy this norm. Now if a man takes a girl on a date, he won't know for sure if he's being polite or insulting her. Where once he was free to act, he now has to worry over a decision. At one time there was a standard that everyone agreed to use. Now there is confusion.

    Yes, that example was frivolous, I know. But think of the things that distributions do differently for no good reason at all, usually resulting in widespread confusion. Mandrake recently changed the install directory for their version of Wine, if I'm not mistaken. Why? Was it just that someone thought it was a good idea, and there wasn't anyone saying no? Or could it be that they just didn't know any better because there was no standard to turn to?

    With a standard in place that everyone can point at and say, "That's the way this community likes to do it," many things get simpler. Fewer trivial questions have to be worried over. The mind is freed to move on to more important subjects (like, When on a date do you eat the fried chicken with your fingers or your fork?) Even diverging from the standards will be simplified. Mandrake won't have to list where everything is in their distribution; they'll only need to point out what is different from the standard.

    No one will stop you from distributing your project as a tarball. But if everyone else is using RPMs, you may find more acceptance if you go along with the rest of the community. The way it is now, some distribute RPMs, some use apt-get, and some distribute tarballs (with different compression formats). Each has some small strength over the others, but in the end they are all more similar than different. The end result is that the poor newbie is just confused. One distribution format would give him one less thing to worry over. Standards are a good thing.

  • No doubt! Maybe what's needed is a GUI based tar utility that can extract the files to the correct directory, kinda like WinZip. Hmmm... I always wanted a weird project. :)
  • Did you notice the little clipboard icon in KDE?

    That is a neat little interface to the standard X clipboard which is what is used by KDE, including KMail. You will also notice that anything you select, whether by Ctrl-C (using the keyboard entirely) or simply selecting with the mouse, will be in the clipboard. The little icon also has history, so you can select something from the history, and it will be in the X clipboard. It is truly seamless and friendly.

    Netscape's handling is what is wrong here. KDE is doing the right thing, IMO. Besides, with the current Konqueror, I no longer need Netscape. I haven't used it since KDE 2.1 was released. I was buying stuff the other night and found that Konqueror worked where the latest Netscape didn't...
  • Let me tell you why file associations are a bad idea. This is a true story.

    My wife called me one day because she and the accountant had spent the entire day trying to get one QuickBooks file transferred from her work computer onto the accountant's. She had done a backup, but QuickBooks could not see the file, and double-clicking on it did something weird and eventually locked up the computer.

    I went over there to find out that when she made the backup, she had named it backup.doc, not backup.qbb. Because the open-file dialog was filtering by .qbb, it never showed up as being on the disk. When she explored the disk, it hid the extensions, and she saw that there was a file called backup. When she double-clicked on it, Word attempted to open the file and crashed.

    Here is the lesson. The name of the file should never be the most important thing about that file.

    The Macintosh had things right: it knew about the program that created the file, no matter what you named it.
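
    Unix has a small taste of that idea already in file(1), which guesses a type from the content ("magic" bytes) rather than from the name; a quick sketch:

    # the extension lies, but the contents are still inspected:
    file backup.doc
    # it reports whatever the bytes inside look like, not "Word document"
    # just because the name ends in .doc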

  • If you said /usr/dict/words to Joe User, his brain would explode. Joe User needs to get a Mac. Mac OS X is a pretty nice GUI for a Unix!
  • Whatever happened to the old DOS days, when every app lived in its own directory and needed nothing outside of what could be found in there?
    Is the concept of trying to save disk space even valid anymore?
  • Don't get me wrong, I think that standards for Linux are one of the only ways that Linux will survive, and I do believe that they are a good thing. My point was that it's hard to standardize something when its initial purpose was to allow users to do whatever they wanted with it and enjoy whatever idiosyncrasies they find easiest and most fitting to their style. This has been the purpose of open source, and therefore no one has to listen. I love Linux, and would love to see it really take a good share of the market. I believe that standardization is a good way to start that. But let's face it, there are many people who like using Debian's package manager vs. Red Hat/Mandrake's, and they would have a hard time conforming if that was set as the standard. And if by some act it does become a standard, what happens to a distro like Debian when what they have used is not the "Linux" standard?

    Like I said, I love Linux, and standards are a good idea; I am just wondering how well it will actually work. Like I have said a million times before:

    Of course, that's just my opinion. I could be wrong. --Dennis Miller
  • Or do what Apple did. Throw out backwards compatibility and come up with a directory structure that makes sense to you, add a super-slick GUI on top of that, add some really cool programming environments, and call it something geek-sounding like MACOSX.
  • But Adobe stuff is closed source and ends up in /usr/local...

    Actually, it doesn't. At least not in ANY of the acroread packages on RPMfind.net.

    Red Hat put it in /usr/lib [!], which contradicts the FHS completely (yes, binaries can go there; no, documentation and sample files cannot).

    Everyone else puts it in /opt.
  • I think I downloaded it from Adobe, as an .rpm, and it went into /usr/local? I don't remember for sure. But that's exactly the point. Even if the package manager or distributor is different, there should be a place defined by the FHS for the software to go.

    So, if I use a .deb, a .rpm, a .tgz, or a install script, it goes to the same place, as defined by the FHS.

    That's exactly why I am against saying "Red Hat is an OS" and allowing them to put shit in /usr instead of /usr/local or /opt. They only make adding software easier if you never switch packaging systems, and always rely on them...

    Hmm... Need to only use the one distribution, or break links and paths... Oh... Now it makes sense (sarcasm). Why would they want to make things standard? It would only allow you to break free of being dependent on the distribution!

  • It's really that simple. You don't need solid proof in order to pray. However, you do need it to consider something a scientific fact.

    The interesting thing is that for those inside "organised religion," God is fact. However, some of us (including me) accept that we cannot prove this to other people.

  • by Anonymous Coward
    Maybe my Debian distro is screwed, but half of my documentation is now in /usr/share/doc. Why can't we just keep /usr/doc? And no reason why the dictionary has to be /usr/share/dict/words/ and not /usr/dict/words ... Arrgh, this is so annoying. Seems like design by committee to me...
  • Standards at last! Almighty God! Standards at last!

    :)

  • It was so cute to find that the new RH release used a new glibc, making it pretty much incompatible with older versions.

    Are the FSG members supposed to wait until the linuxbase paper is finished before they start thinking about being backwards compatible themselves?
  • I grep'd through the FHS 2.2 doc really quickly and I could find nothing to indicate a standard for runlevels or runlevel init scripts.

    One of the things that bothers me most about the diversity in Linux distributions is that no one seems to agree on what runlevel standard to use.

    For instance, Debian is pretty much SysV compliant (like Solaris), in that everything is in /etc/init.d with the runlevels themselves in /etc/rc0.d etc. Yet RedHat puts everything in /etc/rc.d/init.d with the runlevels in /etc/rc.d/rc0.d etc., which is SysV compliant with a twist? And Slackware is more like BSD and does not have SysV runlevels. (Note: It has been years since I used Slackware so that may have changed).

    I mean, I think it is pretty annoying. It's bad enough (but acceptable) that the various other Un*xen have their own filesystem layouts; it's pretty much historical. And one can say, *this* box is Solaris, and *this* box is HPUX, and *this* box is AIX, and *this* box is BSD. But why the hell can't Linux distributions agree to be totally SysV compliant? Why can't we say *this* box is GNU/Linux instead of Debian, RedHat, Slackware, and so on?
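
    Until the distributions agree, portable scripts end up probing for the layout themselves; a rough sketch of the usual dance, based on the locations listed above:

    # find where the SysV-style init scripts live on this particular box
    if [ -d /etc/init.d ]; then
        INITD=/etc/init.d           # Debian (plain SysV, like Solaris)
    elif [ -d /etc/rc.d/init.d ]; then
        INITD=/etc/rc.d/init.d      # Red Hat's twist on SysV
    else
        INITD=""                    # BSD-style layout (e.g. Slackware): no init.d
    fi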
  • You're focused entirely on the GUI there in your comments, and that's not where the real problems are.

    From an end user standpoint, GUI standards for *nix systems would be very nice, and make life easier.

    But competition is good in GUIs, and if you choose to stick with one toolkit (Motif, GTK, Qt, or whichever), then you can already find a great deal of consistency. Trying to merge a mix of GUI tools, you're always going to be asking for inconsistency.

    Actually, when you think about it, the mere fact that you can run all of these different apps based on different toolkits and philosophies is pretty cool... even with the inconsistency of copying, button style, whatever.

    The real issue that they need to deal with is the structure of the OS underneath: how to make all those different apps compile, and install, and work, and seem to follow a logical structure from the underside.

    Let's stay focused on the ground before we try to touch the stars. Adding consistency to the GUI is not something I think these specific standards committees should be wasting time on. Let's make the apps run, the systems talk to each other, and the process of configuring stuff more logical first.

  • by Tom7 ( 102298 ) on Tuesday March 13, 2001 @09:14AM (#366409) Homepage Journal

    Who do all the Linux Standards Base belong to?
  • It was so cute to find that the new RH release used a new glibc, making it pretty much incompatible with older versions.

    Uhhhhh. Let's say that RH waited until version 14.0, a decade from now, to switch to the new glibc. Would you then be saying, "It was so cute that they broke backwards compatibility"? What about the other vendors who use the new glibc in their distros -- are they now guilty of breaking backwards compatibility? Would you have recommended that no vendor ever change glibc versions?

    It's not the vendor's fault if glibc broke compatibility in a point release. At least RH waited until a major release before shipping it as the default library (I believe). At some point in time, every vendor is going to make such changes. The win under Linux, of course, is that if you don't agree, you can always change out the glibc version yourself.

  • Forgive me if I'm way off-base here. I haven't looked at the new std yet (but I will.....really) and it's been some time since I read the old one. But I thought /opt was at least semi-defined so that packages go in /opt/<pkg_name>, the executables are in /opt/<pkg_name>/bin and /opt/bin contained links to /opt/<pkg_name>/bin (so that $PATH wouldn't get gnarly). So if you look in /opt you would see bin, package1, package2, share (I suppose), and so on.

    This seemed ugly to me at first, but it does have the advantage that to remove a package, all you have to do is rm -r /opt/<pkg_name> and then get rid of all the symlinks you just orphaned in /opt/bin.
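
    A sketch of what that install/remove cycle looks like in practice (package name hypothetical):

    # install: the package keeps to its own tree, with a link for $PATH
    ln -s /opt/somepkg/bin/sometool /opt/bin/sometool
    # remove: delete the tree, then sweep up the symlinks it orphaned
    rm -r /opt/somepkg
    find /opt/bin -type l ! -exec test -e {} \; -exec rm {} \;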

  • Standards are certainly a good thing. Imagine if GE lightbulbs only fit into GE sockets. But your post for some reason called to mind something that happened at work a year or so ago. At that time I had a dual-boot NT/Linux machine and had had it for well over a year*. Most of my work was done in NT, but I enjoyed playing around with Linux from time to time. One day the middle-level manager whose little empire includes MIS (a MS bigot and pretty much computer illiterate to boot) got wind of it and complained to my boss that I was using a "non-standard" OS on his network. I had to laugh.

    *The punch line is that now I'm running Linux full-time and using NT from inside of VMware.
  • The wording in the FHS and its use on the mailing list are near polar opposites.

    The FHS itself uses the wording `optional' to describe things that go in /opt. Since this is completely arbitrary (neither distributions nor users nor admins nor ISVs have a consistent concept of `optional' software), the FHS itself is fairly poor in this regard.

    But ask on the FHS mailing list, and your response will be: if it's an application that needs its own tree, it should live in /opt, especially if it's from an ISV.

    If it's something you've compiled yourself (which you want to have its own tree, thus being separate from your packaged applications), it should live in /usr/local.

    For one, I think /opt should be struck from the FHS. The only reason anyone uses to justify its existence is backwards compatibility -- so I'd add a note.

    Why kill /opt?

    * Because Unix applications should be structured by the types of files (i.e., documentation, configuration, etc.) and their role within the system (i.e., whether they are a base-level component, e.g. needed to boot and run nearly all programs). /opt breaks this.

    * Because we don't need any more subdirectories under /

    * Because people actually believe the FHS when it uses the term `optional' and decide to stick whatever fits their own personal definition of `optional' this week into the directory.

    * Because /opt is becoming the "Program Files" of the Unix world and a dumping ground for badly written apps that use their own hierarchies, making the filesystem even more of a mess.

    * Because I'd rather break compatibility (even in such a small way) than include this *hack* into the FHS simply for reasons of backwards compatibility. I'm a DevFS / ACL / boot sanity / Xrender / DRI type of guy -- if something is broken, I want it fixed, regardless of whether it's popular in flavors of Unix I don't use. They don't set the standards any more; we do.

    * Because plenty of people also dislike /opt, including, it seems, most people on the FHS list and Alan Cox. Most of these would also prefer to see the directory phased out over time.
  • Amongst all that, I forgot another important point:

    * /opt was designed for closed-source Unices where there is a clear delineation between OS vendor and third parties. That same delineation exists within Linux, but since the same package can be provided within or without a distribution, it's not consistent. A distribution (which includes package x) puts x into its FHS-anointed spot. A user running a distribution without package x who installs it puts it in its FHS-anointed spot.

    Same package. Same base standard. Two completely inconsistent locations. I can no longer sit down at a machine and know where package X is installed. This is the exact type of thing the FHS set out to prevent.
  • Actually, there's no reason why anyone couldn't use apt-get to install binary closed-source packages. Combined with encryption, it could even allow you to purchase them from a distribution mirror of those packages, with a small portion of the funds going to your vendor.

    I'm probably not talking about Debian here, since most commercial software isn't tested too well on Debian or released as .deb. However, the newer APT-based distros like Mandrake 8 and Conectiva should definitely expand APT (or its libraries) to handle this. It saves admins time and hassle, and makes them a little cash too.

    As for where oracle should put its stuff, it should probably use /opt.

    It should, if it didn't come with the distro. If it did come with the distro (e.g., Red Hat + Oracle, or free databases which do and don't come with distributions), it should live in /usr/local, according to the FHS list (the FHS itself is very unclear).

    Hey wait...same package...two locations. I can't sit down on a machine and know where it lives anymore. Oh my God! Maybe /opt is broken!
  • File Open dialogues should be customized between GNOME/GTK and KDE/QT applications.

    It amazes me when people keep telling me `yes, but different people work on GNOME and KDE' as some type of magical excuse (not you specifically, but in general). So?!?! Lack of consistency is hurting Linux desktops more than competition is enhancing them anyway, but that's a point for another day. Sit down with each other and work out a standard design. If you're *really* worried, just make something that looks the same in your respective toolkits.
  • Ummm... Move everything from the old dirs to the new dirs, then do a
    ln -s /usr/share/doc /usr/doc && ln -s /usr/share/dict /usr/dict

    And get on with your life

    They should offer options to do things like this in the install and configuration tools, though. Would probably help if distros just did stuff like this a lot, such as fixing the confusion between modules.conf and conf.modules.
  • Yea, the /opt/pkg_name/ thing has been frequently used. And, it's got merit.

    But the /usr/local/pkg_name/ thing has also frequently been used in the past (by Adobe, Apache, and others).

    This does make a case for "each package in its own directory" vs. package management (metadata). My vote would probably go to package management, leaving it up to the system admin to choose his/her method of managing their packages.

    IMHO, symbolic links are always just a messy workaround, and the thought of a directory for each package that far down the path strikes me as wrong.

    I guess I personally think of things in an old-fashioned UNIX sense, where you have one system that exports a lot of stuff, and many workstations that just read-only mount the executables from that server.

    The growth of PeeCees and "a workstation on every desktop" have changed that. Everyone wants their own system, with all their own binaries, all on their own drive... and that's a system admin nightmare. Not to mention that you are tied down to YOUR workstation, and lose all of the old-school UNIX-style networking (X terminals, sit anywhere and use your account on any server, all boxes are roughly equal...).

    Maybe it's not just the growth of the power of a single computer that has caused it. No doubt big hard drives and lots of free cycles have done a lot to make people believe in the "MY COMPUTER, YOUR COMPUTER" mentality. But a lot of the credit probably goes to reaching bandwidth limitations (10 users on 1 server from 10 X terminals can be as taxing on bandwidth as on the server's memory and CPU).

    But old-school UNIX still has merits that we are throwing away if we don't try to simplify the basic file hierarchy. NFS-mounting /opt/bin from a bunch of boxes running on flash RAM or CD-ROM only would rock (not to mention save money on each workstation, leaving more money for a killer server).

    Add encryption to the mix for more fun. X terminals need much less bandwidth (a limiting factor) and just a little more CPU power (not so bad) to have an encrypted X session. That not only gives a wee bit more security, but frees up some bandwidth.

    Now... Let's take ALL of that cute idea from old-school UNIX, and add some Slashdot-style new-school 3l33+ script kiddy ideas... "Imagine a Beowulf cluster of these!" Yeah, it could be done. Have one computer lab, containing 20 workstations, all running as one distributed system. You log into one, and while 10 guys sit browsing the web using very few cycles, you have access to the CPU power of all the systems to compile your latest project... Hmm... OK, now I went over the edge... I'll shut up now.
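
    For the NFS-mounted /opt idea a couple of paragraphs up, the plumbing already exists; a sketch (hostnames and addresses are made up):

    # on the server, in /etc/exports:
    /opt    192.168.1.0/255.255.255.0(ro)
    # on each workstation, in /etc/fstab:
    bigserver:/opt    /opt    nfs    ro,hard,intr    0 0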

  • The way it is now, some distribute RPMs, some use apt-get, and some distribute tarballs (with different compression formats).

    Excellent post, but apt-get isn't a packaging system; you meant to say deb. Furthermore, APT is designed to be packaging-system independent and currently works with RPM or deb.

  • I think that with the rise of the GNOME and KDE projects, Linux is likely less fragmented today than it used to be. Most people only see the desktop 90% of the time anyway.
  • Correct, it appears that Netscape is in the minority.

    Most new Unix programs use the PRIMARY selection for both the "copy" command and select-this-text. Netscape uses the SECONDARY selection for the copy & paste commands, so only the middle-mouse-click works with it. This may be true of all Motif programs.

    I discovered this quite quickly when I foolishly tried to make fltk use SECONDARY in the same way. It fixed Netscape but broke everything else.

    Only using PRIMARY is also good when you have programs like xterm, or ported Windows programs, that only have one way to paste.
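
    If you want to see which selection a given program actually filled, a tool like xclip (if you have it installed) makes the distinction visible:

    xclip -o -selection primary      # what select-with-the-mouse / middle-click uses
    xclip -o -selection secondary    # the one the poster above says Netscape/Motif uses
    xclip -o -selection clipboard    # what explicit copy/paste menu commands use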

  • The software does not have to come as a .deb; in fact, "apt-get install realplayer" insists that you have the .rpm of RealPlayer. :) If Oracle decided to test, or if you could make it work with Debian, there is no reason that the .deb apt-get installs could not just point to /cdrom and get the stuff that it needs, or just run the installer with the options that it wants at that point. This would most likely require that Oracle at least help a bit to make the code possible, but it could be done. This is the true power of apt: you can use it to install just about anything you want in just about any format. This is a good thing.
  • by OdinHuntr ( 109972 ) <ebourgNO@SPAMpo-box.mcgill.ca> on Tuesday March 13, 2001 @08:30AM (#366423)
    - This is version 1.1, not 1.2

    - Why XFree86 3.3.x? 4.0 has proven stable and is faster.

    - Why on Earth is there no mention of Perl? Perl is the glue that holds many, many useful applications together; not including it in a standard makes no sense.


    --
  • What (if anything) is the difference between these two? The only thing I can tell (from personal experience) is that Solaris wants to put stuff in /opt, & most Linux distributions prefer /usr/local (as do most GNU programs). The FHS doesn't really make any distinction; the 2 sections (on /opt & /usr/local) look the same to me.
  • I guess the key reason for standards among Linux distributions is for vendors of binaries... Oracle needs to be able to make some assumptions about where things are/should go if they're going to distribute distro-agnostic software. I think it's probably a lot easier (in general) to distribute binary software in the Micro$oft world at the moment. Maybe what we need is some generic tool for installing binaries (like InstallShield) that can detect what it needs. After all, automake and autoconf make it pretty easy to get stuff to BUILD on various distros/Solaris/*BSD/etc. It should be that easy to INSTALL as well.

    *sing* I'm a karma whore and I'm okay....
    I sleep all night and I work all day
  • There's a really simple explanation here. It's a typo, the year is really 2001.

    Red Hat 6.2 is known to be a conforming platform, so it is listed.

  • /usr/local is a semi-standard structure containing bin, etc, sbin, lib, and so on, although it's ad hoc and de facto.

    /opt is completely undefined, and if I want to make my installer go in /opt/thing/executable/app I'm pretty much free to do it.

    Note my earlier comments about what they SHOULD be, but.... As for now, they are just about whatever you want them to be.

  • The FHS makes an effort to group sections of the filesystem by the type of data that goes in the directories. It differentiates based on two criteria: shareable vs. unshareable and variable vs. static. /usr is designed to be static and shareable. However, some parts of /usr are architecture-dependent, such as /usr/bin and /usr/lib. /usr/share is designed to be a hierarchy of shareable, architecture-independent files.

    The two examples you give (/usr/share/doc and /usr/share/dict/words) are both things that are architecture-independent and so are put in /usr/share.
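
    The FHS illustrates the two axes with a small table of examples; roughly (paraphrased from memory, so check the spec for the exact entries):

                   shareable                      unshareable
    static         /usr, /opt                     /etc, /boot
    variable       /var/mail, /var/spool/news     /var/run, /var/lock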


    --Phil (If only they'd deprecate /opt)
  • Obviously, all your Linux Standards Base belong to us. That's the beauty of Free Software. The system truly belongs to the end users, as they have the right and ability to change things themselves.

  • I agree with the need for sanity about where to put things after wrestling with various flavors of UNIX for the past 15 years.

    I've lost my personal favorite: user directories currently go into /home, where I had hoped the early transition would have been made from /usr into /u.

    Given some of the divergence that has already occurred in some of the Linux distributions, I suggest that, while it is fine to encourage uniform placement of libraries, config files, etc. in the filesystem, such a battle is already lost.

    Retreat and fortify the next line of defense!

    That is, enforce a standard with something like a global /configure script that populates a named database. That database will have name value pairs showing the important places that important things exist:

    glibc_directory = /usr/lib
    for example. Now, then, the line of defense against needless diversity is to enforce a standard on the metadata, the names, not the values!

    If every distribution can agree that the same name will be used to contain the value, we can stop at one level of indirection and breathe a collective sigh of relief. Third party app installation will include a check for the existence of such a database to know where it should try to look for some dependent shared library, for example.

    Then, every system should have a cron job to cull through every directory on the system looking for .MagicConfigure files to run that will update the database of name-value pairs. Make it so the database can be mapped nicely into a hierarchical tree, can be converted to XML, etc.
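
    A sketch of what such a database could look like if it were kept as simple shell-sourceable name/value pairs (the file name and variable names here are purely hypothetical):

    # /etc/sysdb -- hypothetical: only the *names* would be standardized
    glibc_directory=/usr/lib
    x11_library_directory=/usr/X11R6/lib
    init_script_directory=/etc/init.d

    # a third-party installer would then ask by name and use the local value:
    . /etc/sysdb
    cp libwidget.so.1 "$glibc_directory"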
  • /usr/local is structured in the same manner as /usr and is intended to be used for locally installed programs in the same way that /usr is used for distribution-provided programs. /opt is also for non-distribution-provided programs, but is structured differently. Each program (or logical group of programs, like StarOffice) gets its own directory in /opt, /etc/opt, and /var/opt. This is (I believe) intended to make it easier to manage third-party programs--when you want to remove the program, you only have to delete (at most) three directories instead of hunting down files all over the place.
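
    Concretely, the split described above looks something like this for a third-party package (using "foo" as a stand-in name):

    /opt/foo/         # static files: binaries, libraries, documentation
    /etc/opt/foo/     # host-specific configuration
    /var/opt/foo/     # variable data: logs, spool files, state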

    I personally don't see a great deal of use for /opt anymore, because (on Linux, at least) package management is so prevalent. /opt was designed to make it easier for human administrators to manage programs on their systems. Package management programs provide (IMHO) a much better method for doing the same thing. Ah, well. Enough ranting for the day.


    --Phil (I was glad to see the appearance of SGML directories in the new FHS.)
  • And furthermore (in regard to the original post), Debian packagers are slowly converting from /usr/doc to /usr/share/doc, but many packages haven't yet been fixed in this regard. There is some debate as to how to handle this when release time comes, since the rate of conversion (over 6000 packages now in Debian!) is somewhat lagging.
  • by slothbait ( 2922 ) on Tuesday March 13, 2001 @11:23AM (#366433)
    The FHS (Filesystem Hierarchy Standard) lays out the basic organization of a compliant filesystem. The differences between flavors of init scripts run deeper than simply where the scripts are located within the filesystem. Thus, runlevels are beyond the scope of the FHS.

    While inappropriate for the FHS, runlevels may well be treated in the more comprehensive LSB. I am unaware if this is on the agenda of the LSB, but the current discrepancy between systems is certainly an annoyance. However, standardization of start up scripts may prove difficult as it involves treading through a few holy wars. Some of the old SysV / BSD schism carries on in the Linux camps, and this partially explains the different runlevel schemes in use today.

    Personally, I think rc scripts *should* be homogenized, and the LSB may be the appropriate body to push this through. I don't think the current divide is buying us much other than headaches. However, presently the LSB seems more concerned with binary compatibility at the application level: glibc versioning, ABI standardization, etc. This is a rather important topic as third-party companies begin porting to "Linux", which they quickly find is segmented into ~5 pretty much compatible systems. This isn't such a big deal if you have source code, but it can be a big hairy mess for binary-only applications. Of course, some would argue that they don't *want* binary-only applications on their system, but I'll leave that debate to the Slashdot masses...

    --Lenny
  • Section 4.5.2 (/usr/bin) mentions perl.
  • Though whereis is pretty handy for finding out wtf things are on a box, (IMHO) some sort of agreement on directory structure and where packages should go would be kinda handy. It's really fun trying to find the shell script that runs Samba and other packages on different distros, etc.
  • This is going to make releasing software for conforming distributions sooo much easier. It will be especially cool if RPM and apt-get support this structure.
  • This could really be a good thing. They could fix many of the "problems" that prevent Linux from dominating the desktop.

    Setting a standard set of APIs for stuff like the clipboard, file associations, desktop integration, etc. The Windoze way to handle clipboard stuff is to first "register" a type for the data you are placing there. Most apps use a canned type, and therefore you can cut and paste between almost any Windows programs. Why is it bad for things to work? Couldn't we do the same thing in X with an XML spec of some sort?

    And how about standardizing the interface a bit. I can't tell you how hard it is to explain to my wife that in KMail you use CTRL+C to copy, but then to paste it in Netscape you push the middle mouse button, and hope...

    Not to mention the 15 different file open dialogs I see every day. Some of them are really rotten too...

    I love Linux, don't get me wrong. However, I believe that standardizing some of the more obvious stuff for the GUI crowd would benefit us all immensely.
