Updates from the Free Standards Group 67
Daniel Quinlan writes "Today, the Free Standards Group released version 1.2 of the Linux Development Platform Specification and let loose with the public review of FHS 2.2-beta that will be used in the Linux Standard Base (and is already being used by distributions). Also of note, the Linux Standard Base has a new chairman, George Kraft IV, and the LSB specification is nearing completion. Really."
Re:What is the standard capacity of /dev/null? (Score:1)
Re:And Soo... The saga continues. (Score:1)
I can hear the moans right now: "Ack! Registry!"
The thing to realize is that Unix already uses such a database: It's called the filesystem and the value mappings are called links or symlinks.
It's also a fricking mess, with almost every path either hardcoded or a compile-time option. This makes what should be the most basic sysadmin task, moving directories, impossible.
And, sure, Windows is no better. But that's because MS discovered the nice side-effect that piracy is more difficult with hard-coded app paths. Other systems, such as MacOS, do a very nice job of making software relocatable.
Re:/usr/local vs. /opt? (Score:1)
I noticed that StarOffice made an /opt directory when installed on Linux. Personally, I like having some of those sundry programs on an /opt directory, and OS/Server/command line extras in the /usr/local.
Holy knee-jerk reaction, bathead (Score:1)
Here are some that have worked, and made your lives a whole lot better:
RFCs [rfc.net]
POSIX/IEEE [ieee.org]
HTTP/HTML [w3.org]
ASCII/ISO 8859 [bbsinc.com]
ANSI C [dkuug.dk]
And that's just to name a few that immediately came to mind. Note that some of them had corporate sponsorship, some are truly community reviewed, and some are a mixture. But standards are essential for ever moving *beyond* the technology of today. If we didn't have a standard C, then people would still be arguing over how to improve C, rather than creating new languages.
Really, standards shouldn't evolve that much. And people shouldn't wait to get them perfect. Agree on something that mostly works, use it, and move on.
Re: If they only had the balls.. (Score:1)
Such standards are set by the GUI environment, e.g. KDE or GNOME.
Re:Even if this happens, what can it change? (Score:2)
Linux vendors might want to standardize offerings because when it comes to Operating Systems, Linux itself is one of the "smaller flavors."
Besides, standardization doesn't mean that everyone does things the same way; it means that configuration files and binaries can be expected to be found in the same places, using the same formats. This allows individual vendors to create and provide tools which are helpful to everyone. Personally, I'd prefer to see commercial vendors like Redhat, Mandrake, and SuSE compete on grounds which don't include different methods for managing users and configuring software. If I wanted to switch vendors, I wouldn't want to have to learn how to do simple tasks all over again, for the sake of competition.
--Cycon
Re:NOOO!!!!! RPMs Suck!!! (Score:1)
Half of the arguments I've seen for why debs are better than rpms were because debs use a directory for package info, where rpm uses a single file.
Does anyone have a good (based in fact not religion) argument for why one is better than the other?
Re:Ah....standards.... (Score:2)
Oh, you mean like apt? Not that you're going to be able to "apt-get install oracle" any time soon
However, something like apt, which intelligently manages package dependencies (if the packagers and packaging system intelligently SET package dependencies) can see which versions of glibc, Perl, SDL, $WHATEVER exist on a system and determine what needs to happen in order to install a program.
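The dependency walk described above can be sketched as a toy resolver. The package names and the flat "package: dependencies" file format here are invented for illustration; real apt reads its own package metadata, but the ordering logic is the same idea:

```shell
#!/bin/sh
# Toy sketch of what an apt-style resolver does: given a table of
# "package: dependencies" lines, list everything that must be installed,
# dependencies first.  Names and file format are made up for illustration.
deps_file=$(mktemp)
cat > "$deps_file" <<'EOF'
myapp: libgui libnet
libgui: libc
libnet: libc
libc:
EOF

seen=""
resolve() {
    # skip packages already scheduled (also guards against cycles)
    case " $seen " in *" $1 "*) return ;; esac
    seen="$seen $1"
    for dep in $(sed -n "s/^$1: *//p" "$deps_file"); do
        resolve "$dep"
    done
    echo "$1"
}

resolve myapp    # -> libc, libgui, libnet, myapp
rm "$deps_file"
```

A real system also has to handle version constraints and conflicts, which is where the "intelligently SET dependencies" part matters.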
What would be interesting is if distributors of binary packages (not counting those included in distributions such as Debian) could have those packages attempt to use libraries other than the exact ones for which they were compiled, if those libraries stood a reasonable chance of working. For example, I have a symlink in my
As for where oracle should put its stuff, it should probably use
Sotto la panca, la capra crepa
Re:If they only had the balls.. (Score:2)
And as for KMail, it is, IMHO, being evil. For years, *NIX has used the "highlight is copy, middle button is paste" philosophy. Ctrl-C is "kill"!! Why did the developer of KMail decide that they had to emulate Windows? StarOffice is also bad in this respect.
File open dialogs, OTOH, are totally the realm of the application developer, and in the Linux world, that means that everyone will probably write a dialog that works the way they want it to work. It would be interesting if the WM would provide hooks for something like that, where any app could call a standard "File Open" dialog; unfortunately, this would probably be different for every WM. Another case of one of the things that makes Free Software great (choice) working against it at the same time (it's easy to make everything shiny and smooth, if you're Apple and you control hardware + software tightly).
Sotto la panca, la capra crepa
This is only a stopgap standard for the lazy (Score:2)
Conform to the ISO/ANSI Standard C Library instead of glibc-2.2.
Conform to POSIX instead of Linux-2.2.14.
Conform to X11R6 instead of XFree86-3.3.5.
A few pieces of software will need to be system specific, but the vast majority of Open Source code should be cleanly rebuildable on all Unix like operating systems, including *BSD, Solaris, HPUX, IRIX, etc.
Re:Bring Back /usr/doc! (Score:1)
At least you can use symlinks to get around the multiple file locations for now. It'll be annoying for a while, but hopefully, as future distributions become more standards-compliant, the old symlinks can be phased out.
Other problems might be encountered that simple links can't fix; those will be harder to solve in the meantime, such as varying formats for etc files. Any lists of known problems/inconsistencies of this type?
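The symlink stopgap is just one link from the old location to the new one. A sketch (done under a scratch root here so nothing real is touched; the package name is made up):

```shell
#!/bin/sh
# Illustrative only: bridge the old /usr/doc location to the new
# /usr/share/doc, so tools that look in either place keep working.
# A scratch root keeps this away from the real filesystem.
root=$(mktemp -d)

mkdir -p "$root/usr/share/doc/somepkg"
echo "README contents" > "$root/usr/share/doc/somepkg/README"

# The stopgap: old path becomes a symlink to the new, FHS-compliant one.
ln -s share/doc "$root/usr/doc"

# Both paths now reach the same file:
cat "$root/usr/doc/somepkg/README"        # -> README contents
cat "$root/usr/share/doc/somepkg/README"  # -> README contents

rm -r "$root"
```

A relative link target (share/doc rather than /usr/share/doc) keeps the link valid even if the filesystem is mounted somewhere else.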
Re: (Score:1)
Re:If they only had the balls.. GUI Balls? (Score:2)
A common (and easily replaced) utility program to pop up a file chooser, wait for the user to pick a file, and then exit, printing the result to stdout, would also be useful. It could greatly reduce the bloat of programs by eliminating a large chunk of the toolkit they need to link. Adding some standard programs to display a message, ask the user a question, etc., would allow even scripts to have a "GUI".
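The calling convention being proposed might look like this from a script. `filepicker` is hypothetical (no such standard utility exists); a stub that just prints a fixed name stands in for it here so the sketch runs without a GUI:

```shell
#!/bin/sh
# Sketch of the proposed convention: a small external program pops up a
# chooser, prints the chosen path on stdout, and exits.  "filepicker" is
# a hypothetical name; this stub stands in for a real GUI implementation.
filepicker() {
    # A real implementation would display a dialog and wait for the
    # user; the stub just "chooses" a fixed file.
    echo /tmp/chosen-file.txt
}

# Any script (or program, via popen) can now offer a "GUI" without
# linking a toolkit:
file=$(filepicker)
echo "User selected: $file"
```

The point of the design is that the chooser is a separate process speaking over stdout, so replacing it (or theming it per desktop) never touches the calling program.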
Re:/usr/local vs. /opt? (Score:1)
Yea, sure, you can say that... But Adobe stuff is closed source and ends up in /usr/local... And, what about the old cry "Red Hat is NOT Linux, it's a distribution of software that includes the Linux Kernel."
Given that, shouldn't all of the packages that come on a CD distribution of Linux go into /usr/local, because they are NOT the BASE OS, they are added-on packages? (Yes, arguable, but only with the "but /usr/local shouldn't be touched during an OS upgrade" statement.)
What is an OS upgrade? Should an OS upgrade really include anything that a distribution wants to put on as many CDs as they feel like shipping? Or should the OS be considered the kernel, plus the fundamental parts of the system most commonly required (init stuff, a shell, etc.)?
If I am going to make ANYONE work harder in this situation, my choice is the distributors, not the users. If distributors have to figure out what to put into /usr/local, what to put into /opt, and what to put down in /, then so be it. As long as it's logical, and makes it easier for the end users.
If the end result is that users can truly see that /opt can be exported, and thus they only have to install those packages ONCE on ONE system on their network, while the stuff in /usr/local is only configured for that system (like apache, etc.), and the really basic shit that every system needs to run, and stay running for maintenance and stability, is down in /... whooow baby, that would be awesome.
IMHO /opt is due for a makeover, and now (when it's starting to be less used, and more confused) is the time to nail it down and make it usable.
But it's probably not worth arguing, because I am looking at it solely from a system administration, user, and logical standpoint. The people who control the standards are looking at it from a "mediator between big OS vendors" standpoint. They only seek to find a workable middle ground, not a truly logical way of doing things... So, it's all a discussion in vain.
Re:If they only had the balls.. (Score:1)
Before you flip out, try selecting some text in KMail, going to another application, hitting the middle mouse button and seeing what happens. I'd also suggest that:
Unsettling MOTD at my ISP.
Re:If they only had the balls.. (Score:1)
Re:This is only a stopgap standard for the lazy (Score:3)
Take a look at the FreeBSD ports and start counting how many applications *require* glibc installed just to compile the software. Obviously, there are scads of developers that are indeed using non-portable extensions.
March 12, 2000 or 2001? (Score:3)
Also, I'm confused as to which distributions actually use 1.1.
--
Even if this happens, what can it change? (Score:3)
The only way this will work is if all vendors come together on this and make it happen. Why would they want to do that? There are so many flavors out there; if we start to standardize, the smaller "flavors" will eventually be out of business and we are back to capitalism at its finest.
RPMs (Score:1)
Ever tried to install a RedHat RPM on a SuSE system?
In the worst case, you can't install it due to unsatisfied dependencies because the package names differ...
Offensive ? (Score:1)
In fact, reading it makes it quite clear that the author does not actually think that, but is rather mocking, or trying to influence and educate, those who do.
I.e., the person who doesn't "want enough of God to make me love a black man" would never actually say that. Such a person doesn't think God's love would push him that way. ("Extreme racists think God agrees with them" -- Me)
And Soo... The saga continues. (Score:2)
While all the troops lie in the trenches of the subdivisions of /etc/rc* and /var/godknowswhy, the real working system administrators honestly couldn't care less; they already know the intricate deviations of each *nix they use.
Lost in the trenches, the architects of the new FHS fail to see the growing problem of network integration, and the lack of logic in day-to-day maintenance.
If they had some balls, they would see, the battle should be for the greater good, and as such, ALL of the FHS should make a major shift in philosophy to how to make *nix systems more friendly.
Most already agree that the bin, etc, var, sbin, home, lib structure works pretty well. So, now use it!
IMHO, what needs to be done is:
Just basically make some sense of it all! I know NO distribution (Solaris, Tru64, Linux, BSDs, none) will be compliant YET. But if they are making the standards, they need to have the balls to say "this makes sense, we need to do it, even if it takes a few years before everyone starts using it." In the long run, it is good for *nix. Everyone is just WAY too focused on short-term middle grounds.... Why not focus on laying down a foundation that will not be continually growing more complex and illogical?
This is so typical of standards committees, put a band-aid on a harlequin quilt, and say it's all matching and compliant now. It's time to throw that quilt in the washer with a package of clothing dye, and REALLY make it all match.
Thus, finally, my annual *nix sux rant is complete, commence the flaming.
Re:Even if this happens, what can it change? (Score:2)
You forget that social norms, while limiting society, actually free the individual citizen. Think about it. It used to be universally accepted in the Western world that a man opening the door for a lady was a sign of politeness. The women's liberation movement in the 70's did much to destroy this norm. Now if a man takes a girl on a date he won't know for sure if he's being polite or insulting her. Where once he was free to act, he now has to worry over a decision. At one time there was a standard that everyone agreed to use. Now there is confusion.
Yes, that example was frivolous, I know. But think of the things that distributions do differently for no good reason at all, usually resulting in widespread confusion. Mandrake recently changed the install directory for their version of Wine, if I'm not mistaken. Why? Was it just that someone thought it was a good idea, and there wasn't anyone saying no? Or could it be that they just didn't know any better because there was no standard to turn to?
With a standard in place that everyone can point at and say, "That's the way this community likes to do it," many things get simpler. Fewer trivial questions have to be worried over. The mind is freed to move on to more important subjects (like, When on a date do you eat the fried chicken with your fingers or your fork?) Even diverging from the standards will be simplified. Mandrake won't have to list where everything is in their distribution; they'll only need to point out what is different from the standard.
No one will stop you from distributing your project as a tarball. But if everyone else is using RPMs, you may find more acceptance if you go along with the rest of the community. The way it is now, some distribute RPMs, some use apt-get, and some distribute tarballs (with different compression formats). Each has some small strength over the others, but in the end they are all more similar than different. The end result is that the poor newbie is just confused. One distribution format would give him one less thing to worry over. Standards are a good thing.
Re:NOOO!!!!! RPMs Suck!!! (Score:1)
KMail/KDE (Score:2)
That is a neat little interface to the standard X clipboard which is what is used by KDE, including KMail. You will also notice that anything you select, whether by Ctrl-C (using the keyboard entirely) or simply selecting with the mouse, will be in the clipboard. The little icon also has history, so you can select something from the history, and it will be in the X clipboard. It is truly seamless and friendly.
Netscape's handling is what is wrong here. KDE is doing the right thing, IMO. Besides, with the current Konqueror, I no longer need Netscape. I haven't used it since KDE 2.1 was released. I was buying stuff the other night and found that Konqueror worked where the latest Netscape didn't...
Re:If they only had the balls.. GUI Balls? (Score:2)
My wife called me one day because she and the accountant had spent the entire day trying to get one QuickBooks file transferred from her work computer onto the accountant's. She had done a backup, but QuickBooks could not see the file, and double-clicking on it did something weird and eventually locked up the computer.
I went over there to find out that when she made the backup she named it backup.doc, not backup.qbb. Because the open-file dialog was filtering by .qbb, it never showed up as being on the disk. When she explored the disk, it hid the extensions and she saw that there was a file called backup. When she double-clicked on it, Word attempted to open the file and crashed.
Here is the lesson. The name of the file should never be the most important thing about that file.
The Macintosh had things right: it knew about the program that created the file, no matter what you named it.
Re:Bring Back /usr/doc! (Score:2)
Re:And Soo... The saga continues. (Score:2)
Is the concept of trying to save disk space even valid anymore?
Re:Even if this happens, what can it change? (Score:1)
Like I said, I love Linux; standards are a good idea. I am just wondering how well it will actually work. Like I have said a million times before:
Of course, that's just my opinion. I could be wrong. --Dennis Miller
Re:And Soo... The saga continues. (Score:2)
Re:/usr/local vs. /opt? (Score:2)
Actually, it doesn't. At least not in ANY of the acroread packages on RPMfind.net.
Red Hat put it in
Everyone else puts it in
Re:/usr/local vs. /opt? (Score:1)
So, if I use a .deb, a .rpm, a .tgz, or a install script, it goes to the same place, as defined by the FHS.
That's exactly why I am against saying "Red Hat is an OS" and allowing them to put shit in /usr instead of /usr/local or /opt. They only make adding software easier if you never switch packaging systems, and always rely on them....
Hmm... Need to only use the one distribution, or break links and paths... Oh... Now it makes sense (sarcasm). Why would they want to make things standard? It would only allow you to break free of being dependent on the distribution!
Religion != Science. (Score:2)
The interesting thing is that for those inside "Organised Religion", God is fact. However, some of us (including me) accept that we cannot prove this to other people.
Bring Back /usr/doc! (Score:2)
Standards at last! (Score:1)
Standards at last! Almighty God! Standards at last!
:)
Why do we wait? (Score:2)
Are the FSG members supposed to wait until the linuxbase paper is finished before they start thinking about being backwards compatible themselves?
Standard for runlevel init scripts? (Score:1)
One of the things that bothers me most about the diversity in Linux distributions is that no one seems to agree on what runlevel standard to use.
For instance, Debian is pretty much SysV compliant (like Solaris), in that everything is in
I mean, I think it is pretty annoying. It's bad enough (but acceptable) that the various other Un*xen have their own filesystem layouts. It's pretty much historical. And one can say, *this* box is Solaris, and *this* box is HPUX, and *this* box is AIX, and *this* box is BSD. But why the hell can't Linux distributions agree to be totally SysV compliant? Why can't we say *this* box is GNU/Linux instead of Debian, RedHat, Slackware, and so on?
Re:If they only had the balls.. GUI Balls? (Score:1)
From an end user standpoint, GUI standards for *nix systems would be very nice, and make life easier.
But competition is good in GUIs, and if you choose to stick with one toolkit (Motif, GTK, Qt, or whichever), then you can already find a great deal of consistency. Trying to merge a mix of GUI toolkits, you're always going to be asking for inconsistency.
Actually, when you think about it, the mere fact that you can run all of these different apps based on different toolkits and philosophies is pretty cool... even with the inconsistency of copying, button style, whatever.
The real issue that they need to deal with is the structure of the OS underneath: how to make all those different apps compile, and install, and work, and seem to follow a logical structure from the underside.
Let's stay focused on the ground before we try to touch the stars. Adding consistency to the GUI is not something I think these specific standards committees should be wasting time on. Let's make the apps run, the systems talk to each other, and the process of configuring stuff more logical first.
The question is... (Score:4)
Who do all the Linux Standards Base belong to?
I feel that you're missing an important point (Score:2)
Uhhhhh. Let's say that RH waited until version 14.0, a decade from now, to switch to the new glibc. Would you then be saying, "It was so cute that they broke backwards compatibility."? What about the other vendors who use the new glibc in their distros -- are they now guilty of breaking backwards compatibility? Would you have recommended that no vendor ever change glibc versions?
It's not the vendor's fault if glibc broke compatibility in a point release. At least RH waited until a major release before shipping it as the default library (I believe). At some point in time, every vendor is going to make such changes. The win under Linux, of course, is that if you don't agree you can always change out the glibc version yourself.
Re:/usr/local vs. /opt? (Score:1)
This seemed ugly to me at first, but it does have the advantage that to remove a package, all you have to do is rm -r
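The removal step being praised works because every file the package owns lives under one tree. A sketch (package name and layout invented, under a scratch root so nothing real is touched):

```shell
#!/bin/sh
# Illustration of per-package directories: install under one tree,
# remove with a single rm -r.  The package name and paths are made up;
# a scratch root keeps this away from the real filesystem.
root=$(mktemp -d)

# "Install": everything the package owns lives under one directory.
mkdir -p "$root/opt/somepkg/bin" "$root/opt/somepkg/lib" "$root/opt/somepkg/doc"
touch "$root/opt/somepkg/bin/somepkg" "$root/opt/somepkg/lib/libsomepkg.so"

# "Uninstall": one command, no package database needed.
rm -r "$root/opt/somepkg"

ls "$root/opt"    # prints nothing: the package is completely gone

rm -r "$root"
```

The trade-off is the one discussed in this thread: you gain trivial removal but lose the shared bin/lib/man layout, so something (symlinks or $PATH entries) has to stitch the per-package trees back together.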
Re:Holy knee-jerk reaction, bathead (Score:1)
*The punch line is that now I'm running Linux full-time and using NT from inside of VMware.
Re:/usr/local vs. /opt? (Score:2)
The FHS itself uses the wording `optional' to describe things that go in
But ask on the FHS mailing list, and your response will be: ifs its an application that needs its own tree, it should live in
If its something you've compiled yourself (which you want to have its own tree, thus being seperate from your packaged applications) it should live in
For one, I think
Why kill
* Because Unix applications should be structured by the types of files (ie, documentation, configuration
* Because we don't need any more subdirectories under /
* Because people actually believe the FHS when it uses the term `optional' and decide to stick whatever fits their own personal definition of `optional' this week into the directory.
* Because "/opt" is becoming the "Program Files" of the Unix world, and a dumping ground for badly written apps that use their own hierarchies, making the filesystem even more of a mess
* Because I'd rather break compatibility (even in such a small way) than include this *hack* in the FHS simply for reasons of backwards compatibility. I'm a DevFS / ACL / boot sanity / Xrender / DRI type of guy - if something is broken, I want it fixed, regardless of whether it's popular in flavors of Unix I don't use. They don't set the standards any more, we do.
* because plenty of people also dislike
Re:/usr/local vs. /opt? (Score:2)
*
Same package. Same base standard. Two completely inconsistent locations. I can no longer sit down at a machine and know where package X is installed anymore. This is the exact type of thing the FHS set out to prevent.
apt-get should be able to install closed source pa (Score:2)
I'm probably not talking about Debian here, since most commercial software isn't tested too well on Debian or released as
As for where oracle should put its stuff, it should probably use
It should if it didn't come with the distro. If it did come with the distro (eg, Red Hat + Oracle, or free databases which do and don't come with distributions) it should live in
Hey wait...same package...two locations. I can't sit down on a machine and know where it lives anymore. Oh my God! Maybe
Re:If they only had the balls.. (Score:2)
It amazes me when people keep telling me `yes, but different people work on GNOME and KDE' as some type of magical excuse (not you specifically, but in general). So?!?! Lack of consistency is hurting Linux desktops more than competition is enhancing them anyway, but that's a point for another day. Sit down with each other and work out a standard design. If you're *really* worried, just make something that looks the same in your respective toolkits.
Re:Bring Back /usr/doc! (Score:1)
ln -s
And get on with your life
They should offer options to do things like this in the install and configuration tools, though. Would probably help if distros just did stuff like this a lot, such as fixing the confusion between modules.conf and conf.modules.
Re:/usr/local vs. /opt? (Score:1)
But, the /usr/local/pkg_name/ thing has also frequently been used in the past (by Adobe, Apache, and others).
This does make a case for "each package in its own directory" vs. "package management (metadata)." My vote would probably go down for package management, and leave it up to the system admin to choose his/her method of managing their packages.
IMHO, symbolic links are always just a messy workaround, and the thought of a directory for each package that far down the path strikes me wrong.
I guess I personally think of things in an old-fashioned UNIX sense, where you have one system that exports a lot of stuff, and many workstations that just read-only mount the executables from that server.
The growth of PeeCees and "a workstation on every desktop" has changed that. Everyone wants their own system, with all their own binaries, all on their own drive... and that's a system admin nightmare. Not to mention that you are tied down to YOUR workstation, and lose all of the oldschool UNIX style networking (xterminals, sit anywhere and use your account on any server, all boxes are roughly equal...)
Maybe it's not just the growth of power in a single computer that has caused it. No doubt big hard drives and lots of free cycles have done a lot to make people believe in the "MY COMPUTER, YOUR COMPUTER" mentality. But a lot of the credit probably goes to bandwidth limitations (10 users on 1 server from 10 xterminals can be as taxing on bandwidth as on the server's memory and CPU).
But old school UNIX still has merits that we are throwing away if we don't try to simplify the basic file hierarchy system. NFS-mounting /opt/bin from a bunch of boxes running on FlashRAM or CDROM only would rock (not to mention save money on each workstation, leaving more money for a killer server).
Add encryption to the mix for more fun. Xterminals need much less bandwidth (a limiting factor) and just a little more CPU power (not so bad) to have an encrypted X session. That not only gives a wee bit more security, but frees up some bandwidth.
Now... Let's take ALL of that cute idea from oldschool UNIX, and add some SlashDot style new school 3l33+ script kiddy ideas.... "Imagine a Beowulf Cluster of these!" Yea, it could be done. Have 1 computer lab, containing 20 workstations, all running as 1 distributed system. You log into one, and while 10 guys sit browsing the web using very few cycles, you have access to the CPU power of all 10 systems to compile your latest project... Hmm.... Ok, now I went over the edge... I'll shut up now.
Re:Even if this happens, what can it change? (Score:2)
Excellent post, but apt-get isn't a packaging system. You meant to say deb. Furthermore, apt-get is designed to be packaging-system independent and currently works with RPM or deb.
Linux less fragmented than it used to be (Score:2)
Re:KMail/KDE (Score:2)
Most new Unix programs use the PRIMARY selection for both the "copy" command and select-this-text. Netscape uses the SECONDARY selection for the copy & paste commands, so only the middle-mouse-click works with it. This may be true of all Motif programs.
I discovered this quite quickly when I foolishly tried to make fltk use SECONDARY in the same way. It fixed Netscape but broke everything else.
Only using PRIMARY is also good when you have programs like xterm, or ported Windows programs, that only have one way to paste.
Re:apt-get should be able to install closed source (Score:1)
thoughts (Score:3)
- Why XFree86 3.3.x? 4.0 has proven stable and is faster.
- Why on Earth is there no mention of Perl? Perl is the glue that holds many, many useful applications together; not including it in a standard makes no sense.
--
/usr/local vs. /opt? (Score:2)
Ah....standards.... (Score:2)
*sing* I'm a karma whore and I'm okay....
I sleep all night and I work all day
Re:March 12, 2000 or 2001? (Score:1)
Red Hat 6.2 is known to be a conforming platform, so it is listed.
Re:/usr/local vs. /opt? (Score:1)
/opt is completely undefined, and if I want to make my installer go in /opt/thing/executable/app I'm pretty much free to do it.
Note my earlier comments about what they SHOULD be, but.... As for now, they are just about whatever you want them to be.
Re:Bring Back /usr/doc! (Score:1)
The FHS makes an effort to group sections of the filesystem by the type of data that goes in the directories. It differentiates based on two criteria: shareable vs. unshareable and variable vs. static. /usr is designed to be static and shareable. However, some parts of /usr are architecture-dependent, such as /usr/bin and /usr/lib. /usr/share is designed to be a hierarchy of shareable, architecture-independent files.
The two examples you give (/usr/share/doc and /usr/share/dict/words) are both things that are architecture-independent and so are put in /usr/share.
--Phil (If only they'd deprecate
Re:The question is... (Score:1)
Obviously, all your Linux Standards Base belong to us. That's the beauty of Free Software. The system truly belongs to the end users, as they have the right and ability to change things themselves.
Re:And Soo... The saga continues. (Score:2)
I agree with the need for sanity about where to put things after wrestling with various flavors of UNIX for the past 15 years.
I've lost my personal favorite: user directories currently go into /home, where I had hoped the early transition would have been made from /usr into /u.
Given some of the divergence that has already occurred in some of the Linux distributions, I suggest that while it is fine to encourage uniform placement of libraries, config files, etc. in the file system, such a battle is already lost.
Retreat and fortify the next line of defense!
That is, enforce a standard with something like a global /configure script that populates a named database. That database will have name value pairs showing the important places that important things exist:
for example. Now, then, the line of defense against needless diversity is to enforce a standard on the metadata, the names, not the values! If every distribution can agree that the same name will be used to contain the value, we can stop at one level of indirection and breathe a collective sigh of relief. Third party app installation will include a check for the existence of such a database to know where it should try to look for some dependent shared library, for example.
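A minimal sketch of that indirection, assuming a flat file of standardized name=value pairs (the file location, the lookup helper, and the names themselves are all invented here; the point is that only the values would differ between distributions):

```shell
#!/bin/sh
# Sketch of the proposed metadata database: standardized names mapping
# to distribution-specific values.  The path, names, and "lookup"
# helper are invented for illustration.
db=$(mktemp)
cat > "$db" <<'EOF'
SHAREDLIB_DIR=/usr/lib
CONFIG_DIR=/etc
DOC_DIR=/usr/share/doc
EOF

# A third-party installer asks for a *name* and gets this system's value:
lookup() {
    sed -n "s/^$1=//p" "$db"
}

echo "Installing docs into: $(lookup DOC_DIR)"   # prints "Installing docs into: /usr/share/doc"
rm "$db"
```

Under this scheme a distribution is free to move things around, as long as it regenerates the values; installers that query names instead of hardcoding paths keep working.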
Then, every system should have a cron job to cull through every directory on the system looking for
Re:/usr/local vs. /opt? (Score:1)
I personally don't see a great deal of use for /opt anymore, because (on Linux, at least) package management is so prevalent. /opt was designed to make it easier for human administrators to manage programs on their systems. Package management programs provide (IMHO) a much better method for doing the same thing. Ah, well. Enough ranting for the day.
--Phil (I was glad to see the appearance of SGML directories in the new FHS.)
Re:Bring Back /usr/doc! (Score:2)
runlevels are beyond the scope of FHS (Score:4)
While inappropriate for the FHS, runlevels may well be treated in the more comprehensive LSB. I am unaware if this is on the agenda of the LSB, but the current discrepancy between systems is certainly an annoyance. However, standardization of start up scripts may prove difficult as it involves treading through a few holy wars. Some of the old SysV / BSD schism carries on in the Linux camps, and this partially explains the different runlevel schemes in use today.
Personally, I think rc scripts *should* be homogenized, and the LSB may be the appropriate body to push this through. I don't think the current divide is buying us much other than headaches. However, presently the LSB seems more concerned with binary compatibility at the application level: glibc versioning, ABI standardization, etc. This is a rather important topic as third party companies begin porting to "Linux", which they quickly find is segmented into ~5 pretty much compatible systems. This isn't such a big deal if you have source code, but can be a big hairy mess for binary-only applications. Of course, some would argue that they don't *want* binary only applications on their system, but I'll leave that debate to the Slashdot masses...
--Lenny
Re:thoughts (Score:1)
/etc/smb.conf, /etc/samba/smb.conf (Score:1)
Thank goodness... (Score:1)
If they only had the balls.. (Score:5)
Setting a standard set of APIs for stuff like the clipboard, file associations, desktop integration, etc. The Windoze way to handle clipboard stuff is to first "register" a type for the data you are placing there. Most apps use a canned type, and therefore you can cut and paste between almost any Windows programs. Why is it bad for things to work? Couldn't we do the same thing in X with an XML spec of some sort?
And how about standardizing the interface a bit. I can't tell you how hard it is to explain to my wife that in KMail you use CTRL+C to copy, but then to paste it in Netscape you push the middle mouse button, and hope...
Not to mention the 15 different file open dialogs I see every day. Some of them are really rotten too...
I love Linux, don't get me wrong. However, I believe that standardizing some of the more obvious stuff for the GUI crowd would benefit us all immensely.