Designing Good Linux Applications 209
An Anonymous Coward writes: "A guy from IBM's Linux Impact Team in Brazil has written a guest column on Linux and Main describing how applications should integrate with Linux. It's Red Hat-centric, but there is a lot of material about the FHS and LSB that most users probably don't know."
It would be cool if Compaq had such a team... (Score:4, Funny)
(Apologies in advance to all
Re:It would be cool if Compaq had such a team... (Score:2)
Seems appropriate with their Linux Clustering tech...
Re:It would be cool if Compaq had such a team... (Score:2)
More accurately, it's Red Dwarf style humor... I certainly had a laugh at it. =)
Reminds me of when my company announced they had hired a Director Of Product Engineering. Being a Dilbert fan, I would have leapt at the opportunity to make his business cards....
Re:It would be cool if Compaq had such a team... (Score:2)
"I think we're all beginning to lose site of the real issue here, which is, what are we going to call ourselves? I've narrowed it down to two suggestions. The League Against Salivating Monsters, or, my own personal preference, the Committee for the Liberation and Integration of Terrifying Organisms and their Rehabilitation
Into Society. Uhm, one drawback with that, the abbreviation is CLITORIS."
Rimmer in POLYMORPH
Re:It would be cool if Compaq had such a team... (Score:2)
Re:It would be cool if Compaq had such a team... (Score:2)
Anyway, they had made a little 4+1 PAD (4 async ports, 1 X.25 port) in a small PacTek box. Marketing had half-seriously considered calling it the "mini-pad".
Thankfully (a) they didn't, (b) I got a better job shortly thereafter.
First of all, (Score:4, Insightful)
PLEASE take your time and DEBUG the current ones.
The collection of half-abandoned software that has tons of bugs and that nobody uses (perhaps because of those bugs) is absolutely huge.
Re:First of all, (Score:1, Troll)
Re:First of all, (Score:2)
Re:First of all, (Score:2)
Re:First of all, (Score:1)
Re:First of all, (Score:1)
Re:First of all, (Score:2)
Re:First of all, (Score:2)
Re:First of all, (Score:5, Interesting)
IMHO, to attract OSS developers, a piece of software has to be:
In any case, I don't pretend that these are anything more than a few rules of thumb, but in the end I'm sure that, for OSS software with the characteristics above, developers willing to do maintenance will show up by themselves, without anyone needing to preach to them.
Re:First of all, (Score:2, Insightful)
I always thought design documents were supposed to tell me this. I guess I must have been building too much software in corporate environments.
On a more serious note, I see a disturbing lack of design documentation in open source software. This is, in my opinion, one area open source definitely should improve, together with project management. But that would make OSS development a lot more formal, and a lot of people probably do not want that. Choices, choices.
Re:First of all, (Score:4, Insightful)
There are more and more stable applications out there now, however. Take Mozilla for example: the long-awaited 1.0.0 should be out in a month or so. XMMS is as good an MP3 player as they get (thanks, of course, to huge demand for a good MP3 player); OpenOffice.org is slowly creeping towards its 1.0 release and beyond; and there's KDE3/KOffice (KOffice doesn't have many developers, partly due to low demand, but I think that will change soon). Things have really improved in the last year, and 2002 will be a big year as well.
Re:First of all, (Score:3, Interesting)
Commercial software projects are the same: only a small fraction of them make it to final release without being canceled or redefined. The general public just doesn't see these. You can rest assured, however, that the revision control systems in your average software company are littered with countless defunct projects.
For some reason, management doesn't say: Why develop new products? We can just restart work on all of our old canceled projects and bring them to market. Maybe the reasons we canceled them magically went away...
Packaging and Testing (Score:3, Insightful)
The other important thing is that programs often don't work very nicely with each other, or need certain versions to work. This is where having a central system for controlling dependencies is rather important. I don't actually think Debian goes far enough at the moment (not really handling Recommends with apt), but it's getting there.
The other important part of packaging is handling upgrades automatically. Packages have security problems, they have new features added. If you have to work out (a couple of months later) which --gnu-long-opts-enable --with-features --without-bugs you had to put on the
# echo "http://debian.brong.net/personal personal main" >>
# apt-get update
# apt-get install bron-config
Whee
(note - that URL doesn't exist yet, but it's my plan for the future).
(note:2 - no ssh private keys in that
Re:Packaging and Testing (Score:1)
You mean, something like a Registry?
RPM is the standard, and APT works on it (Score:3, Informative)
The advantage there is that RPM is a standard: currently the older RPM (version 3) is included in the Linux Standard Base, but once Maximum RPM is updated for RPM 4, it's extremely likely that RPM 4 will become the standard.
If you're using Red Hat I highly recommend installing it.
rpm -Uvh http://enigma.freshrpms.net/pub/apt/apt-0.3.19cnc
apt-get check
apt-get update
apt-get install
Re:Packaging and Testing (Score:1)
While he doesn't mention Debian at all, it's clear that the article is strong on packaging. I actually prefer Debian's approach, having a list of sources from which you obtain software, and providing search tools for that list.
The guy who wrote that doc signs it as working for IBM.
His intention clearly was to make straightforward points about "how to do this", from a basic standpoint. Thus he presented everything in a rather Red Hat-like way, probably targeting newbies.
Linux experts won't have any problem assimilating his guide and adapting it to their needs.
/usr/local obsolete? (Score:5, Insightful)
From the article:
I understand that this is directly from the FHS, and not some evil concoction from the mind of the author, but dammit, I think it's wrong. Perhaps /usr/local is obsolete with respect to package managers, and that makes some sense (because the package manager should handle proper management of placed files, though in practice that's not always the case), but as long as open source is around, there will always be software that is compiled rather than installed through a package manager. There will also always be applications that are not distributed in your package format of choice (as long as there is more than one package management system, this will always hold true). In these cases, it's still a good idea to keep around /usr/local and /opt. Personally, I'll have /usr/local on my systems for a long time to come, because I prefer to use the Encap [encap.org] management system.
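The parent's point about from-source installs can be sketched concretely. In this hedged example the scratch prefix /tmp/local-demo stands in for /usr/local, and the "hello" program is hypothetical; on a real system you would run `./configure --prefix=/usr/local && make && make install` instead:

```shell
# Mimic what './configure --prefix=/usr/local; make; make install' does,
# using a scratch prefix so the sketch is safe to run anywhere.
PREFIX=/tmp/local-demo
mkdir -p "$PREFIX/bin"
printf '#!/bin/sh\necho "hello from a local build"\n' > /tmp/hello
install -m 755 /tmp/hello "$PREFIX/bin/hello"   # the 'make install' step
"$PREFIX/bin/hello"
```

The point is that nothing here touches /usr, so the package manager's territory and the local admin's territory stay cleanly separated.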
Re:/usr/local obsolete? (Score:5, Insightful)
Actually, no. It is from the diseased mind of the author of the article. He first cites the FHS, and explains how good it is to have a standard like that, and then proceeds to ignore everything it says. /usr/local is explicitly reserved for local use, and therefore no package should *ever* install itself there (my /usr/local, for example, was NFS-mounted, and RPMs that tried to install there would fail because root didn't have write access to it). So far, so good, and we're in agreement with the article. But then he goes on to say that /opt should never be used. What? According to the FHS, /opt is exactly where IBM should be installing stuff. Quite how he's decided that the two directories are obsolete is beyond me. Both have well-defined and useful purposes, both in common usage and in the latest FHS spec (see http://www.pathname.com/fhs/ [pathname.com]).
I'm afraid IBM has just lost a lot of respect from me for this...
Re:/usr/local obsolete? (Score:1)
My bad, then. I'm not 100% familiar with the FHS myself, so I made the (poor) assumption that when the author said that's what the FHS defines, he was speaking authoritatively. Apparently not. If slashcode allowed editing of comments, I'd fix this assumption.
Re:/usr/local obsolete? (Score:3, Interesting)
Re:/usr/local obsolete? (Score:3, Informative)
I understand that this is directly from the FHS.
Not true. This is what the FHS [linuxdoc.org] says about /usr/local:
Re:/usr/local obsolete? (Score:2, Informative)
/opt is in FHS (Score:5, Informative)
/opt is in FHS 2.2 [pathname.com] at section 3.12. It begins:
Doesn't look very deprecated to me. I think the problem is your FHS link isn't really the FHS; it is the SAG (System Administrator's Guide) [linuxdoc.org], which in section 4.1 [linuxdoc.org] clearly says it is loosely based on the FHS.
As for /usr/local, I do agree it should be off-limits to the distribution (besides setting it up if not already present). And packages in the package format of the distribution (e.g. RPM for Redhat, Mandrake, SuSE, etc ... DEB for Debian and any like it ... TGZ for Slackware ... and so on) really should stay out of /usr/local. What /usr/local should be is whatever local policy says it is (FHS doesn't put it this way). Packages that the administrator really wants kept separate from the package management system, stuff compiled from source, stuff locally developed: all of it is eligible to be in /usr/local. My guess is the author of the article has no experience doing system administration combined with a decision-making role where he might have to choose to do something slightly different from what everyone else does.
/opt is in FHS, but not for distro use (Score:2)
(snip)
Distributions may install software in
This strongly discourages distribution use of
Since almost all software could be part of a distribution, and Unix has traditionally sorted its files by their type before their application,
Re:/opt is in FHS, but not for distro use (Score:2)
It depends on the package. If you need multiple versions of the same package to be present, then /opt is an advantage. But I do agree the distribution (e.g. Redhat, Debian, whatever) should not put stuff there (setting it up empty is fine). However, a package itself may need to be there for some reason, such as being able to find version-specific resources based on which version was executed. In this case a script in /usr/bin to run the package might be wise. The UNIX tradition of separating files by type and usage works in most cases, and has advantages for the sysadmin (like making common files shared over a network, platform-specific files grouped by platform, and machine-specific configurations distinct for each machine). But that isn't 100%, so flexibility is needed. A package should avoid /opt unless really needed.
/opt considered evil (Score:2, Interesting)
When trying to partition the different mount points
Hopefully, the folks in charge of the FHS will consider this.
Re:/opt considered evil (Score:2)
I do "ln -s usr/opt /opt". Maybe that's what you meant. But I also do it before things are installed, so I don't have to skip by package. OTOH, I pre-install Slackware to a single partition under chroot first, to get the file tree as "installed". Then to install a new machine I boot it with my rescue CD, dd the drive to zero (personal preference, but not really needed), partition, format, mount, replicate the file tree, run lilo with -r, remove CDROM, and reboot. It's all scripted and takes about 4 minutes over 100 mbps ethernet for a server (no X) setup, or 9 minutes for a workstation setup (with X, Gnome, KDE, and the works). The tree already includes all my general local changes, and the script also hunts for host specific changes.
Re:/opt is in FHS (Score:2)
Just send it to EFF [eff.org] and they will take care of it. Thank you for your support.
Re:/usr/local obsolete? (Score:2)
Source code should be installed through a package manager. If you're a systems administrator and don't know how to package applications, you need to learn, because you need it to do your job.
If you have the brains to compile from source, you have the brains to make a source package. I'm tired of inheriting somebody's backyard Apache install, with a bunch of forced packages and non-packaged apps. I can't repeat that install on other systems (especially annoying when testing), the install options used aren't documented, and since the author didn't include an `uninstall' target in his Makefile, I can't uninstall it properly (unless I use something like stow, but in that case I may as well package the goddamned app).
Because there are missed dependencies, I find out that something needs something else when it breaks, rather than before I install it. How it breaks is different with each app. Same with finding out whether a given app is installed, and how various files on the system got there. In other words, non-packaged systems are an absolute mess and I have little time for them.
Learn to package. It's simple, and you and the machines you will manage will be happier for it.
dependency hell (Score:3, Interesting)
After using many versions of Slackware, I finally tried Redhat at version 5.1. Actually I had tried it at a way earlier version and it never successfully installed. But 5.1 worked OK. The reason I tried it was I bought a Sun Sparc 5 and wanted to try Linux on it. Redhat seemed to be OK, so I later tried it on a couple other i386 systems, and that was working OK ... for a while. As it turns out, I needed to make upgrades before RPMs became available (see next paragraph). I also needed to make some changes in how things were built. The RPM system started getting out of sync with what was actually installed. The system ran just fine, but soon it got to a point where some packages I was installing with RPM would not install because the RPM database thought things were not installed which actually were (but weren't installed from RPM, so I can understand why it didn't know this). So I ended up having to do forced installs. And that ended up making it more out of sync. By the time I had gotten to Redhat version 6.0, I was getting fed up with it. I switched back to Slackware (and Splack for Sun Sparc eventually came out and I use that, too) and am happy again, with well running systems. And I am now exploring LFS [linuxfromscratch.org].
You say the system administrator should know how to package applications? Why the system administrator? I'd have thought you'd expect the programmer to do that. If I get some package which is just a TGZ source file tree (because the developer was writing good portable code, but not using Linux to develop on), why should I, in the system administrator role, have to be the one to make a package out of it? I'll agree it doesn't take more brains than needed to properly install the majority of source code, but I won't agree that it is easy (in terms of time spent) to do. At least I have the brains to actually check the requirements of what a given package I'm compiling needs, and make sure it is there by the time it is actually needed. The dependency may not be needed until it is run, so I have the flexibility of installing in whatever order I like. Also, some "dependencies" are optional, and don't need to exist unless a feature is to be used that needs them. For example, if I'm not using LDAP for web site user logins, why would I need to make sure LDAP is installed, if some module that would otherwise use it is smart enough to work right when I'm not using LDAP?
Re:dependency hell (Score:5, Interesting)
That's not the solution to the problem. Any management system ceases to be effective as soon as it ceases to be ubiquitous. If your Apache is locally built, and you made the mistake of not packaging it, then you've nullified the effectiveness of the package manager for anything which touches Apache.
You say the system administrator should know how to package applications? Why the system administrator? I'd have thought you'd have expected the programmer to do that.
Good point - ideally the programmer should, but it's a simple enough skill for sysadmins to learn if they do encounter an unpackaged app.
Have you tried making RPMs? I'm not a programmer by any means, but it's amazingly simple. Check www.freshrpms.net for a few good tutorials.
Also, some "dependencies" are option, and don't need to exist unless a feature is to be used that needs it. For example, if I'm not using LDAP for web site user logins, why would I need to make sure LDAP is installed if some module that would otherwise use it is smart enough to work right when I'm not using LDAP.
Another good point. This should be handled by a system similar to deb's excellent required / suggested / recommended dependency system, which could fairly easily be ported to RPM, from what I understand of it.
Finding out a dependency exists when something breaks is no way to manage a system. Knowing what software has been installed on a machine is vital to maintaining the security of your machines, and having proper uninstalls stops your hard disk from filling with crap. And there's a stack of other benefits.
I find most people who dislike RPM haven't used the system. It's very much like building an app. Inside the RPM itself is the original tarball of the app (plus maybe a couple of patches) and the spec file, which is comprised of:
It's pretty much the same as if you'd compiled the app without a package manager. RPM just standardizes your build process. You can easily rebuild a source RPM for your local architecture, and RPM will take compiler flags for your own custom configuration options. I like compiling a lot of apps from source too: I just take a few extra moments to do it in a standardized fashion. This pays off repeatedly when I'm administering the machine in future (or if I need to repeat this work on another machine).
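For reference, a minimal spec file is genuinely short. This is a hedged sketch (the package name, file list, and paths are hypothetical) of roughly the sections the parent is referring to:

```
Summary: A hypothetical example package
Name: foo
Version: 1.0
Release: 1
License: GPL
Group: Applications/Misc
Source0: foo-1.0.tar.gz
BuildRoot: /var/tmp/%{name}-root

%description
A stand-in package, to show how little a spec file needs.

%prep
%setup -q

%build
./configure --prefix=/usr
make

%install
make install DESTDIR=$RPM_BUILD_ROOT

%files
/usr/bin/foo
```

With RPM 4 the build command is `rpmbuild -ba foo.spec` (older RPM versions used `rpm -ba`), which produces both the binary and source RPMs.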
Re:dependency hell (Score:5, Insightful)
It's the creation of the spec file that's a chore. I have to know what dependencies the package has in order to make it. If I already know, such as by RTFMing the original source package docs, then I know all I need to know to manage it without RPM. I still see making an RPM here as a redundant step.
I do some programming, but I still don't RPM-ify those programs ... yet. But when someone comes up with an "autospec" or "autorpm" program which figures out everything needed to make the RPM file, so that making it becomes as trivial as installing it, I might be more interested. Right now I'll stick with "./configure" and "make install", which work just fine for me.
Re:dependency hell (Score:2)
Well, then you're not a system administrator. You're some guy who may (or may not) administer a few Unix boxes.
For a real sys admin, most of the work goes into standardization and documentation. She's working for a company that loses valuable time and money when its systems go down, and it loses vital flexibility if it's not able to replace the sys admin at a moment's notice (like when the sys admin gets hit by a bus). She recognizes this, and makes every effort to make herself replaceable.
In the real world, the very worst sys admins are always the "irreplaceable" ones -- the ones with so much specialized knowledge that only they have it. The horrible sys admins are the ones who can't be bothered to keep a standardized list of everything installed on the systems, and the prerequisites for each of those installations. That's not administration, it's voodoo. If you work at a company with an admin like that, get him removed from his job, immediately. If you are an admin like that, grow up, immediately.
Re:dependency hell (Score:2)
I am a system administrator, and I do keep things standardized and documented. I've been doing it since long before Linux (and therefore long before RPM) even existed. I've been doing it before SunOS became Solaris. The definition of being a system administrator is not Linux specific. Although I now do mostly Linux, it's most definitely not RPM based. Just because I don't do it the way you like it done, doesn't mean it doesn't accomplish the task.
Re:dependency hell (Score:2)
This is useful because it allows people installing that package to
Install in a uniform, non-interactive way. This way you can install your package as part of an automated update or rollout to your machines. At my workplace, `apt-get install cybersource-workstation' pulls down every RPM package needed to do work on a cyber workstation, plus config files for printers and similar items, and installs a couple of hundred pieces of software automatically across each machine. Doing this without packaging is difficult.
Intelligently deal with configuration files during upgrades
Install, uninstall, and more importantly be queried using the same mechanism, so other admins know what you've done (this can be solved with a lot of documentation, but you'd spend more time documenting the machine than adminning it).
Uninstall the package cleanly (make uninstall is unfortunately rare)
But when someone comes up with an "autospec" "autorpm" program which figures out everything to make the RPM file so it becomes as trivial to make it as to install it, I might be more interested.
It's nice that you're open-minded. RPM pretty much already comes with something like that, which automatically adds the libraries an application relies on to its dependencies when creating the package. Besides that, most apps generally only have a couple of dependencies anyway, and they're quite simple ("my printing config needs lpr installed; what package owns lpr? add that package to the dependencies list" - it's pretty easy).
Re:dependency hell (Score:2)
Having read the Maximum RPM book, I found that the steps involved in building an RPM package out of a source tarball are definitely NOT uniform, and most definitely are very interactive. So doing that means I have to take an interactive approach somewhere. RPM has to build the package from source, as would I.
I see value in having distributed packages in RPM when those packages are built right, and when they are available when needed. I don't see the value in building them myself, as that appears to take a lot of time. And time is the crucial factor. Every time I did an emergency security upgrade on a Redhat box, there were no RPMs, and I had no time to make one.
Also, I just don't have dependency problems on my Slackware-based systems. Things do work. The rare (maybe 2 or 3 at most) times I've had to download something else in addition to the package I was downloading, it was clearly obvious after RTFMing the README and INSTALL files. In most cases my custom-made source installer script for each package just works with the new version already. When it doesn't, it's fixable after an RTFM and/or one compile.
The Linux From Scratch project supposedly has someone working on making a setup that builds the whole thing from source and produces a big pile of RPM packages as a result. Maybe that might be something to look into when it becomes ready for prime time.
If "make uninstall" is not available, then how is RPM going to figure it out? Is it going to just see what packages are installed by "make install" and list them? What if a file is not actually installed by the Makefile because it's already present (e.g. it hasn't changed since the previous version)? What if a file is merely modified by the Makefile, but previously existed? (This would be considered to be a bad practice, but unfortunately is very real, and has to be dealt with)
Actually, I developed a set of system administration patterns around the mid-1980s which I still practice. Back then some of these things were hard to do, but were important. Nowadays they are less difficult. One of them is that packages are simply NOT trivially uninstalled. This means a careful analysis in advance as to what needs to be installed, or else I just live with the wasted space (disk drives these days are unlikely to fill up due to uninstalled packages that I was previously sure I needed). So basically, I don't uninstall, unless it's a security issue, in which case "rm" is a nice tool.
If I have all the RPM tools installed, and bring in the tarball (not extracted, yet), how many commands are involved in making an RPM package? How many edit sessions? Would this be scriptable (a different script for each package) to make it all happen in a single command? If the answer to that last one is yes, then perhaps there's some value here, such as integrating it with Linux From Scratch.
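The file-tracking question raised above (what does a package manager do when there's no "make uninstall"?) boils down to bookkeeping. A hedged sketch, with hypothetical paths, of the core idea: record every installed file in a manifest, so uninstalling is a mechanical replay of that list:

```shell
# Sketch of package-manager bookkeeping: log each installed file,
# so removal works even without a 'make uninstall' target.
PREFIX=/tmp/pkg-demo
MANIFEST=/tmp/foo.manifest
mkdir -p "$PREFIX/bin"
echo '#!/bin/sh' > "$PREFIX/bin/foo"       # the "installed" file
echo "$PREFIX/bin/foo" >> "$MANIFEST"      # record it at install time
# "Uninstall" = remove exactly what the manifest lists:
xargs rm -f < "$MANIFEST"
[ ! -e "$PREFIX/bin/foo" ] && echo "removed"
```

Real package managers also store checksums and handle the modified-file and already-present cases raised above; this sketch covers only the simple path.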
Re:What you are looking for is (Score:2)
Now that looks like a useful tool, whether one will build RPM packages, or something else. It looks like the installwatch [asic-linux.com.mx] part could be useful during Linux From Scratch installs. Hope it works the right way inside chroot, which should be fine by making it part of the base system.
Re:/usr/local obsolete? (Score:2)
/opt also important (Score:2)
When I recompile a standard package with different options (e.g., to match my environment, to be more secure by disabling some standard servers, etc.), intending to redistribute the packages to others, where the fsck am I supposed to put the results?
Hint: put it in the standard places and expect to be burned at the stake. I'll bring the torch. Non-standard builds that aren't clearly identified as non-standard tend to waste a *huge* amount of time, because people reasonably, but erroneously, assume that the package is the official one.
To be blunt, the decision of where to put files is simple and well-established:
1) The standard packages (from Red Hat, Debian, whoever) load into the standard locations.
2) Any modified packages distributed to others load into
3) Any modified packages that are not distributed to others load into
4) Any original package not distributed by the OS has historically gone into
5) Finally, "depot" style builds use their own trees, probably following the
As an aside, I've even been experimenting with a tool that rewrites Debian packages so they load into
The thing about the article that really pisses me off is that *all* of his advice can be applied equally well in all of these scenarios. The fact that I can mechanically change a package to use a different installation target really drives this home. Yet out of nowhere he makes an uninformed comment that makes life difficult for those of us distributing modified standard files. (Comment deleted for profanity)
This is plain wrong. (Score:5, Insightful)
The 'Linux' word is completely unnecessary - "Designing Good Applications" should suffice.
Application design couldn't care less about the OS that the application is planned to run on.
Not totally true... (Score:4, Interesting)
"Everybody loves graphical interfaces. Many times they make our lives easier, and in this way help to popularize software, because the learning curve becomes shallower. But for everyday use, a command at the console prompt, with many options and a good manual, becomes much more practical, making scripts easy, allowing for remote access, etc. So the suggestion is, whenever is possible, to provide both interfaces: graphical for the beginners, and the powerful command line for the expert."
This is wonderful advice in the Linux world. However, most Windows and Mac users, sadly, don't know what a command prompt is, let alone how to script it. This is a native concept to a Linux user.
I have no doubt that even in the Windows/Mac world a really powerful command-line feature for any given app would be super useful, but it is only so for those who have climbed that learning curve. For everyone else, it's better to focus on making the app do what it needs to do.
In any case, I'm sure I'll draw criticism for that comment. I'd prefer you didn't, though. The point I'm making is that slasho81's claim that all software should be designed the same regardless of the OS isn't quite so black and white.
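The dual-interface advice quoted above can be sketched in a few lines of shell. The `greet` tool here is hypothetical; the point is only the pattern of one code path for scripts and one for humans:

```shell
# A hypothetical 'greet' tool: argument-driven for scripts, prompting for humans.
greet() {
    if [ $# -gt 0 ]; then
        echo "Hello, $1"              # non-interactive: composable in pipes and cron
    else
        printf 'Name: '
        read name
        echo "Hello, $name"           # interactive: friendlier for beginners
    fi
}
greet world
```

A GUI front-end can then be a thin wrapper over the same non-interactive path, which is exactly the split the article recommends.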
Re:Not totally true... (Score:2, Informative)
In the Windows world, many applications do have powerful commandline features, as well as GUIs. However, you're trying to impose a unix-style of automation (shell script, tying a bunch of small commands together) on a system with its own methods of automation. Let me first say that there are tools you can install on Windows to do unix-style scripting, like Cygwin. I'm ignoring that for now. Typically, when you want to script something in Windows, you'll end up writing some vbscript or jscript that instantiates a COM object and does what it needs through that rather than running an app with some params and catching/piping stdin/stdout. I won't say which method is better, simply that they're different.
This is why *nix administration knowledge doesn't translate to NT administration knowledge, and vice versa. Too often people complain about NT admins trying to use linux or some other unix without ever thinking of the reverse scenario. Try writing a script to force a change of password on next login for some number of NT users. Now make sure it works for local users, NT4 domain users, and Win2K AD users. This is quite doable, but most unix admins look for a passwd-like app, find none, and give up, complaining that NT sucks because they have to go through a GUI to modify 50,000 accounts.
Mmmmnnnn... (Score:1)
With any luck... (Score:5, Interesting)
Documentation in
There will be things you don't like about the LSB and FHS. Personally, I reckon initscripts aren't config files and should live in
Re:With any luck... (Score:1)
It's amazing that Red Hat distributions are a *bit* similar to Windows sometimes.
An app which needs to be updated to make other apps work.
Oh, maybe I'm exaggerating too much. No matter whether you're using LSB or RHS, you have to deal with libc, gcc, and glib versions.
Thank God GIMP works with plug-ins!
Re:With any luck... (Score:2)
You can run `up2date -u' to download the newer version of RPM and all necessary security / bugfixes for Red Hat 7.2, plus their dependencies.
Re:With any luck... (Score:2)
I meant 6.2 (and yes, up2date works in 6.2).
Re:With any luck... (Score:2)
> standard installation, deinstallation,
> auditing, and management of relationships with
> other necessary software. Not some interactive
> self extracting tarball I can only use once
> unless I do the vendors job and package it
> myself (which unfortunately is necessary for
> modern sysadmins if they want to do their job
> properly).
No *nix is an island. RPM isn't the norm on even all Linux systems, let alone the rest of the *nix world. Don't forget that the x86 BSDs run Linux binaries too. Chaining dependencies like package managers and package management databases makes it more difficult for a lot of people who don't really need the overhead. The point of distributions is to allow someone to package software for it- so it's really the distributor's job to package for a specific package manager, not the vendor.
Paul
Re:With any luck... (Score:2)
Yes it is - it is the standard way of installing applications on Linux according to the LSB, which almost every Linux distribution you've heard of, with the notable exception of Slackware, aims to conform to.
Re:With any luck... (Score:2)
Yes, Debian includes RPM (so does Slackware), but it doesn't track dependencies on those systems, which makes it pretty worthless.
Agreed. The RPM situation on Debian needs to be improved. Adding some of
Re:With any luck... (Score:2)
If you already have the tarball, why are you trying to make a different package out of it? Don't forget that many applications are made for a variety of different systems. Linux isn't everything out there.
I'll agree that it is annoying to have packages making assumptions about where they put boot scripts. But Linux is about choice. There is more than one standard to choose from. Sounds to me like you are trying to make cookie cutters.
I hate symlinks, too. I want to get rid of all the symlinks in the whole /etc/rc.d. And guess what ... I did!
Re:With any luck... (Score:2)
FHS and LSB actually seem to be well thought out to me. The thing I find interesting is that the one thing I most dislike, isn't really required ... that being the SysV style init. The systems I run don't use it. They do have the directory structure in place, so if I did install a package that expects it, and wants to put a script there, it can. But it won't have any effect since I have a different init script system that's actually executed. One thing I like about this is that it leaves me in control of what starts at boot time.
I'm not going to complain to Redhat that they don't make their system exactly like Slackware. I know what their answer would be, and I use Slackware. So it seems to me that complaining to LSB that they don't do things exactly as I would is moot. I can do things my own way if I want, and they know I can ... they know everyone can. It's still choice. The goal of FHS and LSB is to make available a particular well thought out choice that meets a variety of needs, and suitable for most people. If it were the case that what most people feel fine about using was required for everyone, we'd all be using Windows right now and FHS and LSB wouldn't even be an option.
Now that's closed-minded... (Score:2)
Let's face it, the LSB is not an objective standard but a crappy attempt at a standard that has succeeded in nothing more than giving Redhat a supposed stamp of approval as not only the defacto Linux standard, but the dejure Linux standard. Why not just ditch the LSB and replace it with a sentence that says, "Must be Redhat compatible"? At least people wouldn't be kidding themselves.
Re:Now that's closed-minded... (Score:2)
You're missing the point of the LSB.
Given your Debian comments, I think you are referring to RPM as the default package manager. In all other respects, Debian is perhaps closer to the LSB than Red Hat.
What you and a lot of other people seem to miss is that there is no requirement to actually use RPM as the system package manager. The LSB requirement is to be able to install RPM packages. If those packages comply with the LSB, then there is no reason why Debian users shouldn't be able to install RPMs using alien. After all, the package itself should already conform to Debian Policy, as Debian Policy is merely an LSB implementation.
I'm a Debian user myself, but I'm getting a little tired of the Red Hat bashing that goes around.
Mart
RPM vs .deb (Score:2)
The LSB guys are smarter than you'd think. And Debian contributed too btw.
Of course, it might be nice if people wrote packages for each distribution and release but that doesn't happen in the "Real World." The LSB is a compromise that most people can live with.
The LSB's content and history indicate otherwise (Score:2)
Lets face it, the LSB is not an objective standard but a crappy attempt at a standard that has succeeded in nothing more than giving Redhat a supposed stamp of approval as not only the defacto Linux standard, but the dejure (sic) Linux standard.
The standard itself seems to speak otherwise.
I don't think you're very aware of the LSB, its content, or its history.
Ease of installing software (Score:2)
Aside from the lack of logic in your sarcasm there (where did I indicate that there aren't any apps for Debian?), there's very little difference in ease of use when installing software on either a .deb- or RPM-based distro. Many Debian folk seem unaware that tools like up2date, urpmi, and apt exist and come with most RPM-based Linux distributions. Personally, I apt-get update my Red Hat 7.2 machine from Freshrpms each day.
Re:Ease of installing software (Score:2)
Re:With any luck... (Score:2)
Re:With any luck... (Score:2)
Yes, but it's not standard (according to the LSB). That's why the links from the correct location to the incorrect one now exist. I agree that links are good, but symlinks don't solve every problem, and RH have indicated they will hopefully move to the correct location in future.
My two cents: (Score:2, Informative)
Anyway here would be my two suggestions:
1) Quit ripping off Microsoft and Apple, or at least think before you do. Using any Linux GUI you can immediately see the areas where the team said "let's make this more like Windows." On the one hand, this makes things more familiar and easy for new users, but on the other hand, it repeats a bunch of bad and arbitrary GUI conventions that should be re-examined. For instance, Mozilla has by default the same irritating password-remembering feature as IE. This should not be a default-on option; the security risk is huge, and whoever made that mistake at MS ought to be fired. Why do we continue it?
2) Drop the in-jokes, please: calling everything "GNU-something," putting funny little things in the help files, and so on. We want to convince people that we're making a professional-quality product, and nothing spoils that faster than giving the appearance of a hack.
and my suggestion to the non-developing members of the community would be:
Spending some of your time filling out bug reports and posting (well-thought-out, politely worded) suggestions is much more effective than posting "linux roolz" on public news services.
Here on Slashdot we like to speculate that Microsoft has hired a group of people to spread anti-open-source FUD in our midst. The lamers who do nothing but insult "Micro$oft" all the time are the free equivalent.
Re:My two cents: (Score:1)
People always talk about this, but I, being a former Windows user, found switching to Linux so much easier because of the similarities. I think KDE, for example, has copied Windows a heck of a lot, but they've also done their own thing in many respects. I think making GUIs even more customizable would solve this problem. But Linux is already very customizable. If you try hard, you can make it look nothing like Windows. Or use one of the older UNIX-style window managers.
"Drop the in-jokes please"
I kind of like that kind of thing. It makes me think that real people actually made the stuff. On those same lines, I also like being able to e-mail the developer that made such and such a program. If there is a major bug that I spot, I can let him know, or I can just say what I like and don't like about his program.
"spending some of your time filling out bug reports and posting"
Yes! This is super important. I just got involved with doing this, mostly with Mozilla and OpenOffice. It takes a lot of my time away from studying, but it's fun!
Re:My two cents: (Score:2)
The best example is the complete breakage of point-to-type, because the window managers default to having it turned off and don't ever test it. Point-to-type is obviously superior (try to find anybody who uses it for a week who wants to switch back; you can try this on Windows as well, as it is a registry switch). Yet the things that make point-to-type frustrating on Windows are copied in KDE and GNOME: raising windows on click, the inability to drag a window without raising it, the raising of "parent" windows when you raise a dialog box, and a lot of other little frustrations that were solved in the simpler window managers of ten years ago.
It is great to see them copying good ideas, but it is really sad that the ease of use of Linux is being thrown away in an attempt to make an exact clone.
Re:My two cents: (Score:2)
Yeah, I mean, nothing makes things look more professional than putting in a flight simulator for the credits [eeggs.com]!!!(WARNING -- ANNOYING JAVASCRIPT POPUPS!)
On GCC and others of the same ilk. (Score:2, Interesting)
I agree with the main thesis of the article. I just wish more packages would follow the ideas expounded, especially the FHS.
For example, gcc when installed from source defaults to putting itself into /usr/local/, which is quite understandable, because it was locally installed. Unfortunately, libgcc_s.so should place itself in /lib instead of /usr/local/lib, because some boot-time binaries need it (modutils, if I recall correctly). The first time I installed gcc from .tar.gz, my sysinit crashed because /usr wasn't mounted yet.
Other packages have this problem too: fileutils, bash, and modutils come to mind. Their default configuration is to install into /usr/local/ despite the fact that they are needed during boot. (init's message of "rm: no such file" puzzled me the first time I saw it.)
Now, I know that ./configure --prefix=/ fixes those things, but my point is that the user shouldn't have to learn from experience how to correctly install those packages. The packages should help him.
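The --prefix fix mentioned above can be put as a build-configuration sketch. This is illustrative only: whether --prefix=/ produces a sane layout depends on the individual package's install rules.

```shell
# Default GNU-style build: everything lands under /usr/local,
# which may not be mounted when early boot binaries run.
./configure --prefix=/usr/local

# For a package needed at boot time, install into the root
# filesystem instead, so its files end up in /bin and /lib:
./configure --prefix=/
make
make install
```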
Re:On GCC and others of the same ilk. (Score:2, Informative)
I think the reason GNU stuff defaults to /usr/local is because it comes from a background where most people would be installing the GNU utilities on UNIX systems that had vendor-supplied utilities like rm, etc.
john
Designing applications, hardware (Score:1)
And Power did their own R&D, thank you. Sure, they ripped off most of the early MLB layouts, but after Alchemy, the boards were all Power's own. And they were adding the features Mac users wanted -- like faster bus speeds and modern RAM. Not to mention decent video performance. Power was doing the Mac community a favor by getting the RAM ceilings out of the double digits.
If Apple was happy with less than 1% of the total PC market, then fine. Because when it comes right down to it, to hell with Apple. I go to Apple's computers because they're the best, but at that time, they weren't. The best you could get was some 8100 piece of shit and THEN what, you're stuck with Nubus expansion and a lot of proprietary video hardware. Meanwhile, Power was producing cutting-edge machines... some of which had hardware on them that wasn't even available for PCs yet.
Power gave half a shit about producing USABLE machines, made the way they were supposed to be made. Meanwhile, Apple was sitting around being weak and spineless. They got scared when the market was getting away from them, and so they yanked the licenses and killed the baby.
I know a guy who was at the top levels of Power's Technical Response department. (His business card said "Grand Technical Czar.") I know at an intimate level what was going on at Power, and it was not any kind of plotting effort to undermine Apple's success. They just didn't give a *fuck* about all the pissy little things that were wasting Apple's time. Most of the people working for my friend were recruited from Apple, where they were disgruntled and lethargic. But at Power, they found renewed energy not for Apple, but for the Macintosh platform. And they made it better than any other out there. By the time Power closed, their machines were running not just MacOS, but BeOS and LinuxPPC as well. Would that have happened with Apple getting in the way on things like bus speeds and cache sizes? While Apple was making machines that didn't have caches, Power was redeveloping the whole concept. We have Power to thank for the Backside Level 2 Cache technology, don't forget that.
The clones were all that kept Apple alive through its darkest time. Thanks to Power in particular, there are now more Mac die-hards than ever, and the Mac has made tremendous progress in its technology and features thanks to people like those who used to work at Power.
If anyone's to blame for Apple's problems, it's Apple.
Re:Designing applications, hardware (Score:2)
Had the cloning efforts gone through, we'd all be bitching about Apple's industry dominance, instead of Microsoft (or at least bitching more about it).
Re:Designing applications, hardware (Score:2)
Why don't they follow their own advice ?? (Score:1)
Strangely enough, all IBM software that I've had the pleasure to deal with (DB2, IBMHTTP and Websphere) try to install themselves by default to
/opt vs. RPM (Score:4, Insightful)
The author states that /opt is obsolete, and that everything should use RPM and install in /usr. Maybe this is the ideal in a system where everything is binaries-only, but I firmly believe it is poor administration practice.
The RPM database is binary and fragile. Once it is corrupted, the data describing what belongs to what goes out the window. RPM-packages have to be trusted not to clobber existing files or make changes to configuration files that one wants left alone. The alternative is per-application directories and symlinks (or a long PATH variable); there are tools which automate this, such as stow. The advantage is that the file system is - or at least should be - the most stable thing in the system. One can just examine a symbolic link to see what package it belongs to. This makes removing and updating applications very easy, and also makes it easy to see if there are any links left around from older installations. Removing an application is typically as simple as removing the corresponding application directory.
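The per-application-directory scheme described above can be sketched with plain symlinks (GNU stow automates exactly this bookkeeping). All paths here are hypothetical and use a scratch directory rather than a real /usr/local:

```shell
# One directory per package; the symlink itself records ownership.
mkdir -p pkgs/foo-1.0/bin bin
printf '#!/bin/sh\necho foo\n' > pkgs/foo-1.0/bin/foo
chmod +x pkgs/foo-1.0/bin/foo

# "Install": link the binary into the shared bin directory.
ln -s ../pkgs/foo-1.0/bin/foo bin/foo

# Examining the link shows which package a file belongs to.
readlink bin/foo    # -> ../pkgs/foo-1.0/bin/foo

# "Uninstall": remove the package directory and its links.
rm -rf pkgs/foo-1.0 bin/foo
```

No binary database is involved: the filesystem itself is the record of what belongs to what.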
RPMs which install in the /usr tree will require root privileges, whereas applications that can work from a self-contained directory can be installed by a non-privileged user in their own directory.
Also, /usr in principle can be mounted read-only. This will certainly slow down any attempts at installing software in it!
I have had Red Hat's installer corrupt the RPM database on multiple occasions, and I've had to override the dependency checking innumerable times in attempts to update packages under both Red Hat and SuSE, thus rendering useless the other purported benefit of RPM. New software typically comes in source form before RPMs, and the RPMs that do become available are almost always third-party ones that don't necessarily play well with your system. By the time a vendor-created RPM becomes available, the distribution version you are using is no longer actively supported, and you'll need 300MB of updates to other packages just to satisfy dependencies. I've been there; it's horrid.
Re:OS X app bundles, NOW!!! (Score:2)
Why hasn't this been adopted by the Unix community? It's just some special treatment of a damned folder! What else is needed for you to accept this solution?
the only good linux application (Score:4, Insightful)
Seriously, a lot of Linux applications try to duplicate the Windows world and end up being just as bad. For example, for audio software, a monolithic executable with GUI is a Windows-style application--hard to reuse, hard to extend. A bunch of command line applications that can be piped together and come with a simple scripted GUI, that's a good Linux application because its bits and pieces can actually be reused.
Re:the only good linux application (Score:2)
I always mix up the switches; I wish the command-line tools would get some attention too, unless there are specific reasons why they don't. Some use -f, some use --force. Pick one, or better yet, support both.
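Supporting both spellings is cheap. Here is a minimal POSIX shell sketch (all names hypothetical) of a tool that accepts either -f or --force:

```shell
#!/bin/sh
# Accept both the short and the long spelling of the same flag.
force=0
for arg in "$@"; do
    case "$arg" in
        -f|--force) force=1 ;;
    esac
done

if [ "$force" -eq 1 ]; then
    echo "forcing"
else
    echo "not forcing"
fi
```

Running it with -f, --force, or neither shows the two spellings behave identically.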
Re:the only good linux application (Score:2)
I always run ssh user@ftp.site.com, since it's the same as my email address on that box, or better yet, create and use an identical account on the local machine and just run ssh ftp.site.com.
It's like people running zcat somefile.tar.gz | tar xvf - when you can run tar zxvf somefile.tar.gz. Same for bzcat and, what, 'j' instead of 'z'? Maybe it helps some people keep it straight in their heads that they're literally piping the output of a zcat into a tar, like peeling layers of an onion. Me, I just like to save typing.
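With GNU tar the two styles really are interchangeable. A quick sketch using a throwaway tarball (zcat is equivalent to gzip -cd on GNU systems):

```shell
# Make a small gzipped tarball to play with.
mkdir -p demo && echo "hello" > demo/file.txt
tar czf demo.tar.gz demo
rm -rf demo

# Portable form: works with any POSIX tar, GNU or not.
gzip -cd demo.tar.gz | tar xf -
cat demo/file.txt    # -> hello
rm -rf demo

# GNU tar shortcut: the z flag does the decompression itself.
tar zxf demo.tar.gz
cat demo/file.txt    # -> hello
rm -rf demo demo.tar.gz
```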
Re:the only good linux application (Score:2)
peace,
(jfb)
PS: The reason some people zcat into tar is because not every tar is gnu-tar. Not every Unix user uses Linux, you know.
Re:the only good linux application (Score:2)
My articles on software quality (Score:2)
Think outside the Linux box (Score:5, Insightful)
Ditch the concept of spreading pieces of your app all around the FHS. This is organizationally similar to Microsoft's registry. It becomes a maintenance nightmare. Yes, RPM keeps track of some pesky details that let us get away with a messier install. Yes, the FHS does impose a common structure on what is an otherwise unstructured mess. But programmers are human beings, subject to the whims of ego, ignorance, and yes, even creativity and sheer brilliance. We're going to deviate from the suggested standards if given the opportunity, for one reason or another.
Give me one main point of access to everything the application does. If you need to use config files, give me the option of manipulating them through the application itself, preferably in the context of my current task. Give me one place to go looking for all the bits and pieces of the app. No, the FHS isn't simple enough. Give me context-sensitive documentation so I don't have to wander outside the app to get my job done. Don't make me wade through a spaghetti-code config file, with the documentation propped open on a separate screen to keep from getting lost.
Programmers are lazy. I should know, I am one. The last thing I want to do when I'm getting ready to release a program to non-techie users is tie up all the loose ends that seem ok to me, but not to the non-techie user. I'd rather document how to get a tricky task done than write the code that automates the tricky parts. I'd rather tell the user how to go tweak the flaky data in the database by hand than add another error-correcting routine. And it's more work to give the user one simple, full-featured point of entry to each piece of a complex application. But that additional work will make the application more usable, for the expert and the novice alike.
Re:Think outside the Linux box (Score:2)
First off, I have no problem with _some_ applications being under their own directories, especially if they're pre-designed to run chrooted as services. However, I _will_ demand that I can log my app logs to a partition mounted on
There are lots of considerations that go into good filesystem use, and several 'problems' we have now can be fixed by some more discourse, not by just taking an option and doing it because that's what you like as a developer. Developers are not the target market; users are (although many users may be developers).
Re:Think outside the Linux box (Score:2)
My configuration files are (almost) all under
I keep all state information under
Software I manage with RPM is all under
Application-specific binaries, however, I sometimes keep under
Many sysadmins with different needs are prone to NFS-mount system binaries and/or home directories, which necessitates further classification work.
The quote of the moment (Score:4, Funny)
That is what I found in the fortune at the bottom of this thread.
Designing Linux Applications (Score:2, Funny)
1) Command line only. We all know real users only use command line.
2) Don't comment your source code. Ever. It just wastes valuable programming time.
3) No installation/usage documentation. If they deserved to use your app, they can go figure it out themselves. What are you, tech support?
If you follow these simple instructions, you are guaranteed a rabid cult following, or at the very least a feeling of superiority over your users.
</sarcasm>
Partly in Portuguese (Score:2)
The article is still partly in Portuguese:
sobretudo = above all
destacado = detached (apparently he means that KDE is the most developed)
I emailed the author about the HTML problems.
Partly in Portuguese, 2 (Score:2)
portanto = consequently
clareza = clarity
obrigatoriamente = necessarily
garantindo = guaranteeing
separando = separating
The article is actually about creating high-quality installation programs, so the title is misleading. Here is a link to a copy that is better formatted, but not as well edited:
Creating Integrated High Quality Linux Installation Programs [geocities.com]
FHS is the FileSystem Hierarchy Standard [pathname.com], an important standard.
I love Brazil and Brazilian culture, but not everything about the culture is wonderful. Brazilians generally don't like to give attention to detail.
An open letter to Avi Alkalay (Score:2)
This line length is quite hostile to the reader; human-factors experts say that line length optimally should be on the order of 60 characters; much longer lines--such as yours--make the text very difficult to read. This principle is even evident in the HTML source for your article, which (one observes) uses indentation for readability, together with an evident right-margin column of 75 and a mean line length of 41.0485 characters. You have preserved readability for *yourself* but have seriously compromised it for others.
Please reconsider!
Thank you
Re:An open letter to Avi Alkalay (Score:2)
Unix used to be friendlier! (Score:2, Insightful)
Do you remember...
/August, feeling old today.
Re:Unix used to be friendlier! (Score:2)
That was pre-open source, when paying customers demanded well-rounded, finished systems...
Hehe... (Score:2)
Just struck me as funny.
Good Documentation (Score:2)
Re:User friendly? (Score:1, Informative)
Re:Integrated applications (Score:2, Insightful)
Re:Init levels (Score:3, Interesting)
My sparc runs fine at run level 5. What would you like to see the various run levels be for? I changed mine around, and it looks like: