Petreley On Simplifying Software Installation for Linux
markcappel writes "RAM, bandwidth, and disk space are cheap while system administrator time is expensive. That's the basis for Nicholas Petreley's 3,250-word outline for making Linux software installation painless and cross-distro." The summary paragraph gives some hint as to why this isn't likely to happen anytime soon.
Word (Score:5, Funny)
Was it necessary to include the word count? It's hard enough to get Slashdotters to read a small article and post intelligently; this can't help...
Re:Word (Score:4, Interesting)
Petreley is undoubtedly getting the hang of this "writing thing"...
Seriously, though, however smart and logical his conclusions are, one thing bothers me: the installation should be simplified, but done "right", too.
I mean, there are other objectives besides being easy.
Last week I tried to install Red Hat 8.0 on a 75 MHz Pentium with 32 MB of RAM (testing an old machine as an X terminal). It didn't work.
The installation froze at the first package -- glibc (it was a network installation) -- probably due to lack of memory (as evidenced by free et al.).
Why? It was a textmode installation. I know from past experience that older versions of Red Hat would install ok (I used to have smaller computers).
My suspicion is that Red Hat has become too easy -- and bloated. Mind you, I opted for Red Hat instead of Slack or Debian because of my recent experiences, in which RH proved to recognize hardware better than the others.
I hope Petreley's proposed simplification, when implemented, takes size into consideration. As it stands (using static libs, for instance), it seems to go the other way.
The article as a whole, though, presents neat ideas, and it's one of the best I've read recently.
Java (Score:2, Informative)
Autopackage comes to mind (Score:5, Informative)
from the site:
* Build packages that will install on many different distros
* Packages can be interactive
* Multiple front ends: best is automatically chosen so GUI users get a graphical front end, and command line users get a text based interface
* Multiple language support (both in tools and for your own packages)
* Automatically verifies and resolves dependencies no matter how the software was installed. This means you don't have to use autopackage for all your software, or even any of it, for packages to successfully install.
Re:Autopackage comes to mind (Score:2, Offtopic)
Static linking problems (Score:5, Insightful)
Re:Static linking problems (Score:5, Interesting)
Take zlib as an example of a library that is commonly used. When a security hole was found in zlib a few months ago, dynamically linked packages could be fixed by replacing the zlib library. This is as it should be. But those that for some reason disdained to use the standard installed libz.so and insisted on static linking needed to be rebuilt and reinstalled.
(OK I have mostly just restated what the parent post said, so mod him up and not me.)
And that's quite apart from the stupidity of having ten different copies of the same library loaded into memory rather than sharing one between processes (RAM may be cheap, but not cheap enough that you want to do this... consider also the CPU cache).
A similar problem applies to an app which includes copies of libraries in its own package. This is a bit like static linking in that it too means more work to update a library and higher disk/RAM usage.
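As a rough illustration of why the dynamic case is easier to audit, a sketch like the following (the library name and the directories scanned are just examples) lists the binaries that are fixed simply by replacing libz.so; statically linked copies never show up in ldd output and have to be rebuilt:
# list dynamically linked users of zlib in a couple of example directories
for f in /usr/bin/* /usr/sbin/*; do
    ldd "$f" 2>/dev/null | grep -q 'libz\.so' && echo "$f"
done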
Finally there is a philosophical issue. What right has FooEdit got to say that it needs libfred exactly version 1.823.281a3, and not only that exact version but the exact binary image included in the package? The app should be written to a published interface of the library and then work with whatever version is installed. If the interface provided by libfred changes, the new version should be installed with a different soname, that is libfred.so.2 rather than libfred.so.1. It's true that some libraries make backwards-incompatible changes without updating the sonames, but the answer then is to fix those libraries.
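For what it's worth, a minimal sketch of how that soname scheme looks at build time (the library name and version numbers are made up for illustration):
# an incompatible interface change gets a new soname; the old one stays installed
gcc -shared -fPIC -Wl,-soname,libfred.so.1 -o libfred.so.1.0.0 fred_old.o
gcc -shared -fPIC -Wl,-soname,libfred.so.2 -o libfred.so.2.0.0 fred_new.o
ldconfig    # maintains the libfred.so.1 -> libfred.so.1.0.0 etc. symlinks
# apps linked against libfred.so.1 keep working; new builds pick up libfred.so.2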
Not just a linux problem (Score:4, Interesting)
As with Linux, if there's a bug in the library you have to either update one file or search through the computer and update all instances. But, as with Linux, the update can mess up some programs; others might be poorly coded and not run with newer versions of the DLL. I've seen this last problem on both Windows and Linux; it looks like the programmer did "if version != 3.001 then fail" instead of "if version < 3.001 then fail".
If everyone is forced to use the same library, you get these problems and benefits:
--1 easy point of update
--1 easy point of failure
--older software may not run with newer versions
--programmers may insist on a specific version number
--updates to the libraries can benefit all programs; if kde or windows gets a new file open dialog box, then all programs that link to the common library can have the newer look and feel by updating just one library.
On the other hand, if you let each program have its own, you get these problems and benefits:
--difficult to update libraries when bugs are found
--can run into problems if a different version of the library is already loaded into memory (does this happen with linux?)
--guarantee that libraries are compatible with your app
--compartmentalization; everything you need for an app is in its directory. Want to uninstall? Just delete the directory. No need to worry that deleting the app will affect anything else.
--no weird dependencies. Why does app X need me to install app Y when they clearly aren't related at all? The answer is shared libraries. Which is why many people like Gentoo and building from source.
Microsoft has waffled back and forth on the issue. Under DOS, everything just went into one directory and that was it. Windows brought in the system directory for shared DLLs. Now the latest versions of Windows are back to having each app and all of its DLLs in one directory.
Personally, I think compartmentalization is the key, provided we get some intelligent updaters. If libthingy needs to be updated, the install procedure should do a search and find all instances of the library, back up existing versions and then update all of them. This wouldn't be that hard to do.
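Something along those lines could be as simple as the following sketch (the library name and the location of the patched file are hypothetical, and a real updater would also need to check versions and handle statically linked apps separately):
# naive updater: find every private copy of libthingy, back it up, drop in the fixed build
find / -xdev -name 'libthingy.so*' 2>/dev/null | while read -r lib; do
    cp -p "$lib" "$lib.bak"              # keep the old copy in case something breaks
    cp /tmp/libthingy-fixed.so "$lib"    # overwrite with the patched version
done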
Not always a problem (Score:5, Interesting)
Important features of the way AmigaOS libraries worked:
* All libraries were versioned, but not on the file system level. Each library contained a version number in its header.
* Versions of the same library were always backwards compatible. This was Law. Software using an old version of a library must continue to work on future versions. This also meant that developers had to think out their library API beforehand (because you would have to maintain that API). Libraries could still be extended with extra functions, though.
* Application programs had to 'open' libraries before using them. When opening a library an application would specify the minimum version that it required of the library. (If no matching or later version was found then the program would have to exit gracefully).
* There tended to be few (compared to Linux anyway) libraries. Libraries tended to be biggish. A few big libraries instead of many tiny libraries. This made them manageable.
* The backwards compatibility rule/law for libraries meant that software could bring its own version of a library and update the existing version of that library, but *only* if it was a more up-to-date version.
As a previous poster pointed out, a lot of the problem would disappear if people (library writers) maintained compatible interfaces for the same library soname. I'm pretty sure that this is the way it was designed to work.
anyway, a FYI.
--
Simon
Re:Not always a problem (Score:2)
Re:Not always a problem (Score:3, Interesting)
That's not to say that things are not changing. The KDE project for example, aims for maintaining binary compatabil
Re:Not just a linux problem (Score:3, Insightful)
Re:Not just a linux problem (Score:3, Insightful)
Re:Not just a linux problem (Score:3, Informative)
Back in the Win16 days, M$Office/WinWord didn't play nice with anything if it could avoid it. The trick was to install M$Office or WinWord *first*, because otherwi
Re:Static linking problems (Score:2)
Easily solved, all you have to do is,
1. Go the Bank and change $1 for 5000000 Indian Rupees
2. Hire 1000 Indian programmers with above currency
3. Tell the programmers to recompile all statically-linked applications with the new libraries
4. Hire unemployed American programmer [oddtodd.com] for $20000 to translate the program from Hindi to English
5. Charge large corpo
Re:Static linking problems (Score:3, Informative)
but $1 -> 45 Indian Rupees
So you wouldn't be able to hire 1000 Indian programmers with that exchange rate.
Also your racist comments show how little you know about Indian programmers, some of whom are big names in the American computer industry.
Also, almost every Indian programmer is well versed in English, and most of them can understand/write better English than you, so you won't have to hire a crappy American undergrad to do the translation work.
Re:Static linking problems (Score:2)
<AOL>Me too.</AOL>
The malformed zlib attack comes to mind. There are several slightly different static copies in the kernel, never mind the many static copies in an endless variety of programs. Red Hat Network was shitting errata for a week.
Re:Static linking problems (Score:2)
Gentoo (Score:4, Informative)
Doesn't get any simpler than that. Come back anywhere from a minute to 12 hours later (depending on the package), and *poof*, new software. Ditto BSD ports.
Re:Gentoo (Score:5, Interesting)
For me it was more like
1. emerge
2. come back in 8 hours and then:
a. see a whole bunch of compilation errors,
b. dependencies were not sorted out correctly, so nothing works
c. a combination of the above
I especially liked (and still do) the optimization potential (whereas Debian is stuck at i386), but it didn't work for me.
Gentoo is not the only source based distro (Score:2, Informative)
In descending order of (my) preference:
Re:Gentoo (Score:3, Insightful)
This is typically a result of a technique known as 'skimming the documentation' and 'thinking you know how to do it yourself'.
People are too quick to blame the distribution (any distribution, even Debian) when something goes wrong.</rant>
Re:Gentoo (Score:2)
There are also some post commit tests that will rebuild the port with every change.
I've not run into a compile issue since -- but I also don't install anything by hand.
Re:Gentoo (Score:3, Interesting)
Re:Gentoo (Score:2)
That's if you can get through the complexity of the install, which requires that you do everything yourself.
Re:Gentoo (Score:2)
Re:Gentoo (Score:4, Informative)
emerge doesn't pick the latest versions of stuff either... you end up installing from source by hand anyway. E.g., I need the Kerberos-enabled ssh to work with my network. I had krb5 in my USE flags but it didn't build with Kerberos. It also built an out-of-date version. I had to manually go into the package directory and force it to build the latest version (which emerge insisted didn't exist), which still didn't build with Kerberos, so I gave up on it and FTP'd a prebuilt one from a Debian machine.
Also, the dependencies suck rocks. I wanted to build a minimal setup and get it working, so I decided to install links. Bad move. It pulled in svgalib (??), most of X, and about a million fonts - for a *text-mode* browser.
12 hours is also a bit optimistic - on a dual-processor machine I had it building for 3 days... and at the end half the stuff didn't work anyway. Luckily I can get a Debian install on in 20 minutes with a following wind, so I got my machine back without much hassle.
Re:Gentoo (Score:2)
Re:Gentoo (Score:3, Interesting)
Hmm, this seems to ignore one of the big advantages of an always-build-from-source distribution. If you were using Debian or RedHat and the links package required svgalib, I'd think 'fair enough: it was probably built with the svgalib drivers'. But if you are building from source there
Re:Gentoo (Score:3, Informative)
Correct, emerge doesn't automatically pick 'the latest stuff'. Which distro does? The true route to madness for any distro designer is to ensure all the default installs are cutting edge. Forcing a higher version is simple, use 'emerge explicit-path-to-ebuild'. Typing 'emerge icewm' builds the d
Re:Gentoo (Score:4, Interesting)
I have installed windows, redhat, and gentoo. Yes, windows and redhat have much prettier interfaces. However, I have spent countless hours trying to install windows and redhat because the install tried to do something I didn't want it to do and crashed.
Windows 2000: the box has IDE and SCSI drives. I wanted windows on the SCSI drive as C. I had to take the IDE drive out to get it to let me. I don't even know where to start installing windows 2000 on a box without a CD-ROM drive.
RedHat: Anybody ever try installing RedHat onto a new box using ReiserFS and network install when the card is listed but the module won't load? I gave up and installed a CD-ROM drive.
Gentoo's install does take a long time but I never had these problems. When I was selecting where to install, I just used
Slackware has a similar install procedure (all console) but it doesn't compile everything like gentoo.
So the point is, "Assuming redhat, freebsd, windows and mac osx installers installed and setup how you like" is a very big assumption.
Re:Gentoo (Score:2, Interesting)
I remember back in the BBS days, it took me an hour or two to realize that...
Continue (Y/n):
Meant Y was the default. Not that Gentoo does or doesn't do it. But it's guilty of the same thing OpenBSD does. The interface is very VERY simple. Just not intuitive.
For the general case, win2k and redhat have intuitive install interfaces. Skip the actual working or not working of some driver
Re:Gentoo (Score:2)
Although I will admit, you need to have the BuildRequires packages installed - rpm tells you if they're not, but won't download and install them automatically... some tool like urpmi or apt-rpm would be needed for that part.
But some of the problems another person mentioned with emerge can sometimes apply to rpm --rebuild too. That is, a package doesn't state its build dependen
Re:Gentoo (Score:2, Insightful)
What I know is that Gentoo is perma-beta software. When the hell are they going to stop putting updates in the main release and make it possible to get security-only updates as a default?
The other day I did an emerge -u world and an application I'd just installed the day before broke, with an error message claiming my nvidia drivers weren't current enough, even though they were.
And this is common. My entire KDE system broke. And kep
Fallback (Score:4, Insightful)
Place user applications in their own directories
This single rule alone would eliminate most of the problems. It enables fallback to manual package management, it resolves library conflicts, it avoids stale files after uninstallation, and it prevents the damage that can be caused by overwriting files during installation and then removing them during uninstallation.
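A rough sketch of what such a layout might look like (all the directory names here are hypothetical):
/opt/fooedit-1.2/bin/fooedit         # the executable
/opt/fooedit-1.2/lib/libfred.so.1    # private copies of the libraries it ships with
/opt/fooedit-1.2/share/              # docs, icons, templates
ln -s /opt/fooedit-1.2/bin/fooedit /usr/local/bin/fooedit   # one symlink onto the PATH
rm -rf /opt/fooedit-1.2              # uninstall = delete the directory (and the symlink)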
software reliability == complex install process??? (Score:2, Funny)
If a piece of software is extremely complex to install, one can safely assume it is reliable.
If a piece of software is easy to install, it is not reliable, e.g. MS products.
But seriously, I don't think applications are complex to install; it's just that a learning curve is involved in doing anything.
Re:software reliability == complex install process (Score:2)
What needs to be made easier is making good third party packages. It needs to be as easy as making a tarball or using WinZip. Obviously, the distros can't keep up with providi
General bad attitude towards anything easy (Score:5, Insightful)
As I see it, many would like to keep the learning curve very, very steep and high to maintain their exclusivity and "leetness" if you will.
For instance, the post above mine displays the ignorant attitude that "easy to install" by definition equals "unstable software" and has only a jab at MS to cite as a reference.
That's truly sad (though that may just be a symptom of being a slashdot reader.)
As I see it, not everyone finds:
make
make install
to be intuitive, much less easy, never mind what happens if you get compiler errors, or your build environment isn't the one the package wants *cough*mplayer*cough*, or whether you even have said development environment.
Nor does it mean the software is any more stable. Could be just as shitty. _That_ is a matter of the developer of the program, not the install process.
Re:General bad attitude towards anything easy (Score:2)
People who want to change the build process to make 'installation' easier are barking up the wrong tree. Building the software from source is something that the packager should do, or at le
Building your own packages is not always the way (Score:2, Informative)
Basically your suggestion amounts to building a binary package from a source package as a stage to having it actually installed. While that is something I actually do (using Slackware package.tgz format), and even recommend it to many people, it's not necessarily suitable for everyone or every purpose. I still run an experimental machine where everything I install beyond the distribution is installed from source. That's good for quickly checking out some new package to see if it really does what the blur
LeetLinux distro (Score:2)
If a bunch of Linux geeks want to have a hard to install Linux system in order to raise the leetness level, they can always put together their own "LeetLinux" distribution. We can't (and shouldn't) stop them. There shouldn't be a requirement that all distributions be "easy". This even applies to the BSD's. I personally find the command line install of OpenBSD more flexible (and even easier anyway) than the menu driven install of FreeBSD. But as I use Linux mostly, my preferred leetness distro is Slackw
Re:General bad attitude towards anything easy (Score:2)
Where are these theoretical more-leet-than-thou users? Ok, maybe a few hanging out on IRC channels, but in general, this is a ridiculous myth. Linux users want easy-to-use as much as anyone. However, in general, we don't want to sacrifice ease of use for advanced users just for a questionable gain in ease of use for new folks. I can see how someone in frustrat
"Easy to use" !== "Dumbed down" (Score:3, Interesting)
Look at the apps that have options to use either basic or advanced interface. Selecting the basic interface doesn't mean that the app somehow no longer knows any of its more 1337 functions; it just means they aren't in the user's face, baffling the newbie with a million option
No, please (Score:5, Insightful)
Re:No, please (Score:3, Insightful)
I say this as a longtime Linux user and booster: if installing software on Windows were one-tenth as hard as it often is on Linux, everyone would be using Macs.
Ease of use really should be the ultimate goal with all appliances and software. Would it really be some benefit if cars were twice as difficult to use?
To take your example, Windows servers are not more vulnerable because they are easier to use, they are/were
Re:No, please (Score:4, Insightful)
But, I have to say this article was so far off the mark that it's funny. `Let's take all the ideas from Windows of what an installation package should be, and apply them to Unix.' No, I think not.
RANT:
I dare say the biggest problem is that everyone is going in the wrong direction. RPM is the standard, yet it sucks. Binary packages separate the `devel' portions into another package, making the system fail miserably if you ever need to compile software. It has piss-poor dependency management. Instead of checking if a library is installed, it checks if another RPM has been installed. If it has been installed, it assumes the library is there. If it isn't installed, it assumes the library isn't there... Crazy! To have an RPM depend on a library I've compiled, I have to install the RPM of the library, then compile and install my own over the top of the RPM's files. RPM is like the government system of package management. You have to do everything their way, or it won't let you do anything at all.
I liked Slackware's simplistic packages more than anything else. At least there I could just install the package, and it wouldn't give me shit about dependencies. If I didn't install the dependencies, I got an error message, but it wouldn't refuse to install or try to install something for me automatically. I can take care of the dependencies any way I want. RPM is supposed to save you time, but instead, because of its dependency management, it used up far more of my time dealing with its quirks than it could have *possibly* saved me.
Another thing I find annoying is that there is only one version available. You can only get a package compiled without support for XYZ... Well, that's fine if I don't have XYZ, but what if I do? I like the ports system: although it does some things automatically that I don't like (I would rather it asked me), it doesn't step on your toes much at all, it gives you all the customizability you could want (and only if you want it), and it's much simpler and faster than untarring and configure/make-ing everything.
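For what it's worth, rpm can be told to behave in roughly that Slackware-like way, though it's a blunt instrument (the package file name below is just a placeholder):
rpm -qpR foo-1.0-1.i386.rpm             # list what the package claims to require
rpm -ivh --nodeps foo-1.0-1.i386.rpm    # install anyway, skipping the dependency check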
OpenStep / OS X frameworks (Score:5, Informative)
Frameworks (very roughly) are self-contained library bundles containing different versions, headers, and documentation. Java jar libraries are somewhat similar.
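Roughly, a framework bundle is laid out something like this (a simplified sketch, not the complete layout):
Zlib.framework/
    Versions/A/Zlib              # the actual shared library ("A" is the version)
    Versions/A/Headers/          # headers ship alongside the library
    Versions/Current -> A        # symlink selects the default version
    Zlib -> Versions/Current/Zlib
    Headers -> Versions/Current/Headers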
The problem is that using frameworks requires major changes to the tool chain - autoconf et al, cc, ld etc.
Apple shipped zlib as a framework in OS X 10.0 (IIRC) but getting unix apps to use it was very difficult. Apple now only seem to use frameworks for things above the unix layer.
I suspect there are lessons to be learned from this. As another poster said, evolution rather than revolution is more likely to succeed.
emerge maybe easy. (Score:4, Funny)
Insert cd.
login in from the command line
net-setup ethx
cfdisk
mkreiserfs
wget ftp.gentoo-mirror.net/pub/gentoo/stage1.tar
tar xzjdocmnaf stage1.tar
mkdir
chroot
scripts/bootstrap.sh
(10 hours later)
emerge ufed
edit use flags
emerge system
emerge gentoo-sources
configure kernel, having to do lspci and google obscure serial numbers to find out what modules to compile
install kernel
muck around with its non-standard bootloader
install cron and sysloggers
umount
reboot
spend two days sorting out the kernel panics
wait all week for kde to emerge.
processor dies of overwork
huge nasty electricity bill arrives after running emerge for over a week 24/7
in other words, no
Re:emerge maybe easy. (Score:3, Insightful)
Re:emerge maybe easy. (Score:2)
Re:emerge maybe easy. (Score:2)
Re:emerge maybe easy. (Score:2)
Are you saying Linux should be restricted to use by the expert elite??
In that case, how do you expect it to ever make any serious penetration into the desktop market and user environment, where 99% of users are NOT experts?? Or are you saying linux is only suitable for use in servers and ivory towe
Re:emerge maybe easy. (Score:2)
I don't. In fact, I'd sooner that it didn't. The end-user market is already bad enough in the way of computer illiteracy and user interface frustration without adding Linux to the mix.
As long as there are people out there who believe "Internet Explorer" is "the Internet" and "Outlook Express" is "e-mail", Linux has no place in the desktop market. No
Re:emerge maybe easy. (Score:3, Insightful)
It has nothing to do with my "eliteness", it has to do with the readiness of the general populace for something as architecturally advanced as Linux.
What will happen (mark my words) is that people will start logging in as root all the time - for ease of use - and Linux on the desktop will come to a screaming halt as trojan after trojan annihilates desktops all o
why? (Score:3, Informative)
apt-get install foo    # installs foo and any prerequisites.
Apt-get can also download and build the source of the package if needed. The biggest advantage is that 'apt-get update && apt-get upgrade' will upgrade every single installed package to the latest version. I can get binaries for all the architectures I run (mostly PPC and x86).
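The source route mentioned above looks roughly like this (the package name is just a placeholder):
apt-get build-dep foo     # pull in whatever is needed to build foo
apt-get -b source foo     # fetch the source package and build binary .debs from it
dpkg -i foo_*.deb         # install the result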
On my laptop, Blackdown and NetBeans(unused at the moment) are the only two programs that I had to install manually. Those who need a pretty frontend can use gnome-apt or the like.
It's hard enough making all the packages of one distro play nice with each other; imagine the headache of attempting it with multiple ones!
Shipping software on disc without source is such a headache. The program will only work on platforms it was built for, it will be built against archaic libraries, and it can't be fixed by the purchaser.
As for your "universal installer", it should work as follows.
tar -xzf foo.tgz
cd foo
Any idiot can manage that.
Here's what we did... (Score:5, Interesting)
Installation goes like this:
1. tar xzvf textmaker.tgz
2. There is no 2.
After that, you simply start TextMaker and it asks you where you want to place your documents and templates. No muss, no fuss, no external dependencies except for X11 and glibc. People like it that way and we intend to keep it this way with our spreadsheet and database [softmaker.de]
Martin Kotulla
SoftMaker Software GmbH
Re:Here's what we did... (Score:2)
Re:Here's what we did... (Score:3, Insightful)
Typical windows installation (Score:2)
1. Insert CD
2. There is no 2.
Executive summary: (Score:4, Insightful)
Re:Executive summary: (Score:2)
Re:Executive summary: (Score:3, Insightful)
Funny you should ask that. Yes, it could be much easier. Software Update pops up a window every so often with a list of software for which I don't have the latest versions. I uncheck anything that I don't want to install, and click a button. Minutes later, the software is downloaded and installed, and I'm prompted to restart the computer if necessary. Unsurprisingly, Software Update is a Mac OS X feature.
apt-get is a wonderful foundatio
How about this (Score:4, Insightful)
No no no! (Score:5, Interesting)
What's even less cheap is bandwidth. Not everybody has broadband. Heck, many people can't get broadband. I have many friends who are still using 56k. It's just wrong to alienate them under the philosophy "bandwidth is cheap".
And just look at how expensive broadband is (at least here): 1 mbit downstream and 128 kbit upstream (cable), for 52 euros per month (more than 110 Dutch guilders!), that's just insane. And I even have a data limit.
There is no excuse for wasting resources. Resources are NOT cheap despite what everybody claims.
Re:No no no! (Score:3, Interesting)
256 megs of good RAM is 35 euros or less, or 25 euros for some cheap PC100 RAM. If you can't call that cheap, let me remind you that years ago it was $70 (62 euros) for 8 megs of RAM. And a 200 gig Western Digital drive is less than 200 euros on Newegg [newegg.com], which is a very good computer hardware site. 60 gigs is like 50 euros. I'm sorry you have to live in a country where hardware is so expensive, but where I live it's incredibly cheap.
Re:No no no! (Score:2)
In the store. Heck, I checked out several stores, and even advertisements in computer magazines! The DIMM modules I bought at Vobis were actually the cheapest modules I could find.
This is The Netherlands. I don't know where you live.
Re:No no no! (Score:2, Funny)
Re:No no no! (Score:2)
Re:No no no! (Score:2)
Re:No no no! (Score:2)
We don't have Flemish computer shops in this country.
Re:No no no! (Score:2)
Re:No no no! (Score:2)
Whoa! That IS expensive! Let me quote some prices:
256MB DDR266 CL2: 44e (could be had for 34e if you want generic brand)
Western Digital Caviar SE 120GB EI
Re:No no no! (Score:2)
Re:No no no! (Score:2)
Re:No no no! (Score:2)
Considering back in 1992 I paid US$300 for an 80 Meg drive, and RAM was $50 a megabyte...
Resources *ARE* cheap.
Re:No no no! (Score:2)
Re:No no no! (Score:2)
Corporations who *do* have a paid sysadmin should use the RPMs or whatever provided by their vendor. RedHat's up2date resolves dependencies automatically, provided that you're using RPMs made by RedHat. Of course, all you *should* use is RPMs made by RedHat anyway if you're a corporation, because those packages are supported by RedHat.
Re:No no no! (Score:2)
No I didn't. I checked out several stores. I read lots of advertisements in computer magazines. Nowhere could I find DIMM modules that are compatible with my VIA motherboard and are cheaper.
Petreley should do it (Score:2, Funny)
It is not just the ease but the language... (Score:5, Insightful)
Etc...
Re:It is not just the ease but the language... (Score:2)
Moreover, your scenario is the complete antithesis of choice, which is a major driving force for using Linux. People choose Linux or another OS for a variety of reasons, and they choose their applications accordingly. For example, when choosing Linux because it performs well on slower hardware, you'll also want to choose leaner applications. If we didn't have choice we could just as well consider this:
Would you like to use
Re:It is not just the ease but the language... (Score:2)
This is absolutely absurd. If anything it is promoting choice, by providing people who would otherwise be unable to choose another OS over Wintel and Mac OS X the choice of Linux. What I am suggesting does not in any way mean that there would be a lack of an "expert" setup that provided all of the options (no matter how obscure and arcane) you would expect and demand of an open source project. In fact, it may even provide greater granularity in the setup
Re:It is not just the ease but the language... (Score:2)
I'm Glad It Isn't Easy (Score:2, Funny)
I'm glad it isn't brainless. I'm glad it's different across distros. Then I can pick and choose what I like.
The less each distro is like any other, the happier I am.
This is why I enjoy Linux.
Not too big an issue (Score:2)
Not that
Apple has it right (Score:5, Interesting)
As I see it, the following things need to happen to really make application installation be very clean under any Unix like operating system:
Too damn many times I've tried to install FOO, only to be told by the packaging system "FOO needs BAR". But FOO doesn't *need* BAR, it just works "better" if BAR is present (e.g. the XFree packages from RedHat requiring kernel-drm to install, but working just fine (minus accelerated OpenGL) without it).
Were venders to do this, then a program install could be handled by a simple shell script - untar to
The system could provide a means to access the HTML (a simple, stupid server bound to a local port, maybe?) so that you could browse all installed apps' help files online.
As a final fanciness, you could have an automatic process to symlink apps into a
ls
and see them.
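In other words, an install and uninstall could boil down to something like this (all of these paths are hypothetical):
tar xzf foo-1.0.tgz -C /opt/apps                     # everything lands under /opt/apps/foo-1.0
ln -s /opt/apps/foo-1.0/bin/foo /opt/apps/bin/foo    # optional: symlink into a shared bin directory
rm -rf /opt/apps/foo-1.0                             # uninstall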
Re:Apple has it right (Score:2)
As regards 1, I believe the intention of the
3 - Great. I always wondered why this hadn't been done before on popular unixes. The two reasons I came up with were a) search times; b) possible command-line security implications and search ambiguities (i.e. it doesn't do you any good having
Instances don't really matter for static linking (Score:4, Informative)
Nicholas Petreley writes: [linuxworld.com]
Linking libthingy statically into application foo does not preclude the sharing. Each of the instances of application foo will still share all the code of that executable. So if libthingy takes up 5K, and you launch 10 instances, that does not mean the other 9 will take up separate memory. Even statically linked, as long as the executable is in a shared linking format like ELF, which will generally be the case, each process's VM will be mapped from the same file. So we're still looking at around 5K of real memory occupancy for even 1000 instances of application foo. The exact details will depend on how many pages get hit by the run-time linker when it has to make address relocations; with static linking there is less of that anyway. Of course, if libthingy has its own static buffer space that it modifies (bad programming practice in the best case, a disaster waiting to happen with multithreading), then the affected pages will be copied on write and no longer shared (so don't do that when developing any library code).
Where a shared library gives an advantage is when there are many different applications all using the same library. So the "shared" part of "shared library" means sharing between completely different executable files. Sharing between multiple instances of the same executable file is already done by the virtual memory system (less any CoW).
The author's next point, about sharing between other applications, is where the size of libthingy becomes relevant. His point is that if libthingy is only 5K, you're only saving 45K by making it a shared (between different executables) library. So that's 45K more disk space used up and 45K more RAM used up when loading those 10 different applications into memory. The idea is that the hassle savings trump the disk and memory savings. The situation favors the author's position - static linking for smaller, less universal libraries - even more than he realized (or at least wrote about).
For a desktop computer, you're going to see more applications, and fewer instances of each, loaded. So here, the issue really is sharing between applications. But the point remains valid regarding small specialty libraries that get used by only a few (such as 10) applications. However, on a server computer, there may well be hundreds of instances of the same application, and perhaps very few applications. It might be a mail server running 1000 instances of the SMTP daemon trying to sift through a spam attack. Even if the SMTP code is built statically, those 1000 instances still share unmodified memory mapped from the executable file.
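One way to see that sharing in practice, assuming the procps pmap tool is available (the daemon name here is just an example), is to look at each instance's mappings; the executable's text pages all map the same file:
for pid in $(pidof smtpd); do
    pmap "$pid" | grep smtpd    # every instance shows the same executable file mapped
done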
Re:Instances don't really matter for static linkin (Score:2)
If your entire GNOME desktop is statically linked to GTK+, and you launch panel, nautilus and metacity, then you're loading 3 separate copies of GTK+ into memory that don't share any memory at all!
Re:Instances don't really matter for static linkin (Score:2)
The article's point was that it's easier to statically link the small, obscure libraries. I don't know why a developer packaging a binary for general distribution can't statically link certain libraries.
I'm a bit rusty at static linking, but can't they just do a gcc -s -lgtk -ljpeg -o executable /usr/lib/libobscure.a foo.o bar.o widget.o to generate the binary? Then I wouldn't have to hunt down and attempt to install libobscure--sometimes a very frustrating process.
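Roughly, yes - one hedged sketch of the link line, reusing the parent's example names and assuming a GNU toolchain:
# objects first, then libraries; name the static archive explicitly, leave the rest dynamic
gcc foo.o bar.o widget.o /usr/lib/libobscure.a -lgtk -ljpeg -o executable
# or switch linker modes per library with GNU ld
gcc foo.o bar.o widget.o -Wl,-Bstatic -lobscure -Wl,-Bdynamic -lgtk -ljpeg -o executable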
Re:Instances don't really matter for static linkin (Score:3, Interesting)
A good rule to follow is to never dynamically link to something that's substantially smaller than your program.
Dynamic linking tends to pull in too much. If you use "cos",
This is the price to pay.. (Score:3, Interesting)
One of the greatest strengths of the UNIX platform is its diversity..
Package installation is a simple prospect on the Windows platform for the simple reason that the platform has little diversity.
Windows supports a very limited set of processors.. So there's one factor that windows packaging doesn't have to worry about.
Windows doesn't generally provide separately compiled binaries for slightly different processors ("fat binaries" are used instead, wasting space).. So the packaging system doesn't have to worry about that. On Linux, on the other hand, you can get separate packages for an athlon-tbird version and an original athlon version.
On an MS system, the installers contain all the libraries the package needs that have the potential to not be on the system already. This could make the packages rather large, but ensures the user doesn't have to deal with dependencies. Personally, I'd rather deal with dependencies myself than super-size every installer that relies on a shared object..
Furthermore, on Windows there aren't several different distributions to worry about, so the installers don't have to deal with that either.
All of these points confer more flexibility on the Unix system, but they have the inevitable consequence that package management can get to be rather a complex art. We could simplify package management a great deal, but it'd mean giving up the above advantages.
So you are you saying... (Score:3, Insightful)
It's funny how hypocritical the crowd here on
Two seconds was a huge amount of time in the average Linux user's day yesterday, but today, hours and hours spent installing software
thesis + antithesis - ? synthesis ? (Score:4, Interesting)
1) RAM & disk space are not always cheap, or even readily available. There are many legacy systems whose users would benefit from these advantages but are unable or unwilling to upgrade the system. What happens to old 486 and 586 systems where the motherboard doesn't support drives larger than X? There are workarounds, but the people who need easier install processes aren't going to tackle the complex system configuration issues to implement them. What happens when you can no longer obtain RAM in your community for your old machine, or it no longer has spare slots, etc.? What happens if you have a second-hand computer and simply don't have the available $$ to spend on upgrades, no matter how cheap they are? I don't like the idea of designing an easier-to-use system that excludes such people, no matter how small a portion of the market they may be. Hence redundant copies of libraries and statically linked libraries are a very inelegant solution for these people.
2) We mustn't impose requirements on application developers to use a given installer library, or to code their apps to conform with particular standards that the installer requires - that is again unfeasible and undesirable in many circumstances. Developers have more than enough to worry about as it is without having to reimplement the way their app behaves to be installer-friendly. The installer must exist at a level independent of the way the application has been coded, to a reasonable degree. I think that much of the current problem is that too many of the "packager" issues of making apps compatible with a hundred and one different unices have been getting dumped on developers, and this both reduces their time for actual development and means that we have a hodge-podge of apps that are compatible to an unpredictable degree, because essentially developers don't want to be burdened with this.
3) Diversity is the spice of life, and it is the spice of unix. The community of unices is robust because it has adapted systems which are generally stable and reliable across a vast array of hardware and software. We want to capitalize on this tradition and expand and enhance it, not force anyone to use a particular layout for their apps & installations. This being said, I find the idea of local copies of libraries in the application directory unappealing, because it forces one to have a local directory ( rather than using
5) Aside from all these criticisms, there are many things I do agree with. Particularly that dependencies should be file-specific, not package-specific, and that an integration of installer & linker is key to the organization of such a tool. I also agree that the installer should make use of auto-generated scripts wherever possible, and should provide detailed, useful messages to the end user that will help them either resolve the conflicts in as friendly a way as possible, or report the conflicts to their distribution. Also, the installer should have advanced modes that allow applications to be installed in accordance with a user- or administrator-preferred file system layout. That is, one shouldn't be forced to install into
Given all this, is there any possible way to solve all of this in one consistent system? I think so - but it may require something that many will immediately retch over. A registry. That's right, I used the foul windoze word registry. I propose a per-file database for libraries & applications that would record where given versions of given libraries are installed, under what names, in what directories, of what versions, providing what
Zero Install (Score:3, Interesting)
For those who haven't tried it:
"The Zero Install system removes the need to install software or libraries by running all programs from a network filesystem. The filesystem in question is the Internet as a whole, with an aggressive caching system to make it as fast as (or faster than) traditional systems such as Debian's APT repository, and to allow for offline use. It doesn't require any central authority to maintain it, and allows users to run software without needing the root password."
static linking not a good idea, here's why (Score:5, Insightful)
On making install processes simple: I think that a graphical installation does not necessarily make things any easier. Anyone here played Descent 2? That installed via a good old-fashioned DOS installation, and it was not particularly hard to install, even though it was not a GUI install.
It is also not necessarily a good idea to abstract the technical details behind an install into oblivion. Part of the philosophy behind Gentoo, for example, is to take newbies and turn them into advanced users. I think that a clear, well-thought-out install guide is a useful thing. Gentoo's install guide is thorough and has virtually no noise. Compare that to the install guides for Debian, which are absolute nightmares, filled with irrelevant stuff. Furthermore, a helpful and friendly user community is always a good way to help new users orient themselves. New users are going to ask questions on forums that advanced users find obvious. That should not be an invitation to say "RTFM bitch" at the top of your lungs. All of us were newbies at one point, and just because we may have had to learn things the hard way doesn't mean that others should too.
DLL Hell On Windoze (Score:4, Informative)
1. No backwards compatibility. All too often, new versions are released that break older programs. Even Microsoft has done this with major DLLs.
2. Stupid installer writers. You're supposed to check the version number of a file before overwriting it. All too often the file is overwritten without regard to the version numbers.
So to overcome these two problems, the smart installer coder would put all the DLLs in a private directory of the application (not in system/system32).
Of course, Microsoft came up with a new system that broke this simple fix. Registered libraries. Instead of using a specified path to get a DLL, you would ask the system to load the DLL (using registry information). The path was no longer considered. One, and only one, version of the DLL was allowed on the system, and there was no feasible way to get around this limitation. Someone came up with a fix. It would have been a major pain to implement and would require cooperation amongst the DLL coders, which isn't about to happen since the lack of cooperation was one of the core problems in the first place.
For a commercial-level installer, missing libraries were absolutely unacceptable. My personal rule was to ALWAYS include dependencies in the installer package. This meant the installer was bigger and more complicated, but it guaranteed the application could be successfully installed without the user having to run off to find a missing library. Or did it? No - Microsoft decided that some libraries could not be independently distributed. The only legal means of getting the library was through the official Microsoft installer. And no surprise here, half the time the only official installer for a library was the latest version of Internet Explorer.
Requiring an upgrade to IE is a major problem for large companies. They standardize on specific software and don't allow the users to change it. Requiring a site-wide upgrade of something like IE (or the MDAC package) was not to be taken lightly, especially when it was discovered that the required upgrade would break other applications (back to DLL hell).
FYI, when a major customer pays your mid-sized company a couple of million dollars a year in license fees, they can definitely tell you they won't upgrade IE. It's our job to come up with a workaround. Too bad a measly few million paid to Microsoft wasn't enough to get them to change their ridiculous library policies.
Compatibility slows progress? (Score:4, Insightful)
The response is that compatibility slows progress by locking down the API. This is so short-sighted that it is not even funny.
If programmers thought out how their libraries would be used, it would be simple to add another call in a newer version. Instead they make short-sighted decisions and ruin the usefulness of a shared library.
IMHO any newer version of a library should work better than the previous version and be a 100% replacement.
This would fix a huge chunk of DLL hell and installer issues.
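A crude way to check whether a new build really is a drop-in replacement (the file names here are just examples) is to compare the exported symbols of the two versions:
nm -D --defined-only libfred.so.1.0 | awk '{print $3}' | sort > old.syms
nm -D --defined-only libfred.so.1.1 | awk '{print $3}' | sort > new.syms
comm -23 old.syms new.syms    # anything listed here was dropped, so it's not a 100% replacement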
Device Detection! (Score:3, Interesting)
MOST of the problems I've had with installing Windows, Linux or OS X involve the fact that when I am all done, not all the components of my machine are working the way I expected them to. I end up with no sound, or bad sound, or video that isn't right, or a mouse that doesn't work, or in the really bad cases, disk drives that work well enough to boot the system but then fail after I'm in the middle of something important.
Once I get past the initial installation I feel I am home free. If the devices all work the way they are supposed to, then I can avoid most other problems by just sticking with the distro that I started with. If it was Debian Stable I stay with that, and if I need to install something that isn't part of that system I install it as a user (new version of Mozilla, Evolution, Real*, Java for example).
It would definitely be nice if developers who used shared libraries didn't seem to live in a fantasy land where they are the only users of those libraries. But I *don't* think that this is Linux's biggest problem with acceptance. What Linux needs is an agreement by all the distros to use something like the Knoppix device detection process... and then to cooperatively improve on it. A run-from-CD version of every distro would be great. Why blow away whatever you are running now just to find out if another version of Linux might suit you better?
I'd like a system that does a pre-install phase where every component of my system can be detected and tested before I commit to doing the install. The results of that could be saved somewhere so that when I commit to the install I don't have to answer any questions a second time (and possibly get it wrong).
There is nothing that can guarantee that what appears to be a good install doesn't go bad a week later, but I personally haven't had this happen. I usually know I have a bad install within a few minutes of booting up the first time, and by then, it's too late to easily go back to the system that was "good enough".
Matrix Reloaded spoiler in parent (Score:5, Informative)