Petreley On Simplifying Software Installation for Linux 310

markcappel writes "RAM, bandwidth, and disk space are cheap while system administrator time is expensive. That's the basis for Nicholas Petreley's 3,250-word outline for making Linux software installation painless and cross-distro." The summary paragraph gives some hint as to why this isn't likely to happen anytime soon.
  • Word (Score:5, Funny)

    by Anonymous Coward on Sunday May 04, 2003 @08:56AM (#5873904)
    "That's the basis for Nicholas Petreley's 3,250-word outline for making Linux software installation painless and cross-distro."

    Was it necessary to include the word count? It's hard enough to get slashdotters to read a small article and post intelligently; this can't help...
    • Re:Word (Score:4, Interesting)

      by Anonymous Coward on Sunday May 04, 2003 @10:08AM (#5874116)
      The article is surprisingly dense for its word count -- yet easy to read.

      Petreley is undoubtedly getting the hang of this "writing thing"... ;-)

      Seriously, though, however smart and logical his conclusions are, one thing bothers me: the installation should be simplified, but "right", too.

      I mean, there are other objectives besides being easy.

      Last week I tried to install Red Hat 8.0 on a 75 MHz Pentium with 32MB RAM (testing an old machine as an X terminal). It didn't work.
      The installation froze at the first package -- glibc (it was a network installation) -- probably due to lack of memory (as evidenced by free et al.).

      Why? It was a textmode installation. I know from past experience that older versions of Red Hat would install ok (I used to have smaller computers).

      My suspicion is that Red Hat has become too easy -- and bloated. Mind you, I opted for Red Hat instead of Slack or Debian because of my recent experiences, in which RH recognized hardware better than the others.

      I hope Petreley's proposed simplification, when implemented, takes size into consideration. As proposed (using static libs, for instance), it seems to go the other way.

      The article as a whole, though, presents neat ideas, and it's one of the best I've read recently.
  • Java (Score:2, Informative)

    Yes, installing software should be automated, especially in the case of Java. Anybody wanting to run LimeWire has to download a 20MB file, then mess around in a terminal. Not good. Though Synaptic is close to full automation....

  • by Simon (S2) ( 600188 ) on Sunday May 04, 2003 @08:58AM (#5873911) Homepage
    Autopackage comes to mind.

    from the site:
    * Build packages that will install on many different distros
    * Packages can be interactive
    * Multiple front ends: best is automatically chosen so GUI users get a graphical front end, and command line users get a text based interface
    * Multiple language support (both in tools and for your own packages)
    * Automatically verifies and resolves dependencies no matter how the software was installed. This means you don't have to use autopackage for all your software, or even any of it, for packages to successfully install.
  • by digitalhermit ( 113459 ) on Sunday May 04, 2003 @09:03AM (#5873922) Homepage
    Static linking might be useful as a workaround for the more esoteric distros, but it has its problems. For one, if you statically link your application then anytime there's a security fix or change to the linked library you'll need to recompile the application, not just upgrade the library. This would probably cost more in administration time than upgrading a single library since multiple applications may be dependent on the one library.
      by Ed Avis ( 5917 ) on Sunday May 04, 2003 @09:39AM (#5874014) Homepage
      Static linking is a seriously bad idea. Part of the job of a packager is to arrange the app so it doesn't include its own copies of libraries but uses the standard ones available on the system (and states these dependencies explicitly, so the installer can resolve them automatically).

      Take zlib as an example of a library that is commonly used. When a security hole was found in zlib a few months ago, dynamically linked packages could be fixed just by replacing the zlib library. This is as it should be. But those that for some reason disdained the standard installed copy and insisted on static linking had to be rebuilt and reinstalled.

      (OK I have mostly just restated what the parent post said, so mod him up and not me.)

      Quite apart from that, there's the stupidity of having ten different copies of the same library loaded into memory rather than sharing it between processes (and RAM may be cheap, but not cheap enough that you want to do this... consider also the CPU cache).

      A similar problem applies to an app which includes copies of libraries in its own package. This is a bit like static linking in that it too means more work to update a library and higher disk/RAM usage.

      Finally there is a philosophical issue. What right has FooEdit got to say that it needs libfred exactly version 1.823.281a3, and not only that exact version but the exact binary image included in the package? The app should be written to a published interface of the library and then work with whatever version is installed. If the interface provided by libfred changes, the new version should be installed with a different soname, that is rather than It's true that some libraries make backwards-incompatible changes without updating the sonames, but the answer then is to fix those libraries.
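The soname scheme described above can be sketched with plain files and symlinks. This is only an illustration: libfred and its version numbers are the hypothetical names from the comment, and ordinary files stand in for real shared objects.

```shell
#!/bin/sh
# Sketch of the soname scheme: the soname (libfred.so.1) names the stable
# interface; the real file carries the full version. A security fix means
# dropping in a new file and repointing one symlink -- every dynamically
# linked app picks the fix up on its next start. All names are hypothetical.
set -e
d=$(mktemp -d)
cd "$d"

echo "v1.0.0" > libfred.so.1.0.0     # the actual shared object
ln -s libfred.so.1.0.0 libfred.so.1  # soname link, resolved at run time
ln -s libfred.so.1 libfred.so        # dev link, resolved at link time

# Security update: install 1.0.1 and repoint the soname link.
echo "v1.0.1" > libfred.so.1.0.1
ln -sf libfred.so.1.0.1 libfred.so.1

cat libfred.so.1                     # prints v1.0.1
```

A statically linked app, by contrast, has its own copy of the old code baked in and must be rebuilt to get the fix.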
      • by jd142 ( 129673 ) on Sunday May 04, 2003 @10:53AM (#5874318) Homepage
        This same problem occurs in the Windows world as well; "DLL hell" as it is often called. Here's how it works in Windows. Say your program needs vbrun32.dll. You have a choice. You can put the dll in the same folder as the executable, in which case your program will find it and load the right dll. Or you can put it in the system or system32 directory, in which case your program and others can find it and load it. However, if vbrun32.dll is already loaded into memory, your program will use that one. I remember we used to have problems with apps only working if loaded in the right order so the right dll would load.

        As with Linux, if there's a bug in the library you have to update either one file or search through the computer and update all instances. But, as with Linux, the update can mess up some programs; others might be poorly coded and not run with newer versions of the dll. I've seen this last problem in both Windows and Linux; it looks like the programmer did "if version != 3.001 then fail" instead of "if version < 3.001 then fail".

        If everyone is forced to use the same library, you get these problems and benefits:

        --1 easy point of update
        --1 easy point of failure
        --older software may not run with newer versions
        --programmers may insist on a specific version number
        --updates to the libraries can benefit all programs; if kde or windows gets a new file open dialog box, then all programs that link to the common library can have the newer look and feel by updating just one library.

        On the other hand, if you let each program have its own, you get these problems and benefits:

        --difficult to update libraries when bugs are found
        --can run into problems if a different version of the library is already loaded into memory (does this happen with linux?)
        --guarantee that libraries are compatible with your app
        --compartmentalization; everything you need for an app is in its directory. Want to uninstall? Just delete the directory. No need to worry that deleting the app will affect anything else.
        --no weird dependencies. Why does app X need me to install app Y when they clearly aren't related at all? The answer is shared libraries. Which is why many people like Gentoo and building from source.

        Microsoft has waffled back and forth on the issue. Under dos, everything just went into one directory and that was it. Windows brought in the system directory for shared dll's. Now the latest versions of windows are back to having each app and all of its dlls in one directory.

        Personally, I think compartmentalization is the key, provided we get some intelligent updaters. If libthingy needs to be updated, the install procedure should do a search and find all instances of the library, back up existing versions and then update all of them. This wouldn't be that hard to do.
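The "intelligent updater" the paragraph above proposes might look something like this. The library name, directory layout, and fixed build are all made up for illustration, and a scratch tree stands in for the real filesystem root.

```shell
#!/bin/sh
# Find every installed copy of a (hypothetical) libthingy, back each one up,
# then overwrite it with the fixed build -- the search-backup-update loop
# the comment above proposes.
set -e
root=$(mktemp -d)            # scratch tree standing in for /
mkdir -p "$root/app1/lib" "$root/app2/lib"
echo "buggy" > "$root/app1/lib/libthingy.so.2"
echo "buggy" > "$root/app2/lib/libthingy.so.2"
fixed=$(mktemp)              # the vendor's updated build
echo "fixed" > "$fixed"

# No spaces in mktemp paths, so word-splitting the find output is safe here.
for f in $(find "$root" -name 'libthingy.so.2'); do
    cp "$f" "$f.bak"         # keep a rollback copy
    cp "$fixed" "$f"         # drop in the fixed version
done

cat "$root/app1/lib/libthingy.so.2" "$root/app2/lib/libthingy.so.2"
```

Both copies now read "fixed", and each has a `.bak` sibling for rollback.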
        • Not always a problem (Score:5, Interesting)

          by Simon ( 815 ) * <simon@simonzone. c o m> on Sunday May 04, 2003 @11:23AM (#5874461) Homepage
          Just want to point out that problems with shared libraries aren't universal. Years ago AmigaOS had shared libraries that basically worked, without significant administration problems or a 'dll hell'.

          Important features of the way AmigaOS libraries worked:

          * All libraries were versioned, but not on the file system level. Each library contained a version number in its header.

          * Versions of the same library were always backwards compatible. This was Law. Software using an old version of a library must continue to work on future versions. This also meant that developers had to think out their library API beforehand (because you would have to maintain that API). Libraries could still be extended with extra functions, though.

          * Application programs had to 'open' libraries before using them. When opening a library an application would specify the minimum version that it required of the library. (If no matching or later version was found then the program would have to exit gracefully).

          * There tended to be few (compared to Linux anyway) libraries. Libraries tended to be biggish. A few big libraries instead of many tiny libraries. This made them manageable.

          * The backwards compatibility rule/law for libraries meant that software could bring its own version of a library and update the existing version of that library, but *only* if it was a more up-to-date version.
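The "open with a minimum version" convention from the list above can be mimicked in a few lines of shell. The version numbers and function name are invented for illustration; a real AmigaOS program would call OpenLibrary() instead.

```shell
#!/bin/sh
# Mimic of AmigaOS-style library opening: the library carries one version
# number; a program asks for the minimum it needs and bails out gracefully
# if the installed version is too old. Backwards compatibility is what
# makes the single ">= minimum" test sufficient.
lib_version=39                 # version stored in the library's header

open_library() {               # usage: open_library <minimum-version>
    if [ "$lib_version" -ge "$1" ]; then
        echo "opened v$lib_version"
    else
        echo "need v$1, have v$lib_version" >&2
        return 1
    fi
}

open_library 37                              # ok: newer library, older program
open_library 40 || echo "exiting gracefully" # too old: fail cleanly
```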

          As a previous poster pointed out, a lot of the problem would disappear if people (library writers) maintained compatible interfaces for the same library soname. I'm pretty sure that this is the way it was designed to work.

          anyway, a FYI.


          • The AmigaOS solution sounds great (and Windows and Linux have some of your points already) but I don't know if either Windows or Linux could enforce the Law of backward compatibility. Too many people writing too much software.
            • by Simon ( 815 ) *
              Well, I think that technically Linux probably has everything necessary already. It's really a policy issue. I just browsed the Program Library HOWTO: binary compatibility is briefly mentioned, along with what is meant to happen when you break it, but the whole idea that libraries have APIs that should only be broken as a last resort is simply not explained, let alone handed down as Law...

              That's not to say that things are not changing. The KDE project, for example, aims for maintaining binary compatibility.

        • I know DLL Hell has become an anti-Windows buzzword, but frankly I've not seen the issue at all since the Win3.1 era (and not often then), and not for lack of opportunity -- my WinBoxen wind up loaded to the gills with software from every era, and usually multiple versions of the same programs to boot. My clients install all sorts of random crap, and they don't get any version conflicts either.

          • Yeah, the worst offenders were under Windows 3.1, that's true, but because of the mix of apps our users install, we still see it occasionally. They like older versions of WordPerfect, like 6.0 and 6.1, and some old proprietary research apps that weren't well written. Because they are still in use, we still see problems now and again. But not nearly as bad as we used to. "Ok, just remember that you always have to start WordPerfect 6.1 *before* you start Eudora 3.0, otherwise WP won't work right." And even
            • M$OfficeXP's voice recognition component disables some WP functions. There's even an article about it in the M$ KB. Somehow this wasn't regarded as a bug or problem, so I have to consider it as left deliberately unfixed. On WinXP, even going back to a restore point before OfficeXP does not fix the problem -- OXP is allowed to clobber system files.

              Back in the Win16 day, M$Office/WinWord didn't play nice with anything if it could avoid it. The trick was to install M$Office or WinWord *first*, because otherwi
    • For one, if you statically link your application then anytime there's a security fix or change to the linked library you'll need to recompile the application,

      Easily solved! All you have to do is:

      1. Go to the bank and change $1 for 5,000,000 Indian Rupees
      2. Hire 1000 Indian programmers with above currency
      3. Tell the programmers to recompile all statically-linked applications with the new libraries
      4. Hire unemployed American programmer for $20000 to translate the program from Hindi to English
      5. Charge large corporations

      • Nice try!
        but $1 -> 45 Indian Rupees

        So you wouldn't be able to hire 1000 Indian programmers with that exchange rate.

        Also, your racist comments show how little you know about Indian programmers, some of whom are big names in the American computer industry.

        Also, almost every Indian programmer is well versed in the English language, and most of them can understand and write better English than you, so you won't have to hire a crappy American undergrad to do the translation work.
    • <AOL>Me too.</AOL>

      The malformed zlib attack comes to mind. There are several slightly different static copies in the kernel, never mind the many static copies in an endless variety of programs. Red Hat Network was shitting errata for a week.

  • Gentoo (Score:4, Informative)

    by Tyler Eaves ( 344284 ) on Sunday May 04, 2003 @09:05AM (#5873929)

    Doesn't get any simpler than that. Come back in anywhere from a minute to 12 hours (depending on the package), and *poof*, new software. Ditto BSD ports.
    • Re:Gentoo (Score:5, Interesting)

      by sirius_bbr ( 562544 ) on Sunday May 04, 2003 @09:15AM (#5873961)
      I tried it (gentoo) some time ago. After two weeks of frustration I moved back to debian.

      For me it was more like
      1. emerge
      2. come back in 8 hours and then:
      a. see a whole bunch of compilation errors,
      b. dependencies were not sorted out correctly, so nothing works,
      c. a combination of the above.

      I especially liked (and still do) the optimization potential (whereas Debian is stuck at i386), but it didn't work for me.
      • In descending order of (my) preference:

      • Re:Gentoo (Score:3, Insightful)

        by brad-x ( 566807 )

        This is typically a result of a technique known as 'skimming the documentation' and 'thinking you know how to do it yourself'.

        People are too quick to blame the distribution (any distribution, even Debian) when something goes wrong.</rant>

      • Yeah.. FreeBSD ports used to have a ton of compile issues as well. That's been fixed through various compile farms configured in standard ways on various code lines (Sparc, IA64, Alpha, i386 on both -CURRENT and -STABLE where applicable).

        There are also some post commit tests that will rebuild the port with every change.

        I've not run into a compile issue since -- but I also don't install anything by hand.
    • Doesn't get any simpler than that

      That's if you can get through the complexity of the install, which requires that you do everything yourself.
      • The Gentoo doc is *VERY* good. I find it hard to believe someone has trouble with it. Yes, it's doing it by hand, but it walks you through it step by step.
        • Re:Gentoo (Score:4, Informative)

          by Tony Hoyle ( 11698 ) on Sunday May 04, 2003 @10:13AM (#5874146) Homepage
          Firstly, when gentoo boots you have to wing it to work out how to get online to read the damned thing. Then it tells you to set your USE variables, with *no* documentation about what any of them do (not that it matters, half of the packages ignore them anyway). I also found several factual errors in it (for example stating that files are on the CD that aren't there).

          emerge doesn't pick the latest versions of stuff either... you end up installing from source anyway. E.g. I need the kerberos-enabled ssh to work with my network. I had krb5 in my USE but it didn't build with kerberos. It also built an out-of-date version. I had to manually go into the package directory and force it to build the latest version (which emerge insisted didn't exist).. which still didn't build with kerberos, so I gave up on it and ftp'd a prebuilt one from a Debian machine.

          Also the dependencies suck rocks. I wanted to build a minimal setup and get it working, so I decided to install links. Bad move. It pulled in svgalib (??), most of X and about a million fonts - for a *text mode* browser.

          12 hours is also a bit optimistic - On a dual processor machine I had it building for 3 days.. and at the end half the stuff didn't work anyway. Luckily I can get a debian install on in 20 minutes with a following wind, so I got my machine back without much hassle.

          • PEBKAC
          • Re:Gentoo (Score:3, Interesting)

            by Ed Avis ( 5917 )

            Also the dependencies suck rocks. I wanted to build a minimal setup and get it working, so I decided to install links. Bad move. It pulled in svgalib (??), most of X and about a million fonts - for a *text mode* browser.

            Hmm, this seems to ignore one of the big advantages of an always-build-from-source distribution. If you were using Debian or RedHat and the links package required svgalib, I'd think 'fair enough: it was probably built with the svgalib drivers'. But if you are building from source there

          • Re:Gentoo (Score:3, Informative)

            by antiMStroll ( 664213 )
            Not what I remember. The install docs are on the CD. Alt-F2 to open a second console, then 'less README.TXT' or 'less README' to view the instructions. The last Gentoo install I did was 1.2, so this may have changed.

            Correct, emerge doesn't automatically pick 'the latest stuff'. Which distro does? The true route to madness for any distro designer is to ensure all the default installs are cutting edge. Forcing a higher version is simple: use 'emerge explicit-path-to-ebuild'. Typing 'emerge icewm' builds the d

    • Emerge is nothing special. 'rpm --rebuild whatever-1.2.3.src.rpm', come back in a few minutes and *poof* a freshly built package.

      Although I will admit, you need to have the BuildRequires packages installed - rpm tells you if they're not, but won't download and install them automatically... some tool like urpmi or apt-rpm would be needed for that part.

      But some of the problems another person mentioned with emerge can sometimes apply to rpm --rebuild too. That is, a package doesn't state its build dependen
    • Re:Gentoo (Score:2, Insightful)

      by ichimunki ( 194887 )
      Yeah. Ok. You just keep telling yourself that.

      What I know is that Gentoo is perma-beta software. When the hell are they going to stop putting updates in the main release and make it possible to get security-only updates as a default?

      The other day I did an 'emerge -u world' and an application I'd installed just the day before broke, with an error message that my current version of the nvidia drivers wasn't current enough (which it was, no less).

      And this is common. My entire KDE system broke. And kep
  • Fallback (Score:4, Insightful)

    by Anonymous Coward on Sunday May 04, 2003 @09:05AM (#5873931)

    Place user applications in their own directories

    This single rule alone would eliminate most of the problems. It enables fallback to manual package management, it resolves library conflicts, it avoids stale files after uninstallation, and it prevents the damage that can be caused by overwriting files during installation and then removing them during uninstallation.
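A minimal sketch of that rule, with invented paths ("fooapp" is a hypothetical application, and a temp dir stands in for /opt or similar):

```shell
#!/bin/sh
# Everything the app needs lives under one prefix, so uninstalling is a
# single rm -rf that cannot leave stale files behind or touch anything
# another package owns.
set -e
prefix=$(mktemp -d)/fooapp-1.0
mkdir -p "$prefix/bin" "$prefix/lib"
echo "binary"          > "$prefix/bin/fooapp"
echo "private library" > "$prefix/lib/libfoo.so"

# "Uninstall": no package database needed, no scattered files to hunt down.
rm -rf "$prefix"
[ ! -d "$prefix" ] && echo "uninstalled cleanly"
```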

  • I have noticed that installation complexity is directly proportional to the reliability of the software.

    If a piece of software is extremely complex to install, one can safely assume it is reliable :)

    If it is easy to install, it is not reliable: MS products, for example.

    But seriously, I don't think applications are complex to install; it's just that a learning curve is involved in doing anything.
    • I'd agree that installing Linux software isn't typically that hard once you've done it a while. Red Carpet makes updating/installing RPM files really easy. Apt-get makes the same process almost as easy in Debian based systems. Anything not available as a package just download and compile (which has gotten much easier in recent years).

      What needs to be made easier is making good third party packages. It needs to be as easy as making a tarball or using WinZip. Obviously, the distros can't keep up with providi
  • by Microlith ( 54737 ) on Sunday May 04, 2003 @09:13AM (#5873952)
    The first obstacle to overcome is the bad attitude many linux users have that if something is easy to install, or easy to use, it is therefore bad.

    As I see it, many would like to keep the learning curve very, very steep and high to maintain their exclusivity and "leetness" if you will.

    For instance, the post above mine displays the ignorant attitude that "easy to install" by definition equals "unstable software" and has only a jab at MS to cite as a reference.

    That's truly sad (though that may just be a symptom of being a slashdot reader.)

    As I see it, not everyone finds:

    ./configure
    make
    make install

    to be intuitive, much less easy, never mind what happens if you get compiler errors, or your build environment isn't the one the package wants *cough*mplayer*cough*, or if you even have said development environment.

    Nor does it mean the software is any more stable. Could be just as shitty. _That_ is a matter of the developer of the program, not the install process.
    • './configure && make install' is not an installation process. It's the first stage in building some sources and making them into a package such as an RPM, Debian package or Slackware .tgz. Then you use your system's package manager to install, and it tracks where the files go so you can easily uninstall later.

      People who want to change the build process to make 'installation' easier are barking up the wrong tree. Building the software from source is something that the packager should do, or at le
      • Basically your suggestion amounts to building a binary package from a source package as a stage to having it actually installed. While that is something I actually do (using Slackware package.tgz format), and even recommend to many people, it's not necessarily suitable for everyone or every purpose. I still run an experimental machine where everything I install beyond the distribution is installed from source. That's good for quickly checking out some new package to see if it really does what the blurb says.
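The build-then-package flow both posts describe can be sketched like this. The staged files are stand-ins for what a real `make DESTDIR=... install` would produce for an actual autoconf project; "fooapp" is hypothetical.

```shell
#!/bin/sh
# The packager's job: "install" the freshly built software into a staging
# tree instead of the live system, then tar the tree up as a Slackware-style
# .tgz that the package manager can install, track, and cleanly remove.
set -e
stage=$(mktemp -d)

# Stand-in for `make DESTDIR="$stage" install` on a real project:
mkdir -p "$stage/usr/bin" "$stage/usr/man/man1"
echo "fooapp binary" > "$stage/usr/bin/fooapp"
echo "fooapp(1)"     > "$stage/usr/man/man1/fooapp.1"

# Package the staged tree; nothing has touched the live system so far.
tar -czf "$stage.tgz" -C "$stage" .
tar -tzf "$stage.tgz" | grep fooapp
```

Installing is then just unpacking the .tgz at /, with the file list recorded so uninstall can remove exactly those files.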

    • If a bunch of Linux geeks want to have a hard-to-install Linux system in order to raise the leetness level, they can always put together their own "LeetLinux" distribution. We can't (and shouldn't) stop them. There shouldn't be a requirement that all distributions be "easy". This even applies to the BSDs. I personally find the command line install of OpenBSD more flexible (and even easier anyway) than the menu-driven install of FreeBSD. But as I use Linux mostly, my preferred leetness distro is Slackware.

    • The first obstacle to overcome is the bad attitude many linux users have that if something is easy to install, or easy to use, it is therefore bad.

      Where are these theoretical more-leet-than-thou users? Ok, maybe a few hanging out on IRC channels, but in general, this is a ridiculous myth. Linux users want easy-to-use as much as anyone. However, in general, we don't want to sacrifice ease of use for advanced users just for a questionable gain in ease of use for new folks. I can see how someone in frustrat
      • And, sorry, but your post is an example of "more leet than thou"! This isn't meant to flame you, but you make the common mistake of assuming that "easy for a newbie to use" MUST EQUAL "dumbed down", and that's absolutely not the case.

        Look at the apps that have options to use either a basic or advanced interface. Selecting the basic interface doesn't mean that the app somehow no longer knows any of its more 1337 functions; it just means they aren't in the user's face, baffling the newbie with a million options.
  • No, please (Score:5, Insightful)

    by Yag ( 537766 ) on Sunday May 04, 2003 @09:16AM (#5873963)
    That's the reason Windows servers are more vulnerable to attacks: they give you the idea that it's easy to maintain them... It's the same as saying you don't need a pilot on an airplane (and that you can put anyone in the cockpit) if you build a good enough autopilot. We need more knowledge in system administration, not more automation.
    • Re:No, please (Score:3, Insightful)

      by bogie ( 31020 )
      So you would deny the use of free software to all but those who are experts with their OS?

      I say this as a longtime Linux user and booster: if installing software on Windows were one-tenth as hard as it often is on Linux, everyone would be using Macs.

      Ease of use really should be the ultimate goal with all appliances and software. Would it really be a benefit if cars were twice as difficult to use?

      To take your example, Windows servers are not more vulnerable because they are easier to use, they are/were
    • Re:No, please (Score:4, Insightful)

      by evilviper ( 135110 ) on Sunday May 04, 2003 @10:33AM (#5874234) Journal
      I can't agree with that. There are lots of programs that could add one or two features, and simply no longer require so much human work... XF86 comes to mind immediately.

      But, I have to say this article was so far off the mark that it's funny. `Let's take all the ideas from Windows of what an installation package should be, and apply them to Unix.' No, I think not.

      I dare say the biggest problem is that everyone is going the wrong direction. RPM is the standard, yet it sucks. Binary packages separate the `devel' portions into another package, making the system fail miserably if you ever need to compile software. It has piss-poor dependency management. Instead of checking if a library is installed, it checks if another RPM has been installed. If it has been installed, it assumes the library is there. If it isn't installed, it assumes the library isn't there... Crazy! To have an RPM depend on a library I've compiled, I have to install the RPM of the library, then compile and install my own over the top of the RPM's files. RPM is like the government system of package management. You have to do everything their way, or it won't let you do anything at all.

      I liked Slackware's simplistic packages more than anything else. At least there I could just install the package, and it wouldn't give me shit about dependencies. If I didn't install the dependencies, I got an error message, but it wouldn't refuse to install or try to install something for me automatically. I can take care of the dependencies any way I want. RPMs are supposed to save you time, but because of their dependency management they used up far more of my time with their quirks than they could have *possibly* saved me.

      Another thing I find annoying is that there is only one version available. You can only get a package compiled without support for XYZ... Well, that's fine if I don't have XYZ, but what if I do? I like the ports system: although it does some things automatically that I don't like (I would rather it asked me), it doesn't step on your toes much at all, it gives you all the customizability you could want (and only if you want it), and it's much simpler and faster than untarring and configure/make-ing everything.
  • by pldms ( 136522 ) on Sunday May 04, 2003 @09:21AM (#5873975)
    Did some of the suggestions remind anyone of the OpenStep frameworks idea?

    Frameworks (very roughly) are self-contained bundles holding a library's different versions, headers, and documentation. Java jar libraries are somewhat similar.

    The problem is that using frameworks requires major changes to the tool chain - autoconf et al, cc, ld etc.

    Apple shipped zlib as a framework in OS X 10.0 (IIRC) but getting unix apps to use it was very difficult. Apple now only seem to use frameworks for things above the unix layer.

    I suspect there are lessons to be learned from this. As another poster said, evolution rather than revolution is more likely to succeed.
  • by Anonymous Coward on Sunday May 04, 2003 @09:22AM (#5873978)
    But installing gentoo is still hard.

    Insert cd.
    log in from the command line
    net-setup ethx
    tar xzjdocmnaf stage1.tar
    mkdir /mnt/gentoo/
    chroot /mnt/gentoo
    (10 hours later)
    emerge ufed
    edit use flags
    emerge system
    emerge gentoo-sources
    configure kernel, having to do lspci and google obscure serial numbers to find out which modules to compile
    install kernel
    muck around with its non-standard bootloader
    install cron and sysloggers
    spend two days sorting out the kernel panics
    wait all week for kde to emerge.
    processor dies of over work
    huge nasty electricity bill arrives after running emerge for over a week 24/7

    in other words, no
      1. bootstrap doesn't take anywhere near 10 hours on a modern machine
      2. The stage-x tarballs come on the boot CD ISOs. Wget is not required.
      3. If you have to resort to lspci to compile a bootable kernel, Gentoo is not for you (IOW you don't know your hardware well enough). BTW - You could grep the pci.ids datafile that comes with the kernel rather than Googling "obscure" (International standard, unique) PCI IDs.
      4. GRUB is a standard boot loader now. If you don't like it, emerge LILO.
      5. KDE takes nowhere near a week to emerge.
      • Gentoo was nice for a while, but things broke when I tried to go from gcc 2.95.x to 3.x. That's what finally convinced me to go to Debian.
      • "The trouble with Linux isn't the installation. That procedure should remain somewhat esoteric (else we find ourselves plagued by every Windows user out there who's ever run regedit and thinks he's a sysadmin)."

        Are you saying Linux should be restricted to use by the expert elite??

        In that case, how do you expect it to ever make any serious penetration into the desktop market and user environment, where 99% of users are NOT experts?? Or are you saying Linux is only suitable for use in servers and ivory towers?
        • In that case, how do you expect it to ever make any serious penetration into the desktop market and user environment, where 99% of users are NOT experts??

          I don't. In fact, I'd sooner that it didn't. The end-user market is already bad enough in the way of computer illiteracy and user interface frustration without adding Linux to the mix.

          As long as there are people out there who believe "Internet Explorer" is "the Internet" and "Outlook Express" is "e-mail", Linux has no place in the desktop market. No

  • why? (Score:3, Informative)

    by SHEENmaster ( 581283 ) <travis@utk . e du> on Sunday May 04, 2003 @09:24AM (#5873982) Homepage Journal
    Debian's system, or possibly something like Gentoo's, is preferable to any "easy" installation process.

    apt-get install foo   # installs foo and any prerequisites.

    Apt-get can also download and build the source of the package if needed. The biggest advantage of this is that apt-get update && apt-get upgrade will upgrade every single installed package to the latest version. I can get binaries for all the architectures I run (mostly PPC and x86).

    On my laptop, Blackdown and NetBeans (unused at the moment) are the only two programs that I had to install manually. Those who need a pretty frontend can use gnome-apt or the like.

    It's hard enough making all the packages of one distro play nice with each other; imagine the headache of attempting it with multiple ones!

    Shipping software on disc without source is such a headache. The program will only work on platforms it was built for, it will be built against archaic libraries, and it can't be fixed by the purchaser.

    As for your "universal installer", it should work as follows.

    tar -xzf foo.tgz
    cd foo
    ./configure && make && sudo make install

    Any idiot can manage that.
  • by martin-k ( 99343 ) on Sunday May 04, 2003 @09:35AM (#5874004) Homepage
    We just released our first Linux app, TextMaker, our non-bloated word processor.

    Installation goes like this:

    1. tar xzvf textmaker.tgz
    2. There is no 2.

    After that, you simply start TextMaker and it asks you where you want to place your documents and templates. No muss, no fuss, no external dependencies except for X11 and glibc. People like it that way and we intend to keep it this way with our spreadsheet and database.

    Martin Kotulla
    SoftMaker Software GmbH

    • But I'm sure you realize that if every application used its own installation procedure that requires the command line, life would be awkward. The point of packaging systems like RPM is that they _standardize_ things. Need to push an RPM to multiple workstations? Use Red Carpet, autorpm or similar tool to automatically install new packages at regular intervals. Want to keep track of exactly what is installed? Use rpm --query. Want to smoothly upgrade between versions? No problem, rpm --upgrade keeps t
    • by mattdm ( 1931 )
      That's perfectly fine for a single proprietary app, but is in no way a scalable solution for a whole distro.
    • For running a game with statically linked libraries:

      1. Insert CD

      2. There is no 2.

  • Executive summary: (Score:4, Insightful)

    by I Am The Owl ( 531076 ) on Sunday May 04, 2003 @09:36AM (#5874010) Homepage Journal
    Use Debian and apt-get. No, seriously, could it be much easier?
    • Or Mandrake and urpmi.
    • Use Debian and apt-get. No, seriously, could it be much easier?

      Funny you should ask that. Yes, it could be much easier. Software Update pops up a window every so often with a list of software for which I don't have the latest versions. I uncheck anything that I don't want to install, and click a button. Minutes later, the software is downloaded and installed, and I'm prompted to restart the computer if necessary. Unsurprisingly, Software Update is a MacOS X feature.

      apt-get is a wonderful foundatio

  • How about this (Score:4, Insightful)

    by Fluffy the Cat ( 29157 ) on Sunday May 04, 2003 @09:42AM (#5874028) Homepage
    The complaints are, almost entirely, about libraries. But there's already a robust mechanism for determining that a library dependency is satisfied - the SONAME defines its binary compatibility. So if stuff is breaking, it's because library authors are changing binary compatibility without changing the SONAME. How about we just get library authors to stop breaking stuff?
  • No no no! (Score:5, Interesting)

    by FooBarWidget ( 556006 ) on Sunday May 04, 2003 @09:43AM (#5874030)
    First of all, RAM and disk space are NOT cheap. I spent 60 euros for 256 MB RAM; that is not cheap (it's more than 120 Dutch guilders for goodness's sake!). A 60 GB harddisk still costs more than 200 euros. Again: not cheap. Until I can buy 256 MB RAM for 10 euros or less, and 60 GB harddisks for less than 90 euros, I call them everything but cheap.

    What's even less cheap is bandwidth. Not everybody has broadband. Heck, many people can't get broadband. I have many friends who are still using 56k. It's just wrong to alienate them under the philosophy "bandwidth is cheap".
    And just look at how expensive broadband is (at least here): 1 mbit downstream and 128 kbit upstream (cable), for 52 euros per month (more than 110 Dutch guilders!), that's just insane. And I even have a data limit.

    There is no excuse for wasting resources. Resources are NOT cheap despite what everybody claims.
    • Re:No no no! (Score:3, Interesting)

      by Zakabog ( 603757 )
      Wow you're getting ripped off. Where are you buying this stuff?

      256 megs of good ram is 35 euros or less or 25 euros for some cheap PC100 ram. If you can't call that cheap, let me remind you that years ago it was $70 (62 euros) for 8 megs of ram. And a 200 gig Western Digital drive is less than 200 euros on New Egg [] which is a very good computer hardware site. 60 Gigs is like 50 euros. I'm sorry you have to live in a country where hardware is so expensive, but where I live it's incredibly cheap.
      • > Wow you're getting ripped off. Where are you buying this stuff?

        In the store. Heck, I checked out several stores, and even advertisements in computer magazines! The DIMM modules I bought at Vobis were actually the cheapest I could find.

        This is The Netherlands. I don't know where you live.
    • by Ed Avis ( 5917 )
      More than 120 Dutch guilders? Wow! It's a good job you didn't express the amount only in those obscure euros.
    • I don't know about the memory since I don't know exactly what kind you need, but your HD is extremely expensive. Checking out the website of a Flemish computer shop, I can get an 80 GB HD for 95 euro. The most expensive HD in that particular shop is a Seagate Barracuda V 120 GB, which costs 164 euro.
      • > Checking out the website of a Flemish computer shop,

        We don't have Flemish computer shops in this country.
        • No, but (assuming you live in the Netherlands, because of your reference to Dutch guilders) you have computer shops there too, for example with HDs much cheaper than what you said. Unless of course you mean SCSI instead of IDE, in which case you're absolutely right.
    • First of all, RAM and disk space are NOT cheap. I spent 60 euros for 256 MB RAM, that's is not cheap (it's more than 120 Dutch guilders for goodness's sake!). A 60 GB harddisk still costs more than 200 euros. Again: not cheap. Until I can buy 256 MB RAM for 10 euros or less, and 60 GB harddisks for less than 90 euros, I call them everything but cheap.

      Whoa! That IS expensive! Let me quote some prices:

      256MB DDR266 CL2: 44e (could be had for 34e if you want generic brand)

      Western Digital Caviar SE 120GB EI

      • Again, I'm not getting ripped off. I've checked out several stores as well as lots of computer advertisements. All the prices are more or less the same: too expensive.
        • It could be that there are no cheaper alternatives in the Netherlands. But if products are considerably cheaper in Finland, where just about everything costs an arm and a leg, I would say that the Dutch IT-retailers are collectively ripping you off.
    • "Resources are NOT cheap despite what everybody claims."

      Considering back in 1992 I paid US$300 for an 80 Meg drive, and RAM was $50 a megabyte...

      Resources *ARE* cheap.
  • by Anonymous Coward
    He's got time, motivation, money and computer. Sounds like the right guy for the job!
  • by terraformer ( 617565 ) <> on Sunday May 04, 2003 @09:48AM (#5874043) Journal
    It is not just the ease of installation but also the language used during that installation that is foreign to many users. Having a nice point-and-click interface on Linux installs is a major leap forward, but these still reference things like firewalls, kernels, services, protocols, etc. Most people, when faced with new terms, become disoriented and their frustration level rises. These setup routines have to ask users what they are looking to accomplish with their brand spanking new Linux install.

    • Would you like to serve web pages? (Yes or No where the answer installs and configures Apache)
    • Would you like to share files with other users in your home/office/school? (Yes or No where the answer installs and configures Samba)


    • Users who need easy point n' click installations should not be installing servers.

      Moreover, your scenario is the complete antithesis of choice, which is a major driving force for using Linux. People choose Linux or another OS for a variety of reasons, and they choose their applications accordingly. For example, when choosing Linux because it performs well on slower hardware, you'll also want to choose leaner applications. If we didn't have choice we could just as well consider this:

      Would you like to use

      • ...your scenario is the complete antithesis of choice...

        This is absolutely absurd. If anything it is promoting choice, by providing people who would otherwise be unable to choose another OS over Wintel and Mac OS X the choice of Linux. What I am suggesting does not in any way mean that there would be a lack of an "expert" setup that provided all of the options (no matter how obscure and arcane) you would expect and demand of an open source project. In fact, it may even provide greater granularity in the setup

  • I'm glad it isn't easy. I like a challenge, even if it takes longer, even at work.

    I'm glad it isn't brainless. I'm glad it's different across distros. Then I can pick and choose what I like.

    The less each distro is like any other, the happier I am.

    This is why I enjoy Linux.
  • I was just talking with my brother about this. My take is that since a Linux install comes with just about anything you need to use a computer (networking and communication software, office software, media players, etc.), it really doesn't matter if there's tons of old software you can't install. Especially since you're probably not paying too much (if anything) for your licenses. Backwards compatibility was a lot more important when your word processor cost $500 and you don't want to buy the new version :).

    Not that
  • Apple has it right (Score:5, Interesting)

    by wowbagger ( 69688 ) * on Sunday May 04, 2003 @10:23AM (#5874188) Homepage Journal
    From what I am given to understand of the way the Mac OS 10.* handles such things, Apple got it more closely to right.

    As I see it, the following things need to happen to really make application installation be very clean under any Unix like operating system:
    1. All apps install in their own directory under /usr/[vendor name]/[app name] - the reason for including the vendor name is that when two vendors release different apps with the same name (Phoenix comes to mind) you can still disambiguate them. Also allow apps to install into ~/apps/[vendor name]/[app name] to allow for non-root installation.
    2. Under an app's directory, create the following subdirs:
      • [arch]/bin - any binaries that are OS/CPU dependent.
      • bin - shell scripts to correctly pick the right [arch]/bin file.
      • man - man pages for the app
      • html - help files in HTML, suitable for browsing
      • [arch]/lib - any shared libraries specific to the app.
      • system - desktop icons and description files, preferably in a WM-agnostic format; MIME type files; magic files (for the file command); and a description of each program in the app, giving the type(s) of application for each binary (e.g. Application/Mapping; Application/Route Planning).

    3. Shells and WMs are extended to search under /usr/*/*/bin for programs, /usr/*/*/man for man pages, etc.
    4. Programs shall look in ~/.[vendor]/[appname] for their per-user storage area, and will create this as needed.
    5. The system must provide an API for asking if a given library/application/whatever is installed.
    6. The system must provide an API for installing a missing component - this API should be able to *somehow* locate an appropriate package. The requesting app will provide a list of acceptable items (e.g. need,,
    7. This is the biggest item, so I'm really going to stress it: dependency declarations must distinguish components that are truly required from ones that are merely nice to have.

      Too damn many times I've tried to install FOO, only to be told by the packaging system "FOO needs BAR". But FOO doesn't *need* BAR, it just works "better" if BAR is present (e.g. the XFree packages from RedHat requiring kernel-drm to install, but working just fine (minus accelerated OpenGL) without it).

    Were vendors to do this, then a program install could be handled by a simple shell script - untar to /tmp, run script to install needed pre-reqs, move files to final location.

    The system could provide a means to access the HTML (a simple, stupid server bound to a local port, maybe?) so that you could browse all installed apps' help files online.

    As a final fanciness, you could have an automatic process to symlink apps into a /usr/apps/[application class] directory, so that if you wanted to find all word processing apps you could
    ls /usr/apps/WordProcessors
    and see them.
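    The non-root variant of this layout can be sketched in a few lines of shell. The vendor name "examplecorp" and app name "mapper" below are invented for illustration:

```shell
# Create the proposed per-vendor tree under ~/apps and install a
# dispatching wrapper; "examplecorp" and "mapper" are made-up names.
PREFIX="$HOME/apps/examplecorp/mapper"
ARCH=$(uname -m)
mkdir -p "$PREFIX/bin" "$PREFIX/man" "$PREFIX/html" "$PREFIX/system" \
         "$PREFIX/$ARCH/bin" "$PREFIX/$ARCH/lib"
# The generic bin/ wrapper picks the right CPU-specific binary, as
# item 2 describes:
cat > "$PREFIX/bin/mapper" <<'EOF'
#!/bin/sh
exec "$HOME/apps/examplecorp/mapper/$(uname -m)/bin/mapper" "$@"
EOF
chmod +x "$PREFIX/bin/mapper"
find "$PREFIX" -type d
```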
    • Thanks for the informative post. It looks like Apple have done a very good job.

      As regards 1, I believe the intention of the /opt directory was similar (although sub-dividing by vendor seems new).

      3 - Great. I always wondered why this hadn't been done before on popular unixes. The two reasons I came up with were a) search times; b) possible command-line security implications and search ambiguities (i.e. it doesn't do you any good having /usr/VendorX/SpiffyApp if it's possible to install /usr/VendorNew/Spiff
  • by Skapare ( 16644 ) on Sunday May 04, 2003 @10:26AM (#5874204) Homepage

    Nicholas Petreley writes: []

    The following numbers are hypothetical and do not represent the true tradeoff, but they should serve well enough to make the point. If libthingy is 5K, and your application launches a maximum of 10 instances, all of which are statically linked with libthingy, you would only save about 45K by linking to libthingy dynamically. In normal environments, that is hardly worth the risk of having your application break because some other build or package overwrites the shared version of libthingy.

    Linking libthingy statically into application foo does not preclude the sharing. Each of the instances of application foo will still share all the code of that executable. So if libthingy takes up 5K, and you launch 10 instances, that does not mean the other 9 will take up separate memory. Even statically linked, as long as the executable is in a shared linking format like ELF, which generally will be the case, each process VM will be mapped from the same file. So we're still looking at around 5K of real memory occupancy for even 1000 instances of application foo. The exact details will depend on how many pages get hit by the run-time linker when it has to make address relocations. With static linking there is less of that, anyway. Of course if libthingy has its own static buffer space that it modifies (bad programming practice in the best case, a disaster waiting to happen with multithreading) then the affected pages will be copied-on-write and no longer be shared (so don't do that when developing any library code).

    Where a shared library gives an advantage is when there are many different applications all using the same library. So the "shared" part of "shared library" means sharing between completely different executable files. Sharing between multiple instances of the same executable file is already done by the virtual memory system (less any CoW).

    The author's next point about sharing between other applications is where the size of libthingy becomes relevant. His point being that if libthingy is only 5K, you're only saving 45K by making it a shared (between different executables) library. So that's 45K more disk space used up and 45K more RAM used up when loading those 10 different applications in memory. The idea is the hassle savings trumps the disk and memory savings. The situation favors the author's position to use static linking for smaller less universal libraries even more than he realized (or at least wrote about).

    For a desktop computer, you're going to see more applications, and fewer instances of each, loaded. So here, the issue really is sharing between applications. But the point remains valid regarding small specialty libraries that get used by only a few (such as 10) applications. However, on a server computer, there may well be hundreds of instances of the same application, and perhaps very few applications. It might be a mail server running 1000 instances of the SMTP daemon trying to sift through a spam attack. Even if the SMTP code is built statically, those 1000 instances still share unmodified memory mapped from the executable file.
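    The difference being discussed is easy to see with a toy program (this assumes gcc and, for the -static build, that a static libc is installed, as it is with Debian's libc6-dev):

```shell
# Build the same trivial program both ways and compare.
echo 'int main(void) { return 0; }' > foo.c
gcc foo.c -o foo_dyn              # libc is mapped in at run time, shared
gcc -static foo.c -o foo_static   # the needed libc code is copied in
ldd foo_dyn                       # lists the shared objects it needs
ldd foo_static || true            # reports "not a dynamic executable"
size foo_dyn foo_static           # the static text segment is far larger
```

    Either way, all simultaneous instances of the one binary share its text pages through the VM system, which is the parent's point.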

    • Yes, if you start the same app twice, they will share memory. But that isn't the problem.
      If your entire GNOME desktop is statically linked to GTK+, and you launch panel, nautilus and metacity, then you're loading 3 separate copies of GTK+ into memory that don't share any memory at all!
      • The article's point was that it's easier to statically link the small obscure libraries. I don't know why a developer packaging a binary for general distribution can't statically link certain libraries.

        I'm a bit rusty at static linking, but can't they just do a gcc -s -o executable foo.o bar.o widget.o /usr/lib/libobscure.a -lgtk -ljpeg to generate the binary? (Object files go before the libraries on the link line.) Then I wouldn't have to hunt down and attempt to install libobscure--sometimes a very frustrating process.

    • Yes. I've been saying this for a while. The only time dynamic linking saves you something is when you have multiple, different programs running that share the same library. This happens more on Windows than on Linux, because everything is forced to link to Microsoft DLLs. It's much less true on UNIX, and far less true on the Mac.

      A good rule to follow is to never dynamically link to something that's substantially smaller than your program.

      Dynamic linking tends to pull in too much. If you use "cos",

  • by DuSTman31 ( 578936 ) on Sunday May 04, 2003 @10:27AM (#5874209)

    One of the greatest strengths of the UNIX platform is its diversity..

    Package installation is a simple prospect on the Windows platform for the simple reason that the platform has little diversity.

    Windows supports a very limited set of processors.. So there's one factor that windows packaging doesn't have to worry about.

    Windows doesn't generally provide separately compiled binaries for slightly different processors ("fat binaries" are used instead, wasting space), so the packaging system doesn't have to worry about that. On Linux, on the other hand, you can get separate packages for an Athlon Thunderbird version and an original Athlon version.

    On an MS system, the installers contain all the libraries the package needs that have the potential to not be on the system already. This could make the packages rather large, but ensures the user doesn't have to deal with dependencies. Personally, I'd rather deal with dependencies myself than super-size every installer that relies on a shared object..

    Furthermore, on Windows there aren't several different distributions to worry about, so the installers don't have to deal with that either.

    All of these points confer more flexibility on the Unix system, but have the inevitable consequence that package management can get to be rather a complex art. We could simplify package management a great deal, but it'd mean giving up the above advantages.

    • Linux is better than Windows because it's so complex and difficult to use?

      It's funny how hypocritical the crowd here on /. can be. One day they are making out a bug in IE that crashes the browser to be a HUGE deal. Something that just about has to be purposely coded and would take 2 seconds out of your day to relaunch the window if you did run across a site with the code.

      Two seconds was a huge amount of time in the average Linux user's day yesterday, but today, hours and hours spent installing software
  • by corvi42 ( 235814 ) on Sunday May 04, 2003 @11:47AM (#5874577) Homepage Journal
    Some very good points and ideas, but also IMHO some misguided assumptions and directions.

    1) RAM & disk space is not always cheap, or even readily available. There are many legacy systems where users would benefit from these advantages but the users are unable or unwilling to upgrade the system. What happens to old 486 and 586 systems where the motherboard doesn't support drives larger than X - there are workarounds, but the people who need easier install processes aren't going to tackle the complex system configuration issues to implement these. What happens when you can no longer obtain RAM in your community for your old machine, or it no longer has spare slots, etc. What happens if you have a second-hand computer and simply don't have the available $$ to spend on upgrades, no matter how cheap they are. I don't like the idea of designing an easier-to-use system that excludes such people, no matter how small a portion of the market they may be. Hence redundant copies of libraries and statically linked libraries are a very inelegant solution for these people.

    2) We mustn't impose requirements on application developers to use a given installer library, or code their apps to conform with particular standards that the installer requires - it is again unfeasible and undesirable in many circumstances. Developers have more than enough to worry about as it is without having to reimplement the way their app behaves to be installer-friendly. The installer must exist at a level independent of the way the application has been coded, to a reasonable degree. I think that much of the problem that exists currently is that too much of the "packager" work of making apps compatible with a hundred and one different unices has been getting dumped on developers, and this both reduces their time for actual development and means that we have a hodge-podge of apps that are compatible to an unpredictable degree, because essentially developers don't want to be burdened with this.

    3) Diversity is the spice of life, and it is the spice of unix. The community of unices is robust because it has adapted systems which are generally stable and reliable across a vast array of hardware and software. We want to capitalize on this tradition and expand and enhance it, not force anyone to use a particular layout for their apps & installations. This being said, I find the idea of local copies of libraries in the application directory unappealing, because it forces one to have a local directory (rather than using /usr/bin, /usr/lib, etc.). Also the idea of having configuration files that resolve dependencies forces the application to use such configurations, which is also undesirable.

    5) Aside from all these criticisms, there are many things I do agree with. Particularly that dependencies should be file-specific, not package-specific, and that an integration of installer & linker is key to the organization of such a tool. I also agree that the installer should make use of auto-generated scripts wherever possible, and should provide detailed, useful messages to the end user that will help them to either resolve the conflicts in as friendly a way as possible, or to report the conflicts to their distribution. Also the installer should have advanced modes that allow for applications to be installed in accordance with a user- or administrator-preferred file system layout. That is, one shouldn't be forced to install into /opt or into /usr/bin or /usr/local if you don't want it there.

    Given all this, is there any possible way to solve all of this in one consistent system? I think so - but it may require something that many will immediately retch over. A registry. That's right, I used the foul windoze word registry. I propose a per-file database for libraries & applications that would record where given versions of given libraries are installed, under what names, in what directories, of what versions, providing what
  • Zero Install (Score:3, Interesting)

    by tal197 ( 144614 ) on Sunday May 04, 2003 @12:39PM (#5874825) Homepage Journal
    Looks like Zero Install [] would solve those problems. (technically, it doesn't make installation easier, since it does away with installation altogether, but I don't think anyone will mind ;-)

    For those who haven't tried it:

    "The Zero Install system removes the need to install software or libraries by running all programs from a network filesystem. The filesystem in question is the Internet as a whole, with an aggressive caching system to make it as fast as (or faster than) traditional systems such as Debian's APT repository, and to allow for offline use. It doesn't require any central authority to maintain it, and allows users to run software without needing the root password."

  • I posted this elsewhere, but it is worth posting again. There are at least 6 reasons why shared libraries are still better than every app having its own library:
    1. Bandwidth. No-one wants to have to take 2-4x as long to download programs.

    2. Hard-drive space. Even if we all had 40GB hard-drives, no-one wants to waste it reproducing the same information a hundred times. People buy hard-drives to store data, not twenty copies of the same library.

    3. RAM. Loading two copies of the same library wastes RAM.

    4. Load time. Having to load all of the libraries will increase load time compared to cases where some were already loaded (by other apps) and you don't have to load them.

    5. Consistency. Part of the benefit of having shared libraries is shared behavior. Destroyed if app X uses Qt 2.0 and app Y uses Qt 3.0.

    6. The Big 3 S's: Security, Stability, and Speed. Who knows what insecure, unstable, and poorly performing version of a library each app comes with. And who knows what crappy options it was compiled with. Resolving these issues at one central point can be counted out. If you want to deal with any of these issues, you'd have to do it for every application's version of a library. That means doing it many times separately.
    The solution to dependency hell is to design better dependency management. Reverse-dependency management -- so as to remove useless libraries when no longer needed and avoid bloat -- would also be good. Gentoo is doing pretty well in these categories.

    On making install processes simple: I think that a graphical installation does not necessarily make things any easier. Anyone here played Descent 2? That installed with a good old-fashioned DOS installation. And it was not particularly hard to install, even though it was not a GUI install.

    It is also not necessarily a good idea to abstract into oblivion the technical details behind an install. Part of the philosophy behind Gentoo, for example, is to take newbies and turn them into advanced users. I think that a clear well thought-out install guide is a useful thing. Gentoo's install guide is thorough and has virtually no noise. Compare that to the install-guides for Debian, which are affirmative nightmares, filled with irrelevant stuff. Furthermore, a helpful and friendly user-community is always a good way to help new users orient themselves. New users are going to ask questions on forums that advanced users find obvious. That should not be an invitation to say, "RTFM bitch" at the top of your lungs. All of us were newbies at one point, and just because we may have had to learn things the hard way doesn't mean that others should too.
  • DLL Hell On Windoze (Score:4, Informative)

    by rossz ( 67331 ) on Sunday May 04, 2003 @02:58PM (#5875613) Homepage Journal
    My specialty is software installation. I've written dozens of installers on a multitude of platforms. On the windoze platform, DLL hell happens for two reasons:

    1. No backwards compatibility. All too often, new versions are released that break older programs. Even Microsoft has done this with major DLLs.

    2. Stupid installer writers. You're supposed to check the version number of a file before overwriting it. All too often the file is overwritten without regard to the version numbers.

    So to overcome these two problems, the smart installer coder would put all the DLLs in a private directory of the application (not in system/system32).

    Of course, Microsoft came up with a new system that broke this simple fix. Registered libraries. Instead of using a specified path to get a DLL, you would ask the system to load the DLL (using registry information). The path was no longer considered. One, and only one, version of the DLL was allowed on the system, and there was no feasible way to get around this limitation. Someone came up with a fix. It would have been a major pain to implement and would require cooperation amongst the DLL coders, which isn't about to happen since the lack of cooperation was one of the core problems in the first place.

    For a commercial-level installer, missing libraries were absolutely unacceptable. My personal rule was to ALWAYS include dependencies in the installer package. This meant the installer was bigger and more complicated, but it guaranteed the application could be successfully installed without the user having to run off to find a missing library. Or did it? No - Microsoft decided that some libraries could not be independently distributed. The only legal means of getting the library was through the official Microsoft installer. And no surprise here, half the time the only official installer for a library was the latest version of Internet Explorer.

    Requiring an upgrade to IE is a major problem for large companies. They standardize on specific software and don't allow the users to change it. Requiring a site-wide upgrade of something like IE (or the MDAC package) was not to be taken lightly. Especially when it was discovered that the required upgrade would break other applications (back to DLL hell).

    FYI, when a major customer pays your mid-sized company a couple of million dollars a year in license fees, they can definitely tell you they won't upgrade IE. It's our job to come up with a workaround. Too bad a measly few million paid to Microsoft wasn't enough to get them to change their ridiculous library policies.
  • by FullCircle ( 643323 ) on Sunday May 04, 2003 @04:08PM (#5876164)
    My issue with Linux is that every time a new version of a library comes out, it breaks all prior apps. (usually)

    The response is that compatibility slows progress by locking down the api. This is so short sighted that it is not even funny.

    If programmers thought about how their libraries would be used, it would be simple to add another call in a newer version. Instead they make short-sighted decisions and ruin the use of a shared library.

    IMHO any newer version of a library should work better than the previous version and be a 100% replacement.

    This would fix a huge chunk of DLL hell and installer issues.

  • Device Detection! (Score:3, Interesting)

    by cmacb ( 547347 ) on Sunday May 04, 2003 @05:07PM (#5876641) Homepage Journal
    While I think the debate over static vs dynamic libraries, DLL Hell, and registry vs central vs distributed storage of program parameters and settings is all worthwhile, he didn't cover what I think is *the* most important issue in the Linux installation process, and that is device detection.

    MOST of the problems I've had with installing Windows, Linux, or OS X involve the fact that when I am all done, not all the components of my machine work the way I expected them to. I end up with no sound, or bad sound, or video that isn't right, or a mouse that doesn't work, or in the really bad cases, disk drives that work well enough to boot the system but then fail after I'm in the middle of something important.

    Once I get past the initial installation I feel I am home free. If the devices all work the way they are supposed to, then I can avoid most other problems by just sticking with the distro that I started with. If it was Debian Stable I stay with that, and if I need to install something that isn't part of that system I install it as a user (a new version of Mozilla, Evolution, Real*, or Java, for example).

    It would definitely be nice if developers who used shared libraries didn't seem to live in a fantasy land where they are the only users of those libraries. But I *don't* think that this is Linux's biggest problem with acceptance. What Linux needs is an agreement by all the distros to use something like the Knoppix device detection process... and then to cooperatively improve on it. A run-from-CD version of every distro would be great. Why blow away whatever you are running now just to find out if another version of Linux might suit you better?

    I'd like a system that does a pre-install phase where every component of my system can be detected and tested before I commit to doing the install. The results of that could be saved somewhere so that when I commit to the install I don't have to answer any questions a second time (and possibly get it wrong).

    There is nothing that can guarantee that what appears to be a good install won't go bad a week later, but I personally haven't had this happen. I usually know I have a bad install within a few minutes of booting up the first time, and by then, it's too late to easily go back to the system that was "good enough".

Don't sweat it -- it's only ones and zeros. -- P. Skelly