Zero Install: The Future of Linux on the Desktop?

SiegeX writes "Zero Install, which is part of the ROX desktop environment, is not just a new packaging system; it's a whole new way of thinking, one that I believe is exactly what Linux needs to become a serious contender for Joe User's desktop. Zero Install uses a caching NFS filesystem to both run *and* install apps. Each app is self-contained in its own directory: binaries, docs, source code and all. Once an app has been downloaded, it's kept in a cache from that point on to minimize delay. The beauty becomes apparent when Zero Install is combined with ROX, which runs an application when you simply click on the directory it was installed to. Deleting an application, along with all its miscellaneous files, is as simple as removing the directory it's contained in. Partitioning applications into their own directories also makes installing multiple versions of any application trivial. This is something even the greatest of technophobes could understand and use with ease."
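
To make this concrete, here is a minimal sketch of a session (an illustration based on the example layout from the project site, not a transcript; AppRun is the standard entry point of a ROX application directory):

    $ ls /uri/0install/www.gimp.org
    gimp1.2  gimp1.3
    $ /uri/0install/www.gimp.org/gimp1.2/AppRun   # fetched and cached on first access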
  • Re:waste? (Score:5, Informative)

    by JaredOfEuropa ( 526365 ) on Saturday April 03, 2004 @01:37PM (#8756422) Journal
    To me, it would be better to have the package load onto the HDD, and if there are any missing libraries, have it go and fetch them as well.
    That's exactly what is happening: the software is cached. From their website: "I've only got dial-up; can I still use Zero Install? Yes! Run each program you want while on-line and it will be cached. When you're off-line, the cached copy is used automatically."
  • by Ummagumma ( 137757 ) on Saturday April 03, 2004 @01:44PM (#8756468) Journal
    No, actually, not at all. Most programs on your Windows PC that are 'installed' into Program Files still have obscure registry entries, and may require DLLs and such in the \winnt and \winnt\system32 directories. You can't just remove a program from the 'Program Files' folder and have it gone.
  • Yes (Score:5, Informative)

    by mrsev ( 664367 ) <mrsev&spymac,com> on Saturday April 03, 2004 @01:44PM (#8756471)
    This sounds great. I'm no Linux guru, and the hardest thing I find is installing a programme that requires other files, where one version is required for one app and another version for a different app. In this age disk space is trivial, and stability and ease of use are much more important. Granted, many people like tinkering with their systems, but I just want to get my work done... (and then play games).
  • by Koyaanisqatsi ( 581196 ) on Saturday April 03, 2004 @01:46PM (#8756482)
    Flame as you want, but .Net assemblies not published to the GAC (Global Assembly Cache) are exactly like that: all of the application files are kept under a single directory, and all you need to set up the app is an "xcopy" of its files.

    Delete the directory and the app is gone.

    This is here now, and although .Net has yet to catch on for the desktop, it is very much real on the server side. Gotta love it!
  • by tepples ( 727027 ) * <tepplesNO@SPAMgmail.com> on Saturday April 03, 2004 @01:47PM (#8756494) Homepage Journal

    It's still very much like this in Windows, in fact, with the "Program Files" directory often containing everything (although "Documents and Settings" is becoming more used for user settings storage). Personally I like the idea. I've always been confused trying to locate various files which belong to a single application in *nix.

    Most *n?x apps seem to store all the per-user settings in a dot-file or dot-folder in the user's home directory. In Windows, they're often strewn about in at least three places: C:/Documents and Settings/Me/Application Data/, C:/Documents and Settings/Me/Local Settings/Application Data/, and HKEY_CURRENT_USER in the registry. In addition, a lot of the apps I have installed on my Windows 2000 machine came bundled with peripherals, where the app and a device driver came as part of the same install, the app in C:/Program Files/ and pieces of the driver in various folders in C:/Windows/.

    How does Rox handle it?

  • Re:waste? (Score:2, Informative)

    by RdsArts ( 667685 ) on Saturday April 03, 2004 @01:48PM (#8756506) Homepage Journal
    To me, it would be better to have the package load onto the HDD, and if there are any missing libraries, have it go and fetch them as well.


    That's what it does. That is what they mean when they say 'cache'.

    What happens is the software is cached first. Then, all dependencies are listed as 0install directories, so if you already have one it'll use the cached version, but if not it'll pull a copy and cache that. This is nice because if, say, you don't use the help files, it won't pull them until you ask for them, so you don't have them cluttering up your HDD or being downloaded. You just point the ROX-Filer at the 0install AppDir, click on it, and without you even needing to think about it everything is installed and the program runs.

    It also has a way to quickly update all apps, or if you just want one upgraded, open it in the Filer and hit 'reload.' Simple, easy, and intuitive. In general, I'd say this is even easier than the Windows Update and installers combo.
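
    For the command-line inclined, the same upgrade flow looks roughly like this (a sketch; the 0refresh command comes from the project's own example, quoted later in this discussion):

    $ cd /uri/0install/www.gimp.org
    $ 0refresh
    $ ls
    gimp1.2  gimp1.3  gimp2.0   # a new release simply appears as a new directory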
  • by Anonymous Coward on Saturday April 03, 2004 @02:02PM (#8756589)
    Close. I believe these guys actually export an NFS directory over the network and the system simply mounts it as the root file system. Apple does this also with NetBoot.
  • by Anonymous Coward on Saturday April 03, 2004 @02:07PM (#8756629)
    As one other commenter pointed out, this is the way things have been done on the glorious Disk Operating System since DOS 2.0, some 22 years ago!

    The headline for this story should be "Linux developer gets a clue".
  • by bfree ( 113420 ) on Saturday April 03, 2004 @02:08PM (#8756632)
    In the last few months klik [knoppix.net] came into being. klik is a point and click software store for Knoppix which uses AppDir (quoting from the architecture description):
    Mainly a philosophy about making each app package "self contained" (at least relative to some defined base system, Knoppix in our case).
    If you have a recent (say from last November or so) version of Knoppix fire it up and give it a go! You can even install software while running from the liveCD and retain it in a persistent home.
  • by pigpogm ( 70382 ) <michael@pigpog.com> on Saturday April 03, 2004 @02:08PM (#8756638) Homepage
    This sounds to me as though it has some similarities to the way the old Acorn Archimedes used to work (What? Oh, it was quite big over here in the UK ;)

    An 'application' looked like a single file that started with a '!'. It ran as though it was one file, copied and moved as though it was one file. If you used a modifier to open it (Ctrl-click, or something similar), though, it actually opened up as a folder. The app was really made of a number of files - the icon that the application/folder would have, the actual programs, any config files, a script that was run when the program was launched, and another script that would be run as soon as the OS 'saw' the app.

    Part of the config would tell the OS what file types the app could handle, so as long as the app had been 'seen' (i.e., its parent folder had been opened), the file types would be recognised until the next reboot.
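
    A ROX application directory follows much the same convention. A minimal sketch of the layout (file roles as described in the ROX documentation; MyApp is a hypothetical application):

    MyApp/
        AppRun        # executable script run when the directory is clicked
        AppInfo.xml   # metadata the filer reads about the application
        .DirIcon      # icon displayed for the directory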
  • This is so 1990s... (Score:1, Informative)

    by Anonymous Coward on Saturday April 03, 2004 @02:12PM (#8756662)
    We've seen this all before, back in 1992 when NeXT first started using "bundles" for application resources and treating them as executable files. It's good to see that the world is finally catching up, even if 10 years later.
  • As many pointed out (Score:1, Informative)

    by TechniMyoko ( 670009 ) on Saturday April 03, 2004 @02:12PM (#8756665) Homepage
    Macs have been doing this for years. And Windows programs can do so if they want to
  • by ananke ( 8417 ) on Saturday April 03, 2004 @02:19PM (#8756714)
    Insightful my ass. ./configure --prefix=~/whatever
  • by Anonymous Coward on Saturday April 03, 2004 @02:20PM (#8756720)
    To install to an alternate directory, all you have to do is pass --prefix=/home/foo as a switch to configure:
    ./configure --prefix=/home/foo
    make
    make install
  • by SuperBanana ( 662181 ) on Saturday April 03, 2004 @02:33PM (#8756789)
    Apple has had this for years going back to the old System 9, 8, 7, etc

    Actually, it hasn't. Ask any Mac pro; applications started making "library" files that went into the System folder (or worse, programs like Norton Utilities insisted on putting libraries into the Extensions folder, which was not what Apple told developers it was for). Apple caved in, and 9.x started sprouting "Application Support" folders, a "Libraries" folder, etc. Developers just couldn't wrap their brains around the single-file, applications-don't-mess-with-the-system-folder model. Oftentimes, commercial programs would blatantly disregard Apple's filesystem guidelines. Extensions often had such weird names that Casady & Greene developed an extension manager with a database of all the known files, so you could figure out what the hell stuff was.

    While you tout OS X as better than Linux or Windows, as an experienced long-time Mac user I saw OS X as a step down from the old MacOS with regards to filesystem simplicity. Applications now install stuff into zillions of different places. Virtually none of their installers ask if you want to install just for your user (i.e. using your Library, Application Support, etc. folders) or install system-wide (a few, VERY few, do). Application installers that have no business needing my password ask for it; why does Acrobat Reader need sudo to install itself into Applications? Answer: it doesn't, but it's probably saving some prefs file somewhere it shouldn't.

    Even worse... you can install packages using a "package system", but Apple will be damned if they'll give you a way to UNINSTALL a package, system or otherwise. Want to remove all the localization crap you forgot to turn off during system install? You have to download a third-party app to remove almost a gigabyte of files from your system, instead of just going into a "Software" panel and clicking remove. Windows has had this for years, with its only flaw being that it calls the developer's uninstall program, which often doesn't work, especially if you've deleted the app folder but nothing else.

    Another side effect of the multiple-files problem is added complexity; the number of files in the filesystem has ballooned enormously, because instead of an application being one big file with a resource fork, it's now at least 3 folders, and often hundreds (or even thousands) of files. Moving an application used to be easy: you moved one big file, and the Finder just did a straight copy very efficiently. Now it has to copy hundreds of small files, so it takes forever (and amusingly, copying just a bunch of raw non-app files takes about 5 times longer in the Finder than it does via cp or ditto).

    Don't get too uppity about not having a registry. OS X uses a number of preference files, and even though they've changed to XML and the like, users are seeing the same problems as in OS 9: corrupt preference files causing odd behavior. Remove the naughty pref file, and things start working again. There are now third-party utils that specialize in checking these prefs; if they can do it, why can't it be part of the bootup process?

    Oh, and lastly: Apple has made it even more difficult to make a boot disk for your Mac to do disk maintenance. It used to be that you just copied over your system folder and removed all the extensions, control panels, prefs, etc. you knew you didn't need. Now? You need some stupid shareware program to do it, and half of 'em still haven't been updated for 10.3.

  • Re:waste? (Score:1, Informative)

    by matithyahu ( 560061 ) on Saturday April 03, 2004 @02:35PM (#8756804)
    Well, yeah, and the guy who wrote it admits it too. He says, however, that he started working on Zero Install before he had heard of Java Web Start.
  • by tal197 ( 144614 ) on Saturday April 03, 2004 @02:36PM (#8756807) Homepage Journal
    I'm the author of Zero Install (and much of ROX) so I'd better clear up a few points here.

    The main one is that there are actually two installation systems being discussed in the article:

    1. ROX uses application directories (bundles). That means that instead of downloading gimp.tgz and then copying the files inside it all over the place (/usr/bin, /usr/share, etc), they stay in a single directory and you access them from there. That allows drag-and-drop installing, and uninstalling by deleting the directory.
    2. Zero Install is a caching network filesystem, where all software is available at a fixed, globally unique, location (like web pages).

    ROX application directories can be made available via Zero Install. In that case, running the application is a lot like running a program from a network share (but more aggressively cached). Or, you can DnD them onto your local disk manually (without Zero Install).

    You can also use Zero Install for non-ROX type applications.

    Secondly, when we say that application directories are self-contained, we mean that a single .tgz download corresponds to a single installed directory. Application directories can (and do) still depend on shared libraries (possibly other application directories).

    Without Zero Install, after installing an application by drag-and-drop, running it may tell you that you need to install some other library before it will work.

    With Zero Install, the application just tries to access it from its fixed location (URI) and it gets fetched.
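
    To illustrate that last point with a sketch (my own, not project code; the paths and names are hypothetical), an application can simply refer to a dependency at its fixed URI, and accessing that path is what triggers the fetch:

    #!/bin/sh
    # AppRun: run the bundled binary against a library at its fixed /uri/0install location
    export LD_LIBRARY_PATH=/uri/0install/example.org/libfoo/lib
    exec "$(dirname "$0")/bin/myapp" "$@"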

  • by expro ( 597113 ) on Saturday April 03, 2004 @02:41PM (#8756840)

    I install Mozilla on OSX. It says, rather than using an install program, I should just drag the icon into a directory, such as the applications directory, making it writable by any unprivileged program the user may execute.

    This is not unlike what I have seen for other Mac programs as well.

    When I try to take the initiative and protect these directories, the programs often stop working, because the program writes these directories when run by the user.

    I am told by admins that program directories on OS X should generally be made user-writable.

    When I do a default install of Eclipse on OS X, it places the user workspace underneath the Eclipse program directory, which means the program directory cannot be write-protected against viruses. When I run it on Linux, the workspace is kept separate from the program directory, under my home directory, and the Eclipse stuff installs nicely into a protected area.

  • by tal197 ( 144614 ) on Saturday April 03, 2004 @02:46PM (#8756880) Homepage Journal
    Talk about encouraging waste.... The article states:

    Each app is self-contained in its own directory: binaries, docs, source code and all. * * * Partitioning applications into their own directories also makes installing multiple versions of any application trivial.

    What happened to the idea that we wanted programmers and users to share libraries and code? To solve rather than avoid dependency problems?

    Applications are self-contained in that everything from a single package is in a single directory, rather than being spread over /usr/bin, /usr/share, etc. They can still depend on other packages.

    Without Zero Install, this means that although installing an individual package is quite easy, you may then have to install dependent packages in a similar fashion. With Zero Install, you get automatic dependency resolution and freedom from install scripts.

  • Re:This is why... (Score:3, Informative)

    by gobbo ( 567674 ) on Saturday April 03, 2004 @02:55PM (#8756934) Journal
    Now I find more handwaving and mistruths, and wonder if 10 does the same messy things...

    System 7 on up to 9.2 wound up with all kinds of scat dribbled throughout the various subdirectories in the system folder, as well as associated files in the application folder -- but only some applications were to blame -- many classic apps are self-enclosed and install with a drag'n'drop. OS X improves the ratio somewhat with its bundles; an even greater proportion of apps are install-friendly. Some developers still love to scatter files everywhere (are you listening, Adobe?) -- though since we're generally talking about /Library, it isn't as big an issue as the conflicts arising out of the Extensions folder in Classic.

    It's all relative, though. Give me the 100,000+ file complexity of OS X with its well-designed directory structure over a simpler but kludgy Mac OS 9 or Win9x any day.

  • by tal197 ( 144614 ) on Saturday April 03, 2004 @02:56PM (#8756943) Homepage Journal
    Are all you guys seriously claiming that first
    a) finding right foo for your system,
    b) downloading foo,
    c) knowing where it went,
    d) knowing where to drag it,
    e) actually dragging it

    is simpler than cryptic
    a) "install foo"?
    [ b) enter root password and hope it doesn't mess up your system ]

    a) You don't have to find the right version for your system, just run the application. It will try to access the appropriate binary and that will get cached.
    b) Downloading happens automatically, just like viewing a web page through a web cache.
    c) It goes in the cache directory, whose structure mirrors the URI scheme (e.g. /var/cache/zero-inst/gimp.org/bin/gimp). You shouldn't care, though, any more than you care about the structure of squid's web cache.
    d) Drag it where you want it. On your desktop, panel, start menu, etc.
    e) You could just run it where it is, without creating a short-cut at all.

  • by w3weasel ( 656289 ) on Saturday April 03, 2004 @02:58PM (#8756956) Homepage
    Apple has already implemented this architecture... albeit without the NFS... but since FTP servers mount as any remote disk would (in 10.3), the NFS part is pretty much there too.
    All the various bits and pieces of a well-formed Mac app go into a 'package' (not RPM). See it here [apple.com]. A double-click on that folder launches the app, deleting the folder removes the app, and updates need only replace affected files within the app.

    When developers take the time to use this structure, it works really really well. Unfortunately, Apple is not in a position to tell developers "if you want to write software for Mac, you must do it like this". So they gave developers the option of being messy.

  • by tal197 ( 144614 ) on Saturday April 03, 2004 @03:02PM (#8756980) Homepage Journal
    How does Rox handle it? [ configuration files]

    Usually, ~/Choices/ROX-Filer/Options.xml, etc. Choices cascade (/usr/local/share/Choices, /usr/share/Choices, ~/Choices). You can change the location with CHOICESPATH.

    In future, we'll probably move over to the (very similar) freedesktop.org base dir system, which defaults to ~/.config instead but is otherwise pretty much the same.

  • by Anonymous Coward on Saturday April 03, 2004 @03:09PM (#8757029)
    > I install Mozilla on OSX. It says, rather than using an install program,
    > I should just drag the icon into a directory, such as the applications
    > directory, making it writable by any unprivileged program the user may
    > execute.

    Great, the power user can throw away unneeded language files. Other users have read-only access.

    > When I try to take the initiative and protect these directories, the
    > programs often stop working, because the program writes these directories
    > when run by the user

    What?! Mozilla.app must be badly written / packaged. In no way should the application dir change during execution. Preferences or caches may not be stored in the application directory; they must be stored in user space, for example ~/Library/Preferences.
    --
    Dennis SCP
  • by tal197 ( 144614 ) on Saturday April 03, 2004 @03:28PM (#8757141) Homepage Journal
    Everybody here is oohing and aahing but let me go on the record as saying that this is actually a BadIdea (TM). Why?

    Sorry to spoil your arguments, but each of your starting assumptions is wrong ;-)

    • It does use shared libraries, so just upgrading libxml will do, same as usual.
    • User data isn't kept inside the applications. The whole /uri/0install filesystem is read-only anyway.
    • Configuration isn't inside either.

    Just a bad idea taken to its extreme.

    If you could point me to the places on the site where you got your information, I'll try to fix them / make it clearer. Thanks.

  • by Sax Maniac ( 88550 ) on Saturday April 03, 2004 @03:38PM (#8757190) Homepage Journal
    Probably not, but I recall reading that modern virtual memory systems are so good, they reduce the actual benefits of dynamic libraries down to almost nil.

    I think future versions of Windows will know how to scan the disk periodically, find redundant files, and essentially link them together automatically. That's pretty cool - you deliver your app with FOO.DLL version whatever and drop it in your app's directory. If someone else installs a FOO.DLL in their app's directory that matches the exact same bits, the system coalesces them together. Then you upgrade application #1, which installs a modified version of FOO.DLL, and the OS unshares them.

    Yeah, it's copy-on-write for files, but the new bit is the auto-coalescing.

    To me that sounds like all the benefit of shared libraries with none of the DLL hell. It will be interesting to see if it actually works.
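
    A toy illustration of the coalescing half (plain hard links, not the actual Windows mechanism): two bit-identical files can share one inode, and replacing one copy with a new version (a new file, hence a new inode) naturally unshares them:

    # link the two copies together if, and only if, their contents match exactly
    cmp -s app1/FOO.DLL app2/FOO.DLL && ln -f app1/FOO.DLL app2/FOO.DLL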

  • Re:Screw drag & drop (Score:4, Informative)

    by be-fan ( 61476 ) on Saturday April 03, 2004 @03:41PM (#8757207)
    Most people don't want to learn a new packaging system.
    With APT, they don't have to. They just have to be able to double-click on the program they want. I've seen lots of Windows users who love the concept and wish Windows had it.

    The problem with Apt is it relies on someone else saying 'oh, that's great, I'll make some debs.'
    Isn't that true for installers as well? Somebody has to make those too.

    And it only works for someone with Debian or a Debian-compatible system like Lindows, Xandros, Knoppix, et al.
    Ximian has a nifty tool called Build Buddy [ximian.com] that automates the process of generating packages compatible with not just RPM and DEB systems, but Solaris and HP-UX package managers too.

    Additionally, to install a deb you need to be root.
    This needs improvement, yes.
  • by tal197 ( 144614 ) on Saturday April 03, 2004 @03:55PM (#8757291) Homepage Journal
    But one thing I wonder about Zero Install: what if you launch an application, it needs a piece that you don't have cached, and the server hosting it is down? Is it possible for a maintainer to unpublish an application?

    Zero Install can download from mirrors, peer-to-peer, etc, provided it gets the master index with the GPG signature from the main server.

    If you want to get the master index from a backup server, you need manual intervention (root needs to indicate that the backup server can be trusted).

    However, since the signature part is small (about 1K), a single trusted backup site (debian.org?) could easily host every index in the world. The rest of the data can come through peer-to-peer, etc.

  • Re:Screw drag & drop (Score:2, Informative)

    by Xoro ( 201854 ) on Saturday April 03, 2004 @03:57PM (#8757303)

    Be-fan -- your deep unwavering belief is a sure sign you are missing something. There's no "drag and drop" involved -- zero install means zero install.

    It's based on a sort of virtualized internet filesystem. Here is their example:

    $ cd /uri/0install/www.gimp.org
    $ ls
    gimp1.2
    gimp1.3
    $ 0refresh
    $ ls
    gimp1.2
    gimp1.3
    gimp2.0

    You never install, you just run, either by command line or some form of gui contraption. Of course, this system places a lot of faith in the upstream developers, but their website paints a far more interesting picture than the summary given here. Here [sourceforge.net], for instance, is a decent comparison with apt. Check it out.

  • by tal197 ( 144614 ) on Saturday April 03, 2004 @03:59PM (#8757317) Homepage Journal
    If just running an application can send it stalking over the Internet downloading and installing libraries, what happens to security? Or is this automatic install only for objects on the LAN repository?

    System security? Nothing. All code runs as you. As for your own security, it doesn't allow any attack that couldn't have been done without Zero Install too.

    Reducing the security risk from traditional installation systems (APT, RPM, etc where you're running a downloaded install script as root) was an important goal for Zero Install.

    See The Zero Install system [sourceforge.net]

  • by Anonymous Coward on Saturday April 03, 2004 @04:00PM (#8757323)
    Which is the point of the XDG Base Directory Specification [freedesktop.org]. I wish more people liked this idea (I certainly do). It's not a long spec, either. From the spec:
    $XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.
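
    Following the spec from a shell script takes one line (myapp is a hypothetical application name):

    # fall back to ~/.config when XDG_CONFIG_HOME is unset or empty
    config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
    mkdir -p "$config_dir"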
  • by Monx ( 742514 ) <MonxSlash AT exp ... bilities DOT com> on Saturday April 03, 2004 @04:04PM (#8757340) Journal
    Why doesn't the Finder give you the full path to the item you have selected, so you can copy it and paste it elsewhere?

    Have you tried dragging the icon to where you want the pathname pasted? This works for the terminal and many html editors. It probably won't help with a word processor though, since it might try to embed the file.

    Personally, I think the new Finder blows compared to even the old System 6 version. I want a real Finder replacement that brings back all of the keyboard navigability (and key bindings) of the old one. I don't think it would be for everyone, but it would be great for people like me who keep hitting cmd-n only to close the window and hit shift-cmd-n. I want my old Finder Secrets back.
  • Re:static or what? (Score:5, Informative)

    by Afrosheen ( 42464 ) on Saturday April 03, 2004 @04:06PM (#8757349)
    Here's the difference in a nutshell.

    Statically linked binaries include all the libs and dependencies along with the binaries used to actually 'run' the program in one fat package. Depending on what it is you're packaging, it can add a shitload of weight to the package.

    Dynamically linked binaries expect your system to contain dependencies already. They have the benefit of giving you a small, tight package, but don't always work right away; i.e., you, the user, have to hunt down the packages they need, or apt or rpm has to handle that for you.

    It's a trade-off either route you choose. Statically linked binaries add bloat but usually work great without user or system intervention. Dynamically linked binaries are smaller and bloat-free but depend on you, your package manager or something else to make sure it works.

    The typical stance of developers has been to build good packages that are small and dynamically linked. After all, what's the point of having 20 copies of a common system library that you may have had since your OS install? That's just bloat. Ultimately, the best developers, in my opinion, give you the choice when you go to download. Click here for static, here for dynamic.
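
    The difference is visible at build time. A quick sketch (libfoo is a hypothetical library; the flags are standard gcc and ldd usage):

    $ gcc -o app-dynamic main.c -lfoo          # small binary; needs libfoo.so at run time
    $ gcc -static -o app-static main.c -lfoo   # large binary; no runtime library needed
    $ ldd app-dynamic                          # lists the shared libraries the binary expects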
  • by Anonymous Coward on Saturday April 03, 2004 @04:08PM (#8757354)
    I have never used the option, but dpkg has a --instdir option that appears to do this too.
  • You can upgrade to new versions with bugfixes and security fixes without recompiling every single program on your system.
  • by kundor ( 757951 ) <kundor@mem[ ].fsf.org ['ber' in gap]> on Saturday April 03, 2004 @04:29PM (#8757449) Homepage
    ln -s /opt/*/bin/* /usr/bin/
  • by cgenman ( 325138 ) on Saturday April 03, 2004 @04:48PM (#8757598) Homepage
    whereas x86 PC hardware was produced by loads of companies and hence the competition drove the price down.

    Except that, as the above poster pointed out, many Apples are cheaper. iBooks most famously, but their integrated line is also cheaper if you count the cost of the monitor. The high end of their line is very affordable... Compare the cost of a dual 1.8 G5 to a dual 2.8 Xeon, and recognize that the dual G5 is faster. Plus multiprocessing on the Mac has far fewer of the hitches of multiprocessing on the PC.

    Not to undercut my own argument, but in the (long) past Apple did make a few supplier decisions that didn't pan out. Specifically, Apple's use of SCSI drives became a liability. It was expected that competition would eventually bring SCSI prices down to parity with ATA drives, but it never happened, even though SCSI was significantly faster for about the same cost to produce. Still, hot-swapping external SCSI drives was one of the best reasons to own a Mac (and one of the easiest ways to transfer large amounts of data); you just paid for the privilege. Now there is USB / FireWire to do that job affordably, and Apple has transitioned to ATA. Except for the PowerPC processor and resultant motherboard, which I've always respected above the hot-running Pentium line, Apples these days pretty much use standard parts from the x86 universe.

    You'll notice that the x86 you've had since before Win95 came out has had its graphics card, sound card, power supply, hard drives, motherboard, CPU, network card, and probably its case replaced multiple times. I'm sorry, at that point it is not the same PC; you have just bought a new one in parts. You could probably rummage through your parts bin and reassemble your original 486, if you were so inclined. You can do the same thing (or, at least, similar things) with the Mac with judicious use of online retailers. In other words, the implication that an x86 is a better investment because it has stayed around for the past 10 years is a false one, as nothing of the original machine remains.

    Most people buy computers as a whole, and not as individual upgrades. Personally, if I get another motherboard incompatibility I'm going to track down and strangle the engineer who decided it should only work with Kingston(tm) RAM. While I do buy pretty cheap for my non-main system, my new-motherboard DOA rate is hovering around 50%. I personally recommend to my family that if they want a machine, either A: I should build it for them, with the flight out to California that would entail, or B: they should buy new. Building a system, while not requiring as many steps as it used to, is still a lot more effort than an average non-technophile would want to put into it, in the same way that non-gearheads wouldn't want to build their car.

    And if you want to have only one computer, run Windows under a virtualization layer on the Mac. Unlike the monolithic, hot-running Pentium line, the PowerPC is adept at simulating other computing environments. While you're at it, add Linux for network administration duties. Apple for every day, Linux for networking, and Windows for legacy dependencies. Sounds just about right.

  • Knoppix (Score:2, Informative)

    by devnul73 ( 749914 ) on Saturday April 03, 2004 @05:11PM (#8757754)
    Linux already has something like this (minus the cache on demand) in the form of Knoppix [knopper.net] + Klik [berlios.de]
    All the files are stored in a single directory under your home directory which can be on a USB drive or anything. It also works with a hard-drive install of Knoppix.
  • Re:static or what? (Score:1, Informative)

    by Anonymous Coward on Saturday April 03, 2004 @05:14PM (#8757770)
    A computer program is written as a text file.

    This text file is then compiled down to an executable.

    At the end of the compile, all the parts and pieces from all the libraries on the system that the program uses are linked to the executable in one of two ways. This is called the link phase.

    If the libraries are dynamically linked, the executable is small and only has references to the other files, so that it can finish linking as it starts to run.

    If the libraries are statically linked, the executable is large, as all the parts and pieces that the program needs are put into the executable all at once.

    There are trade offs for each method.

    The dynamically linked programs are smaller, but they often take longer to load the first time and if you don't have a required library file present, the app won't run. Bug fixes can be done to the libraries and all programs will typically just keep right on running.

    The statically linked program is huge, loads fast, but takes up a lot more memory and disk space. If there is a security hole in a compiled in library then the whole executable needs to be replaced.

    On a single-user system, either way will work just fine.

    On a multi-user system with tens of users and lots of apps, dynamically loaded modules win every time.

    I typically compile everything separately and then distribute the libraries beside the program. The program is dynamically linked, but all the dependencies are present and upgradable if the user wants to install a patched version.
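
    A sketch of that layout, using a small wrapper script (all names here are illustrative):

    #!/bin/sh
    # launcher: point the dynamic linker at the libraries shipped beside the binary
    here="$(cd "$(dirname "$0")" && pwd)"
    export LD_LIBRARY_PATH="$here/lib"
    exec "$here/bin/myapp" "$@"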

    Personally, I think the entire program should be in one directory, but with dependencies on other packages that are each contained in their own directory.

    Then have all these packages managed by a single massive portage system that any Unix distribution can use to install applications. Make various forward-facing front ends to this portage system, so that *bsd*, Red Hat, Debian, Fink, and a native "better" package system will all work.
  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Saturday April 03, 2004 @05:16PM (#8757781) Journal
    For production servers running many applications, dynamic libraries can be much faster. With static libraries, the OS must load a separate copy of each library every time a program is run. With dynamic libraries, I believe, most operating systems load the library the first time a program requests it. From then on, programs use the copy in memory.

    In older versions of Windows, this led to some really hard-to-track-down flaky behavior. Suppose one application used a non-standard version of some system library, which it kept in its folder. Load that application first, and other applications, using the non-standard library, might crash. Load something else first, and the one application might crash.

    This is one reason Linux libraries have version numbers. Ever look in your /lib or /usr/lib folders and see two or three versions of each library, like libfoo.so, libfoo.so.1, libfoo.so.1.2.87? Notice how the first two are links to the third? Ideally, programs should link against the library name with the major version number, and libraries should change the major version number if and only if they change the interface.

    Because no one does this consistently, we have to have things like libfoo2.so.2.2.17 so that libfoo version 2 doesn't bork stupid programs that linked to libfoo.so instead of libfoo.so.2. This, in my opinion, defeats the whole purpose of having those symlinks in the first place.

    Wherever you go, there you are. Wherever you are, libraries are borked. This is why the grandparent poster links his libraries statically. Might be a touch slower and use a bit more memory, but you know it is going to work, wherever you go.
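
    On disk, the convention described above looks like this (version numbers taken from the example in this post):

    /usr/lib/libfoo.so -> libfoo.so.1           # dev link, used at link time (-lfoo)
    /usr/lib/libfoo.so.1 -> libfoo.so.1.2.87    # soname link, used by the runtime linker
    /usr/lib/libfoo.so.1.2.87                   # the actual shared object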
  • Re:Well duh... (Score:4, Informative)

    by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Saturday April 03, 2004 @06:02PM (#8758055) Journal
    The /bin, /lib, /usr structure has to go.

    This kind of proposal about scrapping the current directory structure has been discussed ad nauseam on the Filesystem Hierarchy Standard [pathname.com] mailing lists. Here is the Standard Rebuttal against scrapping /bin and /usr/bin:

    With each app in its own directory, your $PATH becomes a mile long, and too difficult to maintain.
    You can't have your cake and eat it too. Some have suggested the use of symbolic links in /bin and /usr/bin, but then you run into this Standard Counterargument:
    Different application packages can have identically-named binaries. Upgraded packages
    always have the same binary names.
    The best combination seems to be symbolic links to the most recently-installed apps, but overriding your $PATH in ~/.bash_profile for legacy versions.

    The Standard Rebuttal against scrapping /lib:

    Apps which depend on other apps for libraries won't know where to look. This is especially true if each installed version of a required app is stored in its own numbered directory.
    Another argument involves the use of 32-bit vs 64-bit libraries. Best practice seems to be making copies of the most recently installed libs in /lib and /usr/lib, and using environment variables ($LD_LIBRARY_PATH, e.g.) to run older apps.
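
    Concretely, running a legacy version might look like this (an illustrative snippet for a hypothetical package under /opt):

    # in ~/.bash_profile: prefer foo 1.0 and its bundled libraries
    export PATH="/opt/foo-1.0/bin:$PATH"
    export LD_LIBRARY_PATH="/opt/foo-1.0/lib"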

    Rebuttals for getting rid of /usr (i.e., having a One (Partition) Size Fits All approach):

    #1: Some boxes have read-only disks for security (CD-ROM firewalls come to mind). Now you can't install new applications.

    #2: You have one 100GB partition and you get a power spike. Now you have to wait for the fsck to finish before you can troubleshoot the damage.
    #3: You're in a diskless environment with centralized, NFS-mounted applications. With no /usr, you have no suitable mount point.
    #3 is especially common in large enterprise and government environments. If you've ever talked to someone who admins 1,000 desktops for their department, you'll know what I mean.

    On the mailing lists, the use of /package (or /pkg) has also been discussed ad nauseam. Keep in mind that the filesystem hierarchy is designed so that non-local (commercial) packages don't step all over each other when installing. Local (enterprise) software installation can happen wherever the hell you want it to, as long as it doesn't have to play nice with COTS software.

    Executive summary: you can run whatever directory structure you want -- I won't stop you. Just expect to hear lots of complaints from your developers and sysadmins. The reason things are the way they are is partially due to industry inertia, but mostly due to the fact that they just work better that way. If you don't like it, go contribute [sourceforge.net].

  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Saturday April 03, 2004 @06:04PM (#8758070) Journal
    Seriously, this rocks. Yeah, yeah, sure. Other projects have done things like this before. But I love this idea even more than Gentoo's system, which also rocks. So I read some of the site to try to answer some of my own first asked questions.

    Q. Do I have to add a bunch of crap to my $PATH?
    A. No, you just use a shell that is application directory aware, and it will find the binary just fine if the application directory is in a directory in $PATH.

    Q. Will it let me recompile critical applications, either to patch them or optimize them?
    A. Sure. Keep three different versions of Apache around: one with mod_perl, one with mod_rewrite, another with mod_php. Optimize for your new Sexium X CPU. Turn on full foo support, even though it's not recommended!

    Q. What about apps with hardcoded pathnames?
    A. Edit and recompile. HAND.

    Q. What about libraries?
    A. (From this page [sourceforge.net] on the ROX Application directory system.) Applications link to libraries in /uri/0install. If the required version isn't there, then instead of reporting an error (as traditional applications do), they run 0refresh. Software can be uncached when it hasn't been accessed for a long time (eg, months or years). If it's needed again, it gets refetched.

    Q. What about versioning?
    A. You can keep different versions of an application around in different directories. I couldn't find any information regarding library versioning. Hopefully libraries in /uri/0install have directories by major version number, and ROX applications are linked correctly. Prepare to have much fun with compiler and linker flags finding all your include files and libraries when you convert your application to ROX.

    Q. DND Saving? What's that?
    A. Rox aware apps support dragging files from a save box to a directory in a file browser to save. Finally, someone does this right.
  • by Davoid ( 5734 ) on Saturday April 03, 2004 @06:13PM (#8758128) Journal
    RPM does have this ability:

    rpm -ivh --prefix ~/whatever packagename.rpm

    That only works IF the package is "relocatable". Some packages, quite naturally, are not, but most apps will (or should) be.
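
    You can check before installing; rpm's package information includes a Relocations field (the output below is abbreviated and illustrative):

    $ rpm -qpi packagename.rpm | grep Relocations
    Relocations: /usr            # "(not relocatable)" here means --prefix will be refused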

    -DU-...etc...
  • by dr.badass ( 25287 ) on Saturday April 03, 2004 @08:14PM (#8758717) Homepage
    It's ready! [sourceforge.net]
  • by nathanh ( 1214 ) on Saturday April 03, 2004 @10:53PM (#8759410) Homepage
    Yes indeed! Gnome seems to have a strange desire to mimic the bad sides of Windows while claiming to be an alternative. I'm not saying Gnome is worse than Windows, but seeing things like a registry-like configuration system (that, although it's more "open" due to XML, actually has a per-user daemon associated with it) and an extra virtual file system layer, makes me worry very much about its future course.

    Except gconf is nothing like the Windows registry.

    The Windows registry is a single file (the infamous REG.DAT), which is a single point of failure. The keys are undocumented, leading to 1000+ page tomes describing the well-known keys, but many keys are unfathomable.

    The gconf "registry" is an illusion. Gconf files are all separate. Go look in ~/.gconf and see that every application has its own file. If any file is corrupted, it only affects that app. All the other files are not corrupted, and other apps are not affected. Also, the gconf schema permits human discovery of the keys and values; developers can attach a textual description to every key.

    The gconf "registry" is nothing more than a formalisation of rc-files (more commonly called dotfiles). Instead of each rc-file having a unique format, all gconf files are XML with a known schema. Instead of scattering rc-files randomly through ~/ with strange names, gconf files have a consistent naming scheme. Gconf is hierarchial, self-discoverable, and has event triggers so applications can receive feedback if values change.

    But gconf does not throw away the benefits of rc-files! Gconf files are still human readable, they are still separate files, you can pick them up and copy them around to other machines, etc. Gconf files are entirely unlike the Windows registry. It's a testament to the quality of the illusion that you saw gconf-editor and thought "it's a registry" but it's just an illusion. You are really seeing a tree view of separate gconf files.

    Factual clarification: the gconf backend can be swapped, so in theory you could have a single monolithic Windows-registry-alike backend. But in practice the separate files are more convenient.
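
    You can poke at all of this directly (gconftool-2 ships with GNOME; the key shown is a standard Metacity setting):

    $ ls ~/.gconf/apps           # one subdirectory per application
    $ gconftool-2 --get /apps/metacity/general/focus_mode
    click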

  • by jonadab ( 583620 ) on Saturday April 03, 2004 @11:51PM (#8759591) Homepage Journal
    > can't remember a time when the Mac OS wasn't better than the comparable
    > Windows version available at the time.

    In all respects except maybe security, this is a pretty bogus claim. MacOS
    was *different*, but it was not objectively *better* and in many ways was
    a good deal *worse*. For example, Windows 95 OSR2 had real multitasking --
    actual, factual, preemptive multitasking -- in 1996. MacOS didn't get it
    until OS X came out. We're talking here about an important key feature
    that *good* systems have had since the seventies: the ability for any
    random application to be running at the same time as any other random
    application, and the system doesn't freeze up for seconds-on-end while you
    wait for something to finish; each application is responsive as if it were
    running all the time (because it is). So you can click on a link in Mozilla
    and *immediately* switch to another window and do stuff while you wait for
    the page to load. In MacOS 9.1, you can't do this; the browser monopolizes
    the system, as if the whole OS were frozen up (in a sense, it is), waiting
    for the page to load. This is a really big deal. It is, of course, fixed
    in OS X. But Windows had this in late 1995 (sooner, if you count NT).

    Then there's the dock. The Mac dock today (and since 10.0) is better than
    anything Windows will have at least until Longhorn comes out[1], but prior
    to that there was... there was... well, there was that little thing in the
    upper-right-hand corner, a holdover from the System 6 Multifinder, that shows
    which app is running, and you can pull it down and switch to a different one.
    And the menubar at the top did have a clock. But there was nothing that
    really could pass for an actual dock, bar, or panel.

    Windows also has had, since 1995 or before, better memory management than
    MacOS 9. This really shows up if you have more apps open than will fit in
    RAM all at once. The classic Mac does not perform well under these
    conditions; Windows 95 handles it much better; you only really notice the
    swapping delays when you switch between applications. Additionally, Windows
    95 does not require the user to configure VM manually, as MacOS 9 does.
    (Windows 3.1 did require this; going to automatic handling of virtual
    memory was a major leap forward in 1995.)

    The Apple HIG is arguably better and certainly more closely followed by
    most ISVs, but there are other niggly things about the Mac interface that
    suck in ways that almost make up for that. With the Classic Mac you
    basically *have* to have a third-party macro application, for example,
    because way too few things have keyboard shortcuts. (The mouse is great
    for discovering stuff you didn't know how to do, but it sucks for quickly
    doing something that you do very frequently.) Keyboard shortcuts are much
    more common in the Windows world (and almost totally universal in the *nix
    world). OS X is starting to fix this (though there is still more to be done).

    There are other things. To say that the MacOS is better than Windows *now*
    is a fairly credible claim; to say that it has *always* been better than the
    Windows available at the time is... bogus.

    Although, in the Windows 3.1 days, I'd agree that the Mac system was better.
    (I didn't use Windows much back then, though; I used DOS :-)

    [1] Gnome, however, is in some ways better. Notably, the applets are better,
    and the Mac dock doesn't support drawers, which are IMO a killer feature.
    The icon zooming on the Mac dock is better, however, and unifying the
    running process list with the launchers, only listing a given app once
    and indicating that it's running with the little arrow underneath, is a
    nice touch. The Windows taskbar is clearly inferior, but it was
    better in 1995 than anything equivalent that the Mac had to offer until
    OS X came out. It took Apple four or five years to catch on to this.
  • by TheCrazyFinn ( 539383 ) on Sunday April 04, 2004 @12:26AM (#8759720) Homepage
    Except for the fact that the eMac has a real GPU in it, instead of the equivalent PC's Intel Extreme Graphics with shared memory. Even the eMac's lowly Radeon 7500 is 2-3x faster than Intel's crap graphics. And it has 32MB of dedicated VRAM, instead of stealing from system memory.

    Just that will do wonders to equalize the eMac's slower CPU. And if you're comparing the eMac to a Celeron, the CPU ain't even slower.

    Another thing to consider is the fact that Apple uses high-quality tubes in their CRTs. This isn't the $75 special you get at the low end. Price out an equivalent box from a major maker (include a decent GPU and a good monitor) and you'll see Apple fares decently against them price-wise.
  • by Anonymous Coward on Sunday April 04, 2004 @01:46AM (#8759979)
    Do a man on lsbom and use it to check the .bom files in the directories under /Library/Receipts. That should tell you exactly which package installed which files. lsbom is part of the developer packages so this isn't a solution for the non-technical unfortunately. I think DarwinPorts installer should also be able to understand packages in the /Library/Receipts directory, but I'm not really sure about that.
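
    For example (SomeApp.pkg is a placeholder; the receipt layout is as of the OS X versions of this era):

    $ lsbom /Library/Receipts/SomeApp.pkg/Contents/Archive.bom | less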
