Zero Install: The Future of Linux on the Desktop?
SiegeX writes "Zero Install, which is a part of the ROX desktop environment, is not just a new packaging system, it's a whole new way of thinking; a way that I believe is exactly what Linux needs to become a serious contender for Joe User's desktop. Zero Install uses an NFS filesystem both to run *and* install apps from. The apps are all self-contained in their own directory; binaries, docs, source code and all. Once an app has been downloaded, it's kept in a cache from that point on to minimize delay. The beauty becomes apparent when Zero Install is combined with ROX, which runs an application when you just click on the directory it was installed to. Deleting the application along with all the other misc files is as simple as removing the directory it's contained in. This method of partitioning applications into their own directories also makes installing multiple versions of any application trivial. This is something even the greatest of technophobes could understand and use with ease."
Re:waste? (Score:5, Informative)
Re:apps contained in their own directories.... (Score:5, Informative)
Yes (Score:5, Informative)
This is also what Microsoft is trying (Score:5, Informative)
Delete the directory and the app is gone.
This is here now, and although
User settings storage in win32 (Score:5, Informative)
It's still very much like this in Windows, in fact, with the "Program Files" directory often containing everything (although "Documents and Settings" is becoming more used for user settings storage). Personally I like the idea. I've always been confused trying to locate various files which belong to a single application in *nix.
Most *n?x apps seem to store all the per-user settings in a dot-file or dot-folder in the user's home directory. In Windows, they're often strewn about in at least three places: C:/Documents and Settings/Me/Application Data/, C:/Documents and Settings/Me/Local Settings/Application Data/, and HKEY_CURRENT_USER in the registry. In addition, a lot of the apps I have installed on my Windows 2000 machine came bundled with peripherals, where the app and a device driver came as part of the same install, the app in C:/Program Files/ and pieces of the driver in various folders in C:/Windows/.
How does Rox handle it?
Re:waste? (Score:2, Informative)
That's what it does. That is what they mean when they say 'cache'.
What happens is the software is cached first. Then, all dependencies are listed as 0install directories, so if you have one already it'll use the cached version, but if not it'll pull a copy and cache that. This is nice because if, say, you don't use the help files, it won't pull them until you ask for them, so you don't have them cluttering up your HDD or being downloaded. You just point the ROX-Filer at the 0install AppDir, click on it, and without you even needing to think about it everything is installed and the program runs.
It also has a way to quickly update all apps, or if you just want one upgraded, open it in the Filer and hit 'reload.' Simple, easy, and intuitive. In general, I'd say this is even easier than the Windows Update and installers combo.
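The fetch-on-first-use behaviour described in these comments can be sketched in a few lines of shell. This is only an illustration of the idea, not Zero Install's real code; the directory names and the `zi_open` helper are made up:

```shell
#!/bin/sh
# Sketch of fetch-on-demand caching: a file is downloaded only the
# first time something asks for it, then served from the cache.
ROOT=$(mktemp -d)
REMOTE=$ROOT/remote        # stands in for the remote 0install server
CACHE=$ROOT/cache          # local cache, like /var/cache/zero-inst

zi_open() {                # return a cached copy, fetching on first use
    rel="$1"
    if [ ! -f "$CACHE/$rel" ]; then
        mkdir -p "$CACHE/$(dirname "$rel")"
        cp "$REMOTE/$rel" "$CACHE/$rel"   # the "download"
    fi
    printf '%s\n' "$CACHE/$rel"
}

# Set up a fake remote with an app binary and a help file
mkdir -p "$REMOTE/gimp.org/help"
echo "binary" > "$REMOTE/gimp.org/gimp"
echo "manual" > "$REMOTE/gimp.org/help/index.html"

zi_open gimp.org/gimp      # first use: fetched and cached; prints the path
```

After this runs, the binary is cached but the help file is not: nothing is pulled until something asks for it, which is the "no clutter" property described above.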
Re:Someone should tell Apple (Score:1, Informative)
DOS: been there done that. (Score:1, Informative)
The headline for this story should be "Linux developer gets a clue".
If you have Knoppix try klik (Score:5, Informative)
Similarities to Archimedes (Score:5, Informative)
An 'application' looked like a single file that started with a '!'. It ran as though it was one file, copied and moved as though it was one file. If you used a modifier to open it (Ctrl-click, or something similar), though, it actually opened up as a folder. The app was really made of a number of files - the icon that the application/folder would have, the actual programs, any config files, a script that was run when the program was launched, and another script that would be run as soon as the OS 'saw' the app.
Part of the config would tell the OS what file types the app could handle, so as long as the app had been 'seen' (ie, its parent folder had been opened), the filetypes would be recognised until the next reboot.
This is so 1990s... (Score:1, Informative)
As many pointed out (Score:1, Informative)
Re:Why I dislike about installing software under Li (Score:2, Informative)
Re:Why I dislike about installing software under Li (Score:1, Informative)
make
make install
It's not quite that simple (Score:5, Informative)
Actually, it hasn't. Ask any Mac pro; applications started putting "library" files into the System folder (or worse, programs like Norton Utilities insisted on putting libraries into the Extensions folder, which was not what Apple told developers it was for). Apple caved in and 9.x started sprouting "Application Support" folders, a "Libraries" folder, etc. Developers just couldn't wrap their brains around the single-file, applications-don't-mess-with-the-system-folder model. Oftentimes, commercial programs would blatantly disregard Apple's filesystem guidelines. Extensions had such weird names that Casady & Greene developed an extension manager with a database of all the known files so you could figure out what the hell stuff was.
While you tout OS X as better than Linux or Windows, as an experienced long-time Mac user I saw OS X as a step down from the old MacOS with regards to filesystem simplicity. Applications now install stuff into zillions of different places. Virtually none of their installers ask if you want to install just for your user (ie, using your Library, Application etc folders) or install system-wide (a few -- VERY few -- do). Application installers that have no business needing my password ask for it; why does Acrobat Reader need sudo to install itself into Applications? Answer: it doesn't, but it's probably saving some prefs file somewhere it shouldn't.
Even worse...you can install packages using a "package system", but Apple will be damned if they'll give you a way to UNINSTALL a package, system or otherwise. Want to remove all the localization crap you forgot to turn off during system install? You have to download a third-party app to remove almost a gigabyte of files from your system, instead of just going into a "Software" panel and clicking remove. Windows has had it for years, with its only flaw being that it calls the developer's uninstall program, which oftentimes doesn't work, especially if you've deleted the app folder but nothing else.
Another side effect of the multiple-files problem is added complexity; the number of files in the filesystem has ballooned enormously, because instead of an application being one big file with a resource fork, it's now at least 3 folders, and oftentimes hundreds (or even thousands) of files. Moving an application used to be easy: you moved one big file, and the Finder just did a straight copy very efficiently. Now it has to copy hundreds of small files, so it takes forever (and amusingly, copying just a bunch of raw non-app files takes about 5 times longer in the Finder than it does via cp or ditto).
Don't get too uppity about not having a registry. OS X uses a number of preference files, and even though they've changed to XML and the like, users are seeing the same problems with OS 9- corrupt preference files causing odd behavior. Remove the naughty pref file, things start working again. There are now third party utils that specialize in checking these prefs; if they can do it, why can't it be part of the bootup process?
Oh, and lastly- Apple has made it even more difficult to make a boot disk for your mac to do disk maintenance. It used to be you just copied over your system folder, removed all the extensions, control panels, prefs, etc you knew you didn't need. Now? You need some stupid shareware program to do it, and half of 'em still haven't been updated for 10.3.
Re:waste? (Score:1, Informative)
Clearing up a few things... (Score:5, Informative)
The main one is that there are actually two installation systems being discussed in the article:
ROX application directories can be made available via Zero Install. In that case, running the application is a lot like running a program from a network share (but more aggressively cached). Or, you can DnD them onto your local disk manually (without Zero Install).
You can also use Zero Install for non-ROX type applications.
Secondly, when we say that application directories are self-contained, we mean that a single .tgz download corresponds to a single installed directory. Application directories can (and do) still depend on shared libraries (possibly other application directories).
Without Zero Install, after installing an application by drag-and-drop, running it may tell you that you need to install some other library before it will work.
With Zero Install, the application just tries to access it from its fixed location (URI) and it gets fetched.
Re:Could there be a difference here? (Score:5, Informative)
I install Mozilla on OSX. It says, rather than using an install program, I should just drag the icon into a directory, such as the applications directory, making it writable by any unprivileged program the user may execute.
This is not unlike what I have seen for other programs on the Mac as well.
When I try to take the initiative and protect these directories, the programs often stop working, because the program writes these directories when run by the user.
I am told by admins that program directories on OS X should generally be made user-writable.
When I do a default install of Eclipse on OS X, it places the user workspace underneath the Eclipse program directory, which encourages program directories that are user-writable (and therefore virus-writable). When I run it on Linux, the workspace is separated from the program directory, off of my home directory, and the Eclipse files install nicely into a protected area.
Re:I thought we wanted people to reuse code? (Score:4, Informative)
The apps are all self-contained in their own directory; binaries, docs, source code and all. * * * This method of partitioning applications in their own directories also allows installing multiple versions of any application trivial.
What happened to the idea that we wanted programmers and users to share libraries and code? To solve rather than avoid dependency problems?
Applications are self-contained in that everything from a single package is in a single directory, rather than being spread over /usr/bin, /usr/share, etc. They can still depend on other packages.
Without Zero Install, this means that although installing an individual package is quite easy, you may then have to install the packages it depends on in a similar fashion. With Zero Install, you get automatic dependency resolution and freedom from install scripts.
Re:This is why... (Score:3, Informative)
System 7 on up to 9.2 wound up with all kinds of scat dribbled throughout the various subdirectories in the system folder, as well as associated files in the application folder -- but only some applications were to blame -- many classic apps are self-enclosed and install with a drag'n'drop. OS X improves the ratio somewhat with its bundles; an even greater proportion of apps are install-friendly. Some developers still love to scatter files everywhere (are you listening, Adobe?) -- though since we're generally talking about /Library, it isn't as big an issue as the conflicts arising out of the Extensions folder in Classic.
It's all relative, though. Give me the 100,000+ file complexity of OS X with its well-designed directory structure over a simpler but kludgy Mac OS 9 or Win9x anyday.
Re:This sounds perfect... (Score:5, Informative)
a) finding right foo for your system,
b) downloading foo,
c) knowing where it went,
d) knowing where to drag it,
e) actually dragging it
is simpler than cryptic
a) "install foo"?
[ b) enter root password and hope it doesn't mess up your system ]
a) You don't have to find the right version for your system; just run the application. It will try to access the appropriate binary and that will get cached.
b) Downloading happens automatically, just like viewing a web page through a web cache.
c) It goes in the cache directory, whose structure mirrors the URI scheme (eg /var/cache/zero-inst/gimp.org/bin/gimp). You shouldn't care though, any more than you care about the structure of squid's web cache.
d) Drag it where you want it. On your desktop, panel, start menu, etc.
e) You could just run it where it is, without creating a short-cut at all.
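The URI-mirroring cache layout from (c) amounts to a simple string mapping. A sketch, with the cache root taken from the example path above and the `uri_to_cache` helper name made up:

```shell
#!/bin/sh
# Map a Zero Install style URI to its cache path by mirroring the
# host/path structure under the cache root (illustrative sketch only).
CACHE_ROOT=/var/cache/zero-inst

uri_to_cache() {
    uri="$1"
    stripped="${uri#http://}"     # drop the scheme
    printf '%s/%s\n' "$CACHE_ROOT" "$stripped"
}

uri_to_cache "http://gimp.org/bin/gimp"
# → /var/cache/zero-inst/gimp.org/bin/gimp
```

The point is that the cache needs no database: the path itself says where the file came from, just as with a web cache.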
Re:Someone should tell Apple (Score:3, Informative)
All the various bits and pieces of a well-formed Mac app go into a 'package' (not RPM). See it here [apple.com]. A double-click on that folder launches the app, deleting the folder removes the app, and updates need only replace affected files within the app.
When developers take the time to use this structure, it works really really well. Unfortunately, Apple is not in a position to tell developers "if you want to write software for Mac, you must do it like this". So they gave developers the option of being messy.
Re:User settings storage in win32 (Score:3, Informative)
Usually, ~/Choices/ROX-Filer/Options.xml, etc. Choices cascade (/usr/local/share/Choices, /usr/share/Choices, ~/Choices). You can change the location with CHOICESPATH.
In future, we'll probably move over to the (very similar) freedesktop.org base dir system, which defaults to ~/.config instead but is otherwise pretty-much the same.
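The cascade described above is just a first-match search along a colon-separated path. A sketch, using the directory names from the comment (the `find_choice` helper and the search order, user directory first, are assumptions):

```shell
#!/bin/sh
# First-match lookup along a colon-separated CHOICESPATH: the user's
# own Choices override the site and system defaults (a sketch of the
# cascade described above, not ROX's actual code).
CHOICESPATH="${CHOICESPATH:-$HOME/Choices:/usr/local/share/Choices:/usr/share/Choices}"

find_choice() {            # e.g. find_choice ROX-Filer/Options.xml
    rel="$1"
    old_ifs=$IFS; IFS=:
    for dir in $CHOICESPATH; do
        if [ -f "$dir/$rel" ]; then
            printf '%s\n' "$dir/$rel"
            IFS=$old_ifs
            return 0
        fi
    done
    IFS=$old_ifs
    return 1               # not found in any layer of the cascade
}
```

Because the function re-reads CHOICESPATH on each call, pointing the variable somewhere else relocates every app's settings at once.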
Re:Could there be a difference here? (Score:1, Informative)
> I should just drag the icon into a directory, such as the applications
> directory, making it writable by any unprivileged program the user may
> execute.
Great, the power user can throw away unneeded language files. Other users have read-only access.
> When I try to take the initiative and protect these directories, the
> programs often stop working, because the program writes these directories
> when run by the user
What?! Mozilla.app must be badly written / packaged. In no way should the application dir change during execution. Preferences or caches must not be stored in the application directory; they must be stored in user space, for example: ~/Library/Preferences.
--
Dennis SCP
Re:This sounds perfect... (Score:4, Informative)
Sorry to spoil your arguments, but each of your starting assumptions is wrong ;-)
Just a bad idea taken to its extreme.
If you could point me to the places on the site where you got your information, I'll try to fix them / make it clearer. Thanks.
Re:What about shared libraries and memory? (Score:3, Informative)
I think future versions of Windows will know how to scan the disk periodically, find redundant files, and essentially link them together automatically. That's pretty cool: you deliver your app with FOO.DLL version whatever and drop it in your app's directory. If someone else installs a FOO.DLL in their app's directory that matches the exact same bits, the system coalesces them together. Then you upgrade application #1, which installs a modified version of FOO.DLL, and the OS unshares them.
Yeah, it's copy-on-write for files, but the new bit is the auto-coalescing.
To me that sounds like all the benefit of shared libraries with none of the DLL hell. It will be interesting to see if it actually works.
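The scanning half of that idea can be sketched with ordinary tools: compare files byte-for-byte and hard-link the duplicates. This is only a toy; real single-instance storage works at the filesystem level with copy-on-write, which hard links do not give you (a write through one link is seen by all), so the `coalesce` helper below only illustrates the detection step:

```shell
#!/bin/sh
# Toy sketch of coalescing duplicate files with hard links.
# cmp -s compares byte-for-byte; ln -f replaces the second file
# with a hard link so only one copy remains on disk.
coalesce() {               # coalesce FILE1 FILE2
    a="$1"; b="$2"
    if cmp -s "$a" "$b"; then      # byte-identical?
        ln -f "$a" "$b"            # share one copy on disk
    fi
}
```

The missing piece, and the hard part the comment points at, is the copy-on-write "unshare" when one side is later modified.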
Re:Screw drag & drop (Score:4, Informative)
With APT, they don't have to. They just have to be able to double-click on the program they want. I've seen lots of Windows users who love the concept and wish Windows had it.
The problem with Apt is it relies on someone else saying 'oh, that's great, I'll make some debs.'
Isn't that true for installers as well? Somebody has to make those too.
And it only works for someone with Debian or a Debian-compatible system like Lindows, Xandros, Knoppix, et al.
Ximian has a nifty tool called Build Buddy [ximian.com] that automates the process of generating packages compatible with not just RPM and DEB systems, but Solaris and HP-UX package managers too.
Additionally, to install a deb you need to be root.
This needs improvement, yes.
Re:Potential for unpublishing apps? (Score:4, Informative)
Zero Install can download from mirrors, peer-to-peer, etc, provided it gets the master index with the GPG signature from the main server.
If you want to get the master index from a backup server, you need manual intervention (root needs to indicate that the backup server can be trusted).
However, since the signature part is small (about 1K), a single trusted backup site (debian.org?) could easily host every index in the world. The rest of the data can come through peer-to-peer, etc.
Re:Screw drag & drop (Score:2, Informative)
Be-fan -- your deep unwavering belief is a sure sign you are missing something. There's no "drag and drop" involved -- zero install means zero install.
It's based on a sort of virtualized internet filesystem. Here is their example:
$ cd
$ ls
gimp1.2
gimp1.3
$ 0refresh
$ ls
gimp1.2
gimp1.3
gimp2.0
You never install, you just run, either by command line or some form of gui contraption. Of course, this system places a lot of faith in the upstream developers, but their website paints a far more interesting picture than the summary given here. Here [sourceforge.net], for instance, is a decent comparison with apt. Check it out.
Re:What about security? (Score:5, Informative)
System security? Nothing. All code runs as you. As for your own security, it doesn't allow any attack that couldn't have been done without Zero Install too.
Reducing the security risk from traditional installation systems (APT, RPM, etc where you're running a downloaded install script as root) was an important goal for Zero Install.
See The Zero Install system [sourceforge.net]
Re:User settings storage in win32 (Score:1, Informative)
Re:Someone should tell Apple (Score:2, Informative)
Have you tried dragging the icon to where you want the pathname pasted? This works for the terminal and many html editors. It probably won't help with a word processor though, since it might try to embed the file.
Personally, I think the new Finder blows compared to even the old System 6 version. I want a real Finder replacement that brings back all of the keyboard navigability (and key bindings) of the old one. I don't think it would be for everyone, but it would be great for people like me who keep hitting cmd-n only to close the window and hit shift-cmd-n. I want my old Finder Secrets back.
Re:static or what? (Score:5, Informative)
Statically linked binaries include all the libs and dependencies, along with the binaries used to actually 'run' the program, in one fat package. Depending on what it is you're packaging, it can add a shitload of weight to the package.
Dynamically linked binaries expect your system to contain the dependencies already. They have the benefit of giving you a small, tight package but don't always work right away; i.e., you, the user, have to hunt down the packages it needs, or apt or rpm has to handle that for you.
It's a trade-off either route you choose. Statically linked binaries add bloat but usually work great without user or system intervention. Dynamically linked binaries are smaller and bloat-free but depend on you, your package manager or something else to make sure they work.
The typical stance of developers has been to build good packages that are small and dynamically linked. After all, what's the point of having 20 copies of a common system library that you may have had since your OS install? That's just bloat. Ultimately, the best developers, in my opinion, give you the choice when you go to download. Click here for static, here for dynamic.
Re:Why I dislike about installing software under Li (Score:1, Informative)
Re:Why do we have shared libraries at all? (Score:3, Informative)
Re:This is just a silly statement (Score:3, Informative)
Re:Someone should tell Apple (Score:2, Informative)
Except that, as the above poster pointed out, many Apples are cheaper. iBooks most famously, but their integrated line is also cheaper if you count the cost of the monitor. The high end of their line is very affordable... Compare the cost of a dual 1.8 G5 to a dual 2.8 Xeon, and recognize that the dual G5 is faster. Plus multiprocessing on the Mac has far fewer of the hitches of multiprocessing on the PC.
Not to undercut my own argument, but in the (long) past Apple did make a few supplier decisions that didn't pan out. Specifically, Apple's use of SCSI drives became a liability. It was expected that, with competition, SCSI prices would eventually fall to parity with ATA drives, since SCSI was significantly faster for about the same cost to produce, but it never happened. Still, hot-swapping external SCSI drives was one of the best reasons to own a Mac (and one of the easiest ways to transfer large amounts of data); you just paid for the privilege. Now there is USB / Firewire to do that job affordably, and Apple has transitioned to ATA. Except for the Power PC processor and resultant motherboard, which I've always respected above the hot-running Pentium line, Apples these days pretty much use standard parts from the x86 universe.
You'll notice that the x86 you've had since before Win95 came out has had the graphics card, sound card, power supply, hard drives, motherboard, cpu, network card, and probably the case replaced multiple times. I'm sorry, at that point it is not the same PC; you have just bought a new one in parts. You could probably rummage through your parts bin and reassemble your original 486, if you were so inclined. You can do the same thing (or, at least, similar things) with the Mac with judicious use of online retailers. In other words, the implication that an x86 is a better investment because it has stayed around for the past 10 years is a false one, as nothing of the original machine remains.
Most people buy computers as a whole, and not as individual upgrades. Personally, if I get another motherboard incompatibility I'm going to track down and strangle the engineer who decided it should only work with Kingston(tm) ram. While I do buy pretty cheap for my non-main system, my new motherboard DOA rate is hovering around 50%. I personally recommend to my family that if they want a machine either A: I should build it for them, with the flight out to California that would entail, or B: they should buy new. Building a system, while not requiring as many steps as it used to, is still a lot more effort than an average non-technophile would want to put into it, in the same way that non-gearheads wouldn't want to build their car.
And if you want to only have one computer, run windows under a virtualization layer on the Mac. Unlike the monolithic, hot-running Pentium line, the Power PC is adept at simulating other computing environments. While you're at it, add Linux for network administration duties. Apple for every day, Linux for networking, and Windows for legacy dependencies. Sounds just about right.
Knoppix (Score:2, Informative)
All the files are stored in a single directory under your home directory which can be on a USB drive or anything. It also works with a hard-drive install of Knoppix.
Re:static or what? (Score:1, Informative)
This text file is then compiled down to an executable.
At the end of the compile, all the parts and pieces from all the libraries on the system that the program uses are linked to the executable in one of two ways. This is called the link phase.
If the libraries are dynamically linked, then the executable is small and only has references to the other files, so that it can finish linking as it starts to run.
If the libraries are statically linked, then the executable is large, as all the parts and pieces that the program needs are put into the executable all at once.
There are trade offs for each method.
The dynamically linked programs are smaller, but they often take longer to load the first time and if you don't have a required library file present, the app won't run. Bug fixes can be done to the libraries and all programs will typically just keep right on running.
The statically linked program is huge, loads fast, but takes up a lot more memory and disk space. If there is a security hole in a compiled in library then the whole executable needs to be replaced.
On a single-user system, either way will work just fine.
On a multi-user system with tens of users and lots of apps, dynamically loadable modules win every time.
I typically compile everything separately and then distribute the libraries beside the program. The program is dynamically linked, but all the dependencies are present and upgradable if the user wants to install a patched version.
Personally I think that the entire program should be in one directory, but have dependencies to other packages that are each all contained in their own directory.
Then have all these packages managed by a single massive portage system that any unix distribution can use to install applications. Make various forward-facing front ends to this portage system so that it appears like *bsd*, redhat, debian, fink, and a native "better" package system will all work.
Random Thoughts on Libraries (Score:3, Informative)
In older versions of Windows, this led to some really hard-to-track-down flakey behavior. Suppose one application used a non-standard version of some system library, which it keeps in its folder. Load that application first, and other applications using the non-standard library might crash. Load something else first, and the one application might crash.
This is one reason Linux libraries have version numbers. Ever look in your
Because no one does this consistently, we have to have things like libfoo2.so.2.2.17 so that libfoo version 2 doesn't bork stupid programs that linked to libfoo.so instead of libfoo.so.2. This, in my opinion, defeats the whole purpose of having those symlinks in the first place.
Wherever you go, there you are. Wherever you are, libraries are borked. This is why the grandparent poster links his libraries statically. Might be a touch slower and use a bit more memory, but you know it is going to work, wherever you go.
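The versioned-name scheme the poster describes looks like this on disk. A sketch that just builds the symlink chain by hand, using the libfoo names from the comment:

```shell
#!/bin/sh
# The usual soname symlink chain: programs record a dependency on the
# soname (libfoo.so.2), which is ABI-stable within major version 2;
# the bare libfoo.so link exists only for the linker at build time.
tmp=$(mktemp -d); cd "$tmp"

touch libfoo.so.2.2.17                 # the real file
ln -s libfoo.so.2.2.17 libfoo.so.2     # soname: what binaries load
ln -s libfoo.so.2 libfoo.so            # what "-lfoo" resolves at link time

ls -l libfoo.so*
```

A program correctly linked against libfoo.so.2 keeps working when 2.2.18 replaces 2.2.17; the breakage described above comes from programs that recorded the bare libfoo.so instead.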
Re:Well duh... (Score:4, Informative)
This kind of proposal about scrapping the current directory structure has been discussed ad nauseam on the Filesystem Hierarchy Standard [pathname.com] mailing lists. Here is the Standard Rebuttal against scrapping /bin and /usr/bin:
You can't have your cake and eat it too. Some have suggested the use of symbolic links in
The Standard Rebuttal against scrapping /lib:
Another argument involves the use of 32-bit vs 64-bit libraries. Best practice seems to be making copies of the most recently installed libs in
Rebuttals for getting rid of /usr (i.e., having a One (Partition) Size Fits All approach):
#3 is especially common in large enterprise and government environments. If you've ever talked to someone who admins 1,000 desktops for their department, you'll know what I mean.
On the mailing lists, the use of /package (or /pkg) also has been discussed ad nauseam. Keep in mind that the filesystem hierarchy is designed so that non-local (commercial) packages don't step all over each other when installing. Local (enterprise) software installation can happen wherever the hell you want it to, as long as it doesn't have to play nice with COTS software.
Executive summary: you can run whatever directory structure you want -- I won't stop you. Just expect to hear lots of complaints from your developers and sysadmins. The reason things are the way they are is partially due to industry inertia, but mostly due to the fact that they just work better that way. If you don't like it, go contribute [sourceforge.net].
Libraries, Preferences & Other Issues. (my mini FAQ) (Score:5, Informative)
Q. Do I have to add a bunch of crap to my $PATH?
A. No, you just use a shell that is application directory aware, and it will find the binary just fine if the application directory is in a directory in $PATH.
Q. Will it let me recompile critical applications, either to patch them or optimize them?
A. Sure. Keep three different versions of Apache around, one with mod_perl, one with mod_rewrite, another with mod_php. Optimize for your new Sexium X CPU. Turn on full foo support, even though it's not recommended!
Q. What about apps with hardcoded pathnames?
A. Edit and recompile. HAND.
Q. What about libraries?
A. (From this page [sourceforge.net] on the ROX Application directory system.) Applications link to libraries in
Q. What about versioning?
A. You can keep different versions of an application around in different directories. I couldn't find any information regarding library versioning. Hopefully libraries in
Q. DND Saving? What's that?
A. Rox aware apps support dragging files from a save box to a directory in a file browser to save. Finally, someone does this right.
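The application directories this FAQ keeps referring to boil down to one directory with an executable AppRun entry point; clicking the directory in ROX-Filer effectively runs AppRun. A minimal sketch (not a complete ROX AppDir; real ones also carry icons, Help, and version subdirectories):

```shell
#!/bin/sh
# Minimal application-directory sketch: everything the app needs lives
# under one directory, and AppRun is the single entry point.
tmp=$(mktemp -d); cd "$tmp"

mkdir -p MyApp
cat > MyApp/AppRun <<'EOF'
#!/bin/sh
# Resolve our own directory so bundled files are found wherever
# the AppDir has been dragged to.
here=$(dirname "$0")
exec cat "$here/data.txt"
EOF
chmod +x MyApp/AppRun
echo "hello from MyApp" > MyApp/data.txt

MyApp/AppRun            # what a click in ROX-Filer effectively does
```

Because AppRun locates its bundled files relative to itself, dragging MyApp somewhere else (or keeping two versions side by side) needs no install step, which is the versioning answer above in miniature.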
Re:Why I dislike about installing software under Li (Score:3, Informative)
rpm -ivh --prefix ~/whatever packagename.rpm
That only works IF the package is "relocatable". Some packages, quite naturally, are not, but most apps will be (or should be).
-DU-...etc...
Re:Sounds great, let me try (Score:2, Informative)
Re:Microsoft had it and lost it. (Score:3, Informative)
Except gconf is nothing like the Windows registry.
The Windows registry is a single file (the infamous REG.DAT), which is a single point of failure. The keys are undocumented, leading to large 1000+ page tomes describing the well-known keys, but many keys are unfathomable.
The gconf "registry" is an illusion. Gconf files are all separate. Go look in ~/.gconf and see every application has its own file. If any file is corrupted, it only affects that app. All the other files are not corrupted and other apps are not affected. Also the gconf schema permits human-discovery of the keys and values; the developers can attach a textual description to every key.
The gconf "registry" is nothing more than a formalisation of rc-files (more commonly called dotfiles). Instead of each rc-file having a unique format, all gconf files are XML with a known schema. Instead of scattering rc-files randomly through ~/ with strange names, gconf files have a consistent naming scheme. Gconf is hierarchical, self-discoverable, and has event triggers so applications can receive feedback if values change.
But gconf does not throw away the benefits of rc-files! Gconf files are still human readable, they are still separate files, you can pick them up and copy them around to other machines, etc. Gconf files are entirely unlike the Windows registry. It's a testament to the quality of the illusion that you saw gconf-editor and thought "it's a registry" but it's just an illusion. You are really seeing a tree view of separate gconf files.
Factual clarification: the gconf backend can be swapped, so in theory you could have a single monolithic Windows-registry-alike backend. But in practice the separate files are more convenient.
Re:Someone should tell Apple (Score:2, Informative)
> Windows version available at the time.
In all respects except maybe security, this is a pretty bogus claim. MacOS
was *different*, but it was not objectively *better* and in many ways was
a good deal *worse*. For example, Windows 95 OSR2 had real multitaking --
actual, factual, preemptive multitasking -- in 1996. MacOS didn't get it
until OS X came out. We're talking here about an important key feature
that *good* systems have had since the seventies: the ability for any
random application to be running at the same time as any other random
application, and the system doesn't freeze up for seconds-on-end while you
wait for something to finish; each application is responsive as if it were
running all the time (because, it is). So you can click on a link in Mozilla
and *immediately* switch to another window and do stuff while you wait for
the page to load. In MacOS 9.1, you can't do this; the browser monopolizes
the system, as if the whole OS were frozen up (in a sense, it is), waiting
for the page to load. This is a really big deal. It is, of course, fixed
in OS X. But Windows had this in late 1995 (sooner, if you count NT).
Then there's the dock. The Mac dock today (and since 10.0) is better than
anything Windows will have at least until Longhorn comes out[1], but prior
to that there was... there was... well, there was that little thing in the
upper-right-hand corner, a holdover from the System 6 Multifinder, that shows
which app is running, and you can pull it down and switch to a different one.
And the menubar at the top did have a clock. But there was nothing that
really could pass for an actual dock, bar, or panel.
Windows also has had, since 1995 or before, better memory management than
MacOS 9. This really shows up if you have more apps open than will fit in
RAM all at once. The classic Mac does not perform well under these
conditions; Windows 95 handles it much better; you only really notice the
swapping delays when you switch between applications. Additionally, Windows
95 does not require the user to configure VM manually, as MacOS 9 does.
(Windows 3.1 did require this; going to automatic handling of virtual
memory was a major leap forward in 1995.)
The Apple HIG is arguably better and certainly more closely followed by
most ISVs, but there are other niggly things about the Mac interface that
suck in ways that almost make up for that. With the Classic Mac you
basically *have* to have a third-party macro application, for example,
because way too few things have keyboard shortcuts. (The mouse is great
for discovering stuff you didn't know how to do, but it sucks for quickly
doing something that you do very frequently.) Keyboard shortcuts are much
more common in the Windows world (and almost totally universal in the *nix
world). OS X is starting to fix this (though there is still more to be done).
There are other things. To say that the MacOS is better than Windows *now*
is a fairly credible claim; to say that it has *always* been better than the
Windows available at the time is... bogus.
Although, in the Windows 3.1 days, I'd agree that the Mac system was better.
(I didn't use Windows much back then, though; I used DOS.)
[1] Gnome, however, is in some ways better. Notably, the applets are better,
and the Mac dock doesn't support drawers, which are IMO a killer feature.
The icon zooming on the Mac dock is better, however, and unifying the
running process list with the launchers, only listing a given app once
and indicating that it's running with the little arrow underneath, is a
nice touch. But the Windows taskbar is clearly inferior. But it was
better in 1995 than anything equivalent that the Mac had to offer until
OS X came out. It took Apple four or five years to catch on to this.
Re:Someone should tell Apple (Score:3, Informative)
Just that will do wonders to equalize the eMac's slower CPU. And if you're comparing the eMac to a Celeron, the CPU ain't even slower.
Another thing to consider is the fact that Apple uses high-quality tubes in their CRTs. This isn't the $75 special you get at the low end. Price out an equivalent box from a major maker (include a decent GPU and a good monitor) and you'll see Apple fares decently against them price-wise.
Re:Someone should tell Apple (Score:1, Informative)