Linux Software

Linux/UNIX Usability Research

st. augustine writes "A group of researchers at the University of Michigan School of Information has started a project called the Linux/UNIX Independent Group for Usability Information. Their goal: 'to couple the power of UNIX with user interfaces that are consciously designed to allow novices to become experts without removing any of the existing functionality.' They look serious." Check out the Salon article that ran earlier today; they're mentioned in there as well. I've actually talked with those folks, and they're smart cookies.
  • by Anonymous Coward
    Unix has always had that "other group" - average users. The problem is that they have been powerless, and limited by systems administration. Typical unix users have been administered to in large corporations and government agencies which place a premium on control and security - the goals of the organizations, not of these users. Granted, sysadmin is needed in large systems and is implicit even in small ones, but not in the sense of what "sysadmin" means in the unix world today. Linux and GNU are trying to change that.

    First, Linux is not unix. It is built on design principles which allow unix-like functionality, but that's it. Linux achieves the same kind of system as unix but with different code, which is free, unlike unix, which is controlled by commercial consortiums.

    Unix has always been linked too much to the world of corporate big iron like IBM mainframes. Unix programmers and sysadmins think they are a cut above IBM mainframers, but are they? There is also another side to unix, which is that much of the work involved in creating and enhancing unix was done in academic settings in a semi-open environment. For example, the work at Berkeley and MIT. However, the lofty idealism of the people who have really done the work in creating and extending unix (who most certainly have not been sysadmins) has been despised and rejected by the commercial unix world of petty careerism, in which end users are often held in contempt by mediocre technicians who think they are God's gift to technology.

    There is a difference between unix sysadmin and computer science and creativity. Sysadmin has become a career path which has come to define too much of the nature of unix and has perversely influenced how Linux is perceived. Sysadmin has little to do with programming, software development, or artistic and scientific creativity, or even understanding how systems work. It is, or has become, simply one thing - a career path. Its rules are intentionally arcane to limit the number of people who can gain admittance and keep their salaries high. Learning these rules has almost nothing to do with intelligence, but only with having "mentors" who will accept the apprentice into the fold. It is a sham.

    It is a tragedy that Linux is saddled with this legacy, which makes it almost impossible to market Linux to average users who generally are just as intelligent, and often much, much more intelligent, than sysadmins. People who want to use computers should not need to be sysadmins as defined by unix sysadmin careerism, but all too often they need those skills. So, they teach themselves or just move on to a more usable system.

    Yes, it is the right thing. Linux is for users, average users. That is what its creator wants, and what almost every one of the really great programmers who have helped in the evolution of Linux wants.

    Making Linux more usable for people who are not narrow specialists in system administration will not in any way affect the underlying system or its power and flexibility. Nor will it remove from Linux the tools needed by professional programmers and sysadmins to use Linux at a different level. To imply that making Linux more usable for "average users" will somehow diminish Linux shows a profound lack of intelligence.


  • by Anonymous Coward
    First thing, this has to be some of the most intelligent commentary I've seen on /. in a while, wonder how long it'll last? Now to rant.

    Hmm...pretty, simple GUIs for average newbies like me, without diminishing power for all the experts and hackers. I wonder why all the experts think that ANY of the tools which they took so long to learn and master will be obsoleted. Grasp the concept of "Free," people. If a person wants to make a customized version of (insert free/free-ish OS here) to cater to a specific group (dumbed down for digital appliances, balls-out-bleeding-edge-everything-included for the hardcore geeks, and everything in between), nothing is stopping them.

    Then again, Linux is only a kernel, I actually wonder what's stopping people from using customized versions of the kernel (see? I spelled it right!) and maybe glibc and throwing a native GUI on top. Wouldn't it just be another shell? As in, just another way to mediate between the user and all that other stuff? Ego? Sloth? Fear of being reviled by everyone? Come on, give me a reason! It would take a lot of work but it could be done! Your CLI shells would still work, just that the NATIVE interface would be GUI-ish. NeXT/Apple did it, Be may be trying to do it.

    Besides, isn't GNU Not Unix anyway? Who says we have to adhere to EVERY SINGLE guideline that Our Heralded Forefathers laid down? I'm not saying discard all the old stuff, but adapt it for different situations. I'm pretty sure that with enough serious thought this LUIGUI (I love that acronym) can figure out how to translate the old CLI commands into shiny pretty point and click, without losing their famed "it just works" power and functionality.

    Oh, and finally, not wanting to know how exactly things work, at least initially, doth not ALWAYS make one "clueless." Many people just want to get their work done. They can learn about their tools later, not TOO much later, but later. Is that so wrong?
  • by Anonymous Coward on Thursday April 15, 1999 @08:14PM (#1930714)
    Where I think the LUIGUI people could best serve Linux OSS developers is in providing peer review for user interfaces. I was thinking of tackling a Linux project with GUI this summer, but I'm no expert on user-interface design. Getting reviewed by the people at LUIGUI could really help me and others put that necessary 'polish' on an app before it is released to the public. Without that polish, people won't start using the app.

    Well, them's my thoughts anyhow.
  • They're right about one thing: Linux (okay, damn it, Linux-based operating systems in general) have too many damn programs. In some areas, there are simply too many potentially useful and yet obscure programs that you won't find unless you're looking for them.

    I don't know if documentation is the full answer; I'd like to see the vendors get a tad more choosy about the programs they include -- I like xbill as much as the next guy, but how many games of that level do we really need? After all, if I miss something, I can always go out and get it on my own.

    And yes, I know that I could simply choose not to install them. Usually, however, I'm not looking to spend time choosing programs to install on Linux; I'm looking to *use* Linux.

    ----

  • My uncle's Porsche is much easier to handle than my more typical vehicle (think '89 Mustang).

    So, the answer is yes. We just have to figure out how to make it so (no Picard reference intended, but since you insist, the Enterprise computer rates as both powerful and easy to use, don't it?)

    ----

  • The problem is not necessarily the number of programs and options. The problem is that the users aren't given reasonable and obvious defaults.

    I should be able to run a Linux installer, answer very simple questions that require no knowledge of Unix, and be presented with a system that can immediately be used to browse the Web, write a simple word processing document and print it, start up, shut down, and other basic tasks. Everything else should be tastefully hidden, perhaps not even installed. Later on, I can customize whatever I want, but I don't want to have to deal with every feature at once.

    My Red Hat installer (for LinuxPPC) automatically installs and activates all the network daemons and servers. My local sysadmins are not pleased when random users install unsecured Linux boxes on the academic network, so I have to prowl around after every installation, exorcising daemons. I don't want to run a server; I just want a nice safe little client. Why am I being forced to deal with all these damned daemons, many of which I don't understand, and most of which I probably don't even want?
  • Projects like GNOME and KDE, and now LUIGUI -- efforts to make Linux 'usable' -- ultimately have the wrong idea. As other people have pointed out, Unix is already very usable. The problem is that it's not learnable. The CLI is extremely powerful, yet impossible to learn by guessing and experimenting (it's not 'intuitive'). You have to read most of UNIX for the Impatient or some similar 700-page volume before you can begin. But once you're ready, the power is there.

    I think (hope) the WIMP (windows, icons, menus and pointers) paradigm has seen its last days. Something completely new needs to spring up. The best a WIMP project like KDE or GNOME can do is to be a pale imitation of the Macintosh. This is not to disparage the very good work being done by the GNOME and KDE developers! It's just that the metaphors-and-self-contained-applications way of designing interfaces needs to die. It's become more a barrier than a path to effective computer use.

    The book About Face: The Essentials of User Interface Design by Alan Cooper, while mostly a WIMP style guide, has a few revolutionary ideas in it (get rid of filesystems as an interface element (PalmPilot!), inherent document management and revision control, etc.). I would suggest that everyone interested in UI design read it. (Judging by a lot of the crap that's out there now -- commercial and free -- people really need Cooper's advice. :\)

    One of the things I am working on is a formalism for a non-hierarchical ('super-hierarchical') namespace, something that I think could be a powerful element of UIs of the future. At least, that's my plan. :) (Visit my Web site if you want to get an extremely preliminary look at what I'm dealing with: innerfire.visi.com/pala-kalloejna/Namespaces.html [visi.com].)

  • Lifestreams, pie menus, paradigm shifts, wearables, 3d: yes, yes, yes, yes, yes.

    :)

  • Open-source-style development is incompatible with traditional HCI. When someone "scratches an itch" and writes code, he typically does it for new features. While this is a decent way to add functionality, ease of use cannot be added in such a slipshod way. UI "refactorings" are pretty uncommon in practice, I think.
    --
    Man is most nearly himself when he achieves the seriousness of a child at play.
  • I am one of the millions of Indians who happen to have read the Unicode standard. Sadly, it is utterly useless in the context of my mother tongue, Telugu. The fallacy of Unicode is the assumption that a set of characters is sufficient to represent the words in a language. Unfortunately, character combinations play an important role, which is totally neglected in Unicode.
    It appears that Unicode will solve the problems of Europe, Japan, and possibly China.
  • I don't agree. For example, if a distribution came with all the Netscape plug-ins installed and configured, I would be a happy person.

    and that's just the first thing I think of. More is better... After all, isn't that why some people still use windows (can't find the right apps for Linux?)?

  • That's a brilliant idea, IMO. Write it up and send it in to them. It might be possible to make it a subproject.

    But think about it for a while first; how will you prevent abuses? How will they choose which projects to review and which to turn down?

    -Billy
  • Right: and bees can't fly, nor was man ever meant to.

    Let's try it and see what happens -- because we HAVE to.

    Maybe we can't do it. Oh well; maybe it's because you were right. More likely it's some other reason; there are SO many of them, you know.

    But how odd -- I'm writing a game. I don't itch for one, but I'm writing one nonetheless. There are itches which can be scratched by other things than writing code for my use -- I'm rewriting Omega because I used to like it so much.

    -Billy
  • Go read Programming as if People Mattered by Nathaniel Borenstein. It talks about combining software engineering and UI design fruitfully. And about how not to fall into the "designed for dummies" trap, making programs easy for experts to use as well as novices. It's excellent, and Borenstein knows what he's talking about—he's no UNIX newbie!

    Borenstein is the head of the LUIGUI project. So this is relevant.

  • Photoshop and Cubase are both powerful and relatively easy to use for their intended audience (artists and musicians, not computer pros). Unfortunately, few software designers think about their audience.

  • I'm not sure that this has to be the only way.

    I have a car. I don't know a whole lot about the car, and I certainly don't consider myself a car expert. I know how to drive it, though, and I can fill it with gas, and check the oil level, and do a few other assorted things that were pretty easy to learn.

    If I have any more complex problem, I take it in to a service station.

    So in the near future with quality Linux desktop apps, fast net connections and ubiquitous encryption (we can dream, right?), it's conceivable that the face of home computing could change. My mom, instead of buying a computer and paying an ISP for a dialup, might just lease an X Terminal + connection as one service contract. All of the administration would be done by the provider. You and I would still buy PCs.

    I think it's pretty likely that admin-ing a system, crash recovery, hardware/software installation, etc. will stay too complex, possibly forever. The way this sort of problem is solved is by creating a service industry. This is pretty much how it works in an office; the only reason it doesn't in homes is access problems.

  • One word: GIMP.

    Seriously, though, this idea that power and ease of use are mutually exclusive is a myth. It takes careful thought and planning, and yes, it's often quite boring. That's probably why many Linux developers don't do it, and they perpetuate this myth as a way of rationalizing it. It's quite sad, but it's the way it goes.
  • I just like how their acronym will be said out loud.

    Can you say "Louie Gooey"?

    I knew that you could. :oD

    J.
  • Not only that, but the "future projects" page suffers from the dreaded question mark syndrome. Seems to me there's a deeper meaning in that :)
  • by raph ( 3148 ) on Thursday April 15, 1999 @09:45PM (#1930731) Homepage
    A couple of comments here. I'm a Gnome developer, and in the Gnome community, there is a serious focus on making Linux truly usable. We talk of the "mom test," i.e. your mom being able to get useful work done with the software.

    Real software usability goes deeper than just interactions with GUI dialogs and so on. For experts, Linux is much more usable than the consumer OS's because it is much more stable, more transparent (fewer things are hidden under the candy shell of the GUI), and has a broader collection of powerful tools. The challenge is going to be preserving this kind of usability while also making Linux more accessible to non-expert users.

    One argument that's often made is that Linux suits the needs of expert users because it was designed by expert users. Since we are people who don't mind learning how things really work, and prefer the tools to be powerful once we do learn them, that's reflected in what we build. The argument usually goes on to say that since we don't want pretty but shallow, easy to learn but limited tools, we will never end up building these things for Linux novices.

    This argument misses one important point, in my opinion. Even if you accept that the intellectual challenge of building usable software isn't by itself enough to keep the effort going, this argument pretty much assumes that the world is split up into hacker types and lots of isolated people who don't understand their computers. But this is not the full story. Many, many Linux people are sysadmins for a large number of not-so-computer-savvy users. Let me tell you something, Linux people in Windows sysadmin jobs hate having to do several fresh reinstalls of Windows per day per few hundred machines just because the registry gets wedged and there's no way to figure out how to fix it. Many of them would like nothing better than to have Linux become a viable desktop system so they'd be able to at least work with systems they don't hate.

    It may well be that these hardy souls turn out to be the vast army that works to make free software usable. Once Linux starts going into the desktop in sysadminned environments, the channels are in place to collect user feedback, and also to do something about it. If such-and-such feature is confusing to users, then the admins will hear about it. It's probably easier in many cases to just fix it than keep dealing with the problem reports, and certainly a hell of a lot more fun.

    Don't underestimate the dramatic strides already made in usability by the Linux community. When I first started working with Linux about six years ago, the usual way to install new software was to check the README, edit the Makefile, more often than not fix a few #includes or function prototypes, then run a series of make commands. These days we have RPM packages and so on, but we also have ./configure; make. To me, the autoconf system is a classic example of "deep usability" as opposed to the surface kind.

    In summary, I think we're just going to keep on going until we get there.
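    The "deep usability" of ./configure; make is that every package answers the same two commands, no matter what it builds internally. A toy sketch of that contract (the package and its configure script here are invented for illustration; real autoconf scripts do far more probing):

```shell
cd "$(mktemp -d)"
# Stand-in for an autoconf-generated configure script: probe the
# system, then write a Makefile the user never has to hand-edit.
cat > configure <<'EOF'
#!/bin/sh
echo "checking for cc... $(command -v cc || echo none)"
printf 'all:\n\t@echo built\n' > Makefile
EOF
chmod +x configure
./configure   # probes, then writes the Makefile
make          # runs the generated rule and prints "built"
```

    The user's side of the contract never changes; the probing and Makefile generation are the package's problem. That is exactly the kind of invisible work that separates README-and-edit-the-Makefile installs from autoconf ones.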

  • ...is that there are many different flavors of linux and many different window managers. If, for example, RedHat + KDE becomes (l)user friendly the hardcore linux freaks could still use Debian or SuSE + fvwm2. besides, I'm sure they'll make the usability features options that can be switched off by more experienced users. and you must admit that the Windows Control Panel is pretty useful for the simple configuration stuff, and I'd love to see such a thing in linux, but when you get down and dirty nothing's better than plain text config files and joe :-)

    see, it's not either one or the other, it's a symbiosis (sp?). hardcore linux geeks will still be able to use the Ultimate OS as they like to, while Joe Schmoe the average (l)user will also be able to get meaningful work done without having to get down and dirty into all the config files.

    just my two eurocents of course :-)


    )O(
    the Gods have a sense of humor,
  • ... an actually-decent thread on Slashdot about GUIs/usability without loads of "keep the clueless idiots out of Linux" or "hey, it's usable enough for me" posts.

    If you still don't think Linux should appeal to the mass market, think about this:
    1) Which OS would you prefer to use at work?
    2) Which OS would you prefer to administer on other people's machines, either in or out of work?

    The whole "Clueless users are bad for Debian" thing was incredibly depressing, but some great stuff came out of it, such as Andrew Pimlott's comments [debian.org], especially where he said "Real usability isn't about reducing functionality and presenting a pretty face to beginners. Done right, it will make all of our lives easier. Even, eventually, beginners."

  • Actually, I'd say that the computer on the Enterprise in Next Generation was pretty evil. I tell you, I wouldn't dare use the holodeck; you always get stuck there, and the computer always tries to kill you. I'd rather have Holly, because at least he doesn't intentionally harm you, although he does enjoy practical jokes.

    As for UI, let me point out that it's really essential that the various design choices all work well together, rather than working okay. Case in point: Windows lets you resize any given window from any border. BUT, the region in which you can do that is quite small, and marked only by a cursor change. Worse, the mouse tracking algorithm is awful, and it's quite a lot of work to get the mouse to be in the right spot (especially as clicking a mouse tends to nudge it).

    So if you're going to use a similar resize interface, make the mouse tracking more appropriate to such small adjustments; make the resize border more than a few pixels; always provide a nice big target in the lower right hand corner (the default for this sort of thing) for people who don't want to deal with the edges.

    Mostly though, think about the repercussions of some fundamental UI choices, and if there's a better way, would it make life better in general, not just within a specific app?

    There are similar issues, like weighting the importance of bindings by mnemonic devices (N for new folder) or placement (Z undo, X cut, C copy, V paste are all near where the Mac originally had its command key - some of the easier buttons to press in conjunction). And think about two hands having to press the buttons, or if one hand is likely to be on the mouse when an operation is performed. And if so maybe you want to keep the commands on one side of the keyboard (probably the left - only right handed people are important ;)

    Being able to infer information from other config files (e.g. is the mouse set for a right handed or left handed person? can you shift things around because of that? is it wise to?) is also a good plan. But ask before doing stuff that differs from the default configuration!
  • ...Later on, I can customize whatever I want, but I don't want to have to deal with every feature at once.

    This sounds like "progressive disclosure", an idea I first saw mentioned in articles about the Xerox Star. See, for example, this section from what I infer is an essay in the book Bringing Design to Software [stanford.edu], which says:

    The Open command was the basis for applying a technique of progressive disclosure--showing the user only the relevant information for a task at hand, and then providing a way to reveal more possibilities as they were needed.

    Perhaps not exactly the same idea, but, if not, it still might be a somewhat related idea.

  • Linux can be both for the power user and average user. Yes, Microsoft has shown that average users lead to instability, but it doesn't have to be that way with Linux. It will take work, but with careful design code-bloat and extraneous features can be prevented while at the same time giving Linux an easy-to-use interface. I see a point when distributions will have an easier installation than Windows (some argue it does already) and one of the options during installation would be to only display a graphical interface so users don't have to use a command line. In order for big commercial vendors to come to Linux (Quark for example), we need a critical mass. We probably have that already with power users, but we'll need average users in order for the fabled world domination to occur. When PC Magazine has as many Windows-related articles as it currently does about OS/2, I'll be convinced that has happened :-)
  • Sysadmin has little to do with programming, software development, or artistic and scientific creativity

    Possibly, though I'm sure Larry Wall or Randal Schwartz might have something to say about that

    or even understanding how systems work

    Bullshit. That's the _WHOLE POINT_! Sysadms, if anything, are often the _ONLY_ people who know how systems work and what they do. If you don't, then you better start looking for work..

    average users who generally are just as intelligent, and often much, much more intelligent, than sysadmins

    What planet do you live on? Most of my lusers are incompetent snots who expect everything NOW, and can't figure out why *.* doesn't work, can't handle case-sensitivity, and can't bloody spell anything.. And they keep giving me code to run on production systems that's full of leaks, 2MB static binaries to run as CGIs accessed twice a second... Fucking accounting jackasses who can't figure out why you can't run a production Oracle DB on a single drive (you can, but you could also build a jetliner without triple and quadruple redundant systems).. My pay seems mostly derived from the ability to not kill these people on a daily basis..

    Or are you just a disgruntled user?

    Learning these rules has almost nothing to do with intelligence, but only with having "mentors" who will accept the apprentice into the fold. It is a sham

    Again, what planet do you live on? Or haven't you scanned the shelves at Borders or flipped pages at ora.com? I have received a total of 40 hours of formal education on Unix, and that was an AIX class which I could've _taught_.. Everything you need to know to get started in the field, you can learn from the Red Bible, ORA's books, systems docs (docs.sun.com for SPARCs), and manpages. You just need to train your mind to ask the right questions.. And don't represent yourself to be something you're not, but that's probably good advice for any field...

    Sysadmins aren't necessarily paid for their skillset, though that's important. They're paid to be responsible for operation of systems that _must_ remain operable _all the time_. The skillset and ability of that admin (or, where skills are lacking, the availability of that admin) differ, but the customer only really cares that systems stay running as long as possible for as little $$$ as possible. uptime/cost is one of the more annoying ratios in my job.

    Limitations admins may place on your privs and environment may seem (and can be) arbitrary, but think for a second: do you _really_ want root password? My policy on root is that it comes with a pager, and when TSHTF you are on the page list. Anything goes wrong with the system, you are a suspect. Because I'll be damned if I'm answering to Mr. PHB for some junk you did as root that you didn't document (or even understand)..

    Besides, in some ways, admins shouldn't be coders, they should be hackers, because the admin's job is to figure out which piece is keeping the system from running optimally, and replace that piece...

    What was your username again?
    *clickety-click*
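    As an aside, the *.* confusion mentioned above is a DOS habit colliding with shell globbing: in a Unix shell, * alone matches every name, while *.* only matches names containing a literal dot. A quick sketch (the filenames are hypothetical, and the scratch directory keeps the globs predictable):

```shell
cd "$(mktemp -d)"        # fresh empty directory for the demo
touch readme notes.txt   # one name without a dot, one with
echo *                   # -> notes.txt readme  (* matches every name)
echo *.*                 # -> notes.txt         (*.* requires a dot)
```

    Under DOS, where every filename had a dot-separated extension, *.* and * were effectively the same thing; under a Unix shell they are not, which is exactly the kind of trip-up new users hit.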
  • But this same arrogance you associate with Unix (or Unix-like...) SysAdmins is very similar to the paternalistic attitude Microsoft demonstrates toward most of its users.

    Do you have to cut off your nose (i.e., throw out all the detailed control of a Unix, or Unix-like, system) to spite your face, ala Windows/WindowsNT, to make a product that is perceived to be "easy to use" and "easy to install"?

    It is hard to think in non-cynical terms about "average users" when you don't identify yourself as one, especially when part of your job is supporting them in how they use their computers.

    Sure, there is a stereotype there. But the sad thing is, there are so many Average Users who live down to it.

    I know I personally don't care whether Average Users ever really consciously adopt Linux, because for Linux, it Just Doesn't Matter. Linux doesn't NEED them to survive. If all it does is remain a "hacker" tool, then so be it.

    Sure, it may doom me to living a generation or two behind the Average User's systems. It may doom me to figuring out how to reverse engineer drivers, etc., from the Average User's OS to Linux, violating all sorts of laws, etc.

    I'll take whatever baggage Linux has over the increasing amount of baggage that comes with being dependent on the Average User's OS, because the tradeoff is Freedom.
  • Why should just hackers/programmers/geeks be the only ones to benefit from Linux? If we keep Linux only as a toy for the few people who are willing to put up with its eccentricities, we condemn Linux to become yet another footnote in computer history. Well, why *shouldn't* Linux stay this way (or at least take its own sweet time to become something else)?
  • I disagree. This is not purely a Linux problem. Many people do not know where everything is in Windows, and that's smaller! If you don't know a program is there is that in any way worse than not having it? I don't see how, apart from disk space.

    The main problem is how to track down the best tool for a job. This is a perennial problem, and is not yet solved to my satisfaction.

    The GUI approach is to show choices on screen via "menus" and "toolbars". This once made things simpler, but now there are too many choices, and things end up being buried in sub-menus and are difficult to find. This approach also leads to large "multi-function" applications, as it tends to be difficult to use GUI tools in concert.

    The unix/CLI approach is to provide lots of small tools that can be strung together, or programmed, to provide any functionality. This makes it harder initially, as lots of commands have to be *learnt*, as well as how to string them together, but means that many obscure functions can be created on the fly by the user.

    Neither solves all the problems, and even the CLI method still makes it hard to find the right tool for many functions. I often find myself doing a
    man -k
    and sometimes I'm lucky... Some sort of find tool by function might be nice, but perhaps there are too many possible descriptions of functions....

    Some of the concepts I guess I'm trying to describe are things like LaTeX, where you request a heading and the computer handles the details, but applied to the whole system...... I really don't know where to start, but perhaps this group does, and perhaps they will help to take Linux to a new level - I want *easier* to use than Windows!
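    The point above about stringing small tools together deserves a concrete example. None of these commands knows anything about word frequency, yet chained together they answer a question no single GUI menu item is likely to offer (input.txt is a hypothetical file):

```shell
# Top five most frequent words in input.txt, built entirely from
# small single-purpose tools glued together with pipes.
tr -cs 'A-Za-z' '\n' < input.txt \
  | tr 'A-Z' 'a-z' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -5
```

    Each stage does one job (tokenize, normalize case, group, count, rank, truncate); learning those few commands once buys you thousands of such ad-hoc combinations, which is the on-the-fly power being described.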
  • This is not a terribly good metaphor. When you refer to power in the Porsche, you are talking about engine size, road handling, etc.

    Power as referred to here is about control over the system and its environment, not CPU horsepower. The Porsche is easy to use because it severely restricts the on-the-fly control you have over the car. (e.g., AFAIK you can't adjust the fuel/air ratio easily, etc.) The defaults may be extremely good, and the automatic control excellent (you are paying for a Porsche), but you have to be a mechanic to have "Power" (in this sense).
  • I recommend this book as well, especially as it has some fresh ideas, and disagrees with the holy mantra "Metaphor, Metaphor, Metaphor". Why copy the world outside into the computer, warts and all, when there are other, more logical ways of doing things as soon as you're not limited to a 2D sheet of paper? (See the calendar-example in the book.) ISBN: 1568843224
  • Real software usability goes deeper than just interactions with GUI dialogs and so on. For experts, Linux is much more usable than the consumer OS's because it is much more stable, more transparent (fewer things are hidden under the candy shell of the GUI), and has a broader collection of powerful tools. The challenge is going to be preserving this kind of usability while also making Linux more accessible to non-expert users.

    I like this summary but you left out one important community of users, and that is handicapped users. For many handicapped users, graphical user interfaces are the kiss of death. My particular handicap keeps me from using keyboards and mice. I instead use speech recognition and a tablet. Blind users I know require text-to-speech for their user interface. Even a minor handicap like color blindness can render a graphical user interface unusable.

    I submitted a rant on this about a month ago to the /. features queue, and either the queue is really large or it's a hint that I need to rewrite my rant :-) . If you want a preview take a look at:
    http://www.connact.com/~esj/ha.html
  • heh... try Debian and see how long it takes to browse through 3000 packages.

  • by Signal 11 ( 7608 ) on Thursday April 15, 1999 @07:43PM (#1930745)
    For me, this isn't a question, but an answer, and the wrong one at that. Let me explain.
    Right now, linux is only really useful to hackers/programmers/geeks. The reason is, only hackers, programmers, and geeks use it. There is right now a push underway to add another group - the average user. But are we certain we want to traverse this path?
    Microsoft has shown that when you combine simplicity with stupidity, you get unstable programs and operating systems. The users demand more and more - they don't care whether code looks beautiful, they care about themes and cool sounds and new mouse pointers and talking paper clips. What's the net result? Software engineering.

    Software engineering is built on one principle - "build it to spec". The spec in this case is: make it easy enough for a dummy to use. Well, it does that. It's also woefully unstable.

    Now, the UNIX heritage is a different story. It's computer science. An idea is presented, evaluated by its peers, and brought to implementation if it's agreed to be the best solution at the time. The net result is - progress is slower, but the foundation is much more stable. You have a powerful set of versatile utilities you can use for a variety of tasks - grep, awk, named pipes. Software engineering, however, does not have "versatility" listed in the spec, nor should it - it's built to order. One goal, one purpose, one solution.

    Before we invite the average user into the fold, we should ask ourselves - are we being impatient? Let's show them what computer science can do, and avoid computer engineering.



    --
  • Why should just hackers/programmers/geeks be the only ones to benefit from Linux? If we keep Linux only as a toy for the few people who are willing to put up with its eccentricities, we condemn Linux to become yet another footnote in computer history.

    Well, why *shouldn't* Linux stay this way (or at least take its own sweet time to become something else)?

    Keeping Linux as a pet project only promotes its exclusivity. The question arises, is Linux being built to make the world a better place or to prove just how smart its programmers are? If it is the former, then Linux supporters need to demonstrate its usefulness in the Real World. If it's just a ego booster, then Linux supporters should make that clear to the industry and accept the loss of support. What is the point of promoting Linux if you don't plan to use it? And who then determines if and when Linux becomes a viable product?

    Open source software depends on the masses to design, create, test and implement it. Creating Linux and then not using it for useful, practical, day-to-day, general computing just becomes a form of mental masturbation. It's solving Fermat's Last Theorem [mbay.net], but with a lot less fanfare.

    Currently Linux is viable. It is useful. It does exist as a Real World solution. To pull back now would be to condemn Linux to the same status as the logo [mit.edu] programming language.


    -S. Louie

  • Linux needs to evolve into an OS that is usable by the masses if it is to become anything beyond a novelty item. As it stands, Linux has become popular for the very reason that it can perform competitively with MS, Apple and other UNIX variants on certain levels. It needs to make the next step and become a reliable, trusted foundation that general software can be built upon.

    It's not a matter of competing with MS. That is a small part of the big picture. Linux and other open/free source products means creating software that people can rely upon to do work. If you're into Linux just for the anti-MS nature of the movement, then you're a little misguided. Creating good software for people to use means creating software for everybody. Just because someone didn't contribute code to gcc doesn't mean they can't benefit from it. The same goes for Linux in general. The light at the end of the tunnel is a simple, effective, reliable system. Why should just hackers/programmers/geeks be the only ones to benefit from Linux? If we keep Linux only as a toy for the few people who are willing to put up with its eccentricities, we condemn Linux to become yet another footnote in computer history.

    Right now Linux hasn't won too many people over. Large companies are reacting because they don't want to get left out, just in case. Making "ease-of-use" an issue with Linux may promote more general usage from those who are computer-savvy but not inclined to work with non-proprietary software, or who have had bad experiences with UNIX. The more general the usage, the more companies will be enticed to promote Linux as a viable option. The more companies, the more acceptance and, voila, a world where good, thoroughly tested, useful software is the norm.

    Hey, what can I say? I'm allowed to be overly optimistic once in a while...


    -S. Louie
  • Unfortunately, to forge a unified, integrated and pervasive UI, you need a dictator to lay out the framework, and impose it as the standard.

    The linux world can't even agree on a toolkit! What do they expect, vendors to develop separate Qt and GTK versions of their software? Ha!

    Look at the two shining examples of integration and usability - Mac and NeXT. They are/were both also shining examples of fascism in design. There was usually only one way to do things. This might have made some developers mad, but it was and is beneficial to users.

    Linux is the antithesis of fascism in design. It is inherently anarchistic. That is why you will continue to have complete anarchy in the world of linux UIs.

    And remember, if linux becomes the Mac, it will cease to be linux. None of you will be very happy with it. Let the novices use Macs. They'll be happier too.
  • The mom test includes setting up a system out of the box, including hardware configuration, etc.

    The mom test includes installing software.

    The mom test includes crash recovery.

    In other words, the mom test means your mom can send you to Sweden and still use her computer.

    Of course Linux fails. So does Windows 98, NT, and sometimes the Mac too (although it gets higher marks than any other desktop system).

    The only systems that truly pass the mom test are WebTV and the Palm Pilot.

    Once again, linux users have no understanding of what ease-of-use really means.
  • I have found at least one document that is available in Word format only.

    http://www.luigui.org/projects/recommendations.doc

    Is this usable? At least they could put all their ideas in HTML.

    How lame.
  • using face="arial,helvetica" in the font tag is the preferred method - it covers both of your platforms (Unix and Windows/Mac).
  • I think you've created a false dichotomy. Both engineers and scientists value elegance and parsimony. Think of Occam's Razor (Science) and the KISS principle (Engineering). MS products are built using a third principle, Marketing, that results in massive feature sets with little effort to make them orthogonal and organized in logical hierarchies. They are, instead, focused on specific "market segments," and even if this means that there are half a dozen slightly different ways to perform a particular task, MS will include all of them. It's bad Engineering and bad Science, but it's excellent Marketing.

    One of the bits of MS disinformation that people often accept without question is that more features equals more power. Even hackers fall into this trap (just look at how many options the average GNU tool supports compared to the Unix original).

    Rob Pike of Bell Labs presented an interesting paper at USENIX (around '86 or so) titled "'cat -n' Considered Harmful" where he illustrated the dangers of hacking random features into a tool when reasonable alternatives already existed. "cat", he argued, was for copying a list of files to standard output. The "pr" command was for formatting listings, and thus was the more appropriate place for introducing line numbers. (Try "pr -n -t".) This was just one of many examples of unneeded complexity he presented, and the trend he identified has grown greatly over 13 years. Over time these things have had a similar effect on Unix to what MS's Market-Think has had on its products, making Unix harder to learn and use than it ought to be.

    I don't think the danger is in making Unix too easy to use. Rather, I think the danger is in making Unix as feature-laden as MS's products. It may seem a radical idea, but many of the solutions for Unix usability might be found by going back to its roots. We need to seek quality of features rather than quantity--MS will always win the latter game. It's our job to show the world that their victory is in reality a meaningless exercise in artificially segmenting the market.
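    Pike's point is easy to demonstrate from the shell (a quick sketch; the sample file is invented):

```shell
# Line numbering belongs in pr(1), the formatter, not in cat(1):
# -t suppresses pr's headers and trailers, -n prepends line numbers.
printf 'alpha\nbeta\n' > /tmp/sample.txt
pr -t -n /tmp/sample.txt
cat -n /tmp/sample.txt    # the bolted-on feature Pike argued against
```

    Both commands produce numbered output; Pike's objection was that duplicating the feature in cat adds complexity without adding capability.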

  • Have a look at the SLIM project from Stanford University. I went to a keynote by Monica Lam in January in NZ where she described such a system using terminals that don't need to be upgraded -- the monitors can handle screen refreshes at human eye limits. The server is upgraded to do more and more, but the end user never needs to know...

    Unfortunately I can't find a URL... anyone from Stanford reading this and can help?
    http://www.dstc.edu.au/~ralf [dstc.edu.au]

  • Seriously, though, this idea that power and ease of use are mutually exclusive is a myth.

    Equally mythical, IMO (but no more so), is the idea that anything can be made easy with the right UI. There is, I will readily admit, no excuse for making cryptic software. However, some things are, in fact, complicated, and at some point that complexity must be made visible to the user. If not, then one gets menus nested a million deep, brain-damaged wizards, and the like, as you attempt to express complicated concepts in a simple "language".

    Setting up your modem to dial your ISP so you can surf the web shouldn't require any understanding of TCP/IP, routing, etc. - or indeed be any more complicated than using the web. Optimizing your multi-homed machine to efficiently route HTTP requests over your IP tunnel will require that knowledge, no matter how hard you work to conceal it, and at that point it should be assumed that the user has that knowledge...

    As simple as possible, but no easier.


  • You make an extremely valid point regarding the difference between a home ("mom") system and a system that functions well in a corporate network environment. (One of the huge problems with Windows 9x is that it's trying to satisfy both these markets at the same time.)


    However, in this context, Linux currently comes out behind in some respects. Linux (and Unix in general) is still very much based on the host-terminal model, and therefore provides excellent remote administration capabilities, but limited capabilities for administering a group of hosts.

    But in a client situation, corporations don't necessarily want better administrative tools, what's needed is less administration altogether.

    Currently there's a lack of system management tools available, and if you did have 1,000-odd Linux desktop machines, you couldn't even use something as rudimentary as Windows system policies to enforce default settings. In fact, as far as I can tell, you are limited to using a local account database on each machine, unless you buy Caldera's NDS module. (I could be missing NIS here.) When the #1 user complaint has traditionally been password management (too many passwords, not synchronized, etc.), this is a big problem.

    So, while it may seem like a panacea to be able to telnet into Joe User's box and install or fix his word processor, what's really needed is a network-oriented system where such actions are unnecessary on a user-by-user basis. X terminals and Java NCs have pretty much fallen by the wayside -- maybe that's where the network admins should look, rather than trying to give the "dumb user" population Unix.




    --
  • Note that in most environments, the "power user" who is hacking his registry or trying to install Linux on his work machine is the worst kind of user, and the kind of user that generates the most support headaches.

    A real power user is someone who's writing Access (or Unix-equivalent) databases or HTML/JavaScript and so on. It's not someone who's trying to change his system configuration or test out some cool downloads he's found. Those people are trying to get the company to subsidize (through hardware and support) their computer hobby.

    Most corporations either do disallow this behavior or would if they could. Linux/Unix is the ultimate solution in this regard simply because root is restricted enough to give these folks little or no opportunity to hack on their own configuration.

    Note that I'm not expressing contempt for the average user trying to do their job here - only the special case.
    --
  • I'm one of those sys admins you speak of, and you're correct. There is nothing I'd like to see more than Linux becoming a usable (and already stable!) desktop system that my users could handle. It would certainly make my life easier, from installing software to preventing users from mucking up the system components like they can with Windows (without some serious tweaking on my part). I've been able to set up NT workstations in a way that makes my life much easier than dealing with Win9x machines, but there's still something missing. My users end up running downloaded screen savers, etc. that crash NT. Sure, I could format their disks with NTFS and prevent them from writing anything to the system files, but then I'm limiting myself in recovery options and making things too restrictive for my users. Linux is definitely my preferred option.

  • Yeah, I think it is. One of the good things about an M$ OS is that the average joe can do some things pretty easily right out of the box. The simple (although ugly) GUI coupled with Office lets users be productive doing many common tasks without a lot of hassle (crashes aside). Nobody has done it better. There is no reason, though, that common tasks can't be made easier under Linux without sacrificing its power and flexibility. Look at the SGI desktop combined with its online help, man pages and integrated applications. These machines are great. Every time I go to do something on the O2, I get an education, and I get things done fast, and in a pretty flexible manner. This is because underneath their GUI is a pretty well-laid-out Unix. Things like rlogin, remote display, and multimedia are more or less transparent. They have put in nice GUI tools for most common tasks, but you can go in and modify things, and in many cases the GUI still works. Better, it will tell you when it cannot (most of the time). In many ways that is what we should shoot for. This time it is being done on common hardware with open standards and protocols that allow for a much more powerful computing experience.

    I can't wait for this personally. Makes all the long range plans of M$ look pale in comparison.

    Linux has that same appeal. When I try something new, I hit the net, then the HOW-TO pages, and again I am rewarded with lots of info, and the sense that the machine is being used the right way. I can look at command history, and note what I do, or script it for later. Very cool, and it can be done as easily from any machine around that is networked. Most of the good pieces are there, we just have not honed them long enough, but that will come.

    The average joe needs to be able to compute, and learn what ever they need/want to. If clear GUI tools are done right, then they will allow the rest of us to get in there and make the machine do what we want, how we want it done, from where ever we need to.

  • I agree with this. I have been working with computers since an early age. Many have forgotten what well optimized code is. Many of their "features" are marketing driven, and not user driven. The shoddy archetuture (ok so I cant spell this morning) of their software reflects this. M$ does not have a lot of my respect due to their almost total lack of regard for those who have come before and built the technology upon which they base their products. It all comes down to the fact that they have built a business model that requires that they own everything, and the consequences of this are beginning to become apparent to the average joe user. I can't help but wonder what would have been if they contributed to things rather than destroyed them.

    All I was really saying is that their core ideals have created something that makes easy stuff easy, but anything else is torture. Given that the Linux community is beginning to build these things again, we should be able to look at what they have done, and filter out those things that drive the not-so-average user nuts.

  • I guess I need to clarify what I said before. I don't want my GUI to look like theirs, and I certainly am not advocating use of their standards. The average user will want to be able to do most of the things that the MS GUI is capable of, and they will want to do it visually. That's all.
  • by Grenamier ( 12799 ) on Thursday April 15, 1999 @09:28PM (#1930762)
    Computer science has always appealed to me because I like things that blend elements of different disciplines into useful things. CompSci itself is a great blend of science, engineering, and mathematics in varying proportions depending on the problem you're looking at. The main products of computer science are abstract though, things like algorithms, proofs and theory...not operating systems or wordprocessors.

    Software engineering applies principles from computer science to the problem of building real software, which is implemented by programmers. Scientists, engineers and programmers all have a hand in the software business, the way an engineer might take ideas from chemistry and physics to design a good bridge which is built by construction workers.

    Computer science isn't really interested in creating actual software products (as well it shouldn't be). Saying that "computer engineers" should step aside and let computer scientists do their thing is like telling a civil engineer to take a hike so physicists can show them how to build an overpass. Well, that's not what they do (I just imagined a white-haired physicist in a lab coat mixing cement and cracked up.. :)

    I don't think it's fair to blame all of the problems of software completely on software engineering. I think the engineering principles get thrown out the window a lot when companies decide they don't want to bother with them anymore. Also, engineering can have versatility if it's decided that versatility is wanted in a project... it's all part of defining the "spec". Bad marketing-driven design can lead to bad specs that include talking paperclips and the like, but good design with clear goals can lead to good specs and good products.

    Again, computer science, software engineering and programming are all interrelated, and good software is the result of mixing the best of each. Unfairly denying any one of the three leads to bad software.
  • Linux is all about having the freedom of choice. Not just a choice other than that Redmond OS, but the choice among thousands of different programs, the freedom to make your own hit parade of software, and the freedom to install whatever the heck you want to! If you think "Linux" installed too many different programs, you better look at the documentation about the package management software and see who **REALLY** was responsible for all those toys. ;)
    Freedom is the American Way! We want our choices! Go to freshmeat and download whatever programs you want! Go to slashdot and read any article you feel like! Take a look at the gazillions of nifty backgrounds at propaganda.themes.org (good job, bowie!), too many to choose from! That's the way I want it, no shackles! Wanna cry about how complicated it all is? Have a tissue and grow some balls already, son! Wanna be tied down to moronic simplicity *AND* superb powers? Go to microsoft.com and sell them your free soul. Me, I'll take Linux and its challenges any day.....

    -Rahga
  • Keep GUIs away from the OS setup and config please.

    I'm worried about this because I have already experienced some M$ Windows type behavior with my Redhat Linux box.

    e.g. Xconfigurator removed some font directories for me. No warning, just took them out.

    e.g. Kvidtune totally screwed my mode lines so that X would not start.

    I could go on, but the point is these nice tools for the uninitiated seem to cause a lot of trouble.

    Now you could say that my Linux tools are only beta etc., but Microsoft seems to have demonstrated, by spending a huge amount of time, money and effort on coding and testing, that it just can't be made to work. The more complicated and powerful the system becomes, the less likely these GUI tools are going to work. They just can't anticipate every present and future scenario. Who is going to waste their life keeping them up to date?

    I don't want to see Linux go down that road.

    Yes, for simple apps and simple tasks, provide a nice, simple, attractive interface. For the rest of the system, forget it.

    Users of these things should never have to install the operating system or configure it. Let it come in a Mac-type box, all sealed up and ready to run.

  • Maybe I jumped into this forum early, and none of the anti-User Friendly people have gotten a chance to post yet...

    Making Linux easy to use is in no way related to making Linux not powerful.

    For a lot of people, the computer and the OS is a means, a path, and not the objective. They use them as tools, much as you or I use cars, or toasters, or microwaves as tools to get us places, to make toast, or to cook food. Why would we need the mechanical knowledge and training to take apart and repair our vehicles if they break down? Why would we need to be able to rebuild the toaster if a wire inside snaps? Why should we know radar and electromagnetic wave theory to use our microwaves?

    In the same way, no user should need to be able to recompile, debug, or rewrite Linux so they can use it. They don't need to be able to RTFM, or edit .rc files, or set security or quotas to use their computer.

    If you don't want Linux to be user-friendly or usable, all it takes is to start your own fork, make your own distribution, and compile your own code to ensure that you always have an obscure, powerful, difficult-to-use OS.

    While you're at it, you can always go and take apart your car for fun too; you shouldn't be driving it if you can't break it down and fix it.

    AS
  • One reason Windows is so unstable is because it is built on the assumption that device drivers are flawless. Microsoft always had the intention of "certifying" every device driver that makes it into a Windows setup. Unfortunately that kind of quality assurance is very near impossible, even for Unix experts / software geniuses.

    The difference with Unix/Linux is that any process can be killed and restarted, and there is always some kind of escape that allows the user to execute a kill. Even buggy filesystems and video drivers (X servers) can be killed. For example, I run KDE and there appears to be some kind of bug in KDE that causes it to lose the ability to launch new processes after running for days--it won't even let me log out. But I can always use the ctrl-alt-backspace keystroke.

    So anyone who's interested in building a GUI Linux should take into consideration the fact that the drivers and apps written for it *will* be bug-ridden. Always provide the most graceful exit possible.
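    A sketch of what that "graceful exit" handling looks like from the shell (the wedged app is simulated here by a background sleep):

```shell
# Escalating shutdown: ask politely with SIGTERM first, then force with
# SIGKILL - much like ctrl-alt-backspace is the last-resort escape for X.
sleep 1000 &                  # stand-in for a wedged GUI process
pid=$!
kill -TERM "$pid"             # graceful: lets the process clean up
sleep 1
if kill -0 "$pid" 2>/dev/null; then   # still alive?
    kill -KILL "$pid"         # last resort: cannot be caught or ignored
fi
wait "$pid" 2>/dev/null || true
echo "cleaned up $pid"
```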
  • I am optimistic about the future of user interface development on Linux/Unix because there is a lot of motivation to get it done and because the overall architecture is now defined. Let me elaborate.

    Raph's outstanding comment explains some of the motivation for UI development. I am in the position he described and am constantly struggling to make it possible for end users to do the things they're supposed to be able to do with their expensive hardware and software. It's a great frustration when big office suites crash the workstation for no apparent reason, and the cause of the problem turns out to be something obscure such as a misconfigured SCSI card. The UI is there but it's built on an unstable foundation.

    Those who provide support for end users demand a higher level of quality and are often willing to sacrifice their personal time to provide that quality, if only one small project at a time. Some innovative companies actually pay their highly-skilled employees to improve the overall quality of software, which in theory will reap benefits in the long run. Netscape (AOL) is probably the best example.

    Netscape/AOL's Mozilla project is also a good example of an application being built with the entire UI known beforehand. Developers today have a lot of advantage over those who tried to create UI's a decade ago because there are now lots of good examples of previous efforts. For example, text-based menus were a bad idea, but they seemed like a good idea until graphical menus became commonly available.

    So, basically, we now have a good idea of what a UI should look like and how it should behave. We have a reasonably well-defined target platform: both pointing device and keyboard inputs, pixel-mapped video, pop-up menus, resizable windows, icons, etc. An immeasurable amount of work has been put into modern GUI design.

    UI's built on a stable foundation will continue to evolve because there are knowledgeable people who are motivated to do it and because the work is not as difficult as it was in past years. I am trying to do my part with a UI-enabled project I intend to license as open source.
  • Just what Linux needs: more *nix snobbery. I won't go into the car analogy; we've all heard it... but that's my argument. Next time your car doesn't work, ask your mechanic to fix it. If your mechanic says "No, you should know how to do this yourself; you should have to learn the intricacies of the O2 sensor and the exhaust manifold before you can even drive a car..." you'll know how everyone else feels when they listen to your arguments about Linux. Your contempt for the "average user" is both sad and disturbing.

  • Never thought of that. Whoever finally comes up with leasing X terminals or the like, with administration done at a central location like you mentioned... is going to parallel the cable industry in its deployment. Clever idea. :)

    Now all you would need would be an idiot-proof GUI and lots of funding.
  • I don't seem to follow your logic. Easy to use does not inherently mean unstable. Linux has already been architected from the ground up to be a robust, stable OS. There is no reason why this idea cannot carry over into the design and implementation of a robust user interface.

    -mua'dib
  • While you're at it, you can always go and take apart your car for fun too; you shouldn't be driving it if you can't break it down and fix it

    Although you say this in jest, I think it's a great idea. Not to require every driver to understand every detail about how their car works, but at least to have a general understanding of what's going on and what aspects are important for safe operation. I understand that some non-US countries, e.g. Germany, actually do have some requirements of this type involved in getting a driver's license - and I don't think I need to make an argument for the superiority of German drivers over American drivers.
  • I don't know about anyone else, but compared to the GIMP, Photoshop has a much better interface. Although the GIMP is pretty damn similar, subtleties like the keyboard shortcuts were completely missed.

    Photoshop's layout of the keyboard shortcuts is incredible... common combinations of shortcuts (eg. F-F-TAB) are designed to be fluid on one hand, etc. That's why so many graphic designers hate upgrading Photoshop -- they change the shortcuts each time round. (I'm still sticking to 4.0.1... 5 is too different!)

    Macromedia xRes seemed to be a direct clone UI-wise of Photoshop, and they completely missed the subtleties too.

    When the average programmer decides to clone an interface, (eg. FVWM'95 from Win95) I've found they tend to miss the point. It might *look* like the original, but in real use, it's miles off.
  • Don't so quickly dismiss the "average user".

    There exists a small percentage of average users who, given a real OS and real tools, will become intoxicated with the power they suddenly find at their disposal. These are the people who the trade press calls "power users", the folks who think hand-hacking the Win9X registry is the height of computing cleverness. As Linux grows, many of these people will eventually become contributors of code, documentation, debugging, and other good things. They're good, bright, capable people who simply don't yet know how truly powerful their computers can be.

    The task is basically evangelistic: these folks believe a false doctrine (MSFT makes the only OS I need, it's the only OS out there, therefore I must learn all I can about this OS), and it's up to us to show them the truth. Projects (like GNOME, KDE, etc.) that strive to make Linux-based computing more accessible to "average users" will also make it interesting to the power users. Once the power users learn about Linux, they will become our future contributors.

    If we can get more average users, we'll get more hackers, too. These new hackers won't really care about the fine distinctions between computer science and computer engineering. They'll just like software that works, and they'll want to help make more of it. The more hackers we have, the faster we'll outpace proprietary software, and the faster the quality of our own tools will improve. It's a virtuous cycle.
  • Skyshadow wrote: They're right about one thing: Linux (okay, damn it, Linux-based operating systems in general) have too many damn programs. In some areas, there are simply too many potentially useful and yet obscured programs that you won't find unless you're looking for them.
    But a good interface is a very effective way around this problem. For example, the company I work for (Amaze) [amaze.co.uk] has rebuilt the interface to a system more complex than Linux (it operates X-ray microanalysis equipment) so that any user at any level, by following a simple flowchart of the task they're trying to do, is shown, at each stage, just the controls they need and no others. It's called progressive or incremental disclosure in interface circles. I can see something like this for common Linux tasks - choose what you want to do from some available set of choices and the interface shows you just the commands that are relevant. Cuts down the complexity a treat.
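
    The progressive-disclosure idea above can be sketched in a few lines: the full command set still exists, but the interface only surfaces the commands relevant to the task the user has chosen. This is just an illustrative sketch - the task and command names here are hypothetical, not from any real Linux tool.

    ```python
    # Minimal sketch of progressive (incremental) disclosure: expose only
    # the controls relevant to the user's chosen task. All task and
    # command names below are hypothetical examples.

    COMMANDS_BY_TASK = {
        "burn a CD": ["select files", "choose write speed", "burn"],
        "edit a photo": ["open image", "crop", "adjust colors", "save"],
        "configure network": ["set IP address", "set DNS", "test connection"],
    }

    def visible_commands(task):
        """Return only the controls the user needs for the chosen task."""
        return COMMANDS_BY_TASK.get(task, [])

    # The rest of the system's functionality is still there -
    # it is simply hidden until it becomes relevant.
    print(visible_commands("burn a CD"))
    ```

    The key point is that nothing is removed: every command remains reachable, but at each stage the user sees only the handful that matter.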
  • Yes. Without question I say yes. If you try to deliver the entire power of Linux at any one time to an interface, it will never be easy to use. However, by revealing to a user just the commands and options they need to do whatever they're focused on at that point in time, you cut down the complexity.
    I'd love to see either the Gnome or KDE crews looking at alternatives (or at least enhancements) to the hoary old WIMP interfaces. Myself, I'm working on a Linux/GPL application rather like Natrificial's Brain [thebrain.com] that can act as an alternative way to navigate the information on a computer. The Brain is what I use every day instead of the Windows 98 Explorer and I find it much more effective.
  • With traditional HCI, yes. But there are programmers out there who are also interface designers and information architects (i.e., me), who are keen to get involved and contribute to Open Source projects by doing interface work.
    There has to be a recognition in the OSS community that programming is just part of what's needed to build software. It took a while before people recognised the need for documentation; maybe real interface design is next?
    Hmmm... this sounds like the history of programming in general... first the code, then the interface and if you're lucky, the documentation...
  • Kind of funny:

    When I first started using X-systems, I remember thinking, "No MDIs -- how primitive".

    Now, when I use an MDI under Windows, I think, "I can't see everything I'm working on. What a pain."
  • I'm a software engineer, and I was trained to be a mechanical engineer. Frankly, I'm appalled by what some people call works of software engineering. A good piece of engineering, in any other field, is solid and robust. That is, it is hard to make it fail, and it fails in a predictable way when it does (for a perfect example, think of automotive "crumple zones"--they even make the cars crash the right way).

    I've never heard of good definitions of computer science and software engineering, but we can (in theory) derive such definitions from other science and engineering disciplines.

    To me, science is the bleeding edge. In science, you discover principles that tell you more about your area of study. Often, the scientists are the ones to first invent a piece of technology. Computer scientists are the same way--always out there doing things that the rest of us think are impossible or unlikely. It took a lot of computer science to build Unix, because a lot of it hadn't been done before.

    Engineers, on the other hand, take things that the scientists do and render them more usable. While it took a scientist to develop (for example) fuzzy logic and similar technologies to analyze the human voice, it took engineers to make a dictation system out of it. Where it took scientists to notice the thermal effects of microwave radiation, it took engineers to create the microwave oven.

    Unlike Unix, Linux was built with more engineering than science. Since Linux is based on Unix, 99.9% of the bleeding-edge science was already done. Building new, solid implementations of this stuff is the engineering part.

    Then there's Microsoft. You could call it engineering, but I would consider that an insult to engineering. Engineers optimize a product for technical merit; Microsoft actively sabotages technical merit (their own and others) in order to win.

  • Linux can do more things than most people can imagine. So can Windows. So can a lot of other things. Nobody requires that you use every program installed on your Linux box. You don't have to understand it all. Linux is a universe beyond comprehension to all but the ubergeeks (and possibly not even them). But we're used to having too many choices. We respond by throwing out big heaping gobs of them. That's what we humans do; throw huge numbers of options out. Look at a chess game. Chess is a game beyond complete comprehension to us; computers can't completely "comprehend" the game. While the possibilities are limited, they are myriad. Supercomputers can't consider all the possibilities. Only recently has a computer been able to defeat a Kasparov. That's not because Kasparov can think faster than the supercomputer; it's because he can throw out 99% of the available moves very quickly.

    Only use what you need out of Linux; you likely have enough hard drive to leave the rest hanging out. When you need to do something new, learn how to do it by asking, surfing, or using whatever techniques you have. If Linux has a program to solve the problem for you, you will likely find it. Until then, you don't even have to know that the program even exists.
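
    The pruning idea in the comment above - Kasparov winning not by thinking faster but by quickly discarding 99% of the available moves - can be sketched directly. This is a toy illustration, not a real chess engine: the moves are plain integers and the scoring function is a stand-in for a real, cheap evaluation heuristic.

    ```python
    # Sketch of how a strong player (or chess engine) copes with a myriad
    # of options: discard most candidate moves with a cheap heuristic,
    # and only think deeply about the few that survive.

    def prune_moves(moves, quick_score, keep_fraction=0.1):
        """Keep only the most promising fraction of candidate moves."""
        ranked = sorted(moves, key=quick_score, reverse=True)
        keep = max(1, int(len(ranked) * keep_fraction))
        return ranked[:keep]

    # 40 legal moves, represented as integers for this demo; the lambda
    # plays the role of a hypothetical fast position evaluator.
    candidates = list(range(40))
    survivors = prune_moves(candidates, quick_score=lambda m: m)
    print(survivors)  # only the top 10% go forward for deep analysis
    ```

    The same shape applies to Linux itself: you don't evaluate every installed program, you prune down to the handful relevant to the task at hand.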

  • All things evolve. Life grew from simple organisms to complex life forms. If we didn't add 'features' , we would be itsy bitsy cells. It is not about complexity. Even complex systems can be built with simplicity and beauty.
  • Or can anything easy to use be powerful?

    doesn't seem so in software...
  • Right now, linux is only really useful to hackers/programmers/geeks.

    There are basically three kinds of users: novices, competent users and geeks. Here I'm just going to use the word 'geek' as a shorthand for 'hacker/programmer/geek/sysadmin.' It's shorter.

    I fall under the category 'competent.' I use UNIX at work, write scripts to do mundane chores, program from time to time, and run jobs on supercomputers. I have not, however, ever sysadmined, and I hope to avoid ever doing so. I would like to be able to easily install and configure Linux.

    My mom falls under the category 'novice' as far as UNIX systems go. Not that she is new to computers; she has been using them as long as I have. But she doesn't go out and learn how to do anything unless she has a need to do it. She most certainly is not stupid. She has a PhD in Physics. She should be able to gain a lot from using Linux.

    She hasn't upgraded her computer (or software, mostly) for eight or nine years, because she wants to keep it stable. She sees that my dad (who upgrades impulsively) has a computer that crashes all the time, while hers (since she just stuck with a version that happened to be stable) still works. I'm sure she would love to use a stable, modern, powerful operating system, if she could easily figure out how to do what she needs to do, which just involves a spreadsheet, a word processor and a draw program. And they would have to be easy enough for her to figure out how to use them without reading the manual (since I've never seen her read a manual).

    I think that it is possible to make software easy enough for us to use without dumbing it down. Of course, in my case I know I could learn to sysadmin if I wanted. I just would rather not. And I am sure it is possible to create a version of the OS that is easy to configure to run as a personal computer.

  • To me, Linux is already user-friendly. It does what I tell it to do, how I want it to do it. And if something messes up, I kill it. I've had to reboot Linux 4 times in 2 years, and those were actually hardware problems. Yeah, you have to edit some rc files, but have you tried regedit on Windows? I'll take a text file any day. My family hears me gripe all the time about how hard Windows is to use.
  • There's rarely too many choices... I'd be much more upset if I found that a useful little pseudo-standard UNIX tool that I learned to use on another OS were missing from my distribution...

    I guess I don't quite understand what your concern is with having "too many programs." Are you concerned that they're taking up too much space (in general, the apps included with most distros are quite small)? Are you concerned that somehow you should know how to use all of them right now? (information overload?)

    As for the time it takes to choose apps to install, hey, I don't know 'bout you, but it takes me about 10 seconds to scan down the list of disk sets to install (Slackware), and maybe 5 seconds per set to pick the individual apps...
  • As part of the LUIGUI team -- we agree! We would love to hear more about what you have in mind... luigui-web@umich.edu

    Thanks! And sorry for the delayed reply, finals and everything...

