New Linux-Based Laptop For Computer Newbies

Smivs writes "The BBC is carrying a report on how people confused and frustrated by computers can now turn to a laptop called Alex built just for them. Based on Linux, the laptop comes with simplified e-mail, web browsing, image editing and office software. Those who sign up for Alex pay £39.95 a month for telephone support, software updates and broadband access. The Newcastle-based Broadband Computer Company, which developed Alex, has been working on this project for three years, and didn't immediately adopt a Linux solution — in fact, the first big trial was based on Windows. The company's Chief Technology Officer, Barney Morrison-Lyons, says that was never going to be the right route: 'The biggest problem with Microsoft is badly-written software — the operating system allows you to write software badly unlike Mac or Linux.' Mr. Hudson, one of the company's founders, said the company also intends to launch an application store for Alex for customers who want to add more features and functions to their computer. 'People who love Linux will be keen to develop for this,' he said."

  • by jomegat ( 706411 ) on Friday February 19, 2010 @01:07PM (#31200960)
    Software can be badly written on any platform, and in any language.
  • um... (Score:3, Insightful)

    by cool_story_bro ( 1522525 ) on Friday February 19, 2010 @01:08PM (#31200990)

    the operating system allows you to write software badly unlike Mac or Linux.

    Yeah. No operating system known to man prevents you from "writing software badly".

  • I'm happy (Score:2, Insightful)

    by Jorl17 ( 1716772 ) on Friday February 19, 2010 @01:09PM (#31200992)
    If it truly helps less technical people, then I think it can stand as living proof that Linux (or GNU/Linux, you decide) can be user friendly.

    Rock on!
  • by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Friday February 19, 2010 @01:24PM (#31201212) Journal

    the operating system allows you to write software badly unlike Mac or Linux." Mr Hudson,

    Ms. Hudson disagrees with Mr. Hudson.

    The operating system doesn't "allow you to write software", bad or good. Garbage software can be written for any platform. And the "PC" is really a netbook that uses their servers to store your data, so you're locked in.

    Subscribers to Alex receive a USB stick which contains user-encrypted data and enables them to log on to their desktop from any Alex computer.

    For their monthly fee, customers also get anti-virus software and 10GB of storage space on the Broadband Computer Company's servers.

    The USB stick contains your log-in credentials, encrypted. Your data is sitting on their box. Vendor lock-in, and overpriced.

    They're not doing their users any favours.

  • by asdf7890 ( 1518587 ) on Friday February 19, 2010 @01:26PM (#31201248)

    "People who love Linux will be keen to develop for this," he said

    No they won't. People who love Linux/community/whatever will develop for Linux/community/whatever. People who would love the chance to make a quick quid/dollar by packaging up a FOSS app for the app store will love it, but that won't create a marketplace full of well-supported apps. And the general public see the "free!" part of Linux and the "free!" part of FOSS apps and won't want to pay for apps from an app store, especially while paying that much a month for a support contract.

    £39.95 a month

    For that money, many mobile phone contracts over here will throw in a free netbook or lowish-spec laptop, which will come with Windows and will run Ubuntu quite happily. The contract also comes with mobile Internet access and a phone you can make/take calls and send texts with. I don't see the market: people will not want to get a free computer and then pay that much for support when they can get a free netbook just by agreeing to a mobile phone contract, then moan later about the lack of support they aren't paying for (and/or get their mate Dave to support them, because Dave knows about these things).

    The biggest problem with Microsoft is badly-written software — the operating system allows you to write software badly unlike Mac or Linux.

    This is wrong on a level or two. While I'm no fan of Windows and the terrible software that definitely exists for it, I've also seen terrible apps and scripts for Linux. No OS can protect the world from slapdash design/programming with no mind for security.

  • by Nursie ( 632944 ) on Friday February 19, 2010 @01:28PM (#31201284)

    As a software engineer with a very pronounced UNIX bias let me just say I don't like the way windows hides stuff.

    Sure, sure, most people hate command lines and config files. I know this and I'm not arguing that everyone else is wrong and I am right, or that your grandmother should learn to love bash and xorg.conf or anything else. I'm just putting across my perspective.

    I don't like it when the computer does stuff automatically, gets it wrong, and provides no way to correct it. Example: a dual-monitor setup on my laptop plus a DVI-connected flat panel.

    Linux - system boots up in single-screen mode. I log into X and run xrandr to see available displays and modes, then run it again to set them up how I want. There is a GUI option also.
    Windows - system boots up with the external screen primary but at the wrong resolution or refresh rate, so nothing displays. I unplug the screen and Windows (unlike Linux) detects this and reconfigures to use a single screen. I log in, bring up the display dialog, and as the second screen isn't plugged in I get no options for it. I plug it in and Windows switches back to dual screen at the wrong refresh rate, and the panel stays blank. Now we're in a bind: I can't get at the settings with the screen unplugged, and I can't get at the settings dialog with it plugged in. Eventually I figure out the key combos to grab and move the display dialog onto the secondary screen so I can fix the settings. Then we're OK.

    Now, most people probably have their screens set up once and don't care after that, and sure as hell don't want to be messing with some hokey command-line app called "xrandr", but for me it works the other way around.
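
    For what it's worth, the Linux flow above is also easy to script. A minimal sketch in Python, just wrapping xrandr via subprocess (the output names HDMI-1 and LVDS-1 are placeholders - check the --query output for the real ones):

        import subprocess

        # Show connected outputs and the modes each one supports.
        print(subprocess.run(["xrandr", "--query"],
                             capture_output=True, text=True).stdout)

        # Place the external panel to the right of the laptop screen at an
        # explicit mode and refresh rate. Output names here are examples.
        subprocess.run(["xrandr", "--output", "HDMI-1", "--mode", "1920x1080",
                        "--rate", "60", "--right-of", "LVDS-1"], check=True)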

    I actually got a bit pissed off with NetworkManager on gnome desktops a little while ago because it hides all its settings and profiles away too, and it's quite tricky to find how to get it to stop connecting to a wireless LAN it's been connected to before. I found this windowsy and annoying.

    So yeah, unless things are seamless and perfect (which it seems nobody has got right yet, though I can't comment on Mac) I prefer being able to get my hands dirty with relative ease. I realise that this is more of a power-user example than a coder example, but I think it reflects the class of problems we unixy types have with MS. I think. Feel free to inform me otherwise.

  • by Ephemeriis ( 315124 ) on Friday February 19, 2010 @01:40PM (#31201414)

    As a software engineer with a very pronounced UNIX bias let me just say I don't like the way windows hides stuff.

    I'm not a software engineer, and I use a Windows machine approximately 80% of the time... And I don't like the way Windows hides stuff.

    Install a piece of software under Windows, and there's really no telling where it goes. Sure, most of the code will live somewhere in the Program Files directory... But you'll wind up with some DLLs scattered all over the drive, and all sorts of registry entries. Un-install the software and it'll likely leave bits behind. Try to re-install again and you may find yourself with all sorts of odd errors.

    I don't know how many times I've had to manually comb through the registry to clean out left-behind bits of antivirus software that didn't get cleanly removed.

    There's generally no good way to make a backup of your settings before messing with something. Under Linux everything is basically a text file... So I can make a backup of that text file and revert to it if I have to. Under Windows... Well, I suppose I could probably make a backup of the registry... Unless the setting is actually stored in a file somewhere else - like in Local Settings or Application Data or something like that.

    And if I screw something up in Linux it's generally a matter of making a change to a config file that is more-or-less human-readable. Under Windows it's a matter of finding the right checkbox in the right window - which isn't necessarily going to be available if you've borked your machine badly enough that you've had to slave the drive.
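
    That "everything is basically a text file" workflow is trivially scriptable, too. A minimal sketch in Python (~/.bashrc is just an example file):

        import os
        import shutil

        conf = os.path.expanduser("~/.bashrc")   # any plain-text config file

        # Snapshot before editing; copy2 preserves permissions and timestamps.
        shutil.copy2(conf, conf + ".bak")

        # ... edit the file, test the change ...

        # Revert if the edit broke something.
        shutil.copy2(conf + ".bak", conf)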

  • by ais523 ( 1172701 ) <ais523(524\)(525)x)@bham.ac.uk> on Friday February 19, 2010 @01:50PM (#31201578)
    Part of the reason is that Windows is backwards-compatible to Windows 3.1 and 95, which were built without reasonable security models. Since then, Windows' security model has improved a lot; but it still needs to be compatible with old programs, which tended to be written quite badly back then. (I'd say older versions of Windows encouraged sloppy coding because it was just so easy, unlike the UNIX variants around at the time, which would generally complain if you tried to do things that broke security too badly.) Windows also used to be more homogeneous than UNIX systems (even nowadays, you can see Linux manpages talking about the difference between BSD-style and X/Open-style, such as this one [die.net], which summarises the mess). As a result, old Windows programs tended to work even if written badly, because they only had one sort of system to run on, which let them get away with dubious things, whereas old UNIX programs tended to need to be written well to work at all. (Classic Mac OS can be pretty much disregarded here, because nobody uses it nowadays and Mac OS X is based on UNIX.)

    Since then, all the operating systems involved have become more modern. In UNIX land, people were used to porting programs anyway to get them to run on new variants, so when newcomers like Linux turned up (and later Mac OS X, which is closer to traditional UNIX than Linux is in how traditional UNIX applications behave, though much further away for newly-written applications designed specifically for it), it was generally the porters' responsibility to modify applications to get them running on Linux, or whatever the new system was. As a result, Linux can do its job quite well without needing to tolerate badly-written, insecure legacy applications. Windows, on the other hand, would lose one of its major selling points (its compatibility) if it did that. So Windows is written to be very good at running badly-written legacy applications; as a side effect, though, this means it's rather good at running badly-written applications, even new ones.

    The end conclusion is that if you want to write well, you can do it on any platform; but lazy programmers who want to write badly will have fewer issues doing so on Windows.

    (There's another force at play, too: the cultures of obtaining software on Windows and Linux are rather different, and as a result, well-written Linux software tends to be much easier to find than badly-written Linux software. I imagine there's lots of bad software out there for both Windows and Linux despite the above effects, but you'll find bad Windows software preinstalled on a newly bought computer alongside the OS itself (people debate the merits of various OSes, but everyone I know hates "shovelware"), and on driver disks with hardware you buy. This doesn't happen so much with Linux, because there's no money in it, or not as much; people on Linux are so used to (legally) getting software for free that they're unlikely to pay unless someone is offering something of good quality.)

  • by aztracker1 ( 702135 ) on Friday February 19, 2010 @02:05PM (#31201742) Homepage

    Properly behaving Windows software will use a handful of locations for different items. The application and all required libraries not registered by a separate installer or sub-installer will live in C:\Program Files\ or C:\Program Files (x86)\, your individual preferences should be in C:\Users\username\AppData\(Local|Roaming|LocalLow)\appname\, and system-wide preferences will be in the common profile directory. (Note the paths changed slightly from NT4 to 2000/XP to Vista/7, but it's much more unix-like today.)

    Though to be honest, there's just as much fragmentation on the *nix side: System V style vs. BSD style structures, /usr/local/ vs. /opt/, as well as a lot of software written so it bloats out the user profile instead of using an application subdirectory. Not to mention replication of portions or all of a profile (Windows does this a bit better, imho).

    In terms of backing up your settings, you should be able to copy/back up your user folder. If a particular piece of software doesn't follow the standards set out (special directory settings), it isn't the fault of Windows; it's the fault of that piece of software. You can write resilient software for pretty much any platform, and pretty much all of them have APIs for where to put certain types of resources. The problem is the number of developers who made assumptions in their software when they should have been using standards that have been in place for well over 10 years now (since NT4 in '96 and Win98 a couple of years later).

    I've seen more than a handful of apps meant to run on *nix platforms that make the same horrible assumptions as well. Also, it's not much less confusing in *nix. You'll need your user profile as well as /var and probably stuff in /opt and /usr/local ...
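
    To be fair, resolving the conventional per-user settings directory is only a few lines on either platform. A rough sketch in Python ("exampleapp" is a made-up name; the *nix branch follows the XDG convention):

        import os
        import sys

        APP = "exampleapp"   # hypothetical application name

        if sys.platform == "win32":
            # Roaming profile: follows the user between machines on a domain.
            base = os.environ["APPDATA"]   # ...\Users\name\AppData\Roaming
        else:
            # XDG convention: ~/.config unless the user overrides it.
            base = os.environ.get("XDG_CONFIG_HOME",
                                  os.path.expanduser("~/.config"))

        settings_dir = os.path.join(base, APP)
        os.makedirs(settings_dir, exist_ok=True)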

  • by natehoy ( 1608657 ) on Friday February 19, 2010 @02:08PM (#31201786) Journal

    All operating systems allow bad software to run, but the difference is how those applications have been developed for that platform over the years.

    Back in the earlier days of DOS/Windows, the concept of limiting what a user (or their account) could do was pretty much at odds with the whole concept of having a "Personal Computer". The whole idea of a PC was that the person sitting behind the keyboard was in control of what happened on that computer, and the operating system should deny them nothing.

    As the DOS/Windows model evolved, password-protecting the computer was added, and eventually levels of user access were developed. But the kicker was that most software was developed by people with administrative access to their computers, and given that the shipped default of all Microsoft OSes up to and including Windows XP was to have one user who was an Administrator, there was no significant penalty for writing software that required Administrator access.

    I've tried to run Windows XP as a limited user, and it was a pain in the shorts. So much software out there simply wouldn't run, forcing me to either keep a pseudo-admin account so I could "run as..." or just give up and promote my user account to admin.

    Enter Windows Vista, whose security model was actually a big step forward - you could run as a limited user and "escalate" permissions for certain software that asked for it. It's almost a copy of "sudo" on Linux. It was a significant step in the right direction, and it started pushing the concept of developing applications with "minimum necessary" permissions (storing user defaults in user accounts or user sections of the registry rather than system folders or HKEY_LOCAL_MACHINE, etc.).
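
    What "minimum necessary" looks like in practice, as a rough Python sketch (standard winreg module, Windows only; the ExampleApp key name is made up):

        import winreg   # standard library, Windows only

        # Per-user defaults belong under HKEY_CURRENT_USER: writable by a
        # limited account, no elevation prompt needed.
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                              r"Software\ExampleApp") as key:
            winreg.SetValueEx(key, "Theme", 0, winreg.REG_SZ, "dark")

        # The same write under HKEY_LOCAL_MACHINE would affect all users and
        # raises PermissionError unless the process is elevated.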

    The vast majority of Windows software authors (including shareware authors) really do design their software to run just fine on a limited account. But there's a good bit of Admin-required stuff out there. Enough that most people don't even try running XP as a limited account, and those that do tend to give up after a while.

    ----------------------

    Compare that to the Linux world. Very few users run as root in Linux. The shipped default of most distros is either to have a separate admin account or (in the case of things like Ubuntu) to use a "pseudoroot" - a root account that does not really exist but that the default user is escalated to upon demand (similar to UAC in Windows).

    As a result of this, most Linux applications are built around a more limited permissions model - they tend to store their application data in the user's /home directory rather than scattering it around the way Windows software does near C:\Program Files (which is read-only to a properly limited user account). If the majority of your users cannot run your software without significant reconfiguration of their machines, they'll squawk about it and you'll tend to change your code to fit the default model.

    So, while neither model FORCES good programming habits, a Linux developer is more likely to be testing his or her code under a limited account, because that's what most of his or her intended users will be using.

    It's not that one OS is inherently more secure than the other, necessarily, but that the shipped default has been set securely for so many years that the developers on a Linux platform are just used to only being able to run in userland, so they're more likely to write apps that way. Windows developers have not (until relatively recently) suffered a penalty for writing things to protected areas.

    With the advent of Vista and Seven, Microsoft forced the issue with the number of UAC boxes that popped up, and this was the cause of much angst and gnashing of teeth among users. But that move is slowly showing Windows developers that they need to take userland permissions seriously too, and that's a good thing in the long run.

    Of course, not all security threats happen outside userland. You can always install software in \My Documents or /home and run it.

  • by natehoy ( 1608657 ) on Friday February 19, 2010 @02:39PM (#31202086) Journal

    It's a good thing you're only "pretty sure" about that, because (depending on what version of Windows you are referring to) you're utterly wrong.

    In any Windows up to and including XP, deleting C:\WINDOWS and all its subdirectories is a trivial task. Windows generally ships with one user, and that user is an Administrator-level user. Windows Vista and Windows 7 have UAC, so at least the system will warn you once.

    In every Linux distro I've ever worked with (Red Hat, Fedora, SuSE, Ubuntu, Mint, Knoppix, and a few others) your default user is a limited user account. If you went to any system-critical file and tried to delete it, you'd have to go through the extra step of escalating your permissions (log out and log back in as root, invoke su, or run the command through sudo). All three methods require that you enter a password of some sort.
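
    The difference is easy to demonstrate. A tiny Python sketch run as an ordinary user (run it again under sudo and the delete goes through - don't):

        import os

        try:
            # /etc/hosts is root-owned and /etc isn't writable by normal users.
            os.remove("/etc/hosts")
        except PermissionError as err:
            print("Refused without escalation:", err)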

    So, for the vast majority of computers out there, your statement is the complete reverse of the reality involved.
