Fedora Project Considering "Stateless Linux"

Havoc Pennington writes "Red Hat developers have been working on a generic framework covering all cases of sharing a single operating system install between multiple physical or virtual computers. This covers mounting the root filesystem diskless, keeping a read-only copy of it cached on a local disk, or storing it on a live CD, among other cases. Because OS configuration state is shared rather than local, the project is called 'stateless Linux.' The post to fedora-devel-list is here, and a PDF overview is here."
  • Looks neat but... (Score:4, Interesting)

    by cato kaze ( 770158 ) <omlet.magi-n@com> on Monday September 13, 2004 @06:41PM (#10241498)
    I don't see the purpose. Maybe I'm just unitiated, but wouldn't a linux terminal server work better, or perhaps some other solution. This in particular doesn't look that amazing, but I could be wrong. Does anyone out there have specific uses for this? (TFA won't load for me, so I'm going on what I see)
    • Re:Looks neat but... (Score:5, Interesting)

      by deragon ( 112986 ) on Monday September 13, 2004 @06:45PM (#10241539) Homepage Journal
      Depending on your needs, it's better than a thin client, because each user still has his own computer, with all the CPU power, GPU power, etc. to him/herself.

      You can still have one user work and experiment on a kernel module and crash his system while another continues with her word processing.
      • Re:Looks neat but... (Score:5, Informative)

        by afidel ( 530433 ) on Monday September 13, 2004 @07:05PM (#10241729)
        Exactly, this is a lot like Windows roaming profiles and network-mounted home directories. All the user settings and files move with the user, without the drawbacks of terminal servers (of course it also comes with a lot of the drawbacks of dispersed workstations). Combine this with network-mounted application directories and you have almost as low a TCO as terminal servers, with the power of individual workstations.
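
        For the network-mounted home directory half of this, a minimal sketch using autofs over NFS (the server name and export path are invented for illustration):

        # /etc/auto.master -- hand /home over to the automounter
        /home   /etc/auto.home  --timeout=60

        # /etc/auto.home -- mount each user's home from the file server on demand
        *   -rw,soft,intr   fileserver.example.com:/export/home/&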
      • Comment removed based on user account deletion
      • Re:Looks neat but... (Score:5, Informative)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday September 13, 2004 @08:02PM (#10242210) Homepage Journal
        Even better, use this to eliminate the burden of maintaining all those installs, but use openMosix clustering. Now everyone gets all the available performance of all the systems, AND you reduce your administration overhead. Too bad you can't use a 2.6 kernel with openMosix yet - but that's coming in the next six months to a year. They say [sourceforge.net] that they're aiming to move everything possible into userspace, which will help them achieve their next goal of splitting architecture-dependent code from everything else. There is still one more release (for kernel 2.4.26) before they get crackin' on 2.6, however. MOSIX has the same problem (plus it is x86-only) and is available for kernel 2.4.27.

        If this thin client cluster idea appeals to you, please see ltsp-mosix [lpmo.edu].

    • Re:Looks neat but... (Score:4, Informative)

      by JPyObjC Dude ( 772176 ) on Monday September 13, 2004 @06:47PM (#10241563)
      There are dozens, but they do not sit in the normal desktop computing realm. Such an architecture would be well suited to low-cost server arrays running things like compiler, rendering, or SETI farms.

      Once such a system is set up properly, it could be self-maintaining, with a significant reduction in hardware, energy, and maintenance costs.
    • by Anonymous Coward
      I don't see the purpose. Maybe I'm just unitiated, [...]
      On first pass, that said "urinated". On second pass, it said "uninitiated". Third and subsequent passes have failed.
    • The advantages become apparent when you have a large number of identical systems. Even more so when you want them diskless.

      That description matches a compute farm in the next room [0]. It also handles the case of a 'diskless' install with a local disk used for application-specific working space [1].

      Hell, in the next building there is a beowulf of 32 nodes that hasn't been updated, because updating 32 nodes wasn't automated and there was a time crunch [2]. If it's all from a single image, that's trivial to up
    • Maybe I'm just unitiated, but wouldn't a linux terminal server work better

      Setting up a Linux 'terminal server' (using XDMCP to provide X logins to thin clients) is exceedingly easy [google.com], and your thin clients can be running pretty much any UNIX flavour that supports XDMCP. I personally like this setup because the client computers can be as dumb as possible (and bloody cheap), and you can invest server resources in your central server - make it real beefy, dual processors, gigabytes of RAM :) The t
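
      For reference, a hedged sketch of that XDMCP setup on a gdm-based distro (config file locations vary by distribution, and the hostname is made up):

      # on the login server: enable XDMCP in /etc/X11/gdm/gdm.conf, then restart gdm
      #   [xdmcp]
      #   Enable=true

      # on each thin client: point a bare X server at the login server
      X -query loginserver.example.com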

    • by The Monster ( 227884 ) on Monday September 13, 2004 @08:42PM (#10242448) Homepage
      I don't see the purpose.
      This is just the logical conclusion of the Linux Standards Base, including the File Hierarchy Standard [pathname.com]. Fundamental to FHS is the division of the file hierarchy according to two orthogonal criteria:
      A file/directory is either
      • Static (not changed except by action of the system administrator), or
      • Variable (subject to change at any time),
      and either
      • Shareable (multiple machines can have a common copy), or
      • Unshareable (each machine needs a separate copy).
      In an effort that is conceptually equivalent to the separation of the kernel tree into architecture-dependent and -independent subtrees for the Alpha port, which made subsequent architectures far easier, a lot of people have devoted their efforts to determining just how little of what goes into the file hierarchy really has to be unique to the machine.

      The 'aha moment' comes when you think of groups of workstations with identical hardware, which are candidates for being built from a common image, and realize that you can build a relational database that correlates MAC addresses (possibly via some other locally unique but shorter machine number) with the HW configuration. Now, conceptually, all of those cookie-cutter-identical machines are a single entity for the purposes of configuration. A lot of what FHS considers 'unshareable' is now quite 'shareable' within such a HW config group.

      As workstations age, the IT department brings in a couple samples of the next HW configuration, loads drivers, tests against the app suite, and when they're ready for primetime, the vendor delivers them, the MAC addresses are added to the database, the workstations boot up, find Mommy (bootp server), and Just Work. The user can log out of an old computer and into a new one, and find all his 'stuff' right where he left it. It's the only sane way to compute in an institutional environment.
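
      In ISC dhcpd terms, that MAC-to-configuration database might boil down to something like the following sketch (addresses, MACs and image paths are invented):

      # /etc/dhcpd.conf -- one group per cookie-cutter HW configuration
      group {
          next-server 10.0.0.1;                          # TFTP server holding the boot images
          filename "images/hwgroup-a/pxelinux.0";        # boot image shared by this HW config group

          host ws-0042 { hardware ethernet 00:0d:56:aa:bb:cc; fixed-address 10.0.1.42; }
          host ws-0043 { hardware ethernet 00:0d:56:aa:bb:cd; fixed-address 10.0.1.43; }
      }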

  • NFS Mount? (Score:2, Interesting)

    by Tony Hoyle ( 11698 )
    Haven't Unix machines been doing this for years with NFS mounts? The first Sun machines I used (SunOS 4.1) had just a single install of the OS, with two machines sharing a read-only mount.
    • Re:NFS Mount? (Score:5, Interesting)

      by nzkoz ( 139612 ) on Monday September 13, 2004 @06:51PM (#10241615) Homepage

      If you'd bother to read the white paper or howto (sure, I'm new here) you'd have read that this is more than NFS mounted roots.

      It's a framework for managing the servers, cached operation, integrated authentication etc. You can use this framework to manage roaming devices like laptops, allowing automatic install images, etc. etc.

      An NFS solution requires network connectivity the whole time; this doesn't.
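
      For comparison, the plain NFS-root setup this goes beyond usually comes down to a kernel built with CONFIG_ROOT_NFS plus boot parameters along these lines (server address and export path are invented):

      # appended to the kernel command line by the bootloader or PXE config
      root=/dev/nfs nfsroot=192.168.0.1:/export/client-root ip=dhcp ro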

  • LTSP (Score:3, Interesting)

    by Eberlin ( 570874 ) on Monday September 13, 2004 @06:43PM (#10241509) Homepage
    Stateless installs? Sounds a bit like the terminal server project. I smell thin clients...are they going into fashion again?

    Thin clients WOULD be a blessing, I imagine. Single configuration, one update, all the "personal files" in a server somewhere -- makes for easy updating and backing up. Also keeps hardware requirements down...which [buzzword warning] "helps lower TCO and increase ROI"
    • Re:LTSP (Score:5, Funny)

      by savagedome ( 742194 ) on Monday September 13, 2004 @06:48PM (#10241575)
      I imagine. Single configuration, one update, all the "personal files" in a server somewhere -- makes for easy updating and backing up. Also keeps hardware requirements down

      Welcome to the world of 'dumb terminals' again. Thanks for playing this long!
      • >Welcome to the world of 'dumb terminals' again. Thanks for playing this long!

        Quite right. The move in business IT from centrally managed mainframes to networked PCs was a huge step backwards in terms of cost and availability of line-of-business applications.

        The average user, once they learned how to use a particular application, never had to worry about IT, because it "just worked". Contrast this to downloading patches every other day, running weekly virus scans, keeping your PC current in the cor
    • Re:LTSP (Score:5, Informative)

      by LincolnQ ( 648660 ) on Monday September 13, 2004 @06:49PM (#10241589)
      It is intended to be a balance between thin and fat clients. So you have applications stored on the server, but copied and executed locally.

      Seems like a good idea to me.
    • Re:LTSP (Score:4, Interesting)

      by caseih ( 160668 ) on Monday September 13, 2004 @06:52PM (#10241619)
      While this could be used for thin clients, most of the PDF actually deals with thick clients, i.e. laptops that need full installs and then sync up when they're part of the network.

      This kind of disconnected caching would be excellent. In some ways it's a kind of uber-sync.

      What Fedora is experimenting with will work great on thin and thick clients. I think this is an exciting development; even for maintaining just a few machines around the house it would be nice to have that kind of capability.

      Also, I would say that yes, thin clients are coming back into fashion. But thick clients are here to stay also.
  • mainframe (Score:4, Interesting)

    by celeritas_2 ( 750289 ) <ranmyaku@gmail.com> on Monday September 13, 2004 @06:43PM (#10241513)
    Unless I've caught a large case of the stupids, it looks like we're heading back to the days of the mainframe computer that many terminals plug into. Is this good, bad, or neutral? I think this is a good way to keep corporate/school/etc. computer costs down while making sysadmin jobs at least a little easier.
    • It's good.

      Until the central server crashes and nobody can do anything.
      • Re:mainframe (Score:4, Insightful)

        by celeritas_2 ( 750289 ) <ranmyaku@gmail.com> on Monday September 13, 2004 @06:47PM (#10241568)
        In my experience, the central server crashes anyway and nobody can do anything, because they're already too tied in with email, internet, and logon. As long as security is good and data is backed up very redundantly, I can't see that there would be any greater disadvantage.
      • The central server for something like this could easily be a cluster, whether load-balancing or failover/HA, optionally using a dual-attach RAID. Frankly it would probably be better not to do that, and to maintain totally separate data stores for configuration information, storing user data on a failover cluster that DOES use dual-attach storage. Commit configuration changes every n minutes and in such a fashion that you aren't going to partially overwrite a config, for maximum reliability.
        • Re:mainframe (Score:3, Interesting)

          But what about costs? I can buy 100 workstations, monitor included, for about $65K. How much does a dual-server setup (including terminals) for 100 users cost?

          Part of the problem is that while I don't trust users to keep their machines running properly, I barely trust a lot of server admins to do any better. I've seen the way a lot of servers are put together, and how often they need some really inane maintenance. It's scary. The penalty for a bad user is usually limited to affecting one or two people
    • Re:mainframe (Score:5, Informative)

      by owlstead ( 636356 ) on Monday September 13, 2004 @06:58PM (#10241674)
      Terminals did not have their own CPU to do things. Here everything is kept local, except the OS install, which can easily be managed. Since Linux can install drivers without rebooting (a necessity in this case), you can even run different kinds of hardware on a single install. Basically you now have a flexible, cheap network computer.

      And since we cannot do without networking anyway, and since storage devices are easy to make highly available, this seems like a blessing to me.
    • You almost need a cluster/failover server that has "dumb" terminals that plug in, and possibly share resources with the cluster.... sort of a mix of the two.
  • by Anonymous Coward on Monday September 13, 2004 @06:44PM (#10241523)
    On behalf of non-geeks, let me be the first to say... HUH?

    I mean, I know the words. It's mostly English, and that's my first language, and I'm pretty handy with computers, but that was the most incomprehensible load of babble I've heard since the last time I watched TNG.

    Can someone explain what this means, in plain English, to a regular user (i.e. non-hacker geek types)?
  • Wow! (Score:4, Insightful)

    by Libor Vanek ( 248963 ) <libor...vanek@@@gmail...com> on Monday September 13, 2004 @06:45PM (#10241528) Homepage
    Wow - this is a really HUGE project. I mean - it spreads from the kernel, through init scripts, through X managers & environments, to easy-to-use administration tools. If they succeed, this could really be a "killer application" for Linux.

    And please all the "NFS root is enough" posts - read the article!
    • Re:Wow! (Score:2, Insightful)

      by lakiolen ( 785856 )
      It's not a killer app. It's not even an app. One isn't going to download a file and suddenly be using stateless Linux. It's a different way of organizing the underlying layers that applications use.
    • The project is too big, ambitious and lofty. It's just bound to collapse sooner or later IMHO. I don't think anybody /really/ wants to relearn how to deploy Linux anyway.
      • I don't think anybody /really/ wants to relearn how to deploy Linux anyway.

        What about the ones who haven't learned Linux a first time yet? Might they find this useful?

        You're right, it's ambitious. I don't think that's a bad thing, though.
      • by who what why ( 320330 ) on Monday September 13, 2004 @11:04PM (#10243264)
        I don't think anybody /really/ wants to relearn how to deploy Linux anyway.

        Well, most of us don't /really/ want to relearn *anything*. Sometimes, however, when you hear a new idea relating to an area you work in, the penny drops, and you are left thinking "wow, what a great idea".

        For instance, I work in a scientific research environment (high energy physics) where most of our software is Free (capital F), we work in different places at different times (planning, lab, analysis), we have a great deal of customized and hand-written software, and the ideal development environment so far has been NFS-mounted home directories (running Red Hat and now Fedora). In theory every machine I log into is running the same OS, with /usr/local NFS-mounted from an [application|file] server; I log in through NDIS and my home directory is also NFS-mounted.

        This works fine in theory - except that without a serious admin budget, different OS versions spring up... I have access to machines running RH9, FC1, FC2... and that's an improvement; while Red Hat was still supporting RHL, we had 7.3, 8.0 and 9.0, with wildly different GCC versions. What happens? I end up using specific machines with a similar enough environment that all my simulations will at least compile without tweaking and all my scripts etc. work the same way. Homogeneous environments, no matter how ideal, are not a possibility without a manpower commitment that many SMBs and other small operations can't afford.

        This stateless project LEAPS out at me as an ideal way for small operations (like up to 100 seats) to be managed by a single (even part time) admin.

        Not to mention the attempt to tackle laptops - which is the reality of the workplace. Many people have laptops. A lot of them (and their CTOs) would love to be running the same environment as the workplace LAN. At my lab most people have a laptop because of the amount of travelling we do - I'd guess that 90% of them run XP, since even if they did run Linux they'd have to administer it themselves and wouldn't have clearance to access the NFS shares for $HOME and /usr/local.

        Although the laptop aspect still has a troubling Achilles heel: most of us (well, my colleagues at least) have laptops in order to present our work to others. Even ignoring the ubiquity of PowerPoint, who amongst us would want to be on the road with a "cached client" laptop with NO write access to anything but $HOME? Sure, the system worked at the office, and you fixed all the bugs that cropped up when you connected from home over your DSL, but what about a strange environment? You need to connect over someone else's WiFi to get the latest figures (sure, TFA talked about user-configured WiFi, but still, what if they have different security like WEAP that needs a new package and root access), or you NEED to plug in a USB key to give a collaborator or customer your files. What then?

        Regardless, this to me is a prospective killer app for Linux, and it is definitely tackling a bunch of issues that may niggle an admin for several years before they can even define what the problem is. Automatic updates across _all_ your workstations. Backups that require 10 minutes of work after a crash - and I can attest that after a recent HD crash on our "distributed" system it took a few hours to get the machine back together, but several days before all the little minor tweaks we needed had been applied (things like monitor resolution, 'sudo' configuration, extra packages, sound drivers).

        For the first time, I stand up and say, THANK YOU REDHAT and THANKS FEDORA. This project tells me that you are thinking about your installed customer base and offering _really_ innovative ideas to the community. Anyone want to moan about how Linux is always playing catchup to MS and Apple and how F/OSS is doomed to lag behind forever?

  • by Anonymous Coward on Monday September 13, 2004 @06:46PM (#10241552)
    I want a distro where by default packages install under $HOME so that someone can install their favorite browser without root access.

    It's really disconcerting for me that practically all the distros want you to have root access even to install a simple MP3 player from their package files, and extremely disturbing that they do it by popping up KDE or Gnome windows asking for root passwords.

    Isn't this what we blame microsoft for?

    Disk space is cheap enough, we don't need more sharing of config stuff - we need more separation so users can use the benefits of package managers without having to get in the way of other users.

    • I have done something similar to this before: use debootstrap to install a minimal Debian installation into my home directory, chroot into it, and then install whatever other packages I want to my heart's content. Unfortunately chroot requires root for some reason. If there were a way for a user to chroot, it would be pretty trivial to stow packages in your home directory even if they were compiled for systemwide installation.
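
      Roughly the workflow described, sketched out; debootstrap itself usually wants root (or fakeroot) to create device nodes, and the suite and mirror here are just examples:

      # build a minimal Debian tree under $HOME
      debootstrap --arch i386 sarge $HOME/debian http://ftp.debian.org/debian

      # enter it -- this chroot step is the part that still needs root
      chroot $HOME/debian /bin/bash
      apt-get update && apt-get install some-favourite-package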

    • by v1x ( 528604 ) on Monday September 13, 2004 @06:58PM (#10241678) Homepage
      > Isn't this what we blame microsoft for? <

      Not quite: we blame them for having to *run* a lot of programs as root to get full functionality. In most *nixes, OTOH, you only need root passwords to *install* programs, while the programs themselves run just fine for regular users.

      I don't see anything wrong with having to enter a root password for critical changes to any system: it's good practice, and one of the better implementations of it is seen in OS X, which actually has 'Lock/Unlock' icons for settings that need root access.
    • by Bazzargh ( 39195 ) on Monday September 13, 2004 @07:06PM (#10241738)
      "I want a distro where by default packages install under $HOME so that someone can install their favorite browser without root access."

      Take a look at zero install [sourceforge.net]. You can install 0install on many distros (as root) then install apps as a user exactly like you want.

      Or buy a mac!
    • I don't see why you're being modded as insightful for this rant. I have really not the slightest idea of why you mentioned Microsoft either.

      Here's a few points. First of all, you can configure KDE or Gnome not to ask. Second of all, most users are not admins. Allow me to expand on that. Most people who use computers have no idea of what is harmful and what is not harmful and will install anything. Theoretically the admin should install the basic apps (office, music, and internet) so that users won't go a
    • MIT Project Athena uses "lockers" which contain, well, whatever, but usually software. Lockers are mounted when an attach request is made. Home directories are lockers, as are install locations for software packages. Lockers presumably have some sort of permissions to determine who may attach them.

      The only problem with installing a package under $HOME is that software generally expects things to be in certain directories, unless you compile them, and then you can build them in and install them to your hom

    • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Monday September 13, 2004 @08:38PM (#10242427)
      It's really disconcerting for me that practically all the distros want you to have root access even to install a simple MP3 player from their package files
      I always tended to think that packages were for the admin. If you want to install software, you can still install it under your home directory like we've done since the 70's ... compile it from source. These days, thanks to autoconf/automake, it's as easy as
      ./configure --prefix $HOME
      make
      make install
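
      The only extra step is pointing the shell and linker at $HOME first, e.g. in ~/.bash_profile (the exact subdirectories depend on the package):

      export PATH="$HOME/bin:$PATH"
      export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"
      export MANPATH="$HOME/man:$MANPATH"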
      • Exactly the point I wanted to make, and I waited for someone to make it... ;-) Not to mention that this is the way I actually deal with the day-to-day hassle of not having the most up-to-date, say, gnuplot available on our company's servers - it doesn't require a call to the sysadmin types. Also, as a friend of mine said a long time ago, "You never learn much of UNIX until you've spent your time on a system on which you had no root..."

        Paul B.
    • by grasshoppa ( 657393 ) on Monday September 13, 2004 @09:30PM (#10242732) Homepage
      I want a distro where by default packages install under $HOME so that someone can install their favorite browser without root access.

      Were the internet a safe place, I'd almost agree with you. Almost.

      Isn't this what we blame microsoft for?

      No. I've never blamed MS for this. MS, by default, logs users in as administrators, which is a terrible idea security-wise, and they've been hauled over the coals for it several times. Rightly so.

      Disk space is cheap enough, we don't need more sharing of config stuff - we need more separation so users can use the benefits of package managers without having to get in the way of other users.

      No, what we need is users to do their job and stop trying to get around the restrictions the admins put in place, which is exactly what your idea would be used for.

      In fact, in all my production systems, home is ALWAYS mounted as noexec. You want a program on the server, fine, you let me know which one and why, and I'll think about it.
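
      For anyone wanting to copy that setup, the noexec home mount is just an fstab option; a sketch (the device name is an example):

      # /etc/fstab -- user-writable areas get nodev, nosuid, noexec
      /dev/sda5   /home   ext3   defaults,nodev,nosuid,noexec   1 2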
      • by IamTheRealMike ( 537420 ) on Tuesday September 14, 2004 @03:42AM (#10244132)
        Were the internet a safe place, I'd almost agree with you. Almost.

        Requiring the root password for certain tasks does not increase security, IMHO. Most users (a) don't want to be constantly typing in passwords and (b) would type it in whenever it was asked for without thinking too hard about it.

        If anything you don't want the typical personal-PC one-user setup to ask for the root password very often because the more often you ask when it's not really needed, the greater "password fatigue" gets and the less likely people are to think critically when they get asked.

        Really, if you spend a lot of time thinking about it as I have, you come to the realisation that malware which relies on social engineering doesn't have any useful technical solutions. You can get some way there with things like distributed whitelists but pretty quickly you end up in the realm of civil liberties (who really owns that machine you paid for?).

        In short: making tasks hard doesn't increase security, it just annoys the user. If the user has decided they want to do something, they'll do it. So good security in the face of a dangerous net is about advising the user well whilst not getting in their way.

        Now, I know you're coming from the viewpoint of a server admin which is fine. Most people aren't server admins. It's wrong to try and apply the tools used to admin servers to home machines.

        That's one reason why autopackage can install to home directories [autopackage.org] (see the third screenshot), though it's not really something that's encouraged (and it can be disabled by administrators). Another is that it's really useful if you want to use a newer/older version of the software installed on a multi-user machine without interfering with other users. Another is that some shell accounts do let you run programs, and it's nice to be able to use binaries.

        In fact, in all my production systems, home is ALWAYS mounted as noexec. You want a program on the server, fine, you let me know which one and why, and I'll think about it.

        That doesn't help very much; you can still run programs on a noexec mount if you really want to.
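
        An illustrative example: noexec only stops the kernel from honouring the execute bit; it does nothing about handing the file to an interpreter that lives on an exec mount:

        $ /home/you/script.sh           # fails on a noexec mount: Permission denied
        $ sh /home/you/script.sh        # runs anyway
        $ python /home/you/script.py    # likewise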

      • ...home is ALWAYS mounted as noexec. You want a program on the server, fine, you let me know which one and why, and I'll think about it...

        I think that pretty much sums up why client/server computing is dead and everyone runs a local copy of Windows as admin.
    • I know it's not Linux, but Mac OS X does this (or can do).

      The first user you create is an admin, but runs (kind of) as an 'ordinary user', but with the power to 'sudo'. Think of them as being in the 'wheel' group.

      If I want to drag an application from an install disk image (or compile one) I can. If I want to make it usable by all users, I need to enter my password when I drag it into the 'global' applications folder.

      OK, some apps require the admin password to install (and many shouldn't, but still do it)
  • Like Clusters (Score:5, Interesting)

    by deadline ( 14171 ) on Monday September 13, 2004 @06:46PM (#10241555) Homepage
    This is similar to what clusters try and do. It is important to maintain the same OS state on all nodes. Take a look at Rocks Clusters [rocksclusters.org]. Rocks will push the same OS image out to the nodes of the cluster. There is no reason the cluster nodes could not be workstations on a desk.
  • Again... (Score:5, Insightful)

    by Libor Vanek ( 248963 ) <libor...vanek@@@gmail...com> on Monday September 13, 2004 @06:48PM (#10241572) Homepage
    Posts like:

    NFS read-only & shared root is enough
    +
    LTSP
    +
    Thin clients

    => please read the article
    • I've been doing too much propositional logic lately, because I saw that as

      !(NFS read-only & shared root is enough+LTSP+Thin clients) OR please read the article

      I hate college ;-)
  • A few thoughts (Score:3, Insightful)

    by jd ( 1658 ) <imipak@[ ]oo.com ['yah' in gap]> on Monday September 13, 2004 @06:48PM (#10241585) Homepage Journal
    (n+1)th Post!


    First, what's so special about this? If you set up a network file system for your root FS and use LinuxBIOS as your bootable image, you can have a single, central Linux install that is shared by as many computers as you like.


    What would be far MORE interesting would be to have a central server with multiple images for different hardware. Then you could boot your nice, shiny IBM mainframe from the same "install" as your desktop PC or the webmaster's Apple Mac.


    Another possibility is a massively parallel installer. Basically have one machine on which you are actively installing, but have that machine replicate the write-to-disk operations across the entire network to all the other PCs.


    A third option would be to have a distro which set up the entire network as a cluster, but with the config files just on one machine. That way, you don't burden any one machine with having to serve the really heavy-duty stuff, such as applications.

    • Just note that if you look "close enough", nothing is ever "really new" (e.g. relativity theory, the iMac, Linux, etc.) - evolution happens in small steps. What I find great about this is the vision - let's throw away the thin/fat client idea and just think about separating the "operating system" from the "user data".
  • ... to bring a company running thin clients to a grinding halt? Kill the central server... Looks interesting, though... since all config data is stored centrally, it would make sysadmins' lives much easier.
  • Some traits of this thing sound like the ultimate in modular design. Of course, I've done this sort of thing myself already by burning all the necessary files in /home onto a CD-RW. I could blow up my computer right now and probably have an identical Fedora system on another machine in as long as it takes the OS to install. The fact that they're proposing this as coming from a server really isn't that different. Once again, someone has re-invented the thin client. I would like to see something like a "mediu
  • by pfriedma ( 725399 ) on Monday September 13, 2004 @06:51PM (#10241614) Homepage
    Back when mainframes were popular (the first time), they were large, expensive, and consumed lots of power... but in the long run they were less expensive than putting full workstations on every desk and maintaining local copies of settings, software, etc. My personal feeling as to why desktops took off is that, at the time of their introduction, it seemed ridiculous to have a mainframe in the home. Local copies were fine, since most people only had one computer to worry about. This has changed. People now have multiple computers, or at the very least constantly transfer info between home and work machines. Now mainframe power is available cheaply and in a small form factor... and with the use of broadband increasing, it is becoming more and more popular to rid the home and office of multiple full machines and replace them with terminals that can connect to a shared environment. Personally, I would love to see this take off. It would be nifty if I could "pause" my work at one terminal and resume it at another in another location. It also reduces overall cost for people who have, let's say, one computer for the parents and one for the kids (the latter more prone to breaking). Cheap thin clients would be really useful here.
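
    One way to get that "pause here, resume there" behaviour with today's tools is a persistent remote desktop session; a minimal sketch using VNC (the display number and hostname are arbitrary):

    # on the machine holding your session
    vncserver :1 -geometry 1280x1024

    # from whichever terminal you happen to be sitting at
    vncviewer workhost.example.com:1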
    • Actually you can already do the "pause" thing with your workstation using Windows XP's Remote Desktop feature. True, it's basically a crippled version (1 user limit) of Windows Terminal Services, but it works great, even over the net.
    • Back when mainframes were popular (the first time), workstations didn't exist. And the way you connected to minis in the '70s was strikingly similar to how you connected to mainframes: with a relatively dumb terminal. Desktops/workstations/PCs didn't become viable for business use until at least the early 1980s.
  • RTFA, dammit! (Score:5, Informative)

    by tempest303 ( 259600 ) <[moc.oohay] [ta] [nostunksnej]> on Monday September 13, 2004 @07:01PM (#10241702) Homepage
    This is NOT just LTSP all over again! RTFA!

    From the article:
    • Applications run on local systems
      • avoids the needs for huge terminal servers with complex load balancing
      • works for laptops (emphasis mine)
    • Software and data are cached on the local disk
      • reduces bandwidth and increases speed
      • the cache can be read-only and thus per-computer state is impossible
      • works for laptops
  • by agristin ( 750854 ) on Monday September 13, 2004 @07:13PM (#10241807) Journal
    If you read the article, you will see that:

    1) They don't want users to need root for hardware (but they do want certain software installs to go through the admin). This info is in the PDF. They already see that needing root for hardware installation or configuration has to be worked around.

    2) the design is a hybrid or amalgamation of thin and fat client, trying to cherry pick the best of both:

    applications run on local systems

    software and data cached on local disk

    central management and configuration of nodes

    they call it a cached client technology

    3) they have a plan for laptops. Stateless... instantiation, sync... things that sound vague, but they seem to have a plan because this stuff is considered in the howto. There are some notes in the how-to covering the different types of clients:

    " diskless clients, which boot directly from a snapshot stored on the server
    caching clients, which boot from a copy of a snapshot, cached locally on a hard drive.
    Live CD clients, which boot from a copy of a snapshot burned onto a CD
    thick clients, which don't use snapshots and must be maintained by another means.
    "

    The idea has some very cool potential for a business or network situation. I can't imagine this is ready for production, but it could be soon.

    -A
  • by grmoc ( 57943 ) on Monday September 13, 2004 @07:17PM (#10241841)

    First of all, I'm not associated with the project.

    However, I've read what they're talking about, and here is where many people are misinterpreting:

    This is not a 'thin' client in the traditional sense. The client in this case does the computations.. i.e. it actually runs the app.

    In other words, the computer is not merely a display, and as such shouldn't suffer from the traditional mainframe/client shortcomings.. (you have all the CPU power you normally have)

    When you think about this, think KNOPPIX and other live-cds, that is the nearest (and quite near, imho) to what they're discussing.

    So... why is this different from a normal install?

    A normal install has a read-write root, whereas here they're shooting for a read-only root, even if it is still on the local hard drive.
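
    In fstab terms the difference is roughly a one-word change, plus a remount whenever the admin genuinely needs to write; a sketch (the device name is an example):

    # /etc/fstab -- root comes up read-only
    /dev/hda1   /   ext3   ro,defaults   1 1

    # temporarily make it writable for maintenance, then flip it back
    mount -o remount,rw /
    mount -o remount,ro /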
  • by davejenkins ( 99111 ) <slashdot.davejenkins@com> on Monday September 13, 2004 @07:28PM (#10241913) Homepage
    Bingo! If the kernel is in some remote location (i.e. Cayman Islands), then enterprises can run all their apps locally, but SCO cannot sue them for copyright violation (because the code is offshore)!

    Sure, ping times will be a bitch, but... /just kidding
  • I've just recently seen a situation where a system like this would be a big help, specifically the "users shouldn't have to be root to connect common hardware" part.

    My girlfriend has a laptop from work, a large company that enforces a "users don't get admin access to their machines" policy. Fine and dandy, until she brings the laptop over to my house and wants to print something on my printer. Whoops! No device driver for that particular kind of printer in the standard corporate install, and even though I

  • Innovation! (Score:2, Troll)

    by erroneus ( 253617 )
    Time and time again, I read about various advancements in Linux in this area or that. Most of the time it amounts to catching up to Mac and Windows or something else that has already been done elsewhere. And there's NOTHING wrong with it. If we want to be able to do something, we should be able to do it.

    But the one thing [anti-linux] people keep saying is that Linux is all about being a copy-cat and nothing about innovation, new development new technologies or new ideas.

    Recently, along with this and so
    • This whole topic of who "innovates" the most - or, even worse, as you unfortunately seem to be stuck on, who "innovates" first - has gotten really old. Posts where people harp on innovation are about on the same level as those posts where people complain that OSS projects never have good names. Give it a rest already.

      If you think that only "recently" linux and OSS have begun to innovate then you've been living with blinders on.
  • by Monkius ( 3888 ) on Monday September 13, 2004 @07:41PM (#10242040) Homepage
    I've been thinking about this way of doing things more and more since the appearance of Knoppix, FAI, Adios, and various cluster installation facilities--and clearly, so has Redhat.

    Most importantly, this

    1. avoids the absurdity of moving all processing, and indeed disk to a central server

    2. focusses attention on development and maintenance of prototype installations for different types of machines

    Some of the implementation techniques don't seem pleasant--but they're doing things in a way that appears forward-looking.

    I look forward to seeing more of this.
  • Not that it's a bad idea, but it's not revolutionary, as the story blurb seems to imply.

    Actually it's the only way to fly in an enterprise environment. Get the PC back out of the users' hands. We should never have given them to the users in the first place... 3270s for all!
  • The ability to boot multiple systems off the same device sounds a lot like VAXcluster technology.

    Everything old is new again!

    • The ability to boot multiple systems off the same device sounds a lot like VAXcluster technology.

      But you don't have to pay DEC an arm and a leg for hardware and software licenses.

      The migration of all these ideas into the realm of industry standard, commoditized hardware is a huge deal. The age of one company being able to own a whole market vertically, from the silicon to the user interface, is gone. We left those monopolies behind, and good riddance. But we also left some good ideas behind with those no

  • by Trogre ( 513942 ) on Monday September 13, 2004 @08:12PM (#10242281) Homepage
    Heh. I once made a stateless distro, based on Red Hat, on a hard drive. The intention was to use it as a car ogg player.

    It had / mounted read-only. /var cannot be mounted read-only (it needs /var/run, etc.), so I mounted it as a 16M ramdisk, the contents of which were unpacked from /var.tgz at boot time. It worked splendidly. Eventually, the slowest part of the boot process was waiting for the BIOS POST to finish.

    You could power down the thing whenever the hell you liked and never see fsck run.
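
    A hedged sketch of the same trick, using tmpfs for the RAM-backed /var and assuming /var.tgz was created relative to /:

    # /etc/fstab -- root stays read-only
    /dev/hda1   /   ext3   ro,defaults   1 1

    # early boot script: mount a 16M RAM filesystem on /var and repopulate it
    mount -n -t tmpfs -o size=16m none /var     # -n: don't try to update /etc/mtab on the read-only root
    tar xzf /var.tgz -C /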

  • by dheltzel ( 558802 ) on Monday September 13, 2004 @09:13PM (#10242645)
    This sounds like a great step forward for laptops as well as desktops that are to be "locked down".

    I think there should be a more general concept of overlayed filesystems, where a FS could be mounted on top of another FS "with transparency", so that you can see all the files in the entire "stack". A standard "ls" would show 1 instance of each file, with the "highest level" FS taking precedence. A modified program might be able to see all the versions of a particular file and be able to copy one to another (if permissions allow).

    If each FS could be mounted RO or RW, then you could have a local copy of everything on a CD or DVD but make it appear writable by mounting another FS on top (a local HD, USB pen drive, NFS mountpoint, etc.). Recovering back to the original install would just be a matter of wiping out the modified files, so that the underlying files become visible again.

    This would be good for:
    - fully functional Linux systems running off a CD or DVD
    - FS snapshots for backup or testing
    - intrusion detection (diff across file versions)
    - version control of the entire OS image

    Now, if only I were smart enough to actually write the code.
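
    A union/overlay filesystem gives exactly this behaviour; a sketch using the overlayfs syntax found in later mainline kernels (mount points invented; unionfs was the closest thing at the time):

    # read-only base (e.g. the DVD) underneath, writable layer on top
    mount -t overlay overlay \
        -o lowerdir=/mnt/dvd,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
        /merged

    # "recovering back to the original install" is then just clearing the upper layer
    umount /merged
    rm -rf /mnt/rw/upper/* /mnt/rw/work/*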

    • I think there should be a more general concept of overlayed filesystems, where a FS could be mounted on top of another FS "with transparency", so that you can see all the files in the entire "stack". A standard "ls" would show 1 instance of each file, with the "highest level" FS taking precedence. A modified program might be able to see all the versions of a particular file and be able to copy one to another (if permissions allow).


      Well, I'm not sure about your operating system, but this layout is one I c
    • The code is already being written [freshmeat.net] for you. It virtually "merges" two directory hierarchies. If you want to see the separate trees, you just look at the original locations.
  • Peer OS? (Score:2, Interesting)

    by recharged95 ( 782975 )
    Could this be the start of a paradigm shift in how we view networks and distributed computing? (So far, nah).

    Separating state from behavior on the respective hardware sounds interesting. They will definitely need to break all the encapsulation layers built into today's modern OSes and identify the patterns that represent common behavior and common state.

    In the article, it makes me wonder, is it better to centralize state or behavior? For instance, centralizing state would be more efficient, but if state

  • by Anonymous Coward
    Only if you have had your head stuck in the ground. Freebsd has had this for ages.
  • Interesting project (Score:4, Interesting)

    by GolfBoy ( 455562 ) on Monday September 13, 2004 @10:26PM (#10243068) Journal
    This is a very interesting project. As I understand the article, the point - long term - of the development effort is to try to get Linux (RedHat) adopted on the desktop by appealing to the TCO mentality of the IT department rather than by appealing to the desire of the end user to actually get stuff done. In other words, if the savings to IT of administering your machine centrally outweighs the benefits of you (corporate cube dweller) being able to configure your machine to your liking and use it as you see fit, then IT wins, and Linux makes an appearance on the Fortune 2000 desktop.

    'Thin client' was the first attempt to dethrone MS in this way, but this approach appears much more sophisticated, and consequently much more likely to succeed. Without seeing how the whole thing plays out I really have no idea whether the approach is successful or not. But it's a really nifty shot across the MS bows.

    Whether this goes anywhere or not ends up being decided by (as with most IT projects) whether the services provided by IT to the end users are adequate (in which case IT gets their way) or so obnoxiously limited that the end user cabal ends up storming the IT department with burning torches.
  • stateless? (Score:3, Insightful)

    by samantha ( 68231 ) * on Monday September 13, 2004 @10:44PM (#10243170) Homepage
    Shared state is practically equivalent to stateless? Since when?
  • A traditional means to achieve at least part of this goal is to mount /usr read-only over NFS and distribute it to various clients. But this is quite risky with Fedora:

    http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=119185

    If you follow the link and you can't believe what's recorded there, it's still correct: if /usr is read-only and the sysadmin accidentally tries to install a package, the RPM database is corrupted. Maybe this bug is hard to fix, but it's definitely a bug -- yet one of those fine
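
    The traditional read-only /usr-over-NFS setup mentioned above is a one-line fstab entry on each client; a sketch (server name and export path invented):

    # /etc/fstab on each client
    nfsserver.example.com:/export/usr   /usr   nfs   ro,hard,intr   0 0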
    • The bug was closed as WONTFIX because the reporter was an obnoxious prick. Referring to the developer as a Moron on repeated occasions. The fact is that if you want people to help you, yelling abuse is not a particularly good strategy.

      • The bug was closed as WONTFIX because the reporter was an obnoxious prick. Referring to the developer as a Moron on repeated occasions. The fact is that if you want people to help you, yelling abuse is not a particularly good strategy.

        The bugzilla entry is not a request for technical expertise, it's a bug report. It doesn't matter if the submitter is a socially challenged idiot.
  • Hi... We run a project called DRBL (Diskless Remote Boot in Linux). The website is
    http://drbl.nchc.org.tw (Traditional Chinese)
    and
    http://drbl.sf.net (English).
    Maybe someone can have a look at it; some parts of DRBL are similar to this Stateless Linux project.
    DRBL runs well on Red Hat, Fedora, Mandrake and Debian.
    In Taiwan, more than 100 sites have already downloaded and run DRBL; some of them are schools (primary/high school/university), some are NPOs and businesses.
    Check this:
    http://drbl.nchc.or
  • Looks like OS X (Score:3, Interesting)

    by curious.corn ( 167387 ) on Tuesday September 14, 2004 @04:29AM (#10244237)
    Although I've never used it, a domain of OS X machines can mount and boot from remotely networked disk images. Also, a standalone machine (like a laptop) participating in an Apple directory will authenticate against the server, which provides "terminals" for domain users not present in the machine's local credential database. Domain accounts can be coupled to local accounts that remain available when unplugged from the domain. Except for the first item, I've experienced this setup and found it very simple to configure & use. The only kludge is the use of traditional UNIX perms (ugo), which doesn't quite fit the picture; Tiger should take care of that next year. I hope RH etc. will make their system "drop-in" compatible with the Apple solution (basically it's OpenLDAP); the only problem is that consumer i386 HW only has cheesy BIOS rather than Open Firmware, which I think is used to simplify the remote-booting configuration process.
  • by codepunk ( 167897 ) on Tuesday September 14, 2004 @11:36AM (#10247183)
    I am currently running 200 workstations in a thin client environment and we really could not be happier. Not to mention that those 200 run off a single Red Hat cluster with nearly 100% uptime for the year. What possible benefit am I going to get over my current environment? Our clients are a mixture of junk we got from a recycler, CD-booting a hacked Slax distro, or flash-booting mini-ITX boxes. Total maintenance time per month is measured in mere minutes. And no, I am not running LTSP - too complex, and I can just buy Neoware boxes already configured as Red Hat X terminals.

"Yes, and I feel bad about rendering their useless carci into dogfood..." -- Badger comics

Working...