Fedora Project Considering "Stateless Linux"

Havoc Pennington writes "Red Hat developers have been working on a generic framework covering all cases of sharing a single operating system install between multiple physical or virtual computers. This covers mounting the root filesystem diskless, keeping a read-only copy of it cached on a local disk, or storing it on a live CD, among other cases. Because OS configuration state is shared rather than local, the project is called 'stateless Linux.' The post to fedora-devel-list is here, and a PDF overview is here."
  • Wow! (Score:4, Insightful)

    by Libor Vanek ( 248963 ) <libor,vanek&gmail,com> on Monday September 13, 2004 @07:45PM (#10241528) Homepage
    Wow - this is a really HUGE project. I mean - it spreads from the kernel, through init scripts, through X managers & environments, to easy-to-use administration tools. If they succeed, this could really be a "Linux killer application".

    And to all the "NFS root is enough" posters - please read the article!
  • by Anonymous Coward on Monday September 13, 2004 @07:46PM (#10241552)
    I want a distro where by default packages install under $HOME so that someone can install their favorite browser without root access.

    It's really disconcerting for me that practically all the distros want you to have root access even to install a simple MP3 player from their package files; and extremely disturbing that they do it by popping up KDE or Gnome windows asking for root passwords.

    Isn't this what we blame Microsoft for?

    Disk space is cheap enough, we don't need more sharing of config stuff - we need more separation so users can use the benefits of package managers without having to get in the way of other users.

  • Re:mainframe (Score:4, Insightful)

    by celeritas_2 ( 750289 ) <ranmyaku@gmail.com> on Monday September 13, 2004 @07:47PM (#10241568)
    In my experience, the central server crashes anyway and nobody can do anything because they're already too tied in with email, internet and logon. As long as security is good, and data is backed up very redundantly, I can't see that there would be any greater disadvantage.
  • Again... (Score:5, Insightful)

    by Libor Vanek ( 248963 ) <libor,vanek&gmail,com> on Monday September 13, 2004 @07:48PM (#10241572) Homepage
    Posts like:

    NFS read-only & shared root is enough
    +
    LTSP
    +
    Thin clients

    => please read the article
  • A few thoughts (Score:3, Insightful)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Monday September 13, 2004 @07:48PM (#10241585) Homepage Journal
    (n+1)th Post!


    First, what's so special about this? If you set up a network filesystem for your root FS and use LinuxBIOS as your bootable image, you can have a single, central Linux install that is shared with as many computers as you like.
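
    (A rough sketch of the boot side of that, with made-up addresses and paths - the kernel needs NFS-root support and gets roughly these parameters from the LinuxBIOS/PXE loader, plus a read-only export on the server:)

    # kernel command line (example values)
    root=/dev/nfs nfsroot=192.168.1.10:/exports/shared-root,ro ip=dhcp

    # /etc/exports on the server
    /exports/shared-root 192.168.1.0/24(ro,no_root_squash,sync)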


    What would be far MORE interesting would be to have a central server with multiple images for different hardware. Then you could boot your nice, shiny IBM mainframe from the same "install" as your desktop PC or the webmaster's Apple Mac.


    Another possibility is a massively parallel installer. Basically have one machine on which you are actively installing, but have that machine replicate the write-to-disk operations across the entire network to all the other PCs.


    A third option would be to have a distro which set up the entire network as a cluster, but with the config files just on one machine. That way, you don't burden any one machine with having to serve the really heavy-duty stuff, such as applications.

  • by cuteseal ( 794590 ) on Monday September 13, 2004 @07:49PM (#10241590) Homepage
    ... to bring a company running thin clients to a grinding halt? Kill the central server... Looks interesting though... since all config data is stored centrally, it would make sysadmins' lives much easier.
  • by pfriedma ( 725399 ) on Monday September 13, 2004 @07:51PM (#10241614) Homepage
    Back when mainframes were popular (the first time), they were large, expensive, and consumed lots of power... but in the long run less expensive than putting full workstations on every desk and maintaining local copies of settings, software, etc. My personal feeling as to why desktops took off is that, at the time of their introduction, it seemed ridiculous to have a mainframe in the home. Local copies were fine since most people only had one computer to worry about. This has changed. People now have multiple computers, or at the very least constantly transfer info between home and work machines. Now, mainframe power is available cheaply and in a small form factor... and with the use of broadband increasing, it is becoming more and more popular to rid the home and office of multiple full machines and replace them with terminals that can connect to a shared environment. Personally, I would love to see this take off. It would be nifty if I could "pause" my work at one terminal and resume it at another in another location. It also reduces overall cost for people who have, let's say, one computer for the parents and one for the kids (the latter more prone to breaking). Cheap thin clients would be really useful here.
  • by Spyro VII ( 666885 ) <{spyro} {at} {spyrius.com}> on Monday September 13, 2004 @08:11PM (#10241788)
    I don't see why you're being modded insightful for this rant. I really have no idea why you mentioned Microsoft either.

    Here's a few points. First of all, you can configure KDE or Gnome not to ask. Second of all, most users are not admins. Allow me to expand on that. Most people who use computers have no idea what is harmful and what is not, and will install anything. Theoretically the admin should install the basic apps (office, music, and internet) so that users won't go and install a program that'll delete their home directory or something. Third of all, you can already have users set up like that. It's called booting more than one OS, but it seems silly and redundant to me.
  • Re:Wow! (Score:2, Insightful)

    by lakiolen ( 785856 ) on Monday September 13, 2004 @08:36PM (#10241988)
    It's not a killer app. It's not even an app. One's not going to download a file and suddenly they're using stateless Linux. It's a different way of organizing the underlying layers that applications use.
  • by Monkius ( 3888 ) on Monday September 13, 2004 @08:41PM (#10242040) Homepage
    I've been thinking about this way of doing things more and more since the appearance of Knoppix, FAI, Adios, and various cluster installation facilities - and clearly, so has Red Hat.

    Most importantly, this

    1. avoids the absurdity of moving all processing, and indeed all disk, to a central server

    2. focuses attention on development and maintenance of prototype installations for different types of machines

    Some of the implementation techniques don't seem pleasant--but they're doing things in a way that appears forward-looking.

    I look forward to seeing more of this.
  • That's the problem (Score:2, Insightful)

    by Karma Sucks ( 127136 ) on Monday September 13, 2004 @09:00PM (#10242192)
    The project is too big, ambitious and lofty. It's just bound to collapse sooner or later IMHO. I don't think anybody /really/ wants to relearn how to deploy Linux anyway.
  • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Monday September 13, 2004 @09:38PM (#10242427)
    It's really disconcerting for me that practically all the distros want you to have root access even to install a simple MP3 player from their package files
    I always tended to think that packages were for the admin. If you want to install software, you can still install it under your home directory like we've done since the 70's ... compile it from source. These days, thanks to autoconf/automake, it's as easy as
    ./configure --prefix $HOME
    make
    make install
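
    (If you do install under $HOME, you'll also want something like this in ~/.bash_profile so the shell and linker find the results - these paths assume the default autoconf layout:)

    export PATH="$HOME/bin:$PATH"
    export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"
    export MANPATH="$HOME/share/man:$MANPATH"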
  • by grasshoppa ( 657393 ) on Monday September 13, 2004 @10:30PM (#10242732) Homepage
    I want a distro where by default packages install under $HOME so that someone can install their favorite browser without root access.

    Were the internet a safe place, I'd almost agree with you. Almost.

    Isn't this what we blame Microsoft for?

    No. I've never blamed MS for this. What I do blame them for is logging users in as administrators by default - which is a terrible idea, security-wise, and they've been raked over the coals for it several times. Rightly so.

    Disk space is cheap enough, we don't need more sharing of config stuff - we need more separation so users can use the benefits of package managers without having to get in the way of other users.

    No, what we need is for users to do their jobs and stop trying to get around the restrictions the admins put in place, which is exactly what your idea would be used for.

    In fact, in all my production systems, home is ALWAYS mounted as noexec. You want a program on the server, fine, you let me know which one and why, and I'll think about it.
  • stateless? (Score:3, Insightful)

    by samantha ( 68231 ) * on Monday September 13, 2004 @11:44PM (#10243170) Homepage
    Shared state is practically equivalent to stateless? Since when?
  • by who what why ( 320330 ) on Tuesday September 14, 2004 @12:04AM (#10243264)
    I don't think anybody /really/ wants to relearn how to deploy Linux anyway.

    Well, most of us don't /really/ want to relearn *anything*. Sometimes, however, when you hear a new idea relating to an area you work in, the penny drops, and you are left thinking "wow, what a great idea".

    For instance, I work in a scientific research environment (high energy physics) where most of our software is Free (capital F), we work in different places at different times (planning, lab, analysis), we have a great deal of customized and hand-written software, and the ideal development environment so far has been NFS-mounted home directories (running Red Hat and now Fedora). In theory every machine I log into is running the same OS, with /usr/local NFS-mounted from an [application|file] server; I log in through NIS and my home directory is also NFS-mounted.
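
    (The plumbing behind that is basically NIS for the accounts plus a couple of NFS entries in each workstation's /etc/fstab, something like the following - the server name here is invented:)

    fileserver:/export/usr-local /usr/local nfs ro,hard,intr 0 0
    fileserver:/export/home /home nfs rw,hard,intr 0 0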

    This works fine in theory - except that without a serious admin budget, different OS versions spring up... I have access to machines running RH9, FC1, FC2... and that's an improvement; whilst Red Hat were still supporting RHL, we had 7.3, 8.0 and 9.0, with wildly different GCC versions. What happens? I end up using specific machines with a similar enough environment that all my simulations will at least compile without tweaking and all my scripts etc. work the same way. Homogeneous environments, no matter how ideal, are not a possibility without a manpower commitment that many SMBs and other small operations can't afford.

    This stateless project LEAPS out at me as an ideal way for small operations (like up to 100 seats) to be managed by a single (even part time) admin.

    Not to mention the attempt to tackle laptops - which is the reality of the workplace. Many people have laptops. A lot of them (and their CTOs) would love to be running the same environment as the workplace LAN. At my lab most people have a laptop due to the amount of travelling we do - I'd guess that 90% of them are running XP, since even if they did run Linux, they'd have to administer it themselves and wouldn't have clearance to access the NFS shares for $HOME and /usr/local.

    Although the laptop aspect still has a troubling Achilles heel: most of us (well, my colleagues at least) have laptops in order to present our work to others. Even ignoring the ubiquity of PowerPoint, who amongst us would want to be on the road with a "cached client" laptop with NO write access to anything but $HOME? Sure, the system worked at the office, and you fixed all the bugs that cropped up when you connected from home on your DSL, but what about a strange environment? You need to connect over someone else's WiFi to get the latest figures (sure, TFA talked about user-configured WiFi, but still, what if they have different security like WEAP that needs a new package and root access), or you NEED to plug in a USB key to give a collaborator or customer your files. What then?

    Regardless, this to me is a prospective killer app for Linux, and it definitely tackles a bunch of issues that may niggle an admin for several years before they could even define what the problem is. Automatic updates across _all_ your workstations. Backups that require 10 minutes' work after a crash - and I can attest that after a recent HD crash on our "distributed" system it took a few hours to get the machine back together, but several days before all the little minor tweaks we needed had been applied (things like monitor resolution, 'sudo' configuration, extra packages, sound drivers).

    For the first time, I stand up and say, THANK YOU REDHAT and THANKS FEDORA. This project tells me that you are thinking about your installed customer base and offering _really_ innovative ideas to the community. Anyone want to moan about how Linux is always playing catchup to MS and Apple and how F/OSS is doomed to lag behind forever?

  • by IamTheRealMike ( 537420 ) on Tuesday September 14, 2004 @04:42AM (#10244132)
    Were the internet a safe place, I'd almost agree with you. Almost.

    Requiring the root password for certain tasks does not increase security, IMHO. Most users (a) don't want to be constantly typing in passwords and (b) would type it in whenever it was asked for without thinking too hard about it.

    If anything you don't want the typical personal-PC one-user setup to ask for the root password very often because the more often you ask when it's not really needed, the greater "password fatigue" gets and the less likely people are to think critically when they get asked.

    Really, if you spend a lot of time thinking about it as I have, you come to the realisation that malware which relies on social engineering doesn't have any useful technical solutions. You can get some way there with things like distributed whitelists but pretty quickly you end up in the realm of civil liberties (who really owns that machine you paid for?).

    In short: making tasks hard doesn't increase security, it just annoys the user. If the user has decided they want to do something, they'll do it. So good security in the face of a dangerous net is about advising the user well whilst not getting in their way.

    Now, I know you're coming from the viewpoint of a server admin which is fine. Most people aren't server admins. It's wrong to try and apply the tools used to admin servers to home machines.

    That's one reason why autopackage can install to home directories [autopackage.org] (see the third screenshot), though it's not really something that's encouraged (and it can be disabled by administrators). Another is that it's really useful if you want to use a newer/older version of software installed on a multi-user machine without interfering with other users. Another is that some shell accounts do let you run programs, and it's nice to be able to use binaries.

    In fact, in all my production systems, home is ALWAYS mounted as noexec. You want a program on the server, fine, you let me know which one and why, and I'll think about it.

    That doesn't help very much; you can still run programs on a noexec mount if you really want to.

  • by FooBarWidget ( 556006 ) on Tuesday September 14, 2004 @11:16AM (#10246185)
    $ cp /usr/bin/yes ~
    $ chmod -x ~/yes
    $ ~/yes
    bash: ~/yes: Permission denied
    $ /lib/ld-linux.so.2 ~/yes
    y
    y ...


    You might wonder how this works. /lib/ld-linux.so.2 is the so-called ELF interpreter (or something like that). Each ELF binary contains the path of its ELF interpreter. The kernel reads this path and runs the ELF interpreter, passing the filename of the binary as a parameter. So actually, each and every ELF executable is run by /lib/ld-linux.so.2. This is similar to the kernel passing the filename of a shell script to /bin/bash.
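
    (If you're curious, you can see the interpreter path recorded in a binary with readelf - the exact path differs by architecture:)

    $ readelf -l /usr/bin/yes | grep interpreter
    [Requesting program interpreter: /lib/ld-linux.so.2]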
