
After 2 Years of Development, LTSP 5.2 Is Out

The Linux Terminal Server Project has for years been simplifying the task of time-sharing a Linux system by means of X terminals (including repurposed low-end PCs). Now, stgraber writes "After almost two years of work and 994 commits by only 14 contributors, the LTSP team is proud to announce that the Linux Terminal Server Project released LTSP 5.2 on Wednesday the 17th of February. As the LTSP team wanted this release to be a reference point in LTSP's history, LDM (LTSP Display Manager) 2.1 and LTSPfs 0.6 were released on the same day. Packages for LTSP 5.2, LDM 2.1 and LTSPfs 0.6 are already in Ubuntu Lucid and a backport for Karmic is available. For other distributions, packages should be available very soon. And the upstream code is, as always, available on Launchpad."
  • Impressive... (Score:2, Interesting)

    by King InuYasha ( 1159129 ) on Sunday February 21, 2010 @06:26PM (#31222698) Homepage
    That they got it done in two years with only 14 contributors is impressive. Hopefully with this update, more distributions will be able to readily support LTSP 5.2 again...
  • by symbolset ( 646467 ) on Sunday February 21, 2010 @08:23PM (#31223794) Journal

    A browser and a VT-100 terminal are all that a lot of customer service people need, and all they should have. Limiting them to a web application prevents a lot of activity you don't want customer service people doing, like installing applications or running scripts embedded in documents. Web interfaces have come a long way.

    Likewise, networking and thin clients have come a long way since the days of Token Ring, which peaked at 100 Mbps in the late 1990s. Thin clients have gigabit network connections now, and every port is switched rather than being part of a bus or loop.

    Most especially, servers have come a long way. It's not unusual to have a 1U server that runs 16 3GHz threads on 8 cores, or 12 threads on 12 cores, with high-bandwidth, high-IOPS SAN or local storage and 10Gbps networking. Back then, 1GHz was fast for a server and 1GB was a lot of RAM; today 192GB is easily reachable. Next month we get the 12-core 2-, 4- and 8-socket boxes for up to 96 cores per server. And this is just the commodity stuff - I'm not citing the special-purpose stuff like Sun and Itanic, for the obvious reasons. Heck, these days the SSD in my laptop can do over 8K IOPS - I can configure a server to do well over a million.

    Storage infrastructure also benefits from newer technologies that use abstraction in new ways. You can, for example, create "smart clones" of a desktop virtual machine which work as deltas off of a "standard image" and require almost no storage at all. As the user works, the smart-clone image file on the SAN grows only as much as the data written. As soon as the user logs out, their temporary data is erased and no storage is consumed - and they get a fresh image the next time they log in, which improves security immensely.
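    The smart-clone idea is copy-on-write at the storage layer. A minimal sketch in Python, assuming a toy block store (the class and layout below are invented for illustration, not any real SAN or hypervisor API):

```python
# Purely illustrative sketch of a copy-on-write "smart clone": a sparse
# delta layered over a shared read-only base image. Names and the block
# layout are invented for illustration.

BLOCK = b"\x00" * 512  # an unwritten, all-zero block

class SmartClone:
    def __init__(self, base):
        self.base = base          # shared, read-only "standard image"
        self.delta = {}           # this user's writes; starts empty

    def read(self, block_no):
        # Prefer the user's delta; fall back to the shared base image.
        return self.delta.get(block_no, self.base.get(block_no, BLOCK))

    def write(self, block_no, data):
        self.delta[block_no] = data   # only the delta ever grows

    def storage_used(self):
        # The clone costs little beyond what the user actually wrote.
        return sum(len(d) for d in self.delta.values())

    def logout(self):
        self.delta.clear()            # fresh image at next login

base = {0: b"boot" * 128, 1: b"os.." * 128}  # toy "standard image"
clone = SmartClone(base)
clone.write(2, b"user data")
print(clone.storage_used())  # 9 -- bytes written, not a full image copy
clone.logout()
print(clone.storage_used())  # 0 -- temporary data gone, image pristine
```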

    So in short, time sharing was bad back then because you were sharing from a very shallow pool of resources through a thin straw. Now the pool is deep enough, and the straw wide enough, to deliver the benefits we were promised back then and didn't see. The clients, the network and the servers all have the capacity to deliver an outstanding experience. Sharing is an even better idea now because the drives, servers and even individual processors or cores can power themselves down and up based on demand while keeping a reasonable amount of resources available to handle demand spikes.

    The question now becomes whether or not we can return to the cathedral - the ivory tower of precious resources husbanded and defended by a hierarchical information clergy steeped in knowledge and cloaked in the mysteries of keeping it running and making it safe. We needed the Bazaar to improve productivity when the infrastructure wasn't up to snuff, but it's proven a costly and vulnerable environment for business. Getting end users to give up their local autonomy is not going to be an easy sell - it's going to be a long and ugly fight. IT pros can probably ease the transition by making the virtual or shared environment more open and faster than the local one, and then shutting down the ability of end users to do unsafe things once the migration is complete.

  • LTSP-cluster (Score:3, Interesting)

    by xzvf ( 924443 ) on Sunday February 21, 2010 @10:21PM (#31224782)
    LTSP is an outstanding product that scales incredibly well compared to other virtual desktop solutions. While a little off topic, LTSP Cluster is an excellent addition to large scale LTSP deployments. https://www.ltsp-cluster.org/ [ltsp-cluster.org]
  • by markdavis ( 642305 ) on Sunday February 21, 2010 @10:32PM (#31224854)

    We ran 130 X terminals (Linux thin clients) over switched 10BASE-T with a 100 Mbps fiber backbone for many, many years (up until 2 years ago). It worked just fine. The only things that will kill the network are trying to play video or run Flash, neither of which we support.

    Now we have 160 over switched 100BASE-TX with a gigabit fiber backbone. It is faster, but the difference is not THAT noticeable.
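    The back-of-the-envelope arithmetic bears this out. Assuming a plausible (not measured) average per-terminal X11 load, even the old 10 Mbps switched ports and 100 Mbps backbone had headroom:

```python
# Back-of-the-envelope load check for a switched X-terminal network.
# The per-client figure is an assumed average for an office X11
# session without video or Flash -- an assumption, not a measurement.

clients = 130
avg_mbps_per_client = 0.5   # assumed mean X11 traffic per terminal
port_mbps = 10              # switched 10BASE-T: full rate per port
backbone_mbps = 100         # shared fiber uplink toward the servers

print(port_mbps / avg_mbps_per_client)                # 20x port headroom
print(clients * avg_mbps_per_client / backbone_mbps)  # 0.65 backbone load

# At ~65% average backbone utilisation the network holds up, but a
# single video stream needs several Mbps per client, which is why
# video or Flash is what finally kills it.
```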

  • by willy_me ( 212994 ) on Monday February 22, 2010 @01:19AM (#31226160)

    Can you even buy hubs any more?

    I believe that all of those cheap 10/100 switches are really two hubs - a 10 Mbps one and a 100 Mbps one - with a switch bridging the two. Technically there is a switch in it, so they call it a switch - but within each speed it acts just like a hub.

  • Re:ltsp problems (Score:3, Interesting)

    by stgraber ( 1247908 ) on Monday February 22, 2010 @08:39AM (#31228148)
    And that's why we implemented localapps. Running Firefox as a localapp will let you do fullscreen Flash just fine. As I mentioned, LTSP either runs X11 over SSH or uses SSH for authentication only. In the second mode, your credentials are sent securely but the actual X11 traffic is sent unencrypted, so it's actually faster than any OpenVPN/IPsec tunnel you might use.
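    For reference, both behaviours described here are normally driven from lts.conf. A sketch of the relevant options, assuming LTSP 5's documented parameter names (the file's exact location varies by distribution, so verify against your distribution's docs):

```ini
# Hypothetical lts.conf fragment. Option names follow LTSP 5's
# documented parameters; confirm names and file location before use.
[Default]
    LDM_DIRECTX = True            # SSH authenticates the login only;
                                  # X11 traffic runs unencrypted (faster)
    LOCAL_APPS = True             # allow apps to run on the thin client
    LOCAL_APPS_MENU = True
    LOCAL_APPS_MENU_ITEMS = firefox   # e.g. Firefox (and Flash) as a localapp
```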
  • by Archangel Michael ( 180766 ) on Monday February 22, 2010 @12:41PM (#31230498) Journal

    Who told you terminal servers were slow?

    Experience.

    It all depends on what is being "served" by the TS. However, I've seen huge servers brought to their knees by processes such as PowerPoint (or its OpenOffice clone), image rendering, or whatever else sucks as much CPU as possible. Load that up with three or four people and you have a TS brought to its knees. A server designed to host 30 to 50 clients suddenly can't support 10, all because nobody included "PowerPoint" in the spec - nobody had heard of it when the system was implemented, then a year later someone hears of it, and now you have computers that don't function well for ANYONE.
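    That failure mode falls straight out of the arithmetic. A toy sizing model, with all workload numbers invented for illustration:

```python
# Toy terminal-server sizing model. Every workload number here is an
# invented, illustrative assumption -- measure real usage before sizing.

cores = 8          # server CPU cores
light = 0.10       # cores per user in the original spec (terminal, email)
heavy = 1.00       # cores per user once something like PowerPoint shows up

print(cores / light)   # 80 users as originally specced
print(cores / heavy)   # 8 users if everyone runs the heavy app

# Mixed case: 4 heavy users consume 4 cores, leaving 8 - 4 = 4 cores,
# i.e. room for only 40 light users -- and scheduler contention pushes
# the usable number lower still. One unplanned app halves the capacity.
```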

    If someone goes down the whole Terminal Server route, they had best understand that how people use computers changes and evolves - almost always toward using MORE computing power, rather than less.

    And to compensate for these changes, one ought to budget into the specification the upgrades needed to keep up with increasing demands. Either that, or start telling people "NO" when they want PowerPoint. And good luck with that.

    This is not to say I'm against Terminal Servers, because I'm not. In places that have a limited demand for applications, a TS is probably an awesome solution for managing workstations.

    Just because you haven't experienced it being "too slow" doesn't mean it doesn't happen. I've seen it happen, and it sucks when people expect a system to work as designed after the parameters of the system have changed.
