

Linux Implementation For 2500 Workstations?
Jeff Kwiatkowski asks: "We are looking to roll out Linux to over 2500 desktops and could use any advice that we can get. We need security info, implementation suggestions and any other advice that you would care to offer. We are currently evaluating Debian, Caldera and Red Hat. I also want a minimalist desktop, so I have been leaning toward WindowMaker as the window manager. In addition, we currently have machines with 32 meg of RAM (fast processors, though) and would like to keep the upgrade to 64 meg only, if possible. Lastly, do any of you have any thoughts on Word Perfect vs. Applixware?" For those of you who think the claim 'Linux is not ready for the desktop' is a falsehood, this story is for you. As you can see, people are looking at deploying Linux on the desktop, and suggestions from you guys could make this process a lot easier.
Central administration is the key (Score:1)
2. Have one common
3. Centralize as much administration as possible. The same way you distribute
4. Keep some services in
5. Encrypt all sensitive communication. Sending passwords unencrypted over the network is a Bad Thing, especially in a court house. Use Kerberos authentication; it's better, simpler and more flexible than ssh in the long run. You can use it for encrypted telnet, encrypted POP3 and IMAP, encrypted remote X, file system authentication and more. Don't forget to turn off cleartext logins, or people will use them anyway. (Don't let them get used to it!) A rough sketch of the inetd side of that follows below.
There are lots of other things you can do, but these are the basics. All this will save you a lot of work.
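For example, turning off cleartext logins on a stock distribution mostly means commenting the plaintext services out of inetd.conf and leaving only the kerberized equivalents. A rough sketch; service names and paths vary by distribution (these are Red Hat-style):

    # /etc/inetd.conf -- comment out the cleartext services
    #telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
    #pop-3   stream  tcp  nowait  root  /usr/sbin/tcpd  ipop3d
    #imap    stream  tcp  nowait  root  /usr/sbin/tcpd  imapd

    # then tell inetd to re-read its config
    killall -HUP inetd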
I'd go for fvwm2 as the standard window manager. I haven't tried WP, so I don't know if it's better than ApplixWare. I hate StarOffice because it's one huge application with a completely nonstandard user interface, instead of a few smaller applications that work well with each other and fit well into GNOME, KDE, all window managers and X in general.
Re:Kickstart (Score:1)
Re:Minimal, but functional (Score:1)
Actually, wasn't one of the motivations behind fvwm that twm was too heavy?
Re:Have you tried SIAG office (Score:1)
VA SystemImager (Score:1)
We basically do a kickstart install of RedHat 6.x, then call 'updateclient' with the proper options to grab the image from our server. SystemImager is a big Perl script that uses rsync to pull files across the network. It's quick and it works 99% of the time. We've done quite a few machines, and we are doing a couple more labs worth in August. I am the 'image builder guy,' so I get to assemble and test an image, break it, bend it, and, when it's ready, I upload it for our labs to share. When the labs are ready, we pull the image down onto a few test machines, and if everything goes right, we do the rest. It's a simple process that is easily scriptable, and you only have to maintain one image for all of the machines.
All of our machines are EXACTLY the same, though, so if your machines are different, you could run into problems. We've had very good luck with it, and I would recommend it to anyone doing a big Linux rollout.
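For anyone who hasn't seen it, the heart of what updateclient does is an rsync pull of the golden image onto the running client. A stripped-down sketch of the idea only; the server and rsync module names here are made up, and the real SystemImager script adds a pile of safety checks, excludes and bootloader fixups:

    # pull the lab image from the image server over the top of this client
    rsync -av --delete imageserver::labimage/ /

Run from cron or by hand, that is basically the 'grab the image from our server' step.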
---
Ryan
Re:I'll let others slug it out over desktop ideas. (Score:1)
By keeping all data and major apps on a remote server you are sacrificing performance and availability for convenience. Not the users' convenience, mind you, but the sysadmin's. Such a strategy may work in smaller (where the network can handle it) non-mission-critical environments (so when your file server and its backups or the network go south and no work is possible, it's no big deal) or where you lack skilled system administrators to devise better systems. However, the most scalable and robust method is to use a remote update facility, e.g. rdist, rsync, apt, rpm.
Office Suites (Score:1)
Word Perfect seemed slick, but quirky. Font and printing problems alone were enough to make me stop using it. It was nice, but the little problems were a pain in the ***.
Applix seems so "out of date" to me, I almost tried to figure out if I could return it for my money back. But I ended up using it the most for about 1.5 years when I had at least 64M RAM. Honestly, with Gnumeric and AbiWord, I think GNOME has rendered Applix outdated (and they cost NOTHING!)
In the end, I've settled on the one solution I think was worthwhile, Star Office with at least 128M RAM. Once you get to like 160M or more, Star Office really is very useful. Of course, I couldn't afford to upgrade EVERY system to run Star Office, so I found another solution. Upgrade a couple of systems as much as possible, and then just run them that way (X -query fast.system.net or export DISPLAY=workstation).
I still haven't got around to installing this Windows95 thing, I hear 98^H^H2000 is out now?
Slackware tagfiles (Score:1)
Glut of choices (Score:1)
You can install Mandrake on a desktop, configuring the installation to taste; at the end of it you have the option of creating a boot diskette which will clone that installation. If that installation is from a server, then you can make copies of the diskette and run around booting the PCs and letting rip.
This wouldn't give you WordPerfect or Applix (unless they are included in the App CDs for Mandrake), but you can push that out with an FTP script from the server.
Another nice option is to install on one PC, configure and tweak it to death, then use dd and wget to create an image of the hard drive on the server (this assumes the server has a better network connection than the workstations and includes a blazing fast RAID; with 2500 workstations you need that anyway).
You can then create a boot diskette with a script that simply partitions the hard drive and dumps that image off the server to it. This is easily the fastest way to clone an installation to a pile of PCs.
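Roughly, the imaging dance looks like this (a sketch only; device names, hostnames and paths are placeholders, and in practice the boot diskette script would wrap the second command):

    # on the tweaked master PC: push a compressed raw image of the disk to the server
    dd if=/dev/hda bs=1024k | gzip -c | ssh admin@server 'cat > /images/desktop.img.gz'

    # on each target, from the boot diskette: pull the image back down onto the blank disk
    wget -O - http://server/images/desktop.img.gz | gunzip -c | dd of=/dev/hda bs=1024k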
As for software, I disagree about the window manager, but it's totally your choice. I would use KDE, mostly because it comes with all the utilities you would want your users fondling, and it has that familiar Windows 9x feel by default.
64 megs is more than enough to handle this stuff. One hint though is to use a 1 gig swap partition (which you can likely afford on modern machines). That way runaway leakware (cough) Netscape (cough) won't take your machines down too quickly.
The initial roll out isn't that critical though. What is really fun is the later updates. Trust me on this one, you *will* want to push KDE 2.0 and KOffice 1.0 (despite plans to the contrary, I suspect they won't ship together). Other cool things you will want to push to these desktops later include Mozilla and the next generation of JVMs.
Fortunately you can likely live with the current Kernel for a few years.
Finally I would use Mandrake Update or something similar with your own server set as the mirror so you can butcher security holes as soon as they are discovered with little if any hassle.
All the best and tell me where to send my resume if you can't do this stuff by yourself.
Large scale linux roll out (Score:1)
Points:
1) Redhat's kickstart system is very useful. Split off an 'in house' tree of 6.2.
2) Create custom package lists in the base/comps file to reflect the type of machines you need.
3) Create/recreate rpm's in the distro to reflect all the specific setting changes/customizations you need. (And any commercial software you need on machines; those RPMs are just kept in house.)
4) Make sure all updates/system config changes are done thru some type of package management system. RPM is actually VERY good for this. Get familiar with writing spec files.
5) Auth systems: Kerb5. NIS, only if necessary and on 'controlled' network segments.
6) Home dir fileserver: Network Appliance F760. Period. There is simply no other solution that works as nicely. Cost per megabyte is high, but it's so damn reliable and easy to deal with.
7) Give all your machines hostnames based upon some location scheme, and use dhcp to give out addresses _based upon hostname_ so management of vlans can be easier to keep track of.
Also, it's not hard to hack together a web interface to pop out floppy images with all the necessary info in the kickstart config file. Have every machine with a nearby copy of its 'reinstall' floppy, so if the machine gets 'screwed', it's easier to do a 20 min reinstall (that's all it takes) than to send someone from IT to 'fix' the machine.
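For reference, the ks.cfg such a web interface spits out is short. A rough sketch against Red Hat 6.2 (server names, sizes, password hash and the in-house package name are all placeholders):

    lang en_US
    keyboard us
    network --bootproto dhcp
    nfs --server install.example.com --dir /export/redhat-6.2
    zerombr yes
    clearpart --all
    part / --size 1600
    part swap --size 128
    timezone US/Eastern
    rootpw --iscrypted $1$placeholder
    lilo --location mbr
    %packages
    @ Base
    # one of the custom in-house RPMs from point 3
    inhouse-config
    %post
    /usr/sbin/useradd localadmin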
...
. ""The future masters of technology will have to be lighthearted and
. intelligent. The machine easily masters the grim and the dumb."
Re:I'll let others slug it out over desktop ideas. (Score:1)
When some bug is found in the application and it needs an update, who has the application? You can use a database for keeping track of it of course, but still...
Keeping everything homogeneous when you can actually do so is the clever thing to do. This is actually a special kind of setup, since they only need one architecture, and all the servers can be configured the same. IMO it would be stupid not to take advantage of that.
Also, having the same applications on all servers or on all workstations (whichever approach is chosen) avoids the problem of someone using someone else's workstation and finding applications missing (and needing to bother the admin about it).
Re:OT: WindowMaker's lack of a pager (Score:1)
And if you build Window Maker from source (and your X server is set up properly), you can use your mouse's scrollwheel to switch between workspaces by "scrolling" on the root window.
To do this:
Re:Kickstart (Score:1)
Re:Use the terminals as terminals - BINGO! (Score:1)
1. CPU power and memory requirements keep increasing year after year, forcing you to upgrade stand-alone PCs. In the case of terminals, however, they are used only for running the X server. All the applications would run remotely. Therefore, it would be irrelevant if the terminal is P-100 or a P3-1000 -- all it's used for is *display* of data. You would need to upgrade the terminals *only* if you decide that they need more video memory/faster video acceleration or something. And in general, since a terminal does no work wrt running applications, it is not subject to the usual upgrade cycle of stand-alone PCs and, therefore, can last *much* longer.
2. It is *much* more efficient to run applications on a single server than on a whole bunch of stand-alone PCs. First of all, 99% of the time a stand-alone PC is idle. How much CPU work does it take to type in a letter in Word or read a web page in Netscape? Since a single application server would run all the apps for a fairly large group of terminals, the CPU time would be used much more efficiently. Therefore, even a not-so-fast server would be able to easily serve a few dozen terminals. Further, how many people in the office would be running exactly the same apps? Say, 20 people are running Netscape at the same time. On stand-alone PCs, each would need to have a copy of the Netscape code in memory. On an app server, only *one* copy would be required for all 20 users, thanks to shared memory. So, an app server is more memory-efficient as well as CPU-efficient.
3. Windows apps. Yes, people need to run them too. (like Word for instance). You can set up a couple of NT servers to run wincenter or something. Then, the terminal users can run windows apps in almost the same way as remote X apps (windows apps appear inside a separate window that looks like a NT desktop window)
4. Another poster has already explained the virtues of storing all user data remotely on file servers, so I'm not gonna go into details
5. Upgrades. I already said why hardware upgrades would need to be very rare. Same with software. The terminals would run a very minimalist installation of Linux with all daemons off except for the X server. So you would need to worry only about upgrading the servers. It sure is easier to upgrade a few dozen servers than 2500 individual workstations.
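To make the terminal side concrete: the whole "client" can be just an X server pointed at the application server via XDMCP (the hostname is a placeholder; on the server side you also have to enable XDMCP in xdm/kdm/gdm):

    # on the terminal: nothing but an X server asking the app server for a login box
    X -query appserver.example.com

    # or let the user pick from a chooser of several app servers
    X -indirect appserver.example.com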
I could go on, but that should be enough to give you the idea. Just to let you know I'm not speaking out of my ass, my university (www.uwaterloo.ca) is using exactly the same setup, except the terminals also boot remotely (via bootp). It works very nicely. Only problem is that this being a cash-strapped university, the terminals have really crappy video cards in them, and really crappy monitors. Oh well.
___
Re:A clarification on Apps... (Score:1)
Actually, it is. My university is running 5 labs of diskless X terminals (20 - 30 terminals in each) over an unswitched half-duplex 10BaseT and it works great. Don't forget that X was designed for the network. Why not use this feature then?
___
Re:Use the terminals as terminals - BINGO! (Score:1)
___
Re:X over a LAN.... (Score:1)
___
Re:Kickstart (Score:1)
Re:Corel (Score:1)
My advice (Score:1)
One thing you should really consider when choosing a dist is to make sure that _everything_ you install works properly. This is RH's big difficulty. All of the GUI tools and sysadmin tools work "half-way". If you look at them wrong, they break. For desktop users, this is unacceptable. Users can usually deal with strange environments surprisingly well, as long as the tools actually work. They get really frustrated when some buttons don't work, some right-click menus don't work, or a certain sequence of clicks crashes a program. If you don't know what I'm talking about, try using the RedHat GNOME RPM manager in RH 6.1 (haven't really looked at 6.2). RH is nice on the server (without Linuxconf), because it has everything you need, everything is integrated with PAM, and it's organized in a sane way. But on the desktop, it has too many half-done tools. So, either find a dist that doesn't have half-done tools, or don't install anything that doesn't work 99.99% right. I'd be happy to supply you with more advice (advice does come cheaply), but I'd have to know more about your requirements.
Re:My advice (Score:1)
GNU cfEngine for everyday maintenance (Score:1)
I've seen people above advocating centralizing your user data, and making the boxes all cookie-cutter installs. Excellent advice.
Once you have them up and running, however, the question becomes: "How do I make changes to the environment en masse?"
That's where GNU cfEngine [hioslo.no] comes in. It's a great tool for maintaining heterogeneous networks. You should consider implementing this on the rollout, as it will allow you some means to "push out" changes to all of the hosts.
Check out:
http://www.iu.hioslo.no/cfengine/ [hioslo.no]
It's a very powerful tool, so some forethought and planning are in order, but it pays off in the long run in being able to make changes to the machines in large chunks.
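To give a flavour of it, a cfengine config is a set of declarative sections. A rough sketch in the classic syntax (the master server name, file and command below are invented):

    control:
       actionsequence = ( copy shellcommands )

    copy:
       # pull the master copy of a config file from the cfengine server
       /master/etc/ntp.conf  dest=/etc/ntp.conf  server=cfmaster.example.com  mode=644  owner=root

    shellcommands:
       # then kick the affected service
       "/etc/rc.d/init.d/xntpd restart"

Run cfengine from cron on every host and a change to the master file propagates on the next pass.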
-- PoochieReds
Have you tried SIAG office (Score:1)
It is available at www.siag.nu
Try Mandrake (Score:1)
debian. (Score:1)
If you have a debian cd, or if you wouldn't mind making your own ftp mirror of a debian ftp server, you can have one single computer that acts as the "synchronizer". Now this idea I've just heard about, never actually tried it. All the other computers can be configured through a crontab to run apt-get update, and if you point them to that single machine on the network or whatever, it'd be so much easier.
Each machine's software configuration stays in perfect sync. I guess this would work, in theory, but again, I've never tried it myself. Well, cheers. See ya later. Hope I don't get flamed or anything.
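The crontab entry for that is about one line per client. A sketch; the timing is arbitrary, it assumes sources.list already points at the in-house mirror, and you'd want to stagger the times so 2500 boxes don't hammer the mirror at once:

    # /etc/crontab on each client: pull package updates from the in-house mirror nightly
    30 2 * * *   root   apt-get update && apt-get -y upgrade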
-- webfreak
webfreak@themes.org
http://e.themes.org
Re:OT: WindowMaker's lack of a pager (Score:1)
replication station (Score:1)
Since for obvious cost reasons, you'll most probably use IDE on your deployed machines, I recommend that your replicator be a mostly SCSI machine, with an IDE rack, and IDE compiled as kernel modules; this way, you can hot-plug and hot-unplug your IDE racked disks as you replicate them, by modprobing the IDE modules in and out (be sure to use the right hdparm/ioctl calls to flush caches and shutdown disks, though): no reboot needed!
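The swap cycle is roughly this (only a sketch: /dev/hde stands in for the racked drive, and the exact module names depend on how your kernel was built):

    sync                    # finish pending writes
    hdparm -f /dev/hde      # flush the buffer cache for the racked drive
    hdparm -y /dev/hde      # spin it down / put it in standby
    rmmod ide-disk          # module name is kernel-config dependent
    # ...swap the disk in the rack...
    modprobe ide-disk       # bring IDE back and pick up the new disk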
Even without a SCSI machine, I've installed a company's computer rooms with IDE racks, and it's been a pleasure to replicate, install, test, fix, and reinstall machines there. (I also used debian as the basic distribution, which isn't perfect in a NFS- and NIS- sharing environment, but is the least horrible thing I tried.)
Finally, if your machines are heavily networked, you can use something like the Linbox Network Architecture [linbox.com] (web site currently down for recreation; contact the people at linbox.com), which is basically lots of diskless Linux clients (you buy fast ethernet and RAM instead of lame IDE disks) and a few badass servers for files and applications. Ten times fewer disks to maintain and clients without persistent state means much less administration.
Yours freely,
-- Faré @ TUNES [tunes.org].org
Go with WindowMaker (Score:1)
Re:pass the buck? (Score:1)
If you are installing 2500 machines, you usually know better than your vendor. One hour of maintenance per machine adds up to over a man-year. You'd better have competent sysadmin(s) and automate as much as possible. You must be able to automate the system reinstallation, software upgrades, hardware upgrades, ..., of the machines.
Re:I'll let others slug it out over desktop ideas. (Score:1)
If you do end up with workgroup application/mail/print/etc. servers, think about where they live in the hierarchy, and keep them close to the users.
You'll have to calculate your bandwidth needs, but I suspect that for that number of clients, you'll probably end up with a three-tier network. Clients on the bottom, on 100MBit switches, connected via fiber to concentrator switches with the user servers, connected to the backbone.
-j, showing his biases
Some Strategies -- Debian, NIS, automated installs (Score:1)
An important consideration in which distribution you wish to use is which one is easiest to customize. I have found Debian the easiest to customize to my needs. In most large environments, people don't upgrade machines, they wipe them out and migrate their server data. Debian gives you the choice of upgrading machines.
The real power of Debian is that you can customize one user's machine and those customizations will continue across upgrades. Not everyone needs dia, but some subset of people do, and they have dia, and when they upgrade they still have dia and you don't have to do anything. That is powerful and useful. Yes, you have to log in to someone's machine and give them dia the first time.
If you use debian you need dzinstall [unitn.it] and you will need to customize the base install.
Another important strategy that has not been discussed is how you break this down into groups.
Identify groups of dependent people. The accounting department is all dependent on everyone else there so they should be made a unit. Give them their own server. I would aim for 1 server for every 20-100 users. Those users should be able to function with their server even if the "centralized" servers go down.
each "departmental" server should be backed up, should have a "network drive", a "name server", an "account server", a "network server", "OS Server", "print server"
NFS server
NIS slave
DHCP server
DNS server
Web server (Intranet)
LPD
should serve as an apt server or as an rpm server
/net" otherwise it lets users corrupt it) Applixware is clean, fast, stable has
The machine configuration for the department should also live on your central server and should be pushed out to each department using rsync. But by distributing the necessary services you reduce the risk of a catastrophic failure hitting all users.
NIS/NFS
NIS/NFS security: I know truly securing them is impossible, but they are very convenient and there are some precautions you can take.
NFS -
Users do not have root on NFS clients. If they do, they can be any user on the system.
Keep a static ARP table for IP addresses and use static DHCP for clients. And list every client that is allowed to connect to your NFS server. Yes, this can still be hacked, but someone cannot just bring in a laptop and get full control over your users' files. It keeps accidents from happening.
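On the NFS side that boils down to an exports file that names clients explicitly instead of using wildcards (hostnames and paths below are placeholders):

    # /etc/exports on the departmental server: named clients only
    /home      ws001.dept.example.com(rw,root_squash) ws002.dept.example.com(rw,root_squash)
    /usr/apps  ws001.dept.example.com(ro) ws002.dept.example.com(ro)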
NIS
Enforce use of good passwords. This is done by configuring the passwd program.
Make sure you have slave NIS servers!!!! Set the local slave to be the default NIS server for clients.
Don't use broadcast NIS; set the NIS server on each client. Yes, someone can still spoof your NIS server; do not let NIS in from the outside Internet. It is worth it to trust your users, because it makes your computing environment better and you can trace down who caused problems and get them fired.
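Pinning each client to its local slave is one line in /etc/yp.conf (domain and hostname are placeholders):

    # /etc/yp.conf -- no broadcast; bind explicitly to the departmental NIS slave
    domain courthouse server nis-slave.dept.example.com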
Automated Installs.
2500 user machines
assume 50 departmental servers and 5 back end master servers.
Buy new equipment and do the master servers correctly.
Replication Strategy.
Make sure you can produce a departmental server from a blank box in 2 hours. Make sure anyone who can read instructions can produce a departmental server in 2 hours. And hopefully that won't be two hours of interactive time.
Given a departmental server, make sure you can build a new desktop from a blank machine in one hour. If you cannot, you have problems with your automation and your network; fix them!
USE SOURCE CONTROL. All system information should be in source control! From the very beginning keep your management scripts, your NIS source files and your deployment descriptors in source control; I recommend CVS! This will make your life easier.
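Getting the admin tree into CVS is a five-minute job; something like this (repository path and module name are placeholders):

    # one-time import of the admin files into the repository
    cd /etc/admin-scripts
    cvs -d /var/cvsroot import -m "initial import" admin-scripts sysadmin start

    # day to day: check out, edit, commit
    cvs -d /var/cvsroot checkout admin-scripts
    cvs commit -m "tighten NFS exports for accounting dept"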
DESKTOP
As for a desktop, I like WindowMaker; I think it is very obvious for beginning users.
WP is substantially lighter than Star Office, and is fairly feature complete. Star Office will be a pig; try it out, but some users will require it as it is the most feature-complete office suite available on Linux. (NOTE: DO NOT install Star Office so it is user-writable; even if there is just one user per machine, install it as a net install ("setup /net"), otherwise it lets users corrupt it.) Applixware is clean, fast and stable, but has a strange user interface, and doesn't have the feature count that many people want. And what you see on the screen and what you see on the printer tend to be pretty far off.
I would recommend avoiding net storage for applications and even all user data. Hard drives are cheap. It makes users less reliant on the network for performance, it makes users feel far more in control at their workstation, and it allows you to customize a workstation to an individual's tastes. (This is why Debian is great: you get both customizations and easy upgrades.) From a computing-efficiency standpoint this does not make sense; the net-slave computers are better. From an employee-productivity standpoint this makes lots of sense.
When setting this up, script everything, and make sure that someone other than the person who solved the problem tries it and can do it.
This is a lot of work and requires formalizing a lot of things. I would recommend starting by trying to build the departmental server. Then build the things that build the departmental server, destroy it, and verify that it can be automatically rebuilt by someone else, using information stored in source control. After that, start doing the end user workstations.
Good LUCK! If you found any of these thoughts useful, do email me.
Damn this sounds like fun! (Score:1)
Admin'ing would be easy. Hardware support would be the bitch.
1. Network 10/100 Switched network
2. DHCP
3. NIS+
4. NFS (Automounted)(hehe, CIFS?)
5. Kickstart, or some automated install.
6. IceWM (Very small, looks window'ish, FAST low mem,themeable, toolbar. Easy configs. I use it on all my Sun/Linux boxes)
7. Web Interface for Email w/ SSL
8. StarOffice.
9. PowerBroker.
10. SSH+SSHD.
11. FTPD. (OpenBSD Ftp Port? no exec command)
12. XFree 4.0.1 (Need to upgrade mine this weekend)
13. Netscape. (Not sure which version is the most stable)
14. Lot of people been talking about ReiserFS. Might be a good idea.
15. Winframe clients maybe. (Don't know his setup)
16. ENskip? (We use SKIP/EFS on our Solaris boxes, thou with static IPs. Soon to use winskip on laptops via dhcp)
You could be real anal, and lock the basic install down. Don't want these 2500 boxes to be someone's DoS jump point.
It also depends on what these 2500 computers are doing, Call Center, Trouble shooting NOC, Workstations...
Each has its own little problems that need tweaks.
At one of my jobs, I ran the helpdesk for a 500+ call/dispatch center. Administration was quite easy. Everyone ran Xterms with menu-driven apps. /. and surf pr0n. ;)
The largest amount of work for techs was repairing the hardware or file systems.
I only had to fix some print queues, basic unix administration. Then read
Man O man, to be an SA again. :)
-IronWolve
I know you said minimalist desktop, but... (Score:1)
As for distro, RedHat is easy to install, but Debian seems to be more complete, and the packages work! It is definitely easier to upgrade. It will take more digging and learning on the installer's side, but it should install easily over many machines once you've got it set up for replication.
Debian is refusing to distribute KDE over licensing issues, and this situation is likely to continue, but there's nothing to stop you from installing it yourself and then propagating it out over your workstations. Try it out, and their KOffice, and see what you think.
I believe 64meg of RAM will be plenty for your needs. While KDE is a big system, it runs very quickly.
wordperfect for legal use (Score:1)
--
X over a LAN.... (Score:1)
I've heard others say this, but everyone seems to leave out exactly what they are doing over X.
X is a nasty remote protocol - it's very verbose, and generates lots of traffic for any X-related function. Using one of the X-protocol compressor setups helps this quite a bit (it reduces GUI-related X overhead traffic by about 60%).
Indeed, you can run many X terminals over standard unswitched 10BaseT. But only if you're doing non-GUI-intensive apps: xterm, emacs, et al. Given the original post, I'm assuming that they are going to be running Netscape, Applix, WordPerfect, StarOffice, and stuff like Matlab. All generate lots and lots of GUI calls, which have to traverse the network. I think you'll find that 5 users writing a document in WordPerfect generate more traffic than 20 users using Emacs to write the same document.
X is just not scalable to allow everyone to run GUI-intensive apps remotely.
-Erik
Re:use a web based email system (Score:1)
Re:Use the terminals as terminals (Score:1)
If you need to, you can install the Citrix ICA Client on the terminals and/or have a product such as NCD Wincenter running on a Windows Terminal Server if some (or even all) users desperately must have one or more Windoze apps.
We've had Wincenter + Terminal Server + X terminals here for four years, and most users are hard-pressed to tell that they aren't running Windows on their desks.
Again, this is not quite what you're looking for, but it does provide a nice backdoor if it doesn't quite work out.
Re:I'll let others slug it out over desktop ideas. (Score:1)
Re:Use Suse (Score:2)
Also, use DHCP to assign your IP addresses -- don't try to manually manipulate 2500 different IP addresses, that would be a nightmare!
As far as distros go, I would definitely think about Mandrake -- it's very easy to install, and even has the ability to save the disk partition map to a floppy. It's 10 times easier to install than any other distro, even Corel's, which has problems on some machines. Mandrake is RedHat based, but if you look at rpmfind.net, you'll see a very large shift in the direction of the updates -- Mandrake's always come out first, even before RedHat's (like XFree86 4.0.1, for instance).
Also, I wouldn't make these machines available to the internet.. put them behind a single large server running IPCHAINS/NAT and give them all the 192.168.x.x local IP addresses. This keeps them safe and keeps you from worrying about outside cracks on any of those 2500 machines.
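The masquerading setup on the gateway box is only a few lines. A sketch; the subnet is the private range mentioned above, and on a 2.4 kernel you'd use iptables instead of ipchains:

    # on the gateway: enable forwarding and masquerade the private workstation subnet
    echo 1 > /proc/sys/net/ipv4/ip_forward
    ipchains -P forward DENY
    ipchains -A forward -s 192.168.0.0/16 -j MASQ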
KDE 1.1.2 is by far the easiest environment to use. Save yourself some help desk calls and use a minimalist KDE environment on your workstations. It's functionally similar to the Windows desktop, so the transition for your users should be fairly simple.
Good luck, and please write another article to describe your experiences once you are done!
yep yep (Score:2)
you can do something like this with redhat kickstart installs. Once you get them working (fairly trivial, though mkkickstart doesn't really work out of the box) you can install software with about two minutes of human interaction, and then a varying amount for the rest, depending on how many packages are getting installed. You can also add any custom RPMs to the list, so as long as you know how to roll an RPM you're golden.
If you don't know how to roll an RPM then just check out www.rpm.org [rpm.org] which includes a lot of helpful reference material, including the slightly outdated but exceedingly thorough Maximum RPM available in Postscript [rpm.org] or LaTeX [rpm.org].
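A spec file for, say, a site-wide config package is only a screenful. A rough sketch (every name, version and path below is invented for illustration):

    Summary: Site-wide desktop configuration files
    Name: sitewide-config
    Version: 1.0
    Release: 1
    Copyright: internal use only
    Group: Applications/System
    Source0: sitewide-config-1.0.tar.gz
    BuildRoot: /var/tmp/%{name}-root

    %description
    Wallpaper, window manager menus and printer defaults for the standard desktop.

    %prep
    %setup -q

    %install
    mkdir -p $RPM_BUILD_ROOT/etc/sitewide
    cp -a etc/sitewide/* $RPM_BUILD_ROOT/etc/sitewide/

    %files
    /etc/sitewide

Build it with 'rpm -ba sitewide-config.spec' (rpmbuild in later releases) and add the result to your kickstart package list.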
----------------------------
Re:Tailored installation, user/system separation (Score:2)
This demonstrates the problem with Ask Slashdot questions like this -- small town mentality. Yes, the method described above will work, but it simply isn't practical in an enterprise environment. If you need to put a CD in each machine to install it, it's going to be a very long and tedious process. Of course, I don't see a reason why the kickstart method couldn't be used for a network install, too, but I've never tried it myself...
Re:I'll let others slug it out over desktop ideas. (Score:2)
Both:
*) Communication is not encrypted, so the network must be physically secure if you want secure communications between workstations and servers. This should be considered.
Your:
*) Apps run locally, taking advantage of the processing power in each workstation. But resources available to one user are those of one workstation, no more.
*) Each workstation must be powerful enough to run the apps well
*) Requires a somewhat secure way of sharing user information between workstations (NIS is out)
*) A workstation is trusted - it is allowed to mount a filesystem, so NFS is out and Coda is in
*) Printing must be set up for each workstation - maybe not a concern, it depends on printing policies.
*) Upgrading applications should be automated, so that each workstation can be upgraded easily
*) The network is used for file transfers
Mine:
*) Apps run on servers, taking better advantage of shared information (shared libraries and loaded executables). The load is balanced on the servers because of the multiple users, so one user can use more than his share of the resources, if they don't all do it at once.
*) A workstation will only need to run the X server. If it can do more, we won't benefit from that.
*) Communication between servers is physically secure, so you can use NIS
*) Workstations are completely un-trusted. You can use the best performing technology to share filesystems between servers (PVFS/NFS/GFS)
*) Printing is set up on the servers only.
*) Upgrading applications is only a concern on the servers.
*) The network is used for X communications
Any thoughts, Erik?
Re:Use the terminals as terminals (Score:2)
Besides, you will be running a multi-cpu server, and one spinning netscape process will only bog down one CPU. The system doesn't slow _that_ much down in such a configuration, and if the process is killed within the next five minutes by the aforementioned cron job, I think it will be an acceptable solution.
I think you're guessing when it comes to the animated gifs. I'm _pretty_ sure that netscape will upload the images to the X server, and tell it to change the picture, not upload the new frame every time.
But sure, X will load the network. Put the applications on the workstations, and Coda/AFS/PVFS/NFS/GFS/whatever will load the network instead.
From my experience, X works extremely well even on a low-bandwidth high-latency line. Of course, if the network is *that* bad, users will notice, but all in all I wouldn't be so damn worried about X and network bandwidth. I don't think it's a problem compared to what you'll see with *file* transfers from running the apps locally on the workstations.
Re:Use the terminals as terminals (Score:2)
If you consider
*) CPU time spent (seconds)
*) Process state (running or sleeping)
*) Last 1-minute CPU time spent (percentage)
and of course the UID (don't kill root's netscape)
And sure, once in a blue moon the script will fail. But hey, this is *netscape*, after all.
You will spot which processes tend to get stuck over time, and add them to the script. Been there done that, and it works.
Please, if you have a better approach tell me about it.
Re:Use the terminals as terminals (Score:2)
It's memory hungry, but quite a lot of memory is spent on the (huge bloated) executable itself, and this will be shared between all users.
Also, not everyone is running netscape all the time. So if you want 30 Megs for one netscape instance, you can probably do just fine with 10 Megs for each such instance on the server, and only half the users have a netscape active.
Besides, Netscape uses the X server to hold images etc. so the 32 megs on the desktops will be put to good use still. But the _real_ bloat can be nicely kept on the server.
Every five minutes or so, cron should run a small script to check for netscape processes that hog CPU (this would be a heuristic, but it can be done well - I know because I've done it). That way the server can kill dead netscape processes, and your users won't come back complaining about their workstation being slow every time a local netscape process dies.
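A crude version of that heuristic (a sketch only; the 30-minute threshold, the process name match and the naive string comparison on the TIME column are deliberately simple-minded, and you'd tune them):

    #!/bin/sh
    # kill non-root netscape processes that have racked up more than 30 min of CPU time
    ps -eo pid,user,time,comm | awk '
        $4 == "netscape" && $2 != "root" && $3 > "00:30:00" { print $1 }
    ' | xargs -r kill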
Re:Have used Debian on similar rollout (Score:2)
No. Our systems were only just starting to be deployed (however we had 2 or 3 contracts for 20,000 odd screens each that we had to fulfill). We had 20 or so in SE Asia and a heap spread around the USA - maybe 40 or so (I believe we had a demo box in Times Square, NY). All these boxes were running Linux so maybe you got the BSOD screen saver :). BTW, believe it or not, all the rendering software was written in Java. Worked surprisingly well too (decent performance when doing Video work etc).
Have used Debian on similar rollout (Score:2)
For this the installers decided to go with Debian for the customisability (can strip to minimal feature set easily compared to RH and others) and easy upgrade capabilities for very large installations.
As a personal workstation I prefer Mandrake over everything else. I have used RH (and am a former Slackware devotee) at work for the development machines and decided to give Mandrake a burn on one of the home boxes. Much prefer the feel of the setup (even though it is sort of a recompiled RH). RH seems to install just about everything, even when trying to do minimal stuff, whereas Mandrake does everything nicely. As a bonus they compile everything with the pentium (-m586) options, so if you have only pentiums and above then this should give that little extra boost in a small memory footprint environment.
Footnote: The reason I left was that due to some political bullshit the VC people decided that they really wanted to use Windows boxes as the display hardware right at the last minute (ie a month before we'd finished the software development). So, if you are walking through a train station or shopping mall and the screen gets a BSOD - blame the fscking VC's that don't have a bloody clue. They destroyed an extremely profitable business model and wouldn't surprise me (and the rest of the developers that left) if the company now went broke within 6 months.
My suggestions (Score:2)
Oh, I can't stress NFS/NIS enough. You could use LDAP or Kerberos (or NT domain, I think) based authentication with Linux, too, but NIS is the easiest to set up, and may not be as sexy as LDAP but is just as good. Also, I'd suggest using autofs for all the NFS mounts. And splurge on the servers; NFS/NIS will make your desktops trivial to swap/repair without interrupting work if something goes wrong, but you want 99.999% uptime on the servers.
I don't like Caldera. To be fair, I haven't used it since 1998 or so; it may have improved.
NFS is a fine filesystem, by the way, as long as you have root on all the machines using it and don't have to worry about packet sniffers in between those machines.
Make sure you've got 10/100 NICs in all the machines. The cards aren't more expensive than plain 10baseT, really, and the hub prices are dropping quickly enough that if you don't start with a 100baseT network you'll want one soon. You'll definitely want 100baseT connections to the servers from the start.
32MB should be fine for WindowMaker (although you'll want lots of swap with an office suite and Netscape running); 64MB (still with 64+MB swap) would probably be perfect with any desktop. I've got 128MB on my home machine, but that's cause if I close Netscape I want it to reopen entirely from disk cache.
Why pick your users' desktop? It's not like you're going to have 1GB drives on the clients, so install 'em all, set whatever you prefer as the display manager default option, and make sure the desktops boot to kdm or gdm.
I assume you're worried about security? I would turn off most network services (on Linux boxes in front of the firewall at my work, there are no UDP ports and only the ssh TCP port open); you won't need them on clients, but most distros will default with them on anyway. Also, make sure you have the BIOS set to boot only from HDD, a BIOS administrator password, a LILO restricted password, and go-rw permissions on lilo.conf. It's shocking how few people do this; I'd say 99+% of Linux computers in the world let you get root with no password by simply rebooting.
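The LILO part is two lines in lilo.conf plus a chmod (the password is obviously a placeholder):

    # in /etc/lilo.conf: only ask for the password when someone passes
    # boot options like 'linux single'
    restricted
    password=NotTheRealPassword

    # then protect the file and re-run lilo
    chmod 600 /etc/lilo.conf
    /sbin/lilo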
kickstart (Score:2)
diskless X Terminals (Score:2)
http://www.ltsp.org/
http://www.solucorp.qc.ca/xterminals/
for more details.
Re:Tailored installation, user/system separation (Score:2)
Apt is a sysadmin's dream (Score:2)
Have your own apt server, and have each desktop regularly fetch updates from it. Then when you want to roll out a new update, all you need to do is test it, put it on the apt server, and all of the desktops that are using it will update themselves.
This is very convenient for getting rid of any need to visit individual desktops to figure out who is using what custom packages. If they are using it, it gets updated. If they are not, it ignores the update.
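On each desktop that amounts to pointing sources.list at your own box (the hostname is a placeholder) and letting a nightly 'apt-get update && apt-get -y upgrade' from cron do the rest:

    # /etc/apt/sources.list -- only the in-house apt server, nothing external
    deb http://apt.internal.example.com/debian stable main contrib non-free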
:-)
Cheers,
Ben
Eek. You're not in ops or IT management, I hope (Score:2)
A clarification on Apps... (Score:2)
Everyone seems to be taking a different view from what I had in mind when I said:
First off, you don't want any data locally. That's right. I don't care who has the workstation, the only thing sitting on the local disk should be the OS. All user files, and major applications should be sitting on a remote filesystem.
Obviously, this means I wasn't clear in my wording, since everyone seems to miss what my intention was.
I'm not in favor of remote execution of applications. For reasons I stated later, running X over a LAN isn't a scalable choice.
Rather, what I was trying to suggest was this: only the OS should be stored on the workstations' local disks, while all applications should be stored on a remotely-mounted network file system. Such apps directories would be mounted on the workstation, then the apps in them run locally.
I don't care how slick an apt-get or rsync or rdist setup you can make. It's still far more complicated than having all commonly-used applications stored on a central file server. It's then much easier to do upgrades, and also much simpler to make multiple versions of the same app available (this is extraordinarily difficult (and/or clunky) to do locally).
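With autofs that is about two files' worth of config (server names and paths are placeholders):

    # /etc/auto.master -- mount application trees on demand under /apps
    /apps   /etc/auto.apps  --timeout=300

    # /etc/auto.apps -- one line per application tree on the file server
    applix       -ro,soft,intr   fileserver.example.com:/export/apps/applix
    wordperfect  -ro,soft,intr   fileserver.example.com:/export/apps/wordperfect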
I hope I'm a bit clearer this time.
Thanks for the feedback, folks!
-Erik
distribution does not matter much (Score:2)
We do this for 5000 students here. The distribution does not matter much, since you really want your own install mechanism for roll outs.
The easiest, pre-packaged way is using Drive Image, a tool designed for just that. If there is also Windows on these machines, you really want this.
But if it's only Linux you can create your own auto-install process.
Many network cards like the 3c905C have a network boot ROM, but you can also work with a boot disk.
The install mechanism can be very simple:
Boot a kernel with the necessary drivers, get an IP via DHCP and a filesystem via nfs-root, partition the hard disk, create filesystems + swap, put your "image" (can be a tar ball) of the software you want on the hard disk, change some things like the hostname, install the bootloader, done.
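Spelled out as a script run from the nfs-root environment, the whole sequence fits on one screen. A sketch only; device names, the saved partition table and tarball paths are placeholders, and the hostname file location is distribution-dependent:

    #!/bin/sh
    # minimal auto-install: partition, make filesystems, unpack the image, install LILO
    sfdisk /dev/hda < /install/partition.table    # table saved earlier with 'sfdisk -d'
    mke2fs /dev/hda1
    mkswap /dev/hda2
    mount /dev/hda1 /mnt
    tar xzpf /install/image.tar.gz -C /mnt        # image lives on the nfs-root share
    echo "ws-042" > /mnt/etc/HOSTNAME             # or fetch the name from the hostname server described below
    chroot /mnt /sbin/lilo
    umount /mnt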
Managing hostnames could be done via a small client/server system: some server gives out the hostnames, and the client acquires one from this central resource. It's really easy to do this with a CGI script and GET or wget, and it's scriptable.
If your hardware is not all the same, you can detect some stuff by parsing the kernel log from the boot process. lspci (and some grep commands) is a big help with PCI cards, e.g. VGA.
Building a base system to install on all machines is "easy": install your favorite distribution and the software you want, and tune everything till you are satisfied. Then build a tar.gz of everything and put it on the server. Grab the partition table with sfdisk -d (sfdisk can use this output to create the same partition table on a different system), and you are fine.
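Capturing that image and partition table on the master box is equally short (paths are placeholders, and you'd add more excludes to taste):

    # on the tuned master machine: snapshot the filesystem and the partition layout
    sfdisk -d /dev/hda > /tmp/partition.table
    tar czpf /tmp/image.tar.gz --exclude=/proc --exclude=/tmp/image.tar.gz /
    # copy both files to the install server's nfs-root share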
You could also install some hooks in the image that will run at the next boot and delete themselves. These hooks can fire up X11 and ask stuff like hostname, dhcp/manual IP, and all this. gtk/perl/glade is a big help, or tcl/tk or whatever you use.
A roll out, a mere installation of everything, is very very easy. The mechanisms have been widely known for more than 10 years, and they do not differ very much from a Windows rollout.
But the really hard part is the maintenance. Software updates on Linux don't go that easily. You can't use the distribution mechanisms, since they might fuck up (like some Debian packages asking for [ENTER]).
I found a big friend in rsync. So, the software update/installation side needs some work, most importantly if you have lots of different combinations of software on the machines.
It gets harder with the hardware: after some time hardware will fail, and people will replace it with different hardware. All the distributions know how to do autodetection for installation, but there is no tool to do it every time the machine boots. You don't want someone to edit some config file because a serial mouse was replaced with a PS/2 one.
But the hardest part is a good configuration for lots of users, if they have different backgrounds. Sure you can use skel-like mechanisms, but face it: they suck, they are very ugly hacks. And lots of applications don't have good config files in
It would be really nice to have some Windows features for people who want them, like hardware detection at boot time, or a "run this the next time Linux boots, but only once" mechanism, or some automatic configuration of IPsec (like the Windows "add to domain"), and lots of other stuff.
I looked a bit at some Windows software where the user can pick the software he wants and gets it installed. If there are updates, they are listed and installed when he wants them. Admins can create and configure these software packages and updates and can put a lot of magic in them. And it all works without the user having (root|administrator) rights. Linux could use some of this stuff for big desktop users.
Holy Moly (Score:2)
1. Standardize hardware so you can have a reference box the admins can work with and then duplicate the settings on the server for download.
2. Set up an NFS server to store the users' personal files, with a cron job on each client sending to a particular server so you always have online backups of user data.
3. For security on the workstations disable everything except needed remote access permissions and services. Let users log into the main server to get personal files using FTP or HTTP rather than their personal box at the office.
4. Set up an office wide intranet with lots of online help files and HOWTOs, make your admins rewrite or write HOWTOs that pertain to the hardware and software you're going to use. Being able to fire up netscape to get help rather than call the IT department will save everyone a few headaches.
For actual software on the workstations you ought to probably use the most stable version of XFree you can get away with; 3.3.6 and 4.0.1 are both pretty stable, although 3.3.6 is a bit better documented and its security-related bugs are documented a bit more. Use IceWM instead of a fancy shmancy one; it's both stable and fast. You don't need a bunch of extra stuff in the background. I personally feel that GNOME makes the most efficient use of the space on your screen, but KDE works just as well. If you really want to be hardcore, load up KDE's widget libraries (or GNOME's if you're using KDE) so you have a better range of choices for graphical apps. Applix Office 4.x is a very good suite that has the best file compatibility I've seen. I hope that helps a bit.
Re:What hardware does one need to run X remotely? (Score:2)
Re:Two words (Score:2)
Re:Don't forget VNC (Score:2)
Re:Holy Moly (Score:2)
Re:Don't forget VNC (Score:2)
startoffice (Score:2)
As far as Debian, Caldera, or RedHat goes, it depends on your admins. All have package management, and rpm vs dpkg could be debated till the universe collapses; the truth is it depends on what you prefer. I personally found installing Debian to be tough because the last version I tried did the dependency checking at each step where RedHat does it after you select the packages. At least that was my experience. 64 Meg of RAM is plenty for Linux boxes.
alien will convert packages between the two systems, and gnorpm and kpackage will handle rpms and kpackage also handles debs as well.
My personal feeling on the deb vs RH debate is that many companies look at RH because there is a company behind it, whereas with Debian there is not. Yes, there are Storm and Corel, which are based on Debian, but if you look at the different software ports that are done to Linux, many large companies go with RH first. Just look at IBM. IBM released their ViaVoice technology for Linux (you can buy the software for 59.95 or so) and they recommend RH 6.0 or later.
Others may feel different and they can, but this is my opinion and experience. Incidentally I have been enjoying RH since I first tried 5.1
send flames > /dev/null
Re:Selection (Score:2)
I do like Word Perfect a little better than the word processor in StarOffice, but StarOffice is more than usable.
At any rate, how many people actually use more than 10% of the features in their office suite? The argument that you have to pick the absolutely most feature bloated office suite just doesn't make much sense when you consider that most people won't ever need half of it.
Most people could get everything done they need to using something as spartan as Applixware... Maybe faster, since they wouldn't be tempted to twiddle with as many frills or have to wade through so much extraneous junk.
Kickstart (Score:2)
Regards,
Re:diskless X Terminals (Score:2)
There are even utilities that allow you to load-balance over a cluster of servers. Say a cluster of 10x128Mb servers with process load balancing enabled. Running X clients remotely will mean that you can probably get away with most clients stuck at 32Mb and some swap.
As it seems you already have the workstations there, I can't see any reason to remove the hard disks. By running the X servers locally you will save what might be valuable network bandwidth for the applications.
Wow! Seems really interesting project. I have to say however that I'm not a "Linux on the Desktop" advocate yet... sure - I run it on my workstation, but I like ordinary users to use what they already know.
Re:use a web based email system (Score:2)
Re:Tailored installation, user/system separation (Score:2)
I think that Red Hat Kickstart [redhat.com] is exactly that. It lets you script the answers to all the questions and choices in the installer and perform a fully automated install. Basically, you put the script on a boot diskette and put the cdrom in the cd drive and off you go...
Re:User Community? (Score:2)
Agreed. My recent frustration is that I am one of these technical users, trying to convert myself to a business user. I have always administered my own machine and knew everything that was on it. I enjoyed it.
Now, I'm more into development and integration and my time isn't worth doing administrative level tasks. So, I fall behind in knowledge, but still think I can fix things, usually just making it worse. Add to that that my workstation is running *shudder* NT and I have always had Linux on my workstation. I'm trying to learn to trust the support group to solve my problems and get my real work done instead of spending hours breaking my workstation.
It sucks because I feel like those jerky kids in the support group know something that I don't. Oh, well. I'm getting paid more than them.
Re:I'll let others slug it out over desktop ideas. (Score:2)
This is both totally correct and totally, utterly wrong, because you forgot the most important part of making this work, which is caching. X-terminals and diskless workstations were tried for a while, and they sucked. They sucked because they were slow, and they were slow because they were getting their data off the server. That hasn't changed, either. Available bandwidth has increased, but so has appetite for bandwidth. The solution now, as it has always been, is to cache at the clients for performance, which means that the clients should and will have data - just not authoritative copies of data.
The problem is that, to make this work, you need to use a protocol that maintains all the semantics of local file access, most importantly cache coherency but also things like locking. This is hard to do efficiently, especially when you have to consider things like recovery and failover, and so you don't often see it done well (or at all). NFS just punted on cache coherency, which is why everyone but a few people who've made careers out of implementing NFS agrees that NFS blows chunks. AFS/DFS/Coda were at least designed to do this stuff right, but IMO - I'm a distributed and shared-storage FS designer/implementer - don't do it as well as they should and have suffered acceptance problems because of second-order technical (and some non-technical) issues. Sprite at Berkeley and Plan9 at AT&T have shown that this kind of thing can be done, but it has yet to be done in the context of a commodity OS. Some might argue that it can't be done in such a context, but I disagree. It is in fact one of my goals to create just such a filesystem...if I can ever convince an employer to let me, or find the means to do it on my own.
Re:Use debian. (Score:2)
Re:Use debian. (Score:2)
Um, that's silly... Imagine this scenario: a package breaks networking. Oh sh*t! You have to fix 2500 machines manually and delete your inbox.
They would have to try any new packages on a trial configuration, and wouldn't want to risk upgrading more than once a month
I know Debian stable is really good, but any sysadmin with that much responsibility has to apply due diligence
debian apt suite of tools! (Score:2)
Debian has all sorts of sexy tools for this kind of thing in their distro. dpkg-repack can repack an installed system into a distribution CD that installs exactly that system. Debian can be a little more work for one system, but you can automate your whole show with the tools provided... that's worth it.
Look here [debian.org].
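A related trick, if all you need on the clones is the same package set rather than repacked .debs, is dpkg's selections list (a sketch, separate from dpkg-repack itself):

    # on the master box: record which packages are installed
    dpkg --get-selections > selections.txt

    # on each clone: feed the list to dpkg and let apt pull everything in
    dpkg --set-selections < selections.txt
    apt-get dselect-upgrade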
Re:Linux (Score:2)
$300 * 2500 = $750,000
Yeah, I would really use Windows 2000 here, with the expensive support contracts, tendency for users to save their files unsafely on the local machines, etc. Linux, or FreeBSD even, will allow the sysadmin to tightly control what can be done on each machine, and all the users will save automatically to a backed-up server all the time, without knowing it. Add on the cost for a few Win2000 Servers, and licenses for Windows Terminal Servers, and more, and you could be doubling the cost, for no appreciable reason.
Each machine will be stripped down; FreeBSD is great for this, and I am surprised that no-one has mentioned it! Install only what you need, and it runs great on 32Mb. Try out the different window managers on the users, although Window Maker should be good enough - make sure that the WordPerfect icon is on the right hand side! Lawyers use WordPerfect, so use WordPerfect Office, the only real cost to the system, and still cheaper than a 2,500-user license for Office 2000.
I assume most machines are different in configuration in some way, so the disk image system isn't going to work too well, unless you can easily classify machines into large groups with the same config. Tie down the security, obviously, and make sure that printing will work fine, and you should be set.
Make sure that there are some decent fonts on the systems. Most Unixes still barely know what a font is, so Freetype or xfstt is a necessity.
Also install some desktop games! Sokoban and Mahjong would be good (more intellectual), as opposed to Solitaire and Minesweeper (for middle managers) :-)
Re:I'll let others slug it out over desktop ideas. (Score:2)
It sounds tempting - put everything on servers, it's much easier to maintain. It's true - from the admin point of view, having everything on the server is much easier. It's also much easier from that point of view to lie down and go sleepytime and forget about the users....
Problems with central apps:
Software needs unique to small groups - they are often left out in the cold. Typically, fighting starts about putting special-need apps on the central servers. End result - they can't do their work.
And, network delays - as has already been pointed out, employees will sit staring at useless screens waiting for stuff to happen.
Given good cloning/replication/remote admin, you shouldn't need to put apps on a central server just so going from version 2.5 to 2.6 is easier. You ought to be able to set up a script that remotely updates the necessary applications across many workstations.
Don't ask me how though - I'm just a "user".
Re:Use debian. (Score:2)
That said, Debian stable is really darn stable... And the apt tools make maintenance trivial.
Ray
--
Re:How much are you saving? (Score:2)
-- Abigail
Re:use a web based email system (Score:2)
But you don't want to ruin your corporate image by sending out the drek that Netscape and MSIE produce.
central point for scanning for viruses ect.
That's why you have a global incoming mail server.
central point for security.
Ditto. Besides, you do have firewalls, don't you?
less support needed instead of having to run around to each machine to fix things all the time you have
Which you also achieve with a global mail server. Of course, you will still have clients. Even web based email needs a client. (But you only install one client, on your NFS or AFS, right?)
one point to manage
Those are features of a central mail server, and a shared filesystem. Not of web based mail.
ease of adding and removing users
And that is complicated otherwise because of what exactly?
Users like the ability to bring up an email on another machine when they are working in a group setting.
IMAP and even POP will do that too.
-- Abigail
How to roll out 2500 PCs at the same time (Score:2)
roll out 2500 secure Linux boxes (or any other OS) at the same time.
Norton Ghost [pcanywheregroup.com] (originally written by Binary Research, a New Zealand company) is a pretty sweet piece of software that will let you "clone" a disk partition, even over a network to multiple clients simultaneously using IP multicast. This means you can set up Linux on one workstation, install all your software and security patches, link up all 2500 machines by network, insert a Ghost boot disk in the other 2499 machines, then copy your complete installation - software and all - to all 2500 machines at once.
Re:I'll let others slug it out over desktop ideas. (Score:2)
If you are actually using machines designed as workstations (thin clients), I agree. But if you are using PCs, then isn't this a huge waste of the processor power, hard drive capacity, etc. of a desktop? I'm a user, not an administrator, so of course my main concern is that I have the apps I need to get my work done. This means that I don't want to wait on a slow network while I'm using an app. I also don't want to sit and stare at a useless pile of parts if the network is down (this has happened quite a bit where I work).
I realize that having apps installed on 2500 hard drives might be a nightmare, but you shouldn't lose sight of why you have those apps in the first place. Those apps are there to support the needs of the end user, not to make the admin's job easier. After all, that's why you get paid the big bucks, right? :)
Is there a way to remotely manage apps on the hard drives? If not, maybe there should be. How about remote power-on? I mean, as long as those PCs are turned on, you should have remote access to their hard drives, and you shouldn't have to go around and turn on every one manually. Actually, I think this would be a great idea anyway. I'd love for my computer to be on and ready to go when I walk in at 8:10, er I mean 8:00. But then again, if your network never goes down, and those apps will run from the server as fast as they would run from a desktop, then sure, I agree, buy thin clients and put everything on the server. But if you work where I do, that's the last thing you want.
----------
AbiWord [abisource.com]: The BEST opensource word processor
I use rsync to keep the machines up to date (Score:2)
Whenever I change something on the "master client" I fire up a script I wrote to rsync all the other machines. For 40 machines, this takes about 2min per machine (unloaded 100MBit network).
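The script itself is nothing special. A sketch of the idea (the workstation list and exclude file here are just placeholders):

    #!/bin/sh
    # Push the "master client" filesystem out to every workstation.
    # /etc/rsync-excludes lists machine-specific paths that must not be
    # overwritten (/etc/hostname, /etc/fstab, /var/log, /proc, /tmp, ...).
    for host in $(cat /etc/workstations); do
        rsync -aH --delete --exclude-from=/etc/rsync-excludes / "root@$host:/"
    done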
So: I have begun to document this at www.linuxfaq.de [linuxfaq.de] (German, though). Please tell us what you decide to do.
Have fun! :)
Re:Corel (Score:2)
Easy to install? You can't select individual packages, it installs LILO to the MBR whether you want it or not (I don't - I dualboot already, using XOSL [xosl.org] and LILO in the root partition), and it's slow as hell. Oh, and only the paid-for version has WordPerfect 8 with it, and there's not much other software (not even a complete KDE) with it either.
Why has no-one suggested Mandrake? It's got the fastest RPM-based installation I've seen (and RPM has some nasty inherent delays after it's finished installing files), it allows individual package selection, it's got StarOffice (which is better), it's based on a version of Redhat and so has all the utilities you've come to know, and is supplied with Blackbox, my favourite WM. Small, takes up almost no memory whatsoever, and exquisitely stylish, especially in the blue mode. (SuSE supplies an outdated version - watch for this.) Also, it supplies network install, which is a plus, and can be installed very small.
--------------------
This message is not written by an employee of any Linux distribution firm, which is obvious as, at the moment, I am on a student footing.
Re:Use the terminals as terminals (Score:2)
dd + netcat = ghost (Score:2)
Disadvantage: requires a modicum of shell scripting knowledge.
Advantage: free.
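A rough sketch of the idea (device names and port are illustrative, netcat option syntax varies between versions, and unlike Ghost this serves one client per connection rather than multicasting):

    # On the master machine: stream a compressed image of the disk
    # out over TCP port 9000.
    dd if=/dev/hda bs=1M | gzip -c | nc -l -p 9000

    # On each target machine, booted from a rescue floppy/CD:
    nc master.example.org 9000 | gunzip -c | dd of=/dev/hda bs=1M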
Look at a Windows Terminal Server for windows apps (Score:2)
This lets you keep the coolness and control of having linux on the desktop but lets you run a "thin client" to a Windows machine to run all your windows only apps. It can also be the thing that keeps the Pointy Haired Boss from deep-sixing your linux project because of some particular app.
The client protocol is also very bandwidth-lean, and depending on the app you can get 40-60 users on a dual proc PIII-800 WTS machine. The licensing for a Windows Terminal Server is nightmarish, but it might be the only solution to some real business problems.
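On the Linux side, the client end can be as thin as a wrapper around an RDP client such as rdesktop (or Citrix's ICA client if you go the Metaframe route). A sketch, with the server name invented:

    #!/bin/sh
    # Full-screen session on the Windows Terminal Server; exact
    # rdesktop options vary a little between versions.
    exec rdesktop -f wts1.example.org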
Re:I'll let others slug it out over desktop ideas. (Score:2)
This is an honest question, I'd like to know. I'm a big fan of the client-server approach... However, till now my only experience has been with a very underpowered SGI/Irix server serving way too many dumb terminals and a Sunray lab designed for 30 stations but with only a few concurrent users. Two extremes.
--
About WM (Score:2)
I'd suggest using IceWM [sourceforge.net]. It's FAST, only requires about 1.5MB (my current usage is 1.3MB) and by default it looks and feels like Windows. You're switching from Win, right? Using a WM that looks and feels the same helps a lot. And with only 32/64 megs, you'll want something that doesn't use half of it just to give you pretty icons.
If you need to ask... (Score:2)
I strongly suggest you get a Unix/Linux consultant (perhaps from SCO) to come in and write something up, or even better, hire a good Unix/Linux admin.
That being said, I'd go with RedHat, since it's pretty well supported and has a large presence on the web for QA sessions.
I'm rolling out about 300 desktops "soon". I'd love to do it with my preferred desktop -- HelixCode's gnome and debian -- but the install is just a little over the top.
try replacing Enlightenment with Sawmill/Sawfish.
32 MB might not be too good if you plan on running any real apps.
Re:Kickstart (Score:2)
I've been using kickstart+DHCP+NFS since RH5.2 and, despite its annoyances, it sure beats the hell out of the Slackware "jumpstart" we hacked together here.
RH6.2 fixed a lot of little bugs, so I'd give it a try again.
As far as a validator goes, I'm not sure exactly what you mean, but the RedHat installer does provide *A LOT* of debugging output on virtual consoles 2, 3, 4, and 5. Also, once this install gets on its way, a shell is opened up on VC2. That's all I've ever needed.
And the post installation hooks are very powerful. I won't go into that here, but if you'd like to see what we've done here at the Computer Science Department at the University of Minnesota, send me an e-mail.
Re:Kickstart (Score:2)
Python errors are usually the result of using the wrong boot disk. If you're doing a network install you need to use the bootnet image, otherwise use the basic boot image.
If you're getting any other Python errors, there's probably something wrong in your kickstart config file. Otherwise, maybe you have actually stumbled upon a bug.
I've only used Kickstart+DHCP+NFS installs. So I just put in the stock bootnet floppy, boot with "linux ks", the installer goes off to the DHCP server to get the network config, brings up the NIC, mounts up the directory with all of our kickstart config files via NFS, reads the particular config file for that client, mounts up the distribution via NFS, and the install begins.
There was a bug in 6.1 where the "next-server" option obtained from the DHCP server didn't work. This forced you to put the kickstart config files on the same server as the DHCP server. A minor annoyance, but it's been fixed in 6.2 now.
As far as installing third-party software, there's nothing that says you can't do that with post-install scripts. We do, and it works great.
Our post install scripts even install all of the updates.
There's nothing wrong with compiling from source either. That's what the post-install scripts are for.
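For reference, a stripped-down kickstart config of the sort described above. This is only a sketch - directive and package group names shift between Red Hat releases, and the server names, partition sizes and password hash here are invented:

    # ks.cfg - minimal NFS network install, booted with "linux ks"
    install
    nfs --server install.example.org --dir /exports/redhat
    lang en_US
    keyboard us
    network --bootproto dhcp
    rootpw --iscrypted $1$replace$withARealHash
    timezone US/Central
    zerombr yes
    clearpart --all
    part / --size 1500 --grow
    part swap --size 128
    lilo --location mbr

    %packages
    @ Base
    XFree86

    %post
    # post-install hook: pull site-specific packages and errata
    rpm -Uvh ftp://install.example.org/pub/updates/*.rpm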
Define requirements better (Score:3)
How much document sharing will there be with organizations that use other (read: Microsoft or Corel) office software? For interoperability and the shallowest learning curve for MS Office users, Corel WordPerfect Office and StarOffice are the best choices. Both will read existing MS Office files well enough, and will export new files in that format well enough, too. Neither is up to the task of heavy back-and-forth collaboration with MS Office users, but that's true of any office suite on any platform. StarOffice can't deal with most WordPerfect Office files. This would be a non-issue under 98% of circumstances, as so little of the world really uses WordPerfect anymore. However, as a state court system, you're part of the 2%: WordPerfect still has a strong presence at law offices, so interview a representative sample of users and managers to find out whether they currently do receive a notable number of WordPerfect files via e-mail or on disk. Frankly, they probably don't, instead getting them via fax or in hardcopy. So it may well not be an issue.
This answered and all other things being equal, I'd opt for StarOffice; Sun is in better financial shape these days than Corel, and bulky though StarOffice is, it's also natively written for Unix/Linux. It's also got more features that suit it for network and large-environment use. Both Corel and StarOffice are available in identical versions for Windows, so the laptop brigade can work seamlessly with the terminal crowd.
For another thing, many job functions don't require an office suite at all. Don't assume everyone needs one, and don't just reflexively give one out, even if it's free of license fees like StarOffice. If someone just sends simple faxes and email, that's all they should be able to do. If someone simply accesses an AS/400 or mainframe and works with e-mail, they need access to nothing more than a web browser for e-mail (or perhaps Netscape Communicator with its IMAP mail support), and a tn5250 emulator. For sending faxes, use email-to-fax and fax-to-email gateways. Hylafax is your friend. And incidentally, StarOffice has nice hooks for printing and "emailing" through networked Hylafax servers. It can also sync with Palm gizmos.
For more complex environments, StarOffice has some further advantages. Enterprise-caliber support contracts are available, as are user and administration courses. Macros and scripts for it can be written in Javascript or in VBA. MS Office VBA scripts themselves won't work, but the skills some users and managers may have can be leveraged very easily. For another thing, it can be scripted with and interact with Java. So what? Ah, here's the nifty enterprise-caliber part: not only can StarOffice 5.2 access ODBC databases, it can also access any JDBC data source--which means pretty much any database on the planet, regardless of OS. In addition, you can take advantage of Java toolkits and SDKs for all sorts of things. For example, IBM has a toolkit for Java access to AS/400 client APIs. Want to add a menu item to StarOffice for retrieving data directly from a mainframe into a spreadsheet? Or linking calendar items (have I mentioned StarOffice's Outlook-like group calendaring?) directly to AS/400 screens? You can. In ways that can be reused on other platforms and environments.
What Linux distribution you use is the least important piece of this. Choose something that offers easy creation of kickstart disks, to ease installation on new machines, and that offers good, reliable security upgrades. Me, I'd go with something that offers decent commercial support contracts. The terminals won't need any such nonsense, but it sure is nice to know you can get an engineer on the phone if a $50,000 server has a memory leak you can't squash. There are several ways to do this: you can go with a Linux vendor that offers contracts, like RedHat, or with a hardware vendor that sells Linux OS support on its hardware, like an IBM or VA Linux. For this reason, Mandrake comes out weaker than RedHat. It's not about the quality of the default installer or the number of "extras" on the CD. It's about maintainability going forward and the quality of support you can buy for those times when Usenet and online documentation cost too much downtime.
Depending on what you need in the way of file storage or backend applications, you may well want to look into a commercial Unix (say, Solaris, which runs StarOffice splendidly) on the backend. Linux is great, but if your needs really call for a SAN (and it doesn't sound like they do), or you want to go with one gigantic 24-CPU server instead of several 4-CPU ones, Linux may not cut it. Don't compromise your implementation for politics. Keep in mind that at this level, Linux is Unix is Unix, and that things like data and email and so forth can move fluidly between flavors without a moment's thought. Linux can certainly support 2500 users well; it can support tens of thousands of users well, as most large universities can tell you. There are also things it can do that something like Solaris can't, such as attach seamlessly to Windows file shares. But plan your applications and your network before you make the final OS decision, so the OS doesn't force compromises.
And another nice thing about these Linux X terminals is their flexibility. Just need green-screen VT102/3270/5250 access in the mailroom? Can do. Need to mix in some Windows-only applications after all? Set up a Metaframe server and give the X terminals access to an ICA client. Want users to be able to save to floppy, or attach a barcode reader? You can. With no changes on the terminals themselves.
A matter of taste. Centralization. (Score:3)
I use Suse on the desktop and found it to be *very* practical. But mostly, the question "what distribution shall I use?" comes down to "that's a matter of taste".
There are differences in their target audience, of course. Corel Linux is targeted mainly at beginners, while RedHat, Suse, Mandrake and Caldera try to be useful "for everyone" (they all can be installed and used by dummies but they all offer features for the pros, too). Debian is a little different because unlike the previous distributions, it doesn't offer automated configuration scripts. Again, it is solely a matter of taste if you like Suse's "yast" to mangle your configuration files for you (I do) or if you prefer to edit them by hand. Those people using Debian tell me that they are very happy with it (especially with its easy upgrade process and its security model).
Installing on auto-pilot...
BTW, if your machines are 2500 identical hardware setups, you could easily create one reference linux setup and copy the entire harddisk across the network, using a simple custom bootdisk and the "dd" command. Also, all distributions offer automated install features (ask their support about it) so that you just put in the CD and they auto-install your custom setup.
Centralization...
Reading from your requirement list, you may want to hire a Unix (semi-)professional for the setup. I mean, 2500 machines!
There are a number of Unix features that can make life easier in such a situation, so it's good to have someone who knows how to setup things like that...
Here are a few ways of centralizing things in the Unix world. Each step means a bit more centralization: the server must be more powerful, and in return the client no longer has to be a powerful, fast machine.
- You can use a centralized NIS/YP server for the user and password administration, which will make life a lot easier for your admin. I have never done this myself, but I worked with such a setup at University and it was incredibly practical.
- How about setting up a central file server for the user's home directories? If all user-related information is mounted via NFS, your workstations can easily be replaced and employees can easily move offices. Just log in on someone else's machine and your personal files are right there.
- Next, you could set up a central file server that contains all the application binaries. This makes updates easy and avoids the need to upgrade your workstations' hard disks. (A small client-side sketch covering the first three steps follows this list.)
- And the final step would be to make all those machines pure X-Terminals that only run the X-Server and a local window manager, while the applications run on a central server. I don't know if this is for you, since this requires buying new powerful servers. On the other hand, since 32 MB is more than enough for an X-Terminal, you can avoid buying RAM for 2500 machines.
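Client-side, the first three steps above boil down to a few lines of configuration. A sketch only - host names, paths and the NIS domain are invented:

    # /etc/fstab on each workstation: home directories and the shared
    # application tree live on central NFS servers.
    homesrv.example.org:/export/home   /home       nfs  rw,hard,intr  0 0
    appsrv.example.org:/export/apps    /usr/local  nfs  ro,hard,intr  0 0

    # /etc/yp.conf: bind to the central NIS server for accounts/passwords.
    domain courts.example.org server nissrv.example.org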
Some more thoughts on memory...
If you want all the applications to run locally, you should choose a minimalistic window manager (not KDE and not Gnome - both work with 32 MB, but ask for more) and Applixware. I have tried Applixware and it runs fine on a small machine, but I mostly use StarOffice now. StarOffice is a memory hog, though, so it isn't an alternative for you. (I have not tried Word Perfect, so I cannot judge it.)
Finally...
Good luck with your project, but please expect a few weeks, possibly even months, before everything works smoothly. The sheer size of your network is a true challenge.
------------------
User Community? (Score:3)
The type of user community that you are supporting is very important. I am guessing at two different types of groups: operations and business users.
As for Operations, here are my tips:
Have fun.
Use the terminals as terminals (Score:4)
One big benefit: As the terminals do not hold data, it doesn't matter if they are stolen. Terminals are not trusted.
The X protocol is made for networks, and a 10 MBit/s hose to each terminal would be just great. However, it's not encrypted, so you should at least consider how physically secure your network is, and what the requirements would be.
Then set up one server for each N users. If they are doing web access and text editing, your average "high end but not that high" server should be able to run 15-40 users. Maybe more, but I haven't tried this type of workload myself so I can't say. Anyone?
You will end up with a server farm. Each server should hold a home filesystem locally, and preferably the users with the homes on that local fs should log in on that server. You can choose to let the server export their home fs'es to the other servers as well and share user accounts with NIS, which would let any user log in anywhere. If a terminal is tied to a user and vice versa, there should be no need for a terminal to be able to choose other servers, but if they're not, then the need will be there.
I've done a few such setups, but at a *much* smaller scale. I can tell you that it is a relief to _only_ have to update software on the server(s).
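For the record, turning a 32 MB PC into such a terminal is mostly a matter of pointing its X server at the server farm via XDMCP. A sketch - it assumes the display manager (xdm/kdm/gdm) on the servers has XDMCP enabled, and the host names are invented:

    #!/bin/sh
    # Start the local X server and ask loginserver's display manager
    # for a login box; everything after login runs on the server.
    exec X :0 -query loginserver.example.org

    # Or, with a whole farm, present an XDMCP chooser instead:
    # exec X :0 -indirect chooser.example.org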
Large rollouts (Score:4)
The same principle is absolutely essential for anything more than 100 or so machines (even if upgrades aren't a priority, bug fixes and security fixes will be).
In truth, I can't imagine any distribution would be better suited than any other here, especially if you are willing to write a boot-up script which can download any new RPMs or DEBs and install them. The only problem is making sure they are not "interactively installed". Lots of Debian packages are, but this is easily remedied. In fact, if you used Debian, adding apt-get update && apt-get dist-upgrade to your boot script and setting up your own package repository (a simple FTP folder) would do that for you. You may need to tweak the odd package to force some settings, but that's what your network of 5 machines reserved for testing is for, right?
I'd also go with Sawfish/Sawmill instead of Window Maker. While I'm a huge fan of WM, I think sawfish has a much more desktop-friendly future ahead. It can also look pretty much identical to WM, and some of the other themes are very practical for desktop use. Its memory footprint on my machine is just under 4MB, with half of that as shared libs which lots of other programs are using. Perhaps a choice at login would be useful, especially if offered with something pretty like GDM.
The major issue will probably be support, although that's more likely to be for specific applications than the whole system. I take it that to be entrusted to install 2500 desktops, you know your greps from your seds and are pretty capable of writing some scripts to manage upgrades. If not, find someone who is and pick their brains.
Re:I'll let others slug it out over desktop ideas. (Score:4)
Rather than bogging down the network with remote X apps, *please* investigate Debian's apt-get tool. In some ways, the Debian distribution is a 10,000+ machine distributed cluster of homogeneous systems.
For my home Debian box, all I have to do is run apt-get update; apt-get upgrade once a day, and then my system is homogeneous with the official Debian distribution.
If you put those two commands into your workstations' init scripts (probably with the --yes and --force-yes options) and lock down /etc/apt/sources.list, then...
The biggest disadvantage of using apt-get is that your network will probably get bogged down after you change a large package. The next morning, at 8am, when 1000 people turn on their computers, they'll all be trying to download the same package at the same time, which could be a mini nightmare. If you're a good sysadmin, you'll figure out a good way around it.
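One low-tech way around that stampede: stagger the upgrade with a random delay and point /etc/apt/sources.list at an internal mirror instead of the public archive. A sketch under those assumptions (mirror URL invented):

    #!/bin/bash
    # /etc/cron.daily/auto-upgrade (or called from the boot scripts).
    # sources.list on every client carries only the internal mirror, e.g.
    #   deb ftp://aptmirror.example.org/debian stable main

    # Sleep up to an hour so 1000 machines don't all hit the mirror at 8am.
    sleep $((RANDOM % 3600))

    # Upgrade without stopping to ask questions.
    export DEBIAN_FRONTEND=noninteractive
    apt-get update && apt-get -y dist-upgrade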
The other big advantages:
If you don't need to roll out this installation tomorrow, I'd recommend that you install a copy of Debian (Debian 2.1 [debian.org] is stable but out of date; Debian 2.2 [debian.org] is not quite released yet).
Once you install Debian 2.1, hang out for a while talking to people on the irc channels (irc.debian.org), and get all your stuff configured, then run the command apt-get update; apt-get dist-upgrade, and your distribution will automatically be upgraded from 2.1 to 2.2 (hopefully with almost no user intervention).
This message turned out to be a lot longer than I expected, but there's a lot to consider in your situation. Good luck!
--Robert
debian + replicator! (Score:4)
Tailored installation, user/system separation (Score:5)
The first is important because one of your major costs is going to be support --- this will skyrocket if you use standard distro CDs because they're all based on interactive user choice in varying degrees, and corporate handholding costs money.
The second is important because without the separation, upgrading will become a nightmare over time --- again, this will increase your support costs. In fact, consider seriously the possibility of not holding any user data on the workstations at all, but on a central filestore instead. That simplifies data backup as well as workstation upgrading, because then you can regard workstation state as throwaway.
I'll let others slug it out over desktop ideas... (Score:5)
... I'm a Sys/Net Architect, so guess where my biases are? :-)
Anyway, what you have on the desktop matters (esp. the mechanism you use to clone workstations (you are planning to clone workstations, right?)), but I'll concentrate on something else equally important, and which will affect how you set up the desktops: Network and Backend System Design
First off, you don't want any data locally. That's right. I don't care who has the workstation, the only thing sitting on the local disk should be the OS. All user files and major applications should be sitting on a remote filesystem. Otherwise, you end up with a completely intractable backup and upgrade problem. Trust me on this.
As a corollary to the last statement, you don't want to use NFS as your file sharing method. Hell, even SMB would be better. You want to look at either AFS or Coda. I would recommend the latter, as it's nowhere near as nasty to set up.
As part of Coda/AFS, you are going to have to think about how you design your file server setup. A central bank of servers is tempting, but this tends to be really harsh on the campus backbone, as it puts the workstation relatively "far" from the server, and all traffic has to traverse the backbone. Consider local file servers which may cache user data for replication back to the master server(s) later.
Printing is also a bit of a problem. I heartily recommend the CUPS [cups.org] system talked about here [slashdot.org] a couple of days ago. Have all your workstations spool to dedicated print servers. They don't have to be powerful, but make them dedicated. You won't regret it.
As far as security and other mishmash goes, do the usual /etc/inetd.conf edit, and comment EVERYTHING out. Don't run ANY daemons on the clients (other than what is absolutely necessary for Coda). Have all mail blindly forwarded to a central mail server. As a corollary, use IMAP (preferably IMAP-over-SSL) as your mail protocol. Stay away from local UNIX mail, and POP. And look at running postfix or exim instead of sendmail.
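For what it's worth, both of those steps are nearly one-liners. A sketch (GNU sed's -i option is assumed):

    # Comment out every service in inetd.conf, then tell inetd to reload.
    sed -i 's/^[^#]/#&/' /etc/inetd.conf
    killall -HUP inetd

And the "blindly forward everything" part is a couple of lines of postfix configuration on each client (the mail host name is invented):

    # /etc/postfix/main.cf - accept nothing from the network,
    # deliver nothing locally, relay everything to the central server.
    inet_interfaces = localhost
    mydestination =
    relayhost = [mail.courts.example.org]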
You can think about using application servers (i.e. run X apps remotely) if you want, but realize that this will up the bandwidth requirement, and honestly, you probably can't run more than two dozen major X apps over a LAN before it bogs down completely. That is, you need a local app server with 100Mbit connections to about 25 machines so each can run 1 or 2 X apps remotely.
If you can afford it, and have the time, use LDAP as your user info directory - avoid NIS and NIS+ (the first is horribly insecure, and the second is nasty).
This is a first approximation of what you might do. If you want a serious proposal, I'm available nights and weekends (for a modest fee, of course... heehee)
Good luck!
-Erik
No Beowulf comment?!?! (Score:5)