Ask Slashdot: Linux Login and Resource Management In a Computer Lab?
New submitter rongten (756490) writes: I am managing a computer lab composed of various kinds of Linux machines, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they access either via the console (no user switching allowed), ssh, or x2go. In the past the powerful workstations were reserved for certain power users, but now even "regular" students may need access to high-memory machines for some tasks. Is there a resource-management system that would allow the following? Forbid the same user from logging in graphically more than once (like UserLock); limit the number of ssh sessions (i.e. no user spamming the rest of the machines with distcc or, even worse, running jobs on all of them in parallel); give priority to the console user (i.e. automatically renice remote users' jobs and restrict their memory usage); and avoid swapping and waiting (i.e. with all the users trying to log into the latest and greatest machine, cap the number of logins in proportion to the capacity of the machine). The system being put in place uses Fedora 20 with LDAP/PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and NX management, and a queuing system, but it is not an elegant solution, and it is hacked together. Since these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for them. Do you know of such a system, preferably open source? A commercial solution could be acceptable as well.
aversion therapy (Score:3)
I would do it up A Clockwork Orange style.
The original BOFH stories are a good guide: http://bofh.ntk.net/BOFH/ [ntk.net]
ulimit? (Score:1)
systemd (Score:1)
I believe you can do at least some of that with systemd user sessions and resource restrictions
http://0pointer.de/blog/projects/resources.html [0pointer.de]
User sessions are currently kind of beta-ish but they're getting better / more useful... I already launch emacs and a MIDI synth through it on login, and it works wonderfully (ironically, though, PulseAudio, the other Lennart project that got a lot of flak, doesn't launch through this mechanism yet).
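For what it's worth, on a current systemd you can already clamp a whole login session's slice from the shell. A minimal sketch, assuming uid 1000 and purely illustrative values:

systemctl set-property user-1000.slice CPUShares=512 MemoryLimit=4G
systemd-cgtop    # watch what each slice is actually consuming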
Trust your users (Score:5, Funny)
Trust your users.
Re: (Score:2)
Good grief (Score:1, Redundant)
Re: (Score:2)
Around here some of the public schools just got rid of Pentium IIIs. Not everyone can afford something decent.
Re: (Score:3)
Re: (Score:2)
Do you still have the box your computer came in?
Good, please turn off your computer, disconnect it, and ship it back.
Why?
Because you're too fscking stupid and ignorant to use one. And as to why you even thought you should comment on something that you have no clue about, other than to display your gross ignorance in public, like a baboon's ass, I have no idea.
mark
Re: (Score:2)
I have yet to see a computing environment where the demand for computing power significantly outstripped supply due to antiquated technology, except where the network administrators were practically tenured. In those cases they were gobbling up so much in salary and blowing so much time fixing things that kept breaking due to age.
The administrator even seems to hint that he is trying to fix problems that don't fully exist. "...and it is hacked a lot." Is one of thos
Re: (Score:2)
Is this 1988? The easiest/cheapest solution is spend a couple bucks on decent machines.
Sweet, I've been needing an upgrade myself as well, but there seems to be a strange shortage of people insisting we spend more than a couple bucks on the problem who will actually pay for the upgrade. I'm glad I found you!
250 workstations upgraded to top tier is roughly $200,000 or so. Better make it $250,000 so we can get new LCDs too; these 10-year-old 19" ones are getting a tiny bit of burn-in.
Just go ahead and paypal it to me, and I'll get right on implementing your suggestion!
Re: (Score:3)
Even if all the machines were identical top-of-the-line machines, many of the things that were listed as requirements would still apply.
"Spend[ing] a couple bucks" isn't always fiscally possible in a education or non-profit environment which the computing lab is likely a part of.
Finally, given limited resources, it likely made a lot more sense to buy more lower-end, less expensive machines if they could adequately meet the needs of the majority of users while having just a couple of high-end machines f
Re: (Score:1)
Exactly the last point.
What I dislike the most are users who take advantage of others' lack of knowledge. And this is done either intentionally or unintentionally when rules are not enforced.
I would like all the students (often coming in contact with linux, shell programming and clusters for the first time) to have a fair shot of using the available resources, and not to backstab each other.
Before, everyone could run on the cluster, until I discovered that certain students were g
Re: (Score:1)
Re: (Score:1)
And what's the appropriate action besides root_squash and proper host access control (/etc/exports, TCP wrappers, firewall, etc.)?
It still doesn't do any real authentication.
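True, exports-level control is authorization, not authentication. For reference, though, the host-control part is a one-liner (path and subnet invented):

# /etc/exports
/export/home 192.168.10.0/24(rw,root_squash,no_subtree_check)

Real per-user authentication over NFS means Kerberos (sec=krb5p), not IP matching.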
Platform LSF (Score:2)
Re: (Score:1)
Hi,
another alternative would maybe be sysfera-ds [sysfera.com], but their open source offering seems to lack documentation and features (see here [github.io]).
Need to investigate. Seems to be something along the lines of what vizstack [sourceforge.net] could have done.
Is this all necessary? (Score:5, Insightful)
Seems like you are trying to work out a solution to a problem you don't have yet. Maybe first see if users are just willing to play nice. Get a powerful system and let them have at it. That's what we do. I work for an engineering college and we have a fairly large Linux server that is for instructional use. Students can log in and run the provided programs. Our resource management? None, unless the system is getting hit hard, in which case we will see what is happening and maybe manually nice something or talk to a user. We basically never have to. People use it to do their assignments and go about their business.
Hardware is fairly cheap, so you can throw a lot of power at the problem. Get a system with a decent amount of cores and RAM and you'll probably find out that it is fine.
Now, if things become a repeated problem then sure, look at a technical solution. However don't go getting all draconian without a reason. You may just be wasting your time and resources.
Re:Is this all necessary? (Score:4, Informative)
We did it like you describe. We had some problems with people doing dumb stuff and we just stuck post-its on the monitors describing how to use the "top" command.
[you@server1 ~]$ top
PID USER %CPU COMMAND
1960 you 2.3 top
2457 Bob 97.0 bitcoin
[you@server1 ~]$ write Bob
DUDE! wtf?!?!
etc...
Re: (Score:2)
Re: (Score:3)
Re: (Score:1)
Seems like you are trying to work out a solution to a problem you don't have yet. Maybe first see if users are just willing to play nice.
You'll also discover that once in a blue moon, users do have a legitimate reason to briefly consume much more resources than typical.
Spikes happen. It's normal. Monitor the usage, but don't cap it until your problems are more than theoretical.
Re:Is this all necessary? (Score:5, Interesting)
15 minutes later, someone tapped on my shoulder and asked me what I was doing - I had taken the full processing capabilities for a while. I showed my script - gasp horror, and a 1 second pause was added to the script and I was good to go. Learned a lesson too.
The year before I got there - enough people were learning how to hack the system to crash it that they were having trouble keeping the system up. Their solution - install a button next to each keyboard that when pushed would crash the system. No work was accomplished for a week - then it didn't go down again. We were told about the button, it was rough for a couple days - and then the systems were rock solid.
Kids will be kids - good kids will create a nightmare for you - work to focus that energy in a positive way and good things will result.
Re: (Score:2)
$lisp?? Was this on McGill's RAX/Music or BU's RAX/VPS system by any chance?
Did you look at the PAM modules on your system? (Score:4, Informative)
Some of what you're asking for are ulimit settings - total number of processes, for example. That's pam_limits. Some could also be handled with pam_tally2. Or, since you're already using LDAP, you could use a simple web-based reservation system which specifies allowed login hosts in the LDAP server for however long someone wants to "check out" a machine; that's how I've done it when I've needed to control access to cluster resources.
When you talk about controlling other resources beyond logins, it's generally better to handle it at the application level rather than the OS level if you can. But using ulimits (and again, this can be integrated into LDAP pretty easily), you can restrict resources and apply process priority (ionice and nice are your friends) based on membership in a specific group or another LDAP attribute.
You could, for example, create a "highpower" group per set of machines / per machine (highpower_serverA) and add users to that group based on a checkout system, then define limits on the number of processes they can use, amount of memory they can use, total CPU time they can use, etc in limits.conf based on being in that group or not being in that group.
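As a concrete sketch of that limits.conf approach (group name and numbers invented, not a recommendation):

# /etc/security/limits.conf
@highpower  hard  nproc      256        # max processes
@highpower  hard  as         16777216   # address space, in KB (~16 GB)
@highpower  -     maxlogins  4          # concurrent logins
*           hard  nproc      64
*           hard  as         4194304    # ~4 GB
*           -     maxlogins  2

Note that pam_limits' maxlogins item covers the submitter's "don't log in everywhere at once" requirement directly.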
I'll send you my bill tomorrow.
Re: (Score:1)
I'll send you my bill tomorrow.
Agree. PAM plus some usage guidelines and monitoring should be enough. Stuff like this: http://www.ibm.com/developerwo... [ibm.com] BTW, it feels like being one of the torrent nodes backing up your encrypted files for you.
Just deal with problem users individually. (Score:4, Insightful)
Have these problems actually been happening a lot?
When I first started to help manage a computer lab, I was concerned users would behave really badly and do horrible things. The truth is, very few users did, and we just talked to those users and told them how to behave.
If you get the occasional repeatedly defiant user, locking out their account can be the final solution. But most people (at least at our site) aren't jerks and listen. Most "bad things" are due more to incompetence than malice, and educating students is easy.
Also, as someone with experience in these matters, allow me to recommend AGAINST Fedora for production systems. I like to call Fedora the self-breaking distro; updates break things CONSTANTLY. You're much better off running Ubuntu (even non-LTS is more stable than Fedora) or the RHEL clones like CentOS or Scientific Linux.
Re: (Score:2)
Except that the long-term existence of Scientific Linux is now in doubt, with CERN jumping ship to CentOS.
To be honest, since it became possible to enable extra repositories at install time, the need for a separate Scientific Linux alongside CentOS has mostly evaporated.
...by hiring them. (Score:2)
When I used to work for a university (mid-1990s), our department's sysadmin had gotten in trouble at the engineering school because he had written a script that would log into every machine multiple times until all ttys were exhausted ... so he could run his ray-tracing jobs undisturbed. I heard he got away with it for quite some time before one of their sysadmins came in early and realized something wasn't right.
They told him not to do it, but instead of banning him, they put him to work ... he wrote some
Re: (Score:1)
Hi,
the beowulf clusters we have run either CentOS or SLES. For the development workstations, where newer versions of certain software are needed, I install Fedora.
This means the developers basically run production on the cluster and develop on the workstations.
Since there is always a gap between the two (i.e. CentOS 5 on the cluster and Fedora 16 on the workstations before; CentOS 6 on the cluster and Fedora 20 on the workstations now), when the cluster is updated there is limited breakag
Re: (Score:1)
I've been managing systems with hundreds of well-meaning and not-so-well-meaning scientists for years.
Generally, I subscribe to the school of thought that putting up too many fences does more damage than good.
I know that I myself *can* create trouble on a system in a zillion ways, such that fencing against all of them is almost pointless:
* fork bombs
* malloc bombs
* deliver daemonized processes in the background
The first two you may handle a bit with the PAM limit techniques des
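The quickest way to see whether those PAM limits actually landed in a session is plain ulimit:

ulimit -u    # max user processes: what blunts a fork bomb
ulimit -v    # max virtual memory in KB: what blunts a malloc bomb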
Re: (Score:2)
A lot of this seems superfluous ... (Score:5, Interesting)
If you're giving your users access to the machines, they should be able to use them. And if you can't trust them to use them responsibly, don't give them access.
If it were me, I'd secure the boxes normally, set up some resource usage rules (guidelines?) and see what happens. If problems happen often, then maybe look into something automated to enforce the rules, but if not, then you're done.
As for renicing stuff done by remote users, I'm not sure this is a good idea, but if you want to do it you can renice sshd itself, and to be thorough you can also renice crond (if you give them access to cron/at). But keep in mind that nice (and ionice) can't work magic on an overloaded system: they help, but only so much.
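If you do try it anyway, it's only a couple of commands. A sketch with illustrative values, relying on the fact that children inherit the daemon's niceness:

renice -n 10 -p $(pgrep -x sshd)     # new ssh sessions start niced
renice -n 10 -p $(pgrep -x crond)    # same idea for cron-spawned jobs
ionice -c2 -n7 -p $(pgrep -x sshd)   # lowest best-effort I/O priority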
As for commercial systems, I haven't really seen this be a big problem outside academia. Multiuser *nix systems where different people compete for resources are kind of rare in the commercial sector; the trend lately seems to be to have enough hardware, often dedicated, and to enforce limits through voluntary compliance (and have their boss talk to them if it's still a problem).
That "have their boss talk to them" bit may not work so well for students, but still, I would wait for a problem to appear before I put too much effort into solving it.
Instead, put your efforts into proper sysadmin stuff -- stay up to date on patches, look for problems (especially security ones), make sure backups work, help users with problems, etc. If there's any troublemakers, talk to them, and if they don't shape up after a few warnings, kick them out. (And make sure the policies permit that!)
You can enforce limits on specific users through pam and sshd_config and some other mechanisms, but I'd suggest leaving that for later. Anything you do that will limit what people can do will eventually keep them from doing what they legitimately need to be doing.
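When that day comes, sshd_config gets you part of the way. A sketch (group name and wrapper path are invented; note MaxSessions caps sessions per connection, not per user, so per-user caps belong in pam_limits):

# /etc/ssh/sshd_config
MaxStartups 10:30:60    # throttle unauthenticated connections
MaxSessions 4           # multiplexed sessions per connection
Match Group students
    ForceCommand /usr/local/bin/nice-wrapper    # hypothetical wrapper applying nice/ionice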
A lot of complexity, a little gain? (Score:2)
That sounds like a lot of overhead for a problem that seems unlikely. I've used lots of multi-user linux boxes over the years and never noticed that a few bad users ruined the experience for everybody else. If it's really an issue, think of it instead as a learning opportunity: post concise instructions on proper lab utilization and how to use top, etc., to check whether somebody else is the reason the machine you are using is slow. Then let users police each other.
Re: (Score:2)
I've used lots of multi-user linux boxes over the years and never noticed that a few bad users ruined the experience for everybody else.
I did ... but this was 25 years ago at college when hardware was scarce (we had 1 MB disk quotas!) and the computer system was used to do all sorts of things that people just couldn't do from their own personal computers (i.e. access mail, news or the Internet.)
Users policed each other back then to a degree, but there wasn't much you could do to make a bad user behave unless the sysadmins backed you, and they'd only back you if the user had explicitly broken the rules set down. And often you didn't even know
Technical solution to a social problem. (Score:5, Insightful)
If your users can't play nice together, the solution isn't to treat the place like a prison with automated systems enforcing a hard and fast set of rules.
The solution is for users to create their own enforcement. If some guy tries to take all the resources across your network with distcc, then the people affected should be able to notice that and tell the guy to knock that the fuck off.
In other words, give the users the freedom to break stuff, but also the knowledge to find out who's breaking their stuff. It'll serve them far better than creating a walled garden where someone else has the responsibility to enforce social rules.
Slashdot and Reddit work this way. Neither goes around trying to enforce how people behave; they give the users the power to do that themselves.
Re: (Score:2)
Ha. You make me laugh. People such as yourself have bad memories, or lived in some kind of sheltered environment. Every generation is convinced that the generation after them are the spawn of satan, and when THEY were that age they were all just perfect angels, or at the very least a HELL of a lot better than the current lot of miscreants. The attitude you're projecting has been common for at least the last 60 years.
Uhh.. when _I_ was that age about 20 years ago people were hacking into the computer sc
Re: (Score:3)
Seconded. Except >20yrs and HPUX rather than SunOS.
Police ourselves? Yeah, sure we did. Act like adults? Er, nope. I figured out several ways to crash machines from the console; if someone logged in remotely and started using all the resources, I'd crash the machine and move to another. X was completely unsecured in those days, but they installed a graphical login. Fake login windows, key loggers, fake error windows (make the guy on the better workstation think it's crashed so he moves off it): check.
Best
I would write my own with LDAP (Score:1)
Re:I would write my own with LDAP (Score:4, Insightful)
I would be terrified if you were my co-worker.
Re: (Score:2)
My goodness, what has happened to Slashdot? Have the competent admins been replaced with morons?
NFS homedirs (Score:2)
Back when I worked in schools, one of our techs setup LTSP with NFS-mounted homedirs.
I mentioned that perhaps IP-based host authorization wasn't exactly a secure way of doing things, especially when it applied to both students and teachers/admin-staff.
I was told that it wouldn't be an issue, and that files were perfectly safe.
So some time goes by and a demo is scheduled for the system. My compatriot logs in and... he gets a hot-pink desktop with My Little Pony wallpaper theme. Unfortunately that didn't diss
Server Cluster (Score:3)
Easy solution:
Put all of your systems into one big active/active server cluster. Then everyone is sharing all the resources evenly by default.
Here is a Fedora resource:
http://clusterlabs.org/doc/en-... [clusterlabs.org]
If you really want to have some fun you should try to create a Plan9 cluster. This is a transparent cluster OS that was designed for the purpose of resource sharing.
http://plan9.bell-labs.com/pla... [bell-labs.com]
The problem is what you are using (Score:1)
NX? But you are using x2go? THAT is not NX. Contact the experts, i.e. NoMachine http://nomachine.com/ [nomachine.com]. Only the real authors of probably the most amazing remote access and management tool can help you there.
FreeIPA (Score:2)
Since you are on Fedora already, I'd recommend FreeIPA. It'll give you more than your LDAP+PAM for centralized authentication and authorization, like Host-based Access Control, centralized sudoers policy, DNS, etc.
However, it wouldn't accomplish any of the tasks you specifically asked for out of the box. I was thinking you could write some of these tasks as FreeIPA plugins.
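Host-based Access Control, at least, does work out of the box; a sketch with invented names:

ipa hbacrule-add bigbox-access --desc="access to the high-memory node"
ipa hbacrule-add-user bigbox-access --groups=powerusers
ipa hbacrule-add-host bigbox-access --hosts=bigbox.lab.example.com
ipa hbacrule-add-service bigbox-access --hbacsvcs=sshd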
Re: (Score:2)
Well, if it's Linux, FreeIPA is better because you can take advantage of group policies that are designed to work with Linux. If you use AD, you will get authentication and that's about it. Now if you have Windows+Linux it's a bigger problem. In our lab we went with AD, forsaking the advantages of FreeIPA for our Linux users, but you could also set up both servers with a shared trust. It's a bit more complicated, but this is something Red Hat is trying to develop into a turnkey solution.
Sort of glad my school didn't lock us down (Score:1)
One was a neural networking course which involved programming a computational model and then running 100,000 iterations of it and analyzing the results. We had been given 6 weeks for it because it was going to take at least a week or so to run, but I could not get my model to work for the life of me; working with the professor, I finally got it working the night before the resu
The Cloud? (Score:2)
Limitations (Score:1)
Responsability-linked quotas (Score:1)
The only way I see this happening is if you totally migrate your lab to something like Amazon AWS/EC2 and link each user to an individual account with specific bandwidth and storage (gratis) quotas.
For one, processing power won't be an issue, since that's on Amazon's side and it's virtually unlimited. Now everyone will have a decent amount of the other resources for whatever they need, as long as usage stays inside each user's scope (for which their free quota should have been well defined).
A user abuses h
Re: (Score:2)
I'd just run my own "cloud" instead, using, say KVM. With billing etc. like in the old times.
Virtualization may be your answer (Score:3)
We had a similar issue with our engineers. We had login servers which worked great while they were poorly advertised and woefully underused, but once we had a system in place for engineers to make efficient use of them, they started to randomly crash. Most times it was due to someone trying to submit a job to our compute farm and ending up running it on the login servers instead, but sometimes it was malicious: a deliberate attempt to get a few extra CPU cycles at the expense of others. For us, the solution was rolling our own virtual desktop farm. We used KVM for the hypervisor, Python for the back-end control, and PHP for the front-end web interface. We used Active Directory for authentication and rights management. That way we could control precisely how many resources each engineer had rights to.
As you are working at a school, it is not unreasonable to believe that you can use the students to help develop a system to manage the virtual instances. With a bit of forethought and a limit to the specifications, you can have a simple VDI broker developed and tested in a month. And if you avoid my mistake and use the libvirt API, you will even have the ability to easily expand the system to use Linux containers.
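With libvirt in the picture, per-VM resource caps come almost for free through virsh (domain name and values invented):

virsh setvcpus studentvm01 2 --config             # cap vCPUs
virsh setmaxmem studentvm01 4194304 --config      # cap memory, in KiB (4 GB)
virsh schedinfo studentvm01 --set cpu_shares=512  # relative CPU weight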
let them all use distcc (Score:2)
To paraphrase Syndrome: When everyone's impacted by everyone's compile, no-one is.
Also, find me something other than a full kernel compile that takes measurable amounts of time on a real machine.
Re: (Score:2)
I'm amazed at how much effort is placed on limiting researchers' misuse of computers at the expense of other researchers
FTFY
Social problem, social solution (Score:3)
Post a short, general list of rules in several obvious places. Make them reasonable enough to cover most possible user needs but flexible enough to cover things that you haven't thought of yet. Any user who is stupid enough to break the rules by running fork bombs, torrents, mining, hiding stashes of lemur porn or anything else which a child of six could tell you was a bad idea, will have their accounts disabled as soon as they are discovered.
If they have a good excuse for abusing the systems then discuss it with them, suggest alternatives to running rendering jobs on the lab servers and keeping passwords on sticky notes or whatever else it is that they are doing wrong and then restore their access, trusting that they will know better. If you do it right, they may even decide that it is better to ask for permission than forgiveness next time.
If they don't, send a memo to their department head briefly outlining what they did, how it was detected, what action you have taken, and that you won't be reversing this decision until you see a presidential pardon come down from an appropriately high authority. It doesn't matter if they have Really Important Work which needs to be done by the end of the week or not, just cut them off until the proper User Apology and Restoration procedure has been completed.
There you go. This solution is licensed under the WTFPL [wtfpl.net] which is compatible with the Open Source Definition and the Debian Free Software Guidelines so you can use it any way you want. You can even supply your own LART and display it prominently by the door of your office if that helps get the message across.
Re: (Score:2)
That sounds reasonable only if you have a very small group of users, and loads of time to deal with it.
Everybody runs a fork bomb once in their life. A computer lab should be a safe place to make mistakes, not somewhere that any mistakes will make you a pariah. If you do take that unreasonable attitude, the "presidential pardons" will be coming down on a regular basis, just signed-off as a routine duty without the slightest thought, every time a department head requests it.
Re: (Score:2)
If they have a good excuse for abusing the systems then discuss it with them, suggest alternatives to running rendering jobs on the lab servers and keeping passwords on sticky notes or whatever else it is that they are doing wrong and then restore their access, trusting that they will know better.
Everybody runs a fork bomb once in their life. A computer lab should be a safe place to make mistakes, not somewhere that any mistakes will make you a pariah.
It's good that we agree on that.
The 1970's Called (Score:2)
The 1970's called, they want their userspace problems back:
http://www.cmu.edu/computing/c... [cmu.edu]
Containers (Score:2)
Running Fedora - why not FreeIPA? (Score:2)
Don't get too worked up about resource management (Score:1)
I did my undergrad degree in a lab not unlike this (actually Sun workstations using NIS/NFS to mount home directories; this was the 1990s). These machines were likely 1-2 orders of magnitude less powerful than even your smallest desktop: desktops with 32MB of RAM and servers with 128-256MB. There was no resource management aside from disk quotas, and the lab worked fine.
Depending on what you mean by high-usage I would have thought even modest desktop systems would be powerful enough for just about any
Resources... (Score:2)
I saw someone suggesting that the users should play nice. That'd be great... and maybe they did, 30 years ago. (We'll ignore the late-'80s/early-'90s stealing of someone else's xterm in the lab....)
I had a user last year, an intern, with an NFS-mounted home directory like everyone else. It was, of course, shared with a good number of other users. He ran a job that dumped a logfile in his home directory. MANY gigs of logfile, enough to blow out the filesystem. Users were not amused. *I* was NOT AMUSED, as my home dire