Ask Slashdot: Securing Systems You Don't Manage

A verbose member of Clan Anonymous Coward asks this difficult question: "My university has a problem. We have lots of autonomous departments managing their own computing infrastructure, lots of autonomous users managing their own computers, and a very large network population (in excess of 20k people). Of the systems which are not managed by "professionals," about 10% are Linux. How should the university tackle the problem of people keeping their boxes up to date when it has little control over the box owners? Using tools to identify problems (e.g. nmap, SATAN, etc.) is the easy part. How do we then get hundreds of different computer owners to update their systems when they didn't know what they were doing in the first place? How do we do this in a climate where the resources are not available to employ herds of new computer support staff to assist these people?"
Our anonymous submitter continues...
"Many of us recognise linux as being a good thing (tm) and indeed many of us use linux to provide high availability and robust services. Unfortunately, many of the "non-professionals" who install linux tend not to know what they are doing. They get their system installed and bring it up on the network (easy now compared to what it used to be!) and then leave the system to look after itself. All fine so far, except that most of these boxes are running the plethora of services that come enabled by default on popular linux distributions (e.g. imap, www, etc.).

The problem comes in like this: there is a high rate of publication of exploits for Linux systems and, unless users are very careful to keep up to date with patches, they are compromising the entire computing infrastructure for everyone."

This sounds like a Network Policy Issue. Most networks have rules that state the acceptable uses for the resource and the conditions that must be satisfied for its continued use. It seems something like this would be appropriate here. The larger problem, however, is enforcement. What do you all think?
  • by Anonymous Coward
    Universities that have LAN to the dorms have the same problem with Win95. It has no security whatsoever and some folks make it worse by turning on sharing.

    Ultimately you need a strong policy from management. When you have that in hand, you can block problem IPs at the router. If people don't follow the rules, they can have a cable drop that leads nowhere...
  • by Anonymous Coward
    A good network infrastructure can make enforcement easier. DHCP can be an enforcement tool (no compliance, no address lease). Managed hubs can give you a great amount of control. Use internal firewalls to limit network traffic between subnets.
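
    For instance, the "internal firewall" part might look something like this on a Linux box routing between two subnets (an ipchains sketch for 2.2-era kernels; the subnets and server addresses are made-up examples, not anything from the original post):

        #!/bin/sh
        # Sketch: only let an unmanaged dorm subnet reach the services
        # you actually support; log and drop everything else.
        DORMS=10.1.0.0/16          # unmanaged student machines (example)
        SERVERS=10.2.0.0/24        # centrally managed servers (example)

        ipchains -P forward DENY                                         # default: refuse
        ipchains -A forward -p tcp -s $DORMS -d 10.2.0.25 25 -j ACCEPT   # mail relay
        ipchains -A forward -p tcp -s $DORMS -d 10.2.0.80 80 -j ACCEPT   # web server
        ipchains -A forward -s $DORMS -d $SERVERS -j DENY -l             # log the rest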
  • by Anonymous Coward
    Imitate MIT IS. Make a custom version of Red Hat that integrates better into your school network (in terms of file sharing, access to globally licensed programs, instant messaging, and other network services). Have it automatically install security and other updates from a central server. If you don't have the resources to do this, MIT licenses Athena, and there are presumably similar packages available from CMU and other places.

    MIT IS makes "Athenized" versions of Solaris and Irix, and is currently working on making the unofficial "Athenized" Red Hat official (it's a matter of making it a bit more consistent with the other Athena platforms). The vast majority of GNU/Linux users at MIT, including virtually all the clueless to semi-clueful ones, run the currently unofficial Athenized Red Hat.

    Solutions like trying to ban GNU/Linux or adding firewalls are generally braindead; they do a lot of damage to the user body, especially the power users. Don't listen to the people recommending these. They're very PHB.

    - pmitros at mit.edu
  • by Anonymous Coward
    The way I see it there are a few things you can do to minimize your risk.

    First and foremost, allow NO trust between your systems and systems you don't manage. If someone manages to crack one of those linux boxes, the last thing you want is to have them then able to rsh to one of your systems. Disable your R* services on all your machines, don't allow them to NFS mount your systems, etc.
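
    On the machines you do manage, the "no trust" part is mostly a matter of checking a few files. A quick sketch (paths are the usual ones for Unix boxes of this vintage):

        # r* services should already be commented out of /etc/inetd.conf
        grep -E '^(shell|login|exec)' /etc/inetd.conf   # should print nothing

        rm -f /etc/hosts.equiv                 # no host-based trust, period
        find /home -name .rhosts -print        # review these, then remove them

        # /etc/exports should name specific managed hosts, never whole
        # subnets full of machines you don't control.
        cat /etc/exports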

    Second, you may consider refusing connections to all ports on the systems in question from outside campus. Perhaps put machines whose admins can demonstrate the system is secure on a different subnet, and allow those to accept inbound connections.

    You may want to do routine security audits with the various penetration testing tools out there and notify administrators of systems with security issues by E-mail (You may want to do this for your windows guys, too. Bunches of them have sharing on by default and everyone can get to their hard drives from the network.)
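
    Something as simple as this could automate the notification side (a sketch; it assumes nmap is installed, and the IP-to-owner lookup is a hypothetical site-specific script you would have to write yourself):

        #!/bin/sh
        # Scan each registered host and mail the results to its contact.
        for ip in `cat /usr/local/etc/audit-hosts`; do
            nmap -sT "$ip" > /tmp/audit.$$
            owner=`/usr/local/sbin/owner-of "$ip"`    # hypothetical lookup script
            mail -s "Security audit results for $ip" "$owner" < /tmp/audit.$$
        done
        rm -f /tmp/audit.$$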

    Far too many people take security too lightly these days. Feel free to create a draconian security policy but be willing to be reasonable if the admin can demonstrate that they can run a secure system.

    Oh, and you might want to look into mandating encrypted traffic on your network, since a lot of .edu kiddies are discovering packet sniffers for the first time.
  • by Anonymous Coward
    hi, and welcome to the club!

    seriously, what we have is the same problem at my univ. how do we solve it? good question. one suggestion i make is to have a staff of people on hand who know what they are doing with UNIX boxen. despite what many suits and schleps say, UNIX boxen are all too important for their horsepower (hardware and software) that can't be rivaled by NT, despite the hype. have a staff in your network services clan, i.e. in user support, who know what they are doing. make them available left and right for free or for a small fee, something very reasonable. and make sure they're trained, as i won't let all too many of our user support staff near our million-dollar equipment; they are just too dumb.

    secondly, if a problem pops up, visit the person immediately in person. "hi, we noticed some issues from this system." it's probably safe to assume in most instances that the problem didn't really originate from a legit user of that system. but pull the system off the network and help with the security issue as you would normally. do it in person so that they don't fsck things up and wipe logs or whatnot. also make sure you approach them in a friendly manner; you are their friend. it will foster an environment that generates more positive and faster responses the next time a problem is sensed.

    lastly, educate people and ensure that they value security and know that assistance is available for setting up systems.

    i should note that these are not solutions that are magically in place. i had to do a lot of work to ensure that people know who we are and what we do, who to contact, etc. it took a lot of work. i'm still trying to change the system from the users' end (i.e. we have no official security coordinator though we see lots of attacks left and right; we tend to handle it ourselves as a system mgr user base), but things are improving, albeit slowly. and lastly, if you can move to a seriously routed network, you can manage to minimize issues more so than with a large, flat network (yeah, they exist, believe me!).

    i hope this helps, and good luck!

    jose nazario
    jose@biocserver.cwru.edu
  • Some of us are, Jose.

    Personally, I don't see why this should be a volunteer kind of thing. Security is a very important thing to maintain, especially at a university with hordes of lusers who could do some damage if they got into the wrong stuff.

    I say, hire some people for a security taskforce or something. Developing a security policy and then implementing it isn't exactly easy -- that's what I get paid to do (and I wouldn't do it for free, no way!).

    I also think that the most you can expect out of most users is picking a decent password (of course, it helps to force them to..). If you notice that a certain system has a problem then you'll have to fix it yourself somehow; the average user doesn't know or care one bit.

    CWRU would do well to get their net.act together.
  • by xyster ( 128 )

    I would say write a script that comments out the services in inetd.conf that you don't wish to run. I would also have a script (or the same one) add lines to the hosts.allow and hosts.deny (tcp wrappers) files so that unintended security breaches don't occur.
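
    A rough sketch of such a script (it assumes stock inetd plus tcp wrappers and GNU sed; the service list and domain are examples to adapt, not anything mandated here):

        #!/bin/sh
        SERVICES="shell login exec finger imap pop-2 pop-3"

        cp /etc/inetd.conf /etc/inetd.conf.orig       # keep a backup
        for svc in $SERVICES; do
            # comment out any still-active line for this service
            sed "s/^$svc\([ \t]\)/#$svc\1/" /etc/inetd.conf > /etc/inetd.tmp
            mv /etc/inetd.tmp /etc/inetd.conf
        done
        kill -HUP `cat /var/run/inetd.pid`            # make inetd reread its config

        # tcp wrappers: deny by default, allow only local machines
        echo "ALL: ALL" >> /etc/hosts.deny
        echo "ALL: LOCAL, .example.edu" >> /etc/hosts.allow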

    it shouldn't be hard at all to do this.
    -xyster
  • You cannot secure any system on which someone untrusted has administrative access.

    You could firewall them, you could force them all to use Red Hat, but the computer still won't be secure until you force them to boot from the network and don't give 'em that precious root password. And (as stated above) the problem is not just with *nix systems. There are lots of people who are Back Orificed or whatnot; I know of some lame Win9x user who has an anonymous telnet server on his computer. With people this stupid, it's impossible to secure the systems.

  • One thing I've wanted to do but never got around to is to sit in a large IRC channel, say #linux (there are always 100+ people in there), then just sit back and see how many times someone tries to telnet/ftp/etc. into your box.

    Something like that done live during a security seminar would certainly drive the point that you can't take this lightly.
  • This could take some work, but what if you had your firewall block all the ports nmap/SATAN/whatever says are running insecure services on a given machine, and sent email to the owners of said machine?
  • Posted by razit:

    This problem has long been one preventing the development of Linux across the university system, and ultimately it may stop it altogether if the correct management approach is not taken.


    For Linux to develop and mature, then appropriate solutions to this huge issue need to be sorted out.



    How about a set of agreed but unofficial solutions?

  • Posted by pwd:

    This made me think of the general problem of supporting a bunch of different systems; your example is an extreme case of not being able to force things. Off the top of my head, I would not separate the Linux users from the rest of the group. The problems may be more common due to the wider support for services under Linux, but the problem is not limited to full-service systems; you could get a back-door problem on a Win 98 system.

    The first thing I would do is set up a database of each of the systems, with a routine to check for their location on the network; in other words, have your hubs report which MAC address is using which port, and then translate that to an IP address and a system ID. Here, with only a few hundred systems, I built something like that and we find it a godsend. Then I would go further and loop slowly through the list, checking for security problems on each system; this is the part you have a handle on. One of the things I would do is use an OS-fingerprinting program so that I don't have to depend on the user telling me which OS they are running; you might have a number of dual-boot systems (with that number of systems I would not be surprised to find some real strange stuff).

    When you find a problem, and the system is where you think it should be and is not some rogue add-on to your network, send the "owner" of the system an e-mail (automatically, of course) with the following info:

    1) that a problem was found;

    2) what the problem was and why it is a problem -- this is where you need to do a selling job and give the owner a clear reason why it is in their best interest to fix it;

    3) instructions on how to fix the problem (turning off services that are unneeded, updating software, running an anti-virus program, etc.).

    Your question has started me thinking about the more general case of not having ownership of all of the systems on a network; this is more common than one might think at first.
  • by jabbo ( 860 )
    CMU, MIT, Cornell, and NCSU all run Kerberos for authenticating students, and those are just the ones I know of. If you want to use Kerberos, people really need to know why their tickets expire, how to resolve problems with the Kerberos servers, etc., but it is a good large-scale system IMHO. Kerberos encrypts all authenticated communications, so a sniffer won't do much damage if you get everyone to authenticate via Kerberos. Same sort of principle as SSH, different implementation.

    As someone who wandered out of the big-corporate, big-university environment into a small company, I have recently begun to deal with the (pathetic) security precautions which seem to be commonplace outside of large computing centers. One irritating thing is that if something's too technical or inconvenient, no one will use it, and if it blocks access to something important, they'll also complain. So you have to weigh the amount of worry time against the amount of support time you want to spend (IMHO). I can't guarantee that everyone's data is secure against every attack, but I can at least point out obvious holes and good ways to plug them (thanks, IBM).

    nmap and SAINT are awesome tools, btw. There is a nice article on using them in this month's SysAdmin magazine, and O'Reilly even has a book on customizing SATAN for your own needs.

    I want to take a look at the Deception toolkit and Secure Mailer as well; people have repeatedly stated that these are great. (i.e. securemailer + qmail as a relay + gateway, versus sendmail as both; and dtk just in general) But I haven't had time yet.


  • So use something like SSH or kerberos. There is basically NO REASON in this day and age to use telnet. It's insecure and inflexible.
  • I'd bet that you could do a lot to solve the problem by recruiting knowledgeable users to secure systems. Go around to each dorm, frat house, and department and ask for volunteers (or pay a nominal wage) to "assist" users in setting up their boxes. Whenever someone needs an IP address, a new ethernet cable, or some other routine service, these "assistants" would be sent in to deliver the goods. While they're there, they'd offer to do a security audit of the computer. Nearly every newbie admin I've met would be more than happy to agree and to allow the assistant to turn off and/or update services. It doesn't guarantee that all the boxes will be secure, but it ensures they start off secure and get a few security audits over the years.
  • I agree with you, but don't expect that to solve the problem. It takes a lot of time and attention to maintain a secure system, and even if someone cares they may not be willing/able to spend the time.

    - Ken
  • Have all your routers block the IP addresses of the "problematic" machines, and/or set DHCP/BOOTP to refuse to issue an address to the MAC address of problem machines.

    You'll find it incredible how quickly people will resolve their config issues once you cut off their fat pr0n pipe...
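
    The DHCP half of that can be as simple as a host entry in dhcpd.conf (ISC dhcpd syntax; the MAC address and host name are made-up examples):

        # dhcpd.conf: refuse to give this machine an address until its
        # owner cleans it up.
        host problem-box {
            hardware ethernet 00:60:08:aa:bb:cc;
            deny booting;
        }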

  • ...when I SysAdmin'ed at a university. Not much you can do about it, except put a firewall in front of everyone.
  • Maybe it isn't just the Linux users but all users who operate servers on your LAN that you should focus on. LAN managers tend to dismiss Win95 users even though Win95 has the worst security of them all.

    Maybe you should make people take a test or a class in network security before they can operate servers. Give some grad students a job. I somehow doubt as many bedroom servers are run by idiots as you think. There seems to be a surplus of gifted sysadmins these days according to job listings.

  • There are several things that are useful here.

    The first is that you must have senior management backing for the need for an overall control body.

    You then need to populate this control body with people who know what they are talking about and who have the people skills to sell the solutions; such people are not easily found.

    Basically you need to establish the program with senior PHB backing. Once you've done this THEN you can set about all the good stuff previously mentioned about security policies etc.

    A good book I've found to help is "Information System Security Officer's Guide: Establishing and Managing an Information Protection Program" by Dr. Gerald L. Kovacich, ISBN 0759698969.

    Martin Hepworth
  • I find this somewhat offensive:

    "How do we then get hundreds of different computer owners to update their systems when they didn't know what they were doing in the first place?"

    I believe the case is more that you're worried that _you_ don't know what _they_ are doing, not that they don't know what they are doing.

    This is interesting, as I have learned from history (a local department security hole where a hacker comes in and tries to get somewhere else on campus from the hacked box). But the problem isn't yours so much as it is the users'... Just follow me for a moment...

    Basically every box on your network is its own little world, and its own security problem, and that problem is isolated to that IP address, if you take the right precautions. The person who set that box up is responsible for the security of that system, not the campus at large or the computer department or the computer gurus.

    The key to good security is to ensure you're not relying on an inadequate security protocol. And by inadequate, I mean, "If the request comes from a box in our IP block, then it's safe, and we can allow it special privileges." That's LAZY security management, and the source of the bulk of your security risk.

    Here in my department, where we live in the massive IP space of ".nodak.edu", we don't want people telling us how to fix our security problems, at all! ".nodak.edu" covers the whole state and the North Dakota Higher Education Computer Network (NODAK-DOM), which means every university in the state relies on people in one city at one center for management. They are 70 miles away from us, and they don't know what we do, or why we do it.

    We are very thankful that they keep the backbone up, keep our bandwidth sufficient, and keep the main servers and routers running. But to think about having them try to manage security for the whole state is ridiculous! There are boxes I know of on our LAN with known security holes; we realize they are there, and are leaving them there for the time being. The case is this: when the security patch is installed for a known hole, the applications we rely on no longer function. We submit a bug report, which takes up to 2 months to get addressed (IF it even gets acknowledged), and wait. Meanwhile, what are we to do? Fix the hole and make the rest of the campus happy, meanwhile not being able to do any work or research? Or leave the hole, closely monitor connections and contacts, and keep doing research (which is why the computers were purchased in the first place)?

    People make judgement calls, and assume risk. That's a fact of life. In order to achieve a goal, sometimes risks are taken. But, I still say, everyone has to see it's simply a case of "breach my box, my fault for not keeping track. Breach your box from my breached box, it's YOUR fault, not mine."

    Ultimately, it's the hacker's fault, not the admin's. But the admin's job is to keep a secure box. If you're worried about something being breached on your network, then you admin it. If your administrative range covers more boxes than you can admin, deal with the fact that you managed to work your way up the bandwidth chain; from your level you have to deal with bigger issues, so make it clear that security on an individual-box basis is the responsibility of the person who uses that box.

  • First, for people who don't know what they're doing, standardize on Red Hat or some other distribution with a good package manager and the ability to do kick-start network installs. (People who do have a clue should of course be able to use whatever they want.)

    Then, set up a kickstart install with good default packages and a post-install script that changes security settings to be appropriate for your site.
    In this, include AutoRPM, pointing at a patch server you maintain. This runs out of cron, and keeps all systems up to date.
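
    If you'd rather not run AutoRPM, the same idea is a few lines of shell run nightly out of cron (a sketch; the mirror host and paths are made-up examples, and rpm's -F flag only upgrades packages that are already installed):

        #!/bin/sh
        # /etc/cron.daily/update-from-mirror (hypothetical)
        # Freshen installed packages from the campus patch server over NFS.
        mount -r patches.example.edu:/redhat/updates /mnt/updates
        rpm -Fvh /mnt/updates/*.rpm
        umount /mnt/updates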

    Then make sure no one breaks into your patch server. :)

    --

  • I am a college student. I live in the dorms as a junior and admin my own insecure linux box. It's so insecure it's not even funny. In our networking class we needed a nearby box to portscan, so we used mine. Found a million ports open that didn't need to be (I run no email, but POP2 is running).

    The moral of the story is, it's none of the college's business to worry about MY security. If someone hacks my box, it's my problem, not the college's. You should stop servers that suck bandwidth and keep hands off everything else.

    Firewalls are lame. They only limit the power of students to learn, and that's the point of a college. Talk to admins of important boxes (department servers, etc.), warn students about security. But don't waste the time, effort, and money trying to protect students from themselves.
  • I am not a college student anymore. Though I did not have ethernet in the dorms I lived in, I did admin a few *nix boxes which were on the backbone. They were secure (well, mostly anyway).



    If you think that the security of your box while you are on the _university's_ network is none of their business, then you are wrong. Imagine this: someone cracks your box, and from there cracks into some company and steals their secrets. What do you think is going to happen, the company calls you and complains? Wrong. The company and their very unhappy lawyers will immediately go after the university. Believe me on this, it happened to me (for reasons other than security). The company couldn't give a rotten shit about a puny college student with no money, but a university is a different story. Now some stupid college student is going to cost them a lot of money. You are on their network and it is their responsibility to make sure that it is not abused.



    As far as a solution to the problem, I'm not sure. I would do religious network checks (nmap, nessus, etc.) and would probably create a separate subnet for the dorms, with a firewall at the entrance blocking 80, 6000, and other misc services that college students don't need to share with the outside world. You can learn just as well on a small LAN (i.e., dorms only) as you can on a larger one. I've learned a ton from a couple of wd8013's and a string of coax.

  • I agree

    At a local university I've seen this type of plan put into effect. Their idea was to simply allow file-sharing/resource-sharing servers to exist but deny outside access to the problematic ones until they get things fixed. Frequently, if the IT department finds a server they didn't know about, they simply do a quick audit on it (the usual stuff: check for open ports, check versions on all servers running). If they find a problem, they firewall that server out and contact the admin by e-mail letting him/her know what's up. After that comes some advice about encrypting traffic to and from the box, and the like.


    A big, big thing is to do as outlined above: trust no one except the machines you built and maintain yourself. I have heard so many stories about people breaking into well-known companies simply by getting a weak admin's password and then following the chain on up.


  • I agree with the firewall idea, to allow amateur linux administrators to do what they want and still protect the security of your network. But dictating that they can't have their linux boxes on the network unless there is a firewall won't fly at most universities. These are faculty, grads, and postdocs doing research and running linux, and you can't tell them they can't run linux without a firewall any more than you could tell them they can't answer the phone unless they speak German.

    Also, in general the UNIX administrators in a university are not the same group that runs the ethernet (telecom), so their hands may be tied in terms of what they can demand from the faculty, staff, and students.
  • > One thing I've wanted to do but never got around to is to sit in a large IRC channel, say #linux (there are always 100+ people in there), then just sit back and see how many times someone tries to telnet/ftp/etc. into your box.

    I did that a while back. Opened 2 xterms, one running ircII, the other running tcpdump on the PPP interface. When I noticed a scan, I looked at the source, did /WHO, and asked the originator privately with a /MSG if they'd found anything interesting. Had some interesting conversations that way...

  • What about doing some sort of remote boot over the network? They grab the kernel and all other core OS components off an NFS server, which you keep up to date; let them then mount and use their local filesystem at their own risk, and any services they want to run that you aren't maintaining on the NFS server would have to be local... So anything you have security concerns about, you load and run remotely, and you even keep the conf files for those services remote...
    In an educational environment it will be hard to get anything approaching this implemented, due to users' 'rights' and 'academic freedom'. I had to deal with this constantly when I worked at a college...
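
    On the server side, that boils down to exporting the shared tree read-only (an /etc/exports sketch; the paths and host pattern are made-up examples):

        # /etc/exports on the central server: clients get the kernel and
        # core OS read-only, so they can't trojan what you maintain.
        /export/root    *.dorm.example.edu(ro,no_root_squash)
        /export/usr     *.dorm.example.edu(ro)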
  • Swarthmore college (where I'm a sys admin for the cs dep't) is right now confronting exactly this issue.

    A cracker broke into a student's Linux 2.0.34 machine on a dorm subnet (we presume he port-scanned to find the machine; we haven't quite tracked down from where yet), and ran a sniffer with which he gained passwords to cc.swarthmore.edu (Digital Unix, the school's official server, all student users have /usr/bin/nologin as their shell), sccs.swarthmore.edu (Linux 2.0.35, run by a student computer users' group, see this [swarthmore.edu]), and cs.swarthmore.edu (Sun Enterprise 450 running SunOS/Solaris 2.6, the CS department's main server).

    The details of the breakin are available if you want them (email me, be sure to prune out the NOSPAM part), but are less relevant here than our response. To date, that has been:
    • to lock up and check out the systems we know were affected
    • to (plan to) perform our own portscan of all the campus subnets (basically, one for each building) so we can find any other machines in promiscuous mode and warn users of other insecure OSes (sorry, but Linux 2.0.34 or earlier definitely qualifies as insecure... really, any Linux does)
    • to form a new users' group (or, really, a subset of the SCCS [swarthmore.edu]) with open and announce mailing lists for all students running some form of Unix on their dorm computers
    • to schedule a security seminar (run by the SCCS [swarthmore.edu]), to which all of the above group are strongly urged to come.
    I'd be interested in hearing how other people have responded to similar situations, how it's worked, etcetera, and I'd be glad to discuss our own results. (It'd be nice if you could email me, though, digging through /. can be a pain, and it'd waste less bandwidth to post a compiled statement on the subject.)
  • A large number of the posts on this topic seem to have a disturbing common element.

    "Box X got broken into but we dont know when or how."

    More important than firewalls, and much easier to maintain: logging. The uni I used to attend would log *ALL* traffic through the core routers,

    i.e. time, dest IP, dest port, src IP, src port, bytes transferred, etc...

    The routers would dump these logs to a centralised machine every 10 minutes. These logs were stored for *months*.

    This may not be the 'Rolls-Royce' solution most people recommend. But compared to multiple firewalls segmenting portions of the organisation, it's a damn sight more budget-friendly *AND* you know exactly what has happened.

    Oh, and I would recommend scanning the network at least once a week. Makes it easier to look through the logs :)
  • Many suggestions were made here. All were good ideas. However, firewalling and masquerading are overkill for your needs. Being a network gestapo is also not worth it.

    The single biggest thing you can do is to inform and educate admins on your network. To that end, you might be best off if your IT department sponsored some user groups and regularly gave talks about network security. Also, after finding problems in people's systems, simply telling them they have one is not good enough. Tell them where to get the fixes and how to apply them. Few, if any, users actually WANT their systems to be insecure; they just don't know how to go about fixing their problems. Make it easy for them.

    That being said, you will run into the few that simply don't care. For those you will need some method of keeping them off the net till they learn to care. Combinations of DHCP and managed switches and the like are a good start.

    And lastly, you will need an absolute policy regarding any and all university-owned machines, whether they are department servers or personal workstations, especially in regard to allowing trust between them. For instance, a student admin of a departmental server allowing trusted access from his own system for admin purposes is bad. What if the student's own system is compromised? It's then a simple step into a departmental server. This can cascade into a whole slew of compromised systems.

    I come from a university that had no such policies. I also have ties into the hacker community in the area. Security at the university was a well-known joke. We always had a good laugh if someone mentioned the two words together.

    ** Martin
  • I think you are overlooking some of the consequences of a single compromised machine on a larger network. Consider this: the attacker has a slow connection to the network, which limits the attacks that can be run. If the attacker can gain access to one machine on the network, they now have a base from which to start attacks against more interesting machines. Particularly if the compromised machine is not well admined to begin with, a smart attacker could use it undetected for a long time.

    It really depends on the setup. In your case it would seem that if your local network is compromised no one loses but you. I agree with you, you should be able to set your own policy. But I don't think that this is the case for the admin who asked the question. It sounds like a big network with a whole bunch of workstations attached to it some of which are poorly administered. This can cause many many problems for everyone on the network in denial of service attacks alone.

  • All of the comments I've read are bogus -- they fail to take into account the only thing that really matters: If a good cracker wants to compromise your box, it's only a matter of time before they take it. The only 100% secure box is one that isn't connected to any networks.

    The problem at a university is made worse by the fact that, by its very nature, the university is about sharing information, so it must have a certain amount of openness, which will inevitably result in a security hole. I saw someone touting MIT's Athena software... Don't think for a second that MIT doesn't get hacked; just ask anyone who works in the AI lab at MIT (either professionally or academically) -- they get hacked constantly.

    As for your concerns about using a compromised box on the university network for a home base, this is just a waste of breath. If the cracker doesn't use a linux box on the local network, they'll use any number of commercial business servers (which of course, have fast connections) that are badly configured as a base instead. There's hardly any difference. The only issue is that we need to protect the university from civil action based on the actions of a cracker... why should a university be responsible for actions of a criminal who has invaded a third party's machine? This is simply absurd and is a question that you should bring up to your legislators if it concerns you. Make them protect the universities by writing new laws.

    This isn't to say that you should just give up, but as an admin at a university, you should resign yourself to the fact that you're going to have security problems. About the best you can do is keep the dorm computers on different subnets making sure NOT to give any of them any sort of trust relationship, lock down your servers as best you can with diagnostic tools and things like kerberos and/or ssh (but be careful if you use them both since there are security bugs in some versions of ssh when used with kerberos), and have a beer and pray.

    Any administrator worth his salt isn't going to allow these boxes trust relationships, and sure as hell won't allow root access to the university systems from them. The university's best defense is to clamp down security on their machines as much as possible. Worrying about the linux boxes is a waste of time. And there likely are far more windows boxes on the network, which can also be attacked.

    You're going to have crackers breaking into your network whether you have well-configured or misconfigured or NO linux boxes on it. Don't penalize the users for the misdeeds of others.

    Feel free to send comments to me at
    captain_carnage@hotmail.com


  • Bryn Mawr, another "tri-co" college (Swarthmore, Haverford, Bryn Mawr), hasn't yet formulated an official policy regarding students running linux/unix. Ironically, we have file sharing on the Macs (not PCs), but we can't run servers and do likewise on Linux (or a like OS). Admittedly, it's less secure. Regardless, as far as I know, I'm one of three (by my count) linux users on the campus, and I was asked to make my box inaccessible to anyone off-console (no logins: I had 15+ off-campus users with accounts on my system at the time), "until they formulated a policy." No official policy yet.
    It's true that it's easier, when attempting to control clueless users and therefore dangerously insecure systems, to just deny such service to all, but it would be a bit more work and a lot more fair to announce a policy by which certain security measures must be implemented by the users (and instructions provided) if they want to have linux on their machines.

    --Anneke

  • Randal Schwartz (yeah, the Perl Guy) was convicted of felonies for being too concerned about security on a network he didn't own, but was connected to.

    Before you start, or even begin concerning yourself with the rest of the network:

    1. Cover your ass before doing anything. Get permission from your superiors. In writing.
    2. Tell Everyone You Know, on both networks, their bosses, their bosses' boss, and anyone else you can find what you're up to.
    3. Repeat Steps #1 and #2 ad nauseam.
    4. Then go poking around the network.
    Ultimately, if you screw up, you too can wind up with jail time, legal fees, and a felony conviction on your record.

    For more details: http://www.lightlink.com/spacenka/fors/ [lightlink.com]

  • The ipmasq/firewalling ideas already mentioned are very good - consider them!

    I have some others that might help:

    Find/write a simple script that locks down /etc/hosts.*, and create a basic set of ipfwadm/ipchains rules to block access to vulnerable ports on the box, such as 2049 and 6000.
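
    The port-blocking piece of that script could be as small as this (ipchains on 2.2 kernels; ipfwadm equivalents exist for 2.0, and the interface name is an assumption):

        #!/bin/sh
        # Block NFS (2049) and X11 (6000-6063) on the outside interface,
        # and log the attempts so the owner can see who is knocking.
        ipchains -A input -i eth0 -p tcp -d 0/0 2049 -j DENY -l
        ipchains -A input -i eth0 -p udp -d 0/0 2049 -j DENY -l
        ipchains -A input -i eth0 -p tcp -d 0/0 6000:6063 -j DENY -l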

    Have the script add a listener to the Deception port (see http://www.all.net/dtk/), and use the replies from it as a marker that "this box has been secured", during your automated scans.

    Of course, the user *can* change the scripts and open his box up, but at least this way he has to think about why he is doing so, and hopefully has read some thought provoking material included with the lock-down script.

    In fact, if you can get all your Linux users to install versions of the DTK (see the above URL), then it will be extremely hard for crackers to zero in on new "unsecured" boxes. This would require packaging the DTK into .deb, .slp and .rpm packages (and getting permission from the author of the DTK, which is not free-for-all-use).

    The basic idea behind the DTK is very appropriate in situations like yours: since you cannot be sure that everyone is secure, make sure *everyone* looks like a potential victim (even if they aren't).

    The hardest trick is to get the users to install it (and other hacks you might come up with). One way to (help) ensure this is to maintain a few local distribution-mirrors, where your protection packages have been added to the default installation. Encourage people to set up Linux from the local mirror (speed! security!).

    Good luck!
  • I found this so funny! But if you were serious about it, it really wouldn't work in the long term - something along the lines of 'the boy who cried wolf'

  • I've installed drawbridge as a firewall for a company, and this is exactly what they were looking for. You simply put two network cards in a computer, one connected to the net, the other to the internal network, and the machine will block off access from the inside or from the outside, depending on tables you load.

    No matter how big the tables are, the bridging is lightning fast. It's been running here for a while with no problems.

    It runs on FreeBSD (needs kernel modifications), and it's freely available.

    I believe the URL is http://drawbridge.tamu.edu [tamu.edu]

    Ben
  • If you can provide access to an archive of Debian GNU/Linux [debian.org], then your users can regularly upgrade their installations simply through the command "apt-get dist-upgrade". This will upgrade just those parts of the system which they currently have installed, and it can easily be set up to run from a cron entry as well. Of course, if you don't have any control you probably can't enforce a particular Linux distribution either! If you can select a particular distribution, though, Debian is definitely worth consideration because of these powerful network update features.
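
    For example, one crontab line per machine does it (a sketch; it assumes /etc/apt/sources.list already points at your local mirror):

        # /etc/crontab: fetch current package lists and upgrade,
        # unattended, at 4:45 every morning.
        45 4 * * *  root  apt-get update && apt-get -y dist-upgrade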
  • I think the key is to make it easy (and fun :-P) for the users to keep their systems secure.

    - Start a security mailing list and encourage people to subscribe. You'd probably want separate lists for Unix and Windows, with some messages posted to both.

    - When RPMs or other packages appear with security fixes, announce them to the mailing list. You could also mirror them locally to help your users. (We have sunsite.doc.ic.ac.uk in the next room here, which is useful.)

    - If there are user groups, encourage them to tell their members about security issues.

    - It might be worthwhile to produce an 'official' Linux distribution for your university, maybe including software which is widely used on campus. I don't think you should _force_ people to use it though. But if it's simply Red Hat with a few security updates (eg Kerberized versions of everything) then many people will be happy to switch.

    Some people have suggested some of these ideas above, I know. I think they work well as part of the same strategy.
  • I should have said "Look at the University of Cambridge". This does seem to be a widespread situation. I was considering setting up a Linux users group for students here; maybe we should have a worldwide Linux students group, aimed at getting Linux as an option in computing rooms, providing support, getting saner management policies, etc.
  • Firewalls won't do it by themselves. In about an afternoon, I can write a little program that, when run, initiates a connection from inside a firewall to my machine over some standard port, like port 80. Then, I write a listener that waits until the connection is established. Then, the program running inside the firewall allows me to initiate commands from outside the firewall. I then e-mail the program to a couple of students who appear to be computer neophytes. They run the program that is attached to the e-mail, and *bam* I have access to the network behind the firewall. It's a simple matter of sniffing packets until I have accumulated a ton of passwords for machines behind the firewall. Presumably one of those machines will allow outside connections...

    Try Kerberos. This is the only security I've seen that works well *within* a network. No passwords are sent over the wire in plaintext. Thus, when the student's machine is compromised, the rest of the network won't be...
  • Use firewalls to deny brute-force attacks to your university.

    Use Kerberos to deny attacks that originate from inside the network.

    Let students do what they want. Don't waste your time monitoring security over machines that you have absolutely no control over. That is a headache. It's expensive. It requires a lot of resources. And it opens you up to potential lawsuits.
  • The second someone on the subnet attempts to telnet to one of the university machines, the integrity of that user's password can be violated by anyone else on the subnet.

    VPNs are a little different, though. I'm not sure how this would work with VPNs. I suspect that security could still be compromised.

    I can't emphasize enough the concept of using Kerberos in these situations. There are other solutions whose names I cannot think of now--they basically send all passwords for telnet, ftp, or whatever, over the network encrypted. Kerberos has stood the test of time, though. It's also rather complete. And it's free.
  • Firewalls are nice, but they only work if you have control over the systems behind the firewall. Imagine if I, Joe Student, download a little program from the Internet, then run it. This program creates a tunnel that originates from *behind* the firewall, yet allows the author to tunnel behind the firewall. The author then monitors packets on the internal network, collecting *any* plain-text passwords. Before long, your entire network can be compromised. This is not a good solution.

    Firewalls are nice. They are a good and necessary part of any security system. However, unless you plan to isolate all these rogue machines outside a firewall (a firewall that blocks different departments from one another), firewalls won't work by themselves. Besides, if your university is anything like the one I attended, it's not just students setting up rogue machines, but also staff & faculty. It's really hard to isolate the rogue machines outside a firewall in these cases.

    IMO, you should look at MIT. They use Kerberos for all their non-rogue machines. All passwords (to university machines, at least) are sent over the network encrypted. This way, if Joe Student's machine is compromised, the cracker won't be able to sniff packets for passwords to other, more sensitive machines.

    Beyond this, I think you need to ask yourself, "Should I really be enforcing security upon the students/staff/faculty?" This is a huge headache, and one that can never be handled completely. Besides, by acting as a security police for the students, you open yourself and the university up to a ton of potential lawsuits from machines that, despite your best efforts, are compromised.

    I'd put in firewalls (IP masquerading is nice, because there are no changes for users, nor are there any training issues regarding setting up proxy services within net applications) and Kerberos. Ensure that Kerberos is installed on *all* official machines. Then let the students do what they want. If their machines get compromised, then that is their fault. Make it clear that security responsibilities lie solely in the hands of the students, and that the university will not monitor security issues for *any* machine not administered by the university.

    -dan
  • woah! i feel sorry for the admin with this kid on his network... maybe you should rethink what you're saying here. what if someone roots your machine, then roots one of the school's machines, and screws something up? now you're responsible, because you ignored security issues on your personal machine. the network can only be as secure as its weakest link, which in this case would be you. and precisely how does a firewall prevent students from learning?
  • How about this policy. Everyone by default is behind a firewall. This will satisfy many of the students who only want to browse the web. Students who want to have their box on the other side of the firewall have to ask to have this done, and then have to attend a monthly security meeting, or subscribe to a local security mailing list. If they install the patches and secure their system, good for them. If not, they should have stayed on the other side of the firewall.

    You could even have the leader of the security meeting or mailing list be a student work/study position, or tie this in with the student sysadmin staff.
  • by pimp ( 6750 )
    That's an ugly^H^H^H^H^H^H very good question. The solution that I have seen work best is a combination of policy and technology; you need both. You need to determine what the current network usage policy is, how well it is enforced, and whether or not you can get it changed.

    You need to sit down with the rest of the folks in charge of administering the networks (and at 20,000 users I hope you aren't the only one). Determine what services you want to support, what services you will allow but not support, and what services you will not allow. You also need to determine what happens if a user should use those services that are not allowed, and it must be enforced consistently.

    For example: All users with machines on the university network must have their OS and root/administrator contact information registered with the NOC. Users are responsible for maintaining the security of their machines. *nix machines may only run services x and y, as well as z if they register it with the NOC, or they will be cut off from network access. Win95 users can go suck eggs, etc.

    Users may not attempt to gain unauthorized access to any machines on the university networks or otherwise, or they'll be referred to the Dean for a spanking.

    Then implement as many technological constraints as you can. Have your routers block naughty traffic. Look for other nastiness[1], scan your networks[2], and make sure the policy is enforced regularly, or it isn't worth the work.

    Most importantly: good luck.

    1. See SHADOW [navy.mil] for network monitoring (non-realtime) on the cheap.
    2. Make sure you get permission to do this in writing from all of the right people. You may need permission from just the IT director or maybe the President of the university.

  • > what if someone roots your machine, then roots one of the school's machines

    Why should rooting a student's machine help them root one of the school's machines? If the school's box gets cracked, it's the school's fault, not the student's. The only advantage the cracker should be able to get from coming through the student's box is a little more anonymity, but he could get that anywhere.
  • This policy really doesn't work. The problem is very straightforward: when you have loads of these insecure systems around and someone decides to crack one of them, two things happen. The first is that the cracker now has a nice, comfortable, fairly anonymous platform on the network from which to beat up other boxes. Those boxes may not be student machines -- they might be after the campus webserver next. Now it most definitely is network operations' problem, and they have to go out, hunt down your box, turn it off, and try to clean up the mess. This takes time. Lots of time. Not to mention the other thing that is likely to happen: the student/professor notices that their box is behaving weirdly and runs for network operations, who now have to come and troubleshoot the machine (yes, have to, as tenured profs don't take kindly to being told "it's your problem"). If you want the school to allow you to run your linux box on the network without too many restrictions, you have a responsibility: you have to make sure that your box, if not ridiculously secure, is at least decently well guarded. Do this, and you can keep your nice open network. Otherwise, expect to have the clamps put on, and tight.

    /P.
  • Oftentimes this isn't effective. How can you tell who is accessing what? By IP address or MAC address? Remember that we're dealing with 20k users... it's a bit hard to manage that many people without some "hard" measures, such as an unbypassable proxy. What are the major concerns, too? Just keeping crackers out, or censorship? Need more information!

    --

  • The only solution I can see here is either to firewall everything off and force them to use that (it's not a panacea for all problems, but it'll keep outside attacks down) -- as you do own the network bandwidth, right?

    Or you can use an IP masquerading host and not tell anyone you made the change. You can then transparently proxy everything and keep outside connections... outside.

    You could also just take their boxen off the network unless they either maintain them themselves or let you (i.e., give you sudo access to RPM).
    Make sure to keep this very simple and open if you do want to do this; the users will castigate you if you mess up and give them a broken package or make it more than trivial to upgrade.

    Or simply deny access to any university resources from unsecured machines.



    --
  • Granting extra privileges for demonstrated competence is always a good idea. It would also enable you to identify those who are competent enough to help. Maybe publish their names as 'student network techs' or something. This could be really good if you could piggyback it onto some of the required CS lab work, etc.
  • See http://www.vais.net/~efinch
  • Thank you for pointing out my case information, which is still in progress. I'm still a triple-felon, still on probation for another 15 months, and still in the appeal phase, which could result in a whole new 10-day trial again if it gets bumped back to the trial court level. For donations to my legal defense fund, send a blank message to fund@stonehenge.com [mailto], and my Perl-bot will reply with the latest information.
  • Once you have them firewalled, you can then dictate terms about really egregious problems.

    Things like "either update your widgetd to a version without the buffer overflow documented in CERT bulletin #666, or on 4/7/99 we lock all traffic out of the widgetd port on your box."

    Then actually do it when you say you will. If they can't figure out how to upgrade widgetd, then you lock their ass out until you have time to fix the problem.
  • by Diz-E ( 12353 )
    Now how many people would be annoyed by this "show of force" method of solving problems? And all sysadmins know that end users never learn, they only troublecall.
  • Use switching hubs with remote control and possibly even some filtering capabilities.

    That way, if a particular machine is causing trouble, you can turn it off. That's particularly effective in combination with network security scans.

    Switching hubs also keep crackers from monitoring network traffic once a machine is compromised. If you have filtering, you may be able to enforce (well, strongly encourage) the use of particular network protocols and turn off others when there are specific problems.

    You also get better performance with switching hubs, another reason to install them.

  • I may be confusing the terms and technologies, but would setting up an internal subnet or VPN, with its own firewall protection, help the situation? Can it even be effectively done?

    If so... this way, the trusted and managed machines are protected from the unmanaged internal machines, which are in turn protected from the outside by the campus-wide firewall.

    Yes, it's layering protection on top of protection, but it should reduce the problem to the typical "US vs THEM" situation.

    Also, a stated and public policy is very important. It won't prevent internal problems rooted (no pun intended) in these unmanaged hosts, but it will give the perpetrators due warning of the consequences.
  • Having worked at a school for some time, this point really hit home. Teachers refuse to be taught, especially by someone they perceive as working for them.

    A support staffer, telling me - a professor / dean / dept chair - how to run MY system?? The gall!!

    Tread lightly, and use words such as 'administration approved standard' and 'uniform networked systems interface' as opposed to 'the right way to do things' and 'RTFM!'...

    - As my father taught me, diplomacy is a way of telling someone to go to hell, in a way that makes them look forward to the trip. :)
  • Here's my take on the matter. As others have noted, you need a better policy.

    It's insane to consider that while each department has responsibility for maintaining its own systems, YOU and your central organization have responsibility for security for those systems. No. Can't work. Impossible. Insane.

    As the central unit, you DO have responsibility for setting policies, for communicating them, and for following some certification procedures. But unless you have the brass cojones to start telling them what they can and can't use (not exclusive to Linux distributions), you shouldn't have the responsibility of ensuring their data is safe.

    They go off-standard, THEY get to deal with the consequences. It's how the business world runs; just because you're off in sleepy academialand doesn't mean you can't take some cues from us ...
  • You stated a machine base of 10% linux; I am assuming that a good percentage of the remaining unmanaged boxen are windows.

    I agree with most of the statements that suggest firewalling, security policy, etc.

    Your problems, in my opinion, will be the enforcement of the policy, creating the policy, implementing the policy, and user complaints based on the policy.

    Implementation will be one of the hardest aspects (besides enforcement). You will have to assume that the majority of your users are idiots, and write comprehensive step-by-step documentation that will lead them by the hand through the process of securing their machines.

    Enforcement of the policy will also be hard; a simple port scan or external security tool run will catch blatant violations. Where I see problems is in the realm of the more subtle aspects, things that can only be checked by internal access to the machine. Do you have the manpower/inclination/time to run around and check all the machines that are in use?

    The solution IMHO is to combine good firewall policy with good internal security policy (someone who knows the numbers can point out how many security violations are internal), both of which need to be combined with user education as to why security should matter to them.

    Keep in mind, though, that as soon as the firewall goes up and the policy becomes enforced, you will get complaints from all manner of users, asking why "the thing they did yesterday" doesn't work today, with little to no more information.

    Sorry I'm rambling.
  • I've had similar problems here where I work. We are a small content provider and software company, and just about all the staff here has a moderate level of computer skills. The problem is that our typical C++ programmer here lacks some of the fundamental knowledge in networking to properly administer their own Win98/NT box. This is worsened by the fact that, while I might even be able to solve the problem for them, many times people's egos get in the way: they do not want to admit they have a problem, nor do they want to look stupid in the eyes of their superiors.

    The solution here was simple. Any machine, regardless of its operating system, must either be behind a masquerading proxy firewall, or, if that is not desirable, its owner must fill out a small questionnaire regarding the system's configuration and file the machine's root password with my office. I've found that the simple act of having to fill out the form deters a lot of the potential "problem users" from wanting to suddenly put their machine on the public side of the network.

    One of the big problems many network people face is that our job involves not only computers, but people as well. Increasingly, we find ourselves having to not only answer technical questions and solve engineering problems, but we have to handle political arguments as well. Having policy in place, while it can be burdensome at times, at least can provide everybody with ground rules.

  • I suppose that would be effective in some cases, but the real rogues out there would just hardcode the IPs they last leased, and you'd be none the wiser until a collision occurred. And that may be never.

    My cable company has DHCP on their network, but they lease out fixed IPs. I haven't tapped their DHCPd in ages!
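
    One partial counter, for what it's worth: a tool like arpwatch keeps a database of the MAC/IP pairings it sees on the wire and syslogs any changes, so a hardcoded address gets flagged the moment it shows up -- no collision required. The paths below are common defaults but may differ on your system:

      # watch the local segment; new and changed MAC<->IP pairings are syslogged
      arpwatch -i eth0 -f /var/lib/arpwatch/arp.dat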

    PS to Rob: You busted Preview Mode...
  • yeah, having your connection yanked sucks....nothing like having something else i can think of yanked :)

  • I agree with all of this. But since the systems are all managed by outside people, it is often difficult to enforce a security policy without making a lot of people upset. I believe that education is the best policy when it comes to helping secure the network. Most people who are willing to use Linux (or some other *nix) are willing to learn more about their systems. Holding a free seminar that educates users about the perils of having an insecure system and helps them make their systems more secure would encourage normal users to secure their systems (and reduce the amount of work for the network staff) and also educate the masses (which is also a good thing).

    Rick Wash
    rlw6@po.cwru.edu
  • What you might do is consider blocking off outside access to all "non-certified" systems. You set up some sort of process by which somebody has to demonstrate that they are capable of handling the responsibility. Perhaps you could have some sort of seminar on basic system security, and show them how to get information about security breaches (check your local postings, get on the CERT mailing list, etc). Make attendance at that seminar mandatory for anybody who wants the access, then have the user sign an agreement which puts the burden of responsibility on them for keeping their system secured. So if they get hacked into and that system is used to take out other systems, they are held responsible for what they caused.

    This way, only the people who are willing to go out of their way to set it up will get the services they want. Also, as part of this you might want to offer some sort of official security testing procedure where, if somebody wants to, they can call you up and ask you to run SATAN, etc., and make sure that they are in good shape.

    In general it sounds like this is more of an issue to be solved through policy and communication and less through technical means.


  • Are you trying to block things that interfere with your network (a poorly configured, interfering server) or things that are just for their own good? If the latter, I say leave them to themselves. It's not the University's responsibility to make sure people don't have BackOrifice on their machines, neither should it be their problem if someone has a null root password. An interfering machine, though, could be a problem. I hate those stupid Invalid ICMP Packet messages. I know how to get rid of them, but they shouldn't be there anyway.
  • by Bryan Andersen ( 16514 ) on Thursday March 25, 1999 @10:05AM (#1963197) Homepage

    Talk to Texas A&M University about their tools for security -- especially their firewall, Drawbridge, and their Tiger security auditing scripts. They also have monitoring software to monitor their internal network for cracking signatures.

    Another source is CERT [cert.org]. They have lots of links to documents and articles on security. One of their documents pointed me to the TAMU stuff.

    Drawbridge is designed for blocking off-site access on a machine/port by machine/port basis. Machines that pass the Tiger scripts are enabled for more external access than ones that don't. By default, only SMTP is enabled from off site to a machine. Higher levels of external access can be obtained when a machine meets tighter security levels.

    One of the nice things about Drawbridge is that it can run on a PC and be securely updated remotely. It also uses lookup tables, so it's fast. It is a memory hog, but then that's the price for speed. I believe it will only work for Class B and C networks.

    Email me at bryan@visi.com and I'll gather a bunch of related links from my bookmarks at home. There are some good PDFs on their experiences and on the tools they made to implement security.

    I've been dealing with security a lot lately, as I've recently set up a firewall for my home system. I personally don't use Drawbridge, as my network is small and Linux IPCHAINS is more suited to my system. I do use some of the Tiger scripts for auditing. I also use Tripwire (available from CERT).
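
    For the curious, a minimal ipchains sketch of that kind of small default-deny home setup (the interface names and LAN address are placeholders, not anybody's real configuration):

      # default-deny inbound, then punch only the holes you need
      ipchains -P input DENY
      ipchains -A input -i lo -j ACCEPT                       # loopback is fine
      ipchains -A input -i eth1 -s 192.168.1.0/24 -j ACCEPT   # trust the internal LAN
      ipchains -A input -i eth0 -p tcp -d 0/0 25 -j ACCEPT    # allow SMTP in from outside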

  • What do you mean, sysadmins are the only people who get to telnet to their own machines or run web servers? One big reason I have my linux box is that I can use it from a terminal or an alphastation somewhere.
  • Telnet is perfectly secure when the path between the user and the host is switched the whole way -- as it is in my case. I agree SSH is far more secure, but sometimes it is not an option.
  • The first items have been suggested elsewhere -- a clearly stated, fair policy, which needs to be distributed to anyone and everyone on the system, especially the owners of the problem boxes. Secondarily, a firewall, as has otherwise been explained.

    I'd like to propose an idea to guide your philosophy of how to succeed in this project: the university network in this case is acting like an ISP (or at least a gateway), so a lot of the same customer-service issues apply.
    1. Don't piss off the user community by becoming a Network Gestapo. You'll end up very unpopular and possibly unemployed. Some of the offending users might be tenured professors, etc., who might take a dim view if you're not nice about solving the problem.
    2. Create/work with the local Linux Users group. Provide incentives for the user group leaders and members to take charge of the issue.
    3. Write a sniffer (or rather, scanner) program to ferret out the appropriate information from the offending machines -- see the sketch at the end of this comment. Using its output, send a friendly e-mail with
      • "here's the problem....
      • here's how to solve it....
      • here's the consequence if you don't take care of it yourself....
      • and finally, this last item should carry a "pull the trigger" deadline, after which the systems still not in compliance (per the scanner's output) have their access privileges terminated.
    4. After the cutoff date, use the local Linux users group as a resource to help the procrastinators and newbies meet the minimum compliance standards which will allow access to the network.
    Good luck in cleaning up the mess.
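
    A very rough illustration of step 3 (nmap flags here are from current versions; the subnet, the owners.txt file mapping IPs to owners' e-mail addresses, and the policy URL are all made up):

      #!/bin/sh
      # Sweep the campus net for one known-bad port (IMAP here) and nag
      # the registered owner of every machine that has it open.
      nmap -p 143 --open -oG - 10.0.0.0/16 | awk '/Ports:/ { print $2 }' |
      while read ip; do
          owner=$(awk -v ip="$ip" '$1 == ip { print $2 }' owners.txt)
          [ -n "$owner" ] || continue
          echo "IMAP is open on $ip; see http://www.example.edu/security/ -- fix it within 14 days or your port gets pulled." |
              mail -s "Security problem on $ip" "$owner"
      done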
  • This is, I think, the most effective way to go about solving the problem. If you've got a security hole or something else in violation of a policy on your system, cutting off your port will get the problem fixed fast.

    Case in point: yesterday, I let a friend from outside my campus's network make a rather large download from my system via FTP. Unknown to me was that the university imposes a 500MB/day limit on outbound traffic from machines in the dorms. My port got deactivated because the guy downloaded 1GB, and because he was outside the campus network. Two policy violations, and my port got turned off. I got it back on by adding a DENY rule for ftpd for non-campus hosts. Quick and easy fix.
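
    One way to express that DENY rule, assuming the FTP daemon runs under tcp_wrappers and the campus domain is example.edu (substitute your own), is a single line in /etc/hosts.deny:

      # /etc/hosts.deny -- refuse FTP from anywhere outside the campus domain
      in.ftpd: ALL EXCEPT .example.edu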
  • You might want to check out the following paper: "Designing an Academic Firewall" [stanford.edu]. It's kinda dated, but it tries to address some of the policy issues. The basic idea is to do firewalling on a per-group basis, in an attempt to partition the network according to trust.

    In this model, publicly accessible hosts are considered "expendable", and their contents are recreated regularly through the firewall.

    It doesn't address the issue of how to secure residential networks, unfortunately.

  • You should not be worrying about their boxes -- if someone breaks into their box, the intruder still should not have access to your own boxes. You must consider each of the users' machines as a separate network outside your militarised zone. Give them Internet access by all means, perhaps put in a firewall or use network address translation to protect THEM, but you should only manage your internal network and machines, possibly putting a further firewall between the users and yourself.
    You mustn't think of these users' machines as part of your network -- you should think of yourself as their ISP/NSP.
    Once you look at things in this way the problem seems to go away (at least it does for me!)
  • Just fyi - there is a fascinating set of posts from RISKS (from 1986 no less -- what does this tell you?) about the exact same sort of problem. For some reason completely unbeknownst to me, a transcript of this has lived in /pub on rtfm.mit.edu (exactly at ftp://rtfm.mit.edu/pub/reid.txt) for the past 13 years. Their conclusions? Policies will work if you don't have to cross organizational boundaries, but probably won't if you do.

    I think the most interesting thing in the thread is the idea that we still lack a strong ethic regarding cracking. While I'd venture that it's better now than it was then, it's interesting to note that most universities (well, at least the one I went to) don't have to have people who go around and make sure that the libraries and offices and labs are locked up tight as a drum.

    Creating social ethics that make computer break-ins as infrequent as physical break-ins is not a short-term solution, though.

    Read the file if you get a chance -- it's long, but well worth the time.
  • I have a few suggestions as a security researcher.

    1. Policy - Impose a policy on all of these people running boxes.

    2. Make maintenance a little easier - If you are mainly worried about Linux administration, create a mailing list that regularly sends them notices about patches, updates, etc. Then go get the RPMs, debs and .tgz's and mirror them locally (see the sketch at the end of this comment). This helps reduce the time and trouble of keeping a system up to date.

    3. Education - have a few seminars...stress good passwords, good administration, minimal services, etc.

    Good Luck
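
    On point 2, the local mirror can be as simple as a cron job; the URL and paths here are placeholders:

      # crontab entry: re-sync the distribution's updates tree nightly at 04:00
      # (wget's --mirror mode only re-fetches files that changed upstream)
      0 4 * * * wget --mirror -nv -P /home/ftp/pub/updates ftp://updates.example-distro.org/updates/ >> /var/log/mirror.log 2>&1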
  • Um, and that solves the issue how? Sure, the people with weak kernels get nailed, but what about the people with open/exploitable/script-kiddie-loving ports?

  • I think if you have a well-thought-out Network Policy (i.e. the 'Acceptable Use' guidelines) and make a good-faith effort to get security info to your user community, then you can justify bringing out the big guns to deal with potential problems... if you control the switches and routers in your organization, then maybe think about a policy of blocking or turning off ports belonging to systems that fail your basic published security guidelines.

    The kicker, of course, is making a good-faith effort at getting timely info out, having a sensible & legible definition of what your security requirements are going to be, etc. Someone needs to decide when something is problematic enough that blocking it from your network is the safest thing to do given the needs of the at-large community... this has to be balanced against being too heavy-handed.

  • Or just set up access lists on your routers to deny incoming port 23, 25, 80, etc. to all boxes you don't directly control.
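
    In Cisco terms that is a short extended access list -- the ACL number, addresses and interface below are placeholders, and "certified" boxes get explicit permits above the denies:

      ! applied inbound on the campus border interface
      access-list 110 permit tcp any host 10.1.1.5 eq 25       ! the one certified mail hub
      access-list 110 deny   tcp any 10.1.0.0 0.0.255.255 eq 23
      access-list 110 deny   tcp any 10.1.0.0 0.0.255.255 eq 25
      access-list 110 deny   tcp any 10.1.0.0 0.0.255.255 eq 80
      access-list 110 permit ip any any                        ! let everything else through
      !
      interface Serial0
       ip access-group 110 in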
  • First step, a policy. 'nuff said.

    Second, ipmasq, forwarding, firewalling and proxies will work, but will also cause major bandwidth restriction when you're dealing with such a large number of users. Consider moving systems that do not require outside network access (beyond the popular WWW and e-mail, I mean) to a private intranet in a secure location. Then you don't have to worry about the world using those exploits.

    Third, remember you still have twenty thousand users. I'll bet you that at least one of them wants to dabble in hacking. With that large a system you also have to protect against your own people. Look up Kerberos or other WAN login/security systems.

    The problem that you face is a matter of convincing the individual system administrators that security is necessary. Publish your own internal security advisory list describing exploits, and e-mail the owners of the systems that have them. If you show them the importance, then they will listen.

    Just my $0.02

    ClarkBar :)
  • Well, for most of the university resource sharing they use Kerberos, and who cares if some Windows users have their machine nuked? It's only their personal box. I know that the physics department uses ssh a whole lot.

    I believe that our network was set up so that no computer trusts another (except maybe the Kerberos servers).

    -- A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."

  • by rhaig ( 24891 ) <rhaig@acm.org> on Thursday March 25, 1999 @06:39AM (#1963212) Homepage
    It sounds like there is going to have to be a policy set: "all systems will comply with the security guidelines outlined at (some URL) or they will not be allowed on the network."

    Once you get the guidelines set, implement some detection measures (the easy part, as you put it) and some automated notification. After some number of warnings (say 3 in as many weeks), just filter all their packets at the router (based on their MAC address).

    Yes, it wouldn't take much to change your MAC address, but then they've intentionally circumvented policy, and that, I'm sure, is covered by some other policy with its own punishment.

  • I am also in a university setting, and a similar thing recently happened. Our network had over 100 machines which allowed spam relaying. The owners of these machines were given instructions twice, and if they still had not complied, their network ports were deactivated until they could show that they had taken the required steps.

    Also, monthly meetings are scheduled where the university admins show others how to fix common security holes. If a machine is shown to have these problems, the owners are informed via email about the meetings, and if they do not attend, or at least do not give a reason for not attending, their network ports are deactivated.

    It seems that whenever a port is turned off, the user will generally fix the problem within a few hours, even if it means using another computer to download the latest version or patch and using Zip disks to transfer it to the machine in question.
  • And you could mirror the /updates directory of the most-used distros on your site (and even the MS Service Packs, if the license allows it), then mandate that autorpm[*] (or similar tools) be launched at least weekly (better: nightly *and* at reboot).
    With a quick perl script to digest the FTP mirror's logs (combined with a ping, so as not to fire upon people whose machines were switched off during the night -- sketched below), this is quite enforceable, and at least partially ensures no one runs a too-old version of sendmail or tcpd.

    -- Cyrille

    [*] I use autorpm http://www.kaybee.org/~kirk/html/linux.html, but there are plenty of them for the various package formats.
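
    A rough sketch of that enforcement script (in shell rather than perl); it assumes a wu-ftpd-style xferlog on the mirror and a hypothetical flat file of registered client IPs:

      #!/bin/sh
      # List registered machines that answer ping but have not fetched
      # anything from /updates lately.  The field positions assume a
      # standard wu-ftpd xferlog; adjust for your FTP daemon.
      awk '$9 ~ /^\/updates\// { print $7 }' /var/log/xferlog | sort -u > /tmp/seen
      while read ip; do
          grep -qx "$ip" /tmp/seen && continue           # it updated -- fine
          ping -c 1 "$ip" > /dev/null 2>&1 || continue   # switched off -- skip for now
          echo "$ip has not pulled updates recently"     # candidate for a nag mail
      done < /etc/registered-ips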
  • Soooo much agreed....

    Here at my uni (ENS Cachan, France), this is what we're doing: two routers, each with its own set of ACLs (yeah, we're wasting a quarter of a class C for that). One (a small Cisco) is owned by the school; the other (a four-NIC hack'n'trash Linux box running ipchains) is owned by us.

    Both implement various security blocks. For instance: NO SMB access whatsoever to the outside world; NFS forbidden as well; SMTP mandatorily goes through the one server, with the MX records set in a fascist way in the DNS and /etc/sendmail.cf built using the Jussieu Kit with the most paranoid anti-spam measures turned on (even if that means we lose all mail from poorly configured sites, and despite the authors of the Kit publicly saying some of these features are too much and should be withdrawn); no FTP or web serving except through the server, subject to quotas; etc.
    The probable next step, now that France has left Iraq, China and Iran alone in the club of countries which forbid encryption, is to lock out POP3, telnet & X access in favour of SSH pipes.

    (Oh yeah, none of this is really strong. One can always run httptunnel over a pipe pseudo-network device -- but that's deliberately circumventing the barriers... And these barriers are first here to protect the newbies from accidentally exporting their c:\WINDOWS\TOTO.PWL files.)

    There's also a tangle of legal documentation (the campus-wide Network Security Charter (NSC) each individual, lab or association has to sign in order to get even a simple login anywhere on the campus; the dorm-subnet NSC each member has to sign; specific security agreements between the association in charge of the dorm subnet and the uni; the nationwide Renater NSC; etc.).
    Finally, there are quite a few daemons running on the inner router/server; we for instance strictly forbid MAC address changes we were not warned about in advance (and we do pull the plug for that), etc.

    Yes, we have problems from time to time with people having trouble reading French and/or not willing to understand the rules, and I believe the guys now in charge are going to see quite a few incidents of the sort per year. But overall, having well-explained legalware signed by everyone (and spending a good deal of effort on pedagogy after the signature, to make clear the Charter is not just a piece of paper!), and explaining why some services are blocked, works quite well IMHO.



  • The CAF has a collection of academic computing policies here: http://www.eff.org/CAF/faq/policy.best.html [eff.org]
  • This is not restricted to Linux. Other operating systems can be victims and sources of attacks (and both if an intruder launches an attack from a victim). Specify the purposes and procedures, not the tools.
  • Just ask the Administration and the Legal Counsel about paying expenses to repair machines damaged by intruders, paying for bandwidth being used by outside intruders, and liability for spam, redirected attacks, and servers operated by outsiders.

    As others have said, first establish a policy. You cannot enforce what you do not have. Actually, you need separate Administrative and Academic policies. And probably one for the dorm buildings which is more lax (and which compensates for high bandwidth non-academic net use...although a time-sensitive throttle which allows more game bandwidth in evening "low-academic-demand" times might be enough).

    Then you can start with implementation details. An Internet firewall and exception procedure is obvious. As these are autonomous departments, each obviously has to be firewalled to protect all from errors within each. Everything else depends upon your policies and implementation.

  • Trust me on this -- no student wants to be caught with their pants down. Run a script that searches the network for security holes. If it finds one, send the offender the following message:

    "There is a security hole in your computer. If it is not fixed within 3 days all of your files will be transferred to a public server and made accessible to the entire student body. Futhermore, (add more scary stuff here). Access http://documentation.site to learn how to block this action."

    Then have faculty spread scary rumors about how
    - some guys got put in jail when it was discovered they were dealing drugs using their computers to track their business

    -some guys got mugged when their gay love letters were made public

    -any other extremely explosive/controversial subject tied to using computers

    The object here is to make security the 'fad'. The buzz will increase knowledge, and most people with knowledge (especially those in college) like to flaunt it. Eventually, "You're running POP3 and you don't even serve mail!?!" will be as derogatory as "You took a dump and didn't wipe? And then TOLD your girlfriend!?!"
  • MIT had the same problem. They called their solution Project Athena.

    This is where Kerberos came from.

    The idea is to keep your Kerberos master and your administrative systems & file servers in a locked room, preferably with a guard.

    Then your users can get to your resources only if they authenticate with Kerberos. Student-owned machines can be loaded with Athena.

    School-owned machines can be installed (or re-installed) with a button push and a 15 minute wait.

    No user files are allowed on workstations.

    etc.
    etc.
    etc.
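
    From the user's side the whole thing is a couple of commands (the realm below is MIT's; yours would differ):

      kinit jrandom@ATHENA.MIT.EDU   # prove who you are, obtain a ticket-granting ticket
      klist                          # list the tickets you now hold
      # kerberized telnet/ftp/rlogin then authenticate with those tickets,
      # so no reusable password crosses the wire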
  • Here at my university, they've been creating a network-specific distribution of Linux which is not only tuned to the network environment (tied in with AFS, able to access programs across the network, Kerberos, Zephyr, etc.) but is also configured to be as secure as possible. This is offered to anyone new to the OS who wants to give it a try.

    In addition, the Network Development group hands out configuration guidelines and suggestions to people who are bringing computers to campus, security guidelines included. Anyone who is detected not following the guidelines has their machine disconnected from the network for a given length of time (a few weeks to a year). If well enforced by a Computing Services group, this can be a very effective deterrent.
  • University usually = no money (for network maintenance and support.)

    Assuming that there is little enthusiasm among the overworked network people that do exist, all that is available to you are simple solutions.

    If there isn't already a firewall in place at your university then anyone on the Internet can attack any machine on the LAN. If this is true you've already lost the battle. Stopping users on your LAN from attacking each other is another problem.

    Firewall the incoming traffic (assumed to be present.)

    Set up ftp and WWW services outside the firewall to allow publishing of data.

    Install smart switches (Cisco Catalyst 5000 series, etc.) with port security. (If there's no money then there's no money, but this is just about mandatory today to keep from actually having to visit hubs and yank patch cables for offending parties.)

    Lock each port to the MAC address of the NIC behind it. Changing NICs locks down the port and requires a help desk call and an explanation as to why you are changing computers/NICs on that port.
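
    On a Catalyst, that lockdown is something like the following CatOS (module/port and MAC address are placeholders; exact syntax varies by software version). The first line locks port 3/1 to one NIC; the second shows how a violator's port gets shut off:

      set port security 3/1 enable 00-10-4b-aa-bb-cc
      set port disable 3/12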

    Log incidences of NICs being turned on in "promiscuous mode" and lock out those ports for at least a week to prove you mean business.

    Publish simple rules about what is not allowed on the network (IP spoofing, MAC spoofing, etc.)

    Don't worry about people's out-of-date OS software. Who cares anyway? They're not hurting anyone but themselves. They may need an old OS for some reason. Don't even try to track that sort of thing, it's meaningless for the good of the network - which is all you really should care about. Individuals must learn to take care of themselves. Obviously you can suggest how they can secure their computers, but enforcement is impossible and unnecessary.

    In summary:

    Firewall incoming traffic
    Implement remote port management tools
    WWW and ftp services available outside of the firewall
    Publish rules
    Lock out violators
    Let the users manage their own systems any way they choose.
  • There is no way you can control every box on a university network. Texas A&M uses their firewall to block all incoming connections on everything but port 80 and lets all outbound connections through. The goal is to limit outside access without imposing too many limitations on the users behind the firewall. If a system needs other connections to be let through, it must meet university security requirements and be certified by a sysadmin.

    Texas A&M uses the drawbridge firewall package. It can be found at http://drawbridge.tamu.edu [tamu.edu]
  • Have a meeting and invite the people managing the systems to contribute ideas. Nothing will cause more trouble than to tell people who are managing machines in such an environment that "they don't know what they are doing." If you actually take some time to learn the reasons why the problems you are seeing exist, you will learn that there are probably more skilled and resourceful people managing the machines than there are people who don't care about security and don't know what they are doing.

    As one anecdote, I was told my machines were insecure after a remote probe. It turns out the software used to probe the machines was interpreting the message from tcp_wrappers "You are not allowed to connect from IP www.xxx.yyy.zzz" as a successful login. The people running the probe were so sure that they were checking machines run by idiots, they didn't even look at the logs themselves to check what was really happening.

  • by Bibo ( 111206 ) on Thursday March 25, 1999 @08:15AM (#1963225)
    I am sitting in the same admin seat, and after having read some of the comments here I got this idea:

    With a virtually endless number of systems on the network, one cannot possibly check each and every computer for security problems. It is way too time-consuming even for a large IT staff, and it will probably not be appreciated by people who feel you are sniffing around in their computers.

    Firewalls and blocked routers are a nice idea, but Professor A has a friend who must be able to telnet into box 123, and Professor B wants to... and and and -- you will end up being forced to punch a million holes in the firewall, rendering it useless.

    Rolling your own distribution is probably too complex a thing to go for. As soon as the distributor updates, some users will follow; your own distribution grows old and you soon run into new problems.

    So my idea (just an idea) is to create some kind of "ticket" which allows users to connect their computer to your network. Assume that you write a program or a set of scripts which runs a number of security checks on a computer, presenting the output as a code number -- call it the ticket. This ticket is submitted to a server which grants the sending machine access to the network, if the ticket shows that all tests were passed.

    The idea is to limit your work to writing a (say, monthly) version of the security check script. Let the program produce a ticket which is valid for a reasonable time span, and place it as a complete, runnable package on a public server. This way you MAKE THE USERS CHECK THEIR OWN COMPUTERS. No valid ticket, no IP number.

    As I said, it is a very raw idea, but I think it could work.
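
    To make the idea concrete, here is a deliberately tiny sketch of such a check script -- the checks, the hashing scheme and the registration address are all invented for illustration, and note that nothing stops a determined user from faking the output (this enforces diligence, not honesty):

      #!/bin/sh
      # Run a couple of trivial checks, fold the results and the month
      # into a hash, and mail that "ticket" to the registration server.
      result=""
      [ -s /etc/hosts.deny ] && result="${result}wrappers;"          # tcp_wrappers in use?
      grep -q '^imap' /etc/inetd.conf || result="${result}noimap;"   # IMAP not enabled?
      ticket=$(echo "$result $(hostname) $(date +%Y%m)" | md5sum | cut -d' ' -f1)
      echo "$ticket" | mail -s "ticket $(hostname)" netreg@example.edu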
