Linux Software

Words From Bastille Developer Jay Beale 87

How secure do you feel? Occams Razor points to "A great interview with [Jay Beale,] the Lead developer, about the Linux Bastille project." Beale talks about the direction that Bastille has taken, and seems fairly pragmatic about the Linux security model and computer security in general. A nugget: "... to fully secure a system, you really have to grind it into dust, scatter the pieces to the wind, and hope that Entropy does [its] part. Since you can't do this, you make tradeoffs."
This discussion has been archived. No new comments can be posted.

Words From Bastille Developer Jay Beale

Comments Filter:
  • by Anonymous Coward
    I have to take issue with your idea that NT boxes are more difficult to secure due to their newness. As a recent graduate of my corporation's MCSE program, I not only received a 75-cent/hour raise, but also learned everything one needs to know about securing my NT box. For one thing, always make sure to have the latest service packs installed. Second, check bugtraq, and look at everything on the front page. These are the exploits you need to watch, as everyone is trying them (never mind the older ones; hackers get no joy from yesterday's exploits). Subscribe to a good impartial online publication like ZD-Net, make sure to read all of their reviews, and purchase any products they recommend.

    Simple, efficient, and 100% Microsoft certified.
    -Fred Persec
  • by Anonymous Coward
    It's ok, the reason /. shows your user# so prominently is because they want to encourage an air of elitism here, where Bruce Perens' comments are automatically weighed in with a higher karma than another unfortunately named bruce, who hails from Australia and even put a period on his name so people could tell the difference. If your user ID isn't lower than, say, 140k, no one will want to listen to you. They will call you a troll for no reason, ostracize you, refuse to answer your comments or questions, responding only with inane shit like the person who responded to you did. I recommend advogato.com, where you can trust the people because everyone there is subject to a full background check before posting.

  • by Anonymous Coward

    I'd just like to wibble a little about my experience, as a developer in a corporate environment, with watching boxes get secured. I'm afraid I've had to post this anonymously, as I don't want to give away any information about which corporation I'm talking about (moderators: AC's need lovin' too...)

    I've seen our architecture/support/security guys lock down NT and Unix boxes (both of various flavours). What's struck me as the major difference between the two is the level of experience they've brought to the job. It seems to me that Unix is a much more known quantity; people *know* how you lock down a Unix box, they've been doing it for 15 years. NT boxes, OTOH, are relatively new technology. No-one really has a big-picture overview of what you've got to do to them. This leads to all sorts of mistakes (I want to change permissions on this directory...of course that means override all subdirectories) - and it's difficult to tell when you've finished.

    The other big problem that you face is reversibility --- if someone makes a security change, can you undo it without rebuilding the box? If not, what will that stop you doing in future --- all software that goes on web boxes will need upgrading, and if that upgrade, eg, involves a new DLL & registry editing, can we do it? [insert analogous Unix-type question here]? Is the technology described in the article reversible? It does say (ominously) "You see, to secure a system, you'll have to remove some functionality" ... how closely defined can you make "some" in that remark?

    Just my $0.02

  • More opportunity. Trojan horses are easier to plant against a user account, as is exploiting problems with pine or lynx and the like. Not to mention the much higher likelihood that you can sniff a user account's password, or that if you crack a password file, odds are good that the users will have chosen weaker passwords than root.

    How about the recent problems in BitchX (they made me glad that I had been running BitchX in a well-supervised chroot for quite a while)? Those would also get you a user account. (A rough chroot setup is sketched at the end of this comment.)

    etc etc etc
    ----------------------------
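    A minimal sketch of the kind of chroot jail mentioned above; the jail path is illustrative, and a real jail usually needs a few extras (/dev/null, terminfo, an unprivileged account to run the client under):

        #!/bin/sh
        # Build a bare-bones chroot jail for an IRC client (illustrative paths).
        JAIL=/home/jail
        BIN=/usr/bin/BitchX

        mkdir -p $JAIL/bin $JAIL/lib $JAIL/etc $JAIL/dev

        # Copy the binary, every shared library it links against, and the loader.
        cp $BIN $JAIL/bin/
        for lib in `ldd $BIN | awk '$3 ~ /^\// {print $3}'`; do
            cp $lib $JAIL/lib/
        done
        cp /lib/ld-linux.so.2 $JAIL/lib/

        # Minimal /etc so name resolution works inside the jail.
        cp /etc/resolv.conf /etc/hosts $JAIL/etc/

        # Start the client confined to the jail (better still: as an unprivileged uid).
        chroot $JAIL /bin/BitchX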
  • Linux and OpenBSD are different operating systems. Could it be possible that there is a piece of hw or sw that Linux supports and OpenBSD does not? On the hardware side, note, for example, that OpenBSD does not support SMP. On the software side, some may prefer ipchains over OpenBSD's ipfilter. CIPE does not exist for OpenBSD either.

  • Yes, OpenBSD has a good firewall, but the parent to which I responded said: "I don't understand why anyone would use something else in a situation in which high-security is needed (which is any internet site these days)." Now, is every Internet site a firewall, or are there other sites that, say, are webservers, mail servers, ftp servers, etc.? Would you want SMP for any of these?

    Could you give an example of an exploit that cannot be blocked by a plain ol' packet filter but can be blocked by a stateful filter?

    Re IPsec: there are standards and then there are widely used/implemented standards. Until recently Cisco 675 routers did not support the protocol numbers of IPsec (50 and 51 I think). US West will most likely never release an update to its customers that do not have a version of CBOS with IPsec support. What are we to do? I also refer you to the related /. discussion about the immaturity of FreeS/WAN. CIPE is time-tested, production-quality, simple code.

  • I think basing a distro's merits on how quickly you can reinstall it is not quite right (I would even say bad). I was going to say that Slackware (my distro of choice) can easily be installed within 15 or 20 minutes (which it can), but you have to put in a few hours (or more accurately a few hundred hours) to get it just the way you want.

    So the easiest way to do it would be to get your Slackware system just the way you want it, burn your root ext2 image to CD (more complicated if you have more than one partition that you need to back up), and then when your hard drive dies, you just pop in tomsrtbt, repartition, dd the .ext2 from the CD to your new partition, and you're reinstalled in hopefully under five minutes (more if you have a slow CD-ROM). Bonus points for compressing the disk image.

    I think it would save you a lot of effort (unless you don't have a CDR), and you don't have to put up with all that Red Hat garbage. (The rough commands are sketched below.)
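    Roughly, the image trick above looks like this; device names, mount points and the image filename are illustrative, and the partition should be unmounted (e.g. boot from tomsrtbt) while you image it:

        #!/bin/sh
        # --- Backup: raw copy of the root partition, compressed, then burned to CD ---
        dd if=/dev/hda1 of=/mnt/spare/root.ext2 bs=4096
        gzip /mnt/spare/root.ext2

        # --- Restore: boot tomsrtbt, repartition, then pour the image back ---
        mount /dev/cdrom /cdrom
        gzip -dc /cdrom/root.ext2.gz | dd of=/dev/hda1 bs=4096
        e2fsck -f /dev/hda1        # sanity-check the restored filesystem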
  • Oh! That is really cool. I had honestly never thought of it that way. I'm sort of torn though: to make the perspective historically correct, you'd have to imply that all crackers are basically just grey-hats who are cracking just because they are pissed that you are stupid and want to prove it. (i.e., peasants trying to overthrow an evil government.) This probably isn't correct: it does describe a lot of hackers, but doesn't really cover the criminals.
    I can't really think of a better historical parallel, so I'll suggest Fort Knox in Goldfinger would be better: the bad guys are trying to take what you've got (or at least destroy it), while simultaneously embarrassing your government and exploiting the fact that the American Army is pretty incompetent. This casts the product formerly known as Bastille in the role of Bond, dashing in to save the day while screwing Pussy Galore. Oh, wait... guess the parallel isn't quite exact. Oh well...
    ~luge
  • Why name a security product after a fort whose only claim to fame is that it was stormed by a bunch of peasants?
    Seriously, it sounds like a cool product, but a) no debian yet so no help to me :( and b) really, a better name :) My suggestion would be Gibraltar, or maybe (once they get IDS set up) "Invading Russia in Winter."
    ~luge
  • Well, they were undoubtedly really *smelly* peasants. They may have even had pitchforks and torches. And foul breath. Don't underestimate the power of a Frenchman's funk.
    --
  • The idea is that if an outsider cracker manages to compromise a user account (which is much easier than getting root directly) you want to prevent them from then using internal exploits to gain root.

    Why would it be easier to compromise a user account? The only thing I can think of is the "you can't telnet in as root" thing, but I've completely disabled telnetd.
  • On my own personal system, where I'm the only user and I'm also root, I don't think it's very useful to prevent "normal users" from running programs like ifconfig, eject or userhelper. If my normal password is easy to crack, then my root password is probably just as easy to crack.
  • by Zagadka ( 6641 ) <zagadka@noSPAM.xenomachina.com> on Monday July 17, 2000 @01:06PM (#926389) Homepage
    I installed Bastille a few days ago. It's a great idea... a security "hardener" for Linux. There are a few things about it that kind of bugged me though.

    One thing that bugged me is the fact that it doesn't make it easy for you to choose what kind of security you're really looking for. For example, all I'm really concerned with on my home machine is network security. I don't want people connecting from a remote location and doing nasty things. On the other hand, I don't care about people who have physical access to the machine, because I have physical security to prevent that. Bastille ended up chmod'ing a bunch of executables so only root could use them. This ended up breaking numerous things, including the Helix updater. I couldn't even run ifconfig as a normal user after running Bastille. At least it generates pretty thorough logs, so I was able to undo the "damage".

    The other thing is that it doesn't do any checks of what's turned on in your kernel. I was pretty sure I didn't have the firewall support compiled in, so I was pretty surprised that Bastille didn't complain. Some investigation showed that the scripts it installed to secure the network connection were all failing because of this. This is especially dangerous, because without actively checking, some users will think their system has been secured when it really isn't.

    Over time, I'm sure Bastille will get better. In the meantime there are some quirks though, so be careful.
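    One quick way to check the kernel-support point above before trusting any ipchains-based hardening (this assumes a 2.2-era kernel and the ipchains userland tool):

        #!/bin/sh
        # If the kernel lacks firewalling, ipchains cannot even list its chains.
        if ipchains -L -n >/dev/null 2>&1; then
            echo "kernel firewall support present; the rules can take effect"
        else
            echo "WARNING: no kernel firewall support - those scripts are doing nothing"
        fi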
  • by Mr Z ( 6791 )
    (+1, Funny)

    --Joe
    --
  • Security is a tough thing for an OS to do. With any added security, some convenience is taken away. This is true for computers and for real-world establishments alike. I know grocery stores where all 50 and 100 dollar bills have to be signed by management; they want the added security that the money is OK, but it's extremely inconvenient. Let's hope they can get it right.
  • Not sure I fully understand what you are doing, but the central issue raised by the poster would be to make your info secure, not the OS. If the goons come and take your disk, they will have the tools to "inspect" your disk. Does your (and others') method stop anyone from mounting the various partitions using whatever means they might have? I'm curious and would like to find out more about this.

    As I have a beer in my hand right now I can properly reply, Cheers.

  • The reason why Bastille did a chmod on a bunch of executables is because it asked you if you wanted to do it. You then answered yes, do the chmod. As for the kernel, Bastille is meant to be run after installing RedHat Linux (note: the latest version of Bastille runs on non-virgin systems). This means having the stock RedHat kernel that comes with the distro. AFAIK, the firewall support is compiled in. I'm not totally certain of this b/c I always end up compiling my own (newer) version of the kernel than the one that is shipped out.

    Bastille has its quirks but securing a system is a non-trivial task.

    After saying all of this, I'll sheepishly admit that I didn't fully run Bastille on my systems. I went through the entire procedure but didn't execute the changes. I realized that most of them were already in place. Furthermore, some of the screen messages were kind of "funny" (curses, foiled again!). I wasn't sure that my answers were being properly handled. So rather than trust some mysterious script, I just made the few additional changes myself. Maybe this explains your mysterious chmods. I dunno.

  • by pen ( 7191 ) on Monday July 17, 2000 @12:57PM (#926394)
    Security is inversely proportional to convenience.

    --

  • I'm working on it via the loopback device. My current system is a two-stage bootloader. The initial stage gives you access to two accounts - root and stagetwo. Log in to stagetwo, su to root, and losetup each drive (the stage-two steps are sketched below). It uses initrd (initial ramdisk) to load a "fake" root filesystem. Once you've configured the kernel, lilo will then oblige and load up your loopback'd root filesystem. That's the theory anyway. So far I can do everything *but* the root filesystem. The reason is that the initrd docs are pretty vague.

    But there are people working on things like this.. I'm one of 'em.

    Cheers,

    ~ Signal 11
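    The "stage two" step described above boils down to something like the following; the device and file names are illustrative, and the commented-out line assumes a kernel and losetup patched for crypto loopback (e.g. the kerneli patches), which the stock tools do not have:

        #!/bin/sh
        # Associate a loop device with the file holding the real root, then mount it.
        losetup /dev/loop0 /stagetwo/root.img
        # losetup -e aes /dev/loop0 /stagetwo/root.img   # crypto loopback: prompts for a passphrase

        mkdir -p /mnt/realroot
        mount -t ext2 /dev/loop0 /mnt/realroot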

  • Close. I'm using initrd though, which means I don't need to worry about my "fake" root.. the remounting is taken care of by the kernel then. All I need to do is tell the kernel that /dev/loop0 is where it should mount root from. :)

    That's the theory anyway.. the practice is that creating an initrd.gz image with SSH, login/authentication capabilities (via PAM) and the necessary tools to boot up a basic mandrake 7 system is about 20MB. Bloatware (ugh). All that needs to go into RAM. Even worse.. the linux kernel only defaults to creating 4MB of ramdisk.. so as soon as it accesses anything past 4MB.. *BOOM!* fscking piece of #$@! anyway.. I'll write up a HOWTO once it's done. (A boot-loader tweak for the ramdisk limit is sketched below.)
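    For the 4MB limit, the usual knob is the ramdisk_size= kernel parameter (in kilobytes), passed from the boot loader. A hedged lilo.conf sketch; the label, paths and size are illustrative:

        #!/bin/sh
        # Add a boot stanza that loads the big initrd and raises the ramdisk limit,
        # then re-run lilo so the new stanza takes effect.
        {
            echo 'image=/boot/vmlinuz'
            echo '    label=cryptoboot'
            echo '    initrd=/boot/initrd.gz'
            echo '    append="ramdisk_size=24576"'
            echo '    read-only'
        } >> /etc/lilo.conf
        lilo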

  • OpenBSD natively supports IPSec out of the box, which is the standard protocol for creating encrypted VPNs (Win2K, FreeBSD, Linux, Solaris, etc. all support IPSec in one way or another). CIPE, on the other hand, is Linux only.

    SMP is not useful in a firewall/gateway device, which does not require a lot of horsepower. You might think encryption does, but OpenBSD supports some hardware encryption cards. Granted, if one needs a super high performance Internet server they should use FreeBSD or Linux, behind an OpenBSD firewall :-)

    Last, but not least, IPFilter is a true stateful firewall package, whilst ipchains is not, by default. If an admin doesn't know why this is important, they shouldn't be in the business of securing a box.

  • Now, is every Internet site a firewall, or are there other sites, that, say, are webservers, mail servers, ftp servers, etc? Would you want SMP for any of these?

    Of course you'd want SMP for extremely high-traffic servers, but you'd also want them behind a firewall, hopefully running OpenBSD. Cdrom.com, the busiest ftp site on the Net, has a single-processor machine, running FreeBSD, which I think says more about the importance of the infrastructure surrounding a machine (the internet connection, the RAM, the processor cache) than the number of processors. I'm not aware if it's behind any kind of firewall or not.

    Could you give an example of an exploit that cannot be blocked by a plain ol packet filter that a stateful filter can?

    If I remember correctly, the main reason for keeping state on a TCP session is to prevent session hijacking and/or spoofing. The firewall can remember which packets belong to sessions that are valid (i.e. originating from internal clients to external hosts (outbound connections only), if that is your policy). A stateful firewall can also help in DoS attacks, because it can distinguish the 'valid' SYN packets from the invalid ones used in the attack. This is why, after slashdot.org was DoSed, they installed a bridging FreeBSD server running IPFilter to try to prevent this scenario in the future. (A quick rule sketch follows below.)
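    To make the contrast concrete, a rough sketch, assuming ipchains on a Linux 2.2 box and IPFilter on something like OpenBSD (shown side by side only for comparison; the rules are illustrative, not a complete policy):

        # Stateless ipchains: you can only describe packets, not connections, so a
        # common trick is "accept any TCP packet that is not a new SYN":
        ipchains -P input DENY
        ipchains -A input -p tcp ! -y -j ACCEPT
        # ...but an attacker can craft bare ACK packets that match this rule even
        # though they belong to no real session (ACK scans, some hijacking tricks).

        # Stateful IPFilter: replies only get in if the firewall saw the connection
        # being opened. Illustrative /etc/ipf.rules contents, then load them:
        {
            echo 'block in all'
            echo 'pass out quick proto tcp from any to any keep state'
        } > /etc/ipf.rules
        ipf -Fa -f /etc/ipf.rules    # flush old rules, load the stateful set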

  • Comment removed based on user account deletion
  • Why name a security product after a fort whose only claim to fame is that it was stormed by a bunch of peasants?

    Well, the Bastille is a fortress, you see. And, sure, the building was stormed --- but the problem wasn't the building, it was the administration.

    It's a joke, you see.

  • I did submit the story on Bastille Day, when it was posted.

    My submission to Slashdot about that article was rejected.

  • Every change Bastille makes can (now) be reversed. This (very important) functionality was added in 1.1.
  • Why name a security product after a fort whose only claim to fame is that it was stormed by a bunch of peasants?

    Actually, I think they are on record as stating the name Bastille was chosen because many believe that the Bastille was stormed due to faulty administration. Thus the parallel to crack-able boxen.

    I thought it was rather clever. Oh well.

  • ...maybe because OpenBSD does not run on everything that Linux runs on? Maybe because you don't have enough hardware for both a server and a firewall, and you're running both on the same computer? But security should be the default on any new Linux system, and let the user knowingly install each network daemon on the computer...

    Actually I think this is becoming a trend anyway, as RedHat 6.2 Workstation (according to rumours, I never installed it) won't install inetd by default. (A sketch of pruning inetd by hand follows below.)
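    In practice "installing each daemon knowingly" mostly means pruning /etc/inetd.conf yourself; a rough sketch (the service names are just the usual suspects):

        #!/bin/sh
        # See what inetd currently has switched on...
        grep -v '^#' /etc/inetd.conf
        # ...edit the file and comment out anything you did not consciously ask for
        # (telnet, ftp, finger, shell, login, ...), then make inetd re-read it:
        killall -HUP inetd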
  • >>... also learned everything one needs to know about securing my NT box.
    Yeah, sure, right. Nothing to worry about. ... until ... ;)
  • ...Then he would have a secure system.
    OpenBSD - Armed to the gills. [obenbsd.org]
  • You can read BUGTRAQ all you want, but you still have to wait for M$ to release the bugfix.
  • all the students do it anyway for free. its not a big deal.
  • redhat IS used on a lot of servers. i use it on mine since i know that if i have a hard disk die at 3am i can reinstall and be up by 4am (and i was...did it once). no other distro can say that. plus redhats support is good, everyone uses it so user level support is good, rpm is the easiest package manager ive ever used (and it beats dpkg -- although not apt...but then apt requires libc++) and its simple to lock down. its also incredibly standardised and is closest to the LSB.
    debian/slackware are mainly for the hard core geek audience..for server admins who have to struggle with multiple OSes ease of use/install is a MUST (i must be able to download an iso image if i dont have one, it must have all security fixes upto current, i must be able to install over HTTP/NFS/FTP/CDROM at least). i also run debian and slackware but the majority of my servers are redhat with ipchains/custom kernels and stackguard compiled binaries...and a few secret extras.
  • as an admin i have to take issue with openBSD and its claims. OpenBSD recently had a DHCP root hole (remote). their claim still stands since they said that networking was not installed and turned on in the default install. this is a fairly ridiculous claim.
    Also, openBSD's limited hardware support, no SMP support (wha? who uses a single CPU server anymore? even my desktop has two) and confusing install (i tried it once - it worked but it was painful) is not going to help push it.
    you want to promote openbsd? create a redhat-like install for it so it's secure out of the box AND has useful stuff/supports modern hardware (think quad xeons) AND its installer should be secure by default after starting all the common networking daemons. then you guys can promote openbsd all you want..hell..i'll buy a coupla hundred copies.
  • *shrug* why bother when redhat does what i need ? theres no real advantage to using slackware - it just increases the workload. when you have over 50 different machine configurations to support and over 600 machines you appreciate a quick install and setup. its hardly "bad" in any way.
  • by thogard ( 43403 ) on Monday July 17, 2000 @03:20PM (#926412) Homepage
    You used CERT to find out where the holes are. CERT is years behind bugtraq [securityfocus.com]
  • From the article

    ... as Bastille does more and more, it has to ask a lot more questions! ... but it annoys some users who just want a quick fix. ... we're making "One Shot" configurations, where they can choose a sample configuration that matches their own and deploy that. While they miss a crucial part of securing the system (Secure the Admin!) they still get a safer system...

    Secure the Admin means to educate the Admin on the tradeoffs between security and ease of use. As Pen said in the original post, Security is inversely proportional to convenience. Bastille takes the next step, and tries to educate as it undoes what the distros have done. It's easy to make a machine very secure, but you end up with a box nobody can use. With an under-educated Admin, it can be very tough to know what to turn on or off, and why, and how.

    the AC
  • First step to securing a system is to secure the admin.

    Then go to work securing the system.

    It's a motto I've been living by, but it can be very frustrating at times when all someone wants is a big security switch. I tell them it's the one marked [| O], the | means insecure, the O means Oversecure.

    the AC
  • This method would require that you type in a password to mount your root/usr/whatever partitions. No password, and the partition is just random junk.

    This is already easy to do for most partitions, but not root. What sig11's trying to do is to make a boot sequence that mounts a temporary partition, asks you for the password and then remounts the encrypted root. This is kinda tricky, as it requires you to atomically ('cause you always need a root) swap root partition. I looked into this as well a few months ago, and as far as I could figure, I'd need to hack the kernel to make a swap_root_fs call or something.

    Too much hassle. I found an encrypted home-dir package which was a 95% solution for 5% effort.

    The real trick in all of these cases is to avoid getting the password swapped out to disk. Encrypting the swap can slow things down a lot.
  • by casret ( 64258 ) on Monday July 17, 2000 @01:32PM (#926416)
    That is a dangerous assumption. Nowadays, with the growth of broadband and always-on connections, it's important that all machines are secure.

    In fact I would say that since desktop machines aren't administered as well or as closely as server machines, it's even more necessary to have easy ways to secure them.

    Many insecure desktop machines are used to cover the tracks of crackers, as well as to launch DDoS attacks.
  • by zorgon ( 66258 ) on Monday July 17, 2000 @01:28PM (#926417) Homepage Journal
    Good point. In part you are describing a minor problem with documentation. Poor or less-than-complete (to use less judgmental terms) documentation is even now something that is not addressed well by developers, both in the closed and open source models. I'd think Satan will be handing out ice skates before Joe Hacker will say "Naw, I don't feel like coding on this project, I'll write documentation instead." (note to knee-jerk responders -- not everyone who uses an open source product can (or wants to) read the code to figure out what it's supposed to do). Of course, now nobody is expected to read what documentation is present because it goes without saying that it's not what you need to know ... sigh...

    WWJD -- What Would Jimi Do?

  • where he discusses the fact that he had to release the contents of a private mailing list due to a Netscape legal case

    Luckily this ought to be a thing of the past. Take a look at the StegFS Filesystem [cam.ac.uk]. It's all about the plausible-deniability aspect when faced with somebody trying to get access to your encrypted data. With this you can say you have no / no more data without them being able to disprove you (as opposed to having obvious encrypted files lying around). The worst thing that could happen is the other party wiping your drive.

    cheers,
    Roland

  • Well... also you can't ssh in as root (or at least it should be set to that).

    Okay, for example if you have a service running like a web server or an ftp server, they probably won't be running as root (at least they shouldn't be). There might be a bug in one of those services that allows you to execute commands on the machine (this is generally what happens in the buffer overflow exploits you hear about all the time). So now the attacker can most likely get himself access to your machine with whatever privileges the compromised service had, not root. (Two quick examples are sketched below.)
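    Two rough examples of the points above; the file location is the usual OpenSSH default and may differ per distro:

        #!/bin/sh
        # 1. Refuse direct root logins over ssh: edit sshd_config so it contains
        #    "PermitRootLogin no", then restart sshd. Check the current setting with:
        grep -i PermitRootLogin /etc/ssh/sshd_config

        # 2. See which uid a network daemon actually runs as -- a compromised daemon
        #    only hands an attacker that account, not root:
        ps aux | grep '[h]ttpd' | head -3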

  • by normiep ( 68432 ) <pblaer@panix.com> on Monday July 17, 2000 @01:17PM (#926420)

    Bastille ended up chmod'ing a bunch of executables so only root could use them. This ended up breaking numerous things, including the Helix updater. I couldn't even run ifconfig as a normal user after running Bastille. At least it generates pretty thorough logs, so I was able to undo the "damage".

    This is all part of network security though. The purpose of doing this kind of "damage" isn't just protecting you from local users. The idea is that if an outsider cracker manages to compromise a user account (which is much easier than getting root directly) you want to prevent them from then using internal exploits to gain root.

  • ...that it's a little odd to name a security enhancement after a fortress that was successfully stormed (admittedly by the good guys). Though not an exact analogy, it's kind of like a Texan naming their security system The Alamo. Which reminds me -- Go Armstrong!

    Anyway, at least they didn't name it the "Maginot Line" or "Dien Bien Phu".
  • "The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards--and even then I have my doubts."
    (Eugene H. Spafford, Assoc. Prof. CS, Purdue Univ.)
  • I thought the Bastille developer was guillotined during the French Revolution!
  • Why would you name security software after a building that was raided by French peasants?
  • Who thought that this was troll? You will be destroyed in meta moderation.
  • A very important thing to understand is that you are only as secure as the neighboring machines you allow logins from.

    Since most Unix machines will allow telnet access from any IP address, and many other machines allow FTP or other filesharing access from any address, then you are basically as secure as the weakest machine one of your friends or users happens to log into you from, which could be anywhere and is not under your control unless you make special effort.

    The reason for this is that if a cracker (always use the correct terminology...) should break into some less-secure machine than yours, he could install a network sniffer or keystroke recorder on it that captures your buddy's password the next time he logs into your supposedly secure machine from the compromised one.

    Poof goes your carefully secured fortress. He doesn't have to use any careful exploit at all to crack your machine. He just logs in using your buddy's username and password.

    Better hope your machine is tightened down against root cracks, and you'd better hope your buddy wasn't logging in using the root password.

    One thing for sure - don't ever log in anywhere as root, or do an su, if you're telnetted via an intermediate machine, as there could be a sniffer or recorder running on that machine.

    This exploit may be even easier than you think - one of the original versions of telnet could be compiled with a debugging flag that, if set to true, would dribble all the keystrokes out to a file. All the hacker had to do was gain write access to the telnet executable file and set the value of the global debugging flag from 0 to 1, and he'd get the keystrokes of everyone who ever used telnet.

    Me? I don't ever use telnet. I use ssh (secure shell). The only external site I ever log into is my web hosting service. I think a minimum requirement of a web hosting service these days is that they provide secure shell access to their customers - mine does, it is Seagull Networks [seagull.net]. Does anyone know any others?

    Also don't transfer files with FTP - passwords are sent in the clear and crackers can grab them (and your files) with sniffers. Use scp (secure copy) instead; examples follow below.

    Tilting at Windmills for a Better Tomorrow
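    The drop-in replacements look like this; hostnames and paths are illustrative:

        ssh user@shell.example.com                    # instead of: telnet shell.example.com
        scp report.txt user@shell.example.com:        # instead of: ftp ... put report.txt
        scp user@shell.example.com:logs/access_log .  # pull a file back down, encrypted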
  • by goingware ( 85213 ) on Monday July 17, 2000 @01:50PM (#926427) Homepage
    When I was at Apple [apple.com] in 1990 I was raising hell about security holes in A/UX. The thing shipped with no-password guest access enabled by default, and I could become root on the thing in about 30 seconds after a bit of practice if I could log in at all.

    While my complaints about A/UX fell on deaf ears in the A/UX team, the people who maintained the Unix machines for Apple employees to use (yes, some Apple employees do use Unix, they even used to have a Cray running Unicos) invited me to play capture /flag.

    In the root directory of some of the multiuser machines was a file named flag that was not writeable. The objective was to write into it and then tell the admins how you did it.

    When I started, the current contents were "such and such a department rules". I guess I would have written "Mike was here" or something.

    While I was able to crack A/UX 2.0 every which way, I never could capture /flag.

    My understanding is the security holes got fixed in A/UX 3.0. It's a dead product now.

    The way I found the security holes was to start methodically working through the CERT advisories [cert.org] and checking which ones A/UX was not compliant with. When I'd find one and they'd refuse to fix it, I'd file a bug report and send some emails around with explicit details of how you can break root because they weren't listening to CERT.

    If you administrate a computer on a network, you should go through the CERT advisories yourself and tighten up your system.

  • There are some folks working pretty actively on adding Slackware support, and I think Bastille will have Mandrake 7.x support before too long. The biggest reason Red Hat is "the main target" is because that's what most of the developers seem to use, and being a volunteer project, there isn't a big ol' lab where Bastille hackers can play with all the distros (and architectures!) they might like to support. Anyone with a bit of experience and Perl knowledge who wants to help extend Bastille to support other distros would be welcomed!
  • Granted, if one needs a super high performance Internet server they should use FreeBSD or Linux, behind an OpenBSD firewall :-)

    That's why Bastille Linux is being written and maintained. The OpenBSD firewall can't stop what it has been configured to let through -- all the boxes in the DMZ need to be locked down as tight as a drum, regardless of OS. Bastille is for those Linux boxes that, for whatever reason, have to exist on the internet, be they firewalls, LDAP servers, VPN machines, or web servers.

    Aetius
  • Hmm... picking weak passwords is the automatic one that comes to mind. I've seen systems where cgi-bin programs are run as the user in whose directory they live, any other programs less experienced users may be induced to run, etc.

    The best defense against these (other than running John the Ripper or what not to force your users to use good passwords and locking down other things) is to minimize what the attacker can do.
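    A rough sketch of the kind of password audit the poster mentions (run it only against machines you are responsible for); the unshadow utility ships with John the Ripper, and the temp path is illustrative:

        #!/bin/sh
        # Merge passwd and shadow into the format John expects, then let it run.
        unshadow /etc/passwd /etc/shadow > /root/pw.tmp
        john /root/pw.tmp
        john -show /root/pw.tmp     # list the accounts already cracked
        rm /root/pw.tmp             # don't leave the merged file lying around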
  • Yes, but the point is, as a bastion of security, the Bastille was UNsuccessful. Hence, a poor model for or symbol of network security.
  • You are protected, in the U.S. at least though laws in other countries differ, from being compelled to bear witness against or present evidence against yourself. You would have to decrypt said data if it would incriminate other people (not including a spouse) or be of use in criminal matters not pertaining to you, but if decrypting that data would provide evidence against you then you are constitutionally protected from being compelled to do so. The only exception is that you could be compelled to decrypt the data if you were guaranteed immunity from prosecution for the offences the decrypted data implicates you in, in which case you wouldn't have to worry about doing it since you'd be free and clear of all related charges. There are many alarmists who keep crowing about how you can be compelled to decrypt your data, but in the U.S. this is not at all the case. The U.K. I fear is a different matter entirely, but I defy anyone to find a single U.S. case in which someone was held in contempt of Court for not decrypting data that could implicate him, who was not given a guarantee of immunity to do so. You won't be able to find one.
  • Mitnick was never compelled to decrypt the contents of his drive. Law enforcement is keeping the drive *because* he refuses to decrypt it, but he's not in jail for contempt of Court for not doing so. He can't be compelled to decrypt it because the contents may incriminate him, and to force him to do so would violate his Fifth Amendment rights.

    Now, whether or not law enforcement can keep the drive(s) indefinitely is a very gray area with little precedent. As of now the matter hasn't been appealed as it could be, and for good reason: Mitnick is still on parole, unable to be near computers and such things anyway. In his place, I wouldn't want to anger the people who fucked me over for so long and who can still put me back in jail for minor things. But I hope that when Kevin is truly free and no longer on parole, that he challenges this and wins--but I wouldn't count on it, since the value of those old computer components is negligible even now, since it's been so long, that a fight over them wouldn't be very financially justified. At any rate, I'd rather forfeit a couple of hard disks than my freedom, so it's not such a bad deal.
  • I'm always disappointed that there's not a greater effort to provide data security through an easy-to-implement optional encrypted file system. Yes, you can get the patch from Kerneli.org to accomplish this (a rough usage sketch appears at the end of this comment), but this really isn't enough. The first line in the Howto on kerneli is: "This process requires the kernel source code, knowledge of compiling this code, and a lot of patience."

    There should be a distribution--and maybe there is, can anyone point us to it?--which offers the encrypting file system as an option during install. The install process for the friendlier distros already has all the install options laid out in fairly easy-to-use dialogs and whatnot, but it would go a long way toward ensuring privacy if an encrypting file system were a standard install option in a big distro. With the relaxation of crypto export regulations, it's becoming increasingly possible for the big US Linux companies to do this, and of course most non-US distros could have been doing it already.

    The fact is, most *nix OSes are already much more secure from cracking exploits and viruses than Windows can ever dream of being; something like Bastille is just icing on the cake. But the next step in security, and in ensuring our privacy, is having an encrypted file system as an option in widely used distros, or in widely used/easy to apply add-on products. A standard complaint when someone suggests this is the increased overhead--but with modern microprocessors, the overhead is barely noticeable--I'd know because I use encrypted file systems in Windows on a measly old K6-2 400, with overhead barely visible at all. Just try using an efs on a processor made in the last 2 years, and you'll see it's pretty snappy. Running programs from encrypted drives does sometimes have noticeable, but not deadly, overhead, but accessing data stored on those drives (logs, writings, multimedia files, etc.) is hardly slower than accessing it on non-encrypted drives. And this is my experience under Windows, I can only imagine that under Linux performance would be far superior.

    Just an attempt to point out that there's more than one issue in security; securing from crackers is far more well addressed, in almost all operating environments, than security for stored data. These days the U.S. and U.K. governments, and many others, are cracking down on expression of unpopular ideas and distribution of IP-infringing source and executables, and if they come to search your computer and find an encrypted file system, you're better off than if they find that copy of a DeCSS sort of proggie you wrote, or that article you thought you published anonymously but they managed to trace back to you, or the opinion you expressed about a company which has now decided to sue you for libel, or that copy of the webpage you uploaded which calls school officials and classmates the misguided bastards they really are.
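    For the curious, a hedged sketch of what the kerneli-style approach looks like by hand; it assumes a kernel built with the crypto/loopback patch and a matching patched losetup (the cipher name, file name and size are illustrative, and a stock losetup has no -e option):

        #!/bin/sh
        # Create a 64 MB container file, bind it to a loop device with encryption,
        # make a filesystem on it once, and mount it like any other volume.
        dd if=/dev/urandom of=/home/secret.img bs=1024 count=65536
        losetup -e twofish /dev/loop1 /home/secret.img   # prompts for a passphrase
        mke2fs /dev/loop1                                # first time only
        mkdir -p /mnt/secret
        mount -t ext2 /dev/loop1 /mnt/secret

        # ...work with /mnt/secret...

        umount /mnt/secret
        losetup -d /dev/loop1        # detach; the on-disk data stays encrypted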
  • If the linuxsecurity.com team had a slightly better sense of timing, they could have released this story a few days ago when it was Bastille Day.

    I'm sure it's all just a ploy to get some French developers developing with nationalist pride!

  • What I mean by the subject line: don't think that ssh in itself guarantees security. Even with ssh (or scp for that matter), there's always a danger of a man-in-the-middle attack (as ssh itself calls it). A compromised machine between you and the host you're connecting to could easily pretend to be that host, sending you a fake ssh key, and forwarding your packets through another connection to the real host. This way, your communications are encrypted, but only to be decrypted at the compromised machine and re-encrypted and forwarded to their real destination.

    Of course, this could be prevented if you know the key of the host you're connecting to beforehand. But how many of us actually verify the authenticity of the host's key the first time we connect to that host? And some hosts do change their keys once in a while (perhaps a reboot, an sshd upgrade, etc.) -- although we *could* verify the new key, I doubt in practice most people bother to.

    Anyway, my point is, don't ever fall into the trap that human beings are so prone to: equating security with some entity, like unconsciously equating "ssh" with "secure connection". To be truly secure, you have to be careful at every step. Unfortunately, this is often impractical... but at least, be aware of the vulnerable parts of the chain so that at least you know what could bite you (or what bit you, if something happened). Never, ever, fall into the trap of thinking your system is secure.

    A system is never 100% secure; so, it's better be aware of its vulnerabilities than to blindly trust it, and then look around bewildered when something bites you.


    ---
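    One way to avoid blindly trusting that first connection is to check the fingerprint out of band; a rough sketch (host key filenames vary between ssh versions and installations):

        # On the server, over the console or some already-trusted channel:
        ssh-keygen -l -f /etc/ssh/ssh_host_key.pub    # prints the host key fingerprint

        # On the client, compare that fingerprint with the one ssh displays the
        # first time it asks whether to continue connecting; only accept a match.
        ssh shell.example.com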
  • ...is that it tries to educate the user/admin as it goes through and makes changes.

    I gave bastille to a friend that wanted to learn about security, and he was able to secure his box, get answers for some of his questions, and overall have a better understanding of how security on Linux boxes should work.

    Now if they would only port it to slackware...

    Jay, keep up the good work!

    --

  • Even with ssh (or scp for that matter), there's always a danger of a man-in-the-middle attack (as ssh itself calls it). A compromised machine between you and the host you're connecting to could easily pretend to be that host, sending you a fake ssh key, and forwarding your packets through another connection to the real host.
    Yes and no. This is what the host key stuff is all about. The man-in-the-middle cannot perform the same host key negotiation as the real destination host, not having the same private key, so it can't pretend to be the real thing. This is specifically addressed in the ssh documentation: "The second (and primary) authentication method is the rhosts or hosts.equiv method combined with RSA-based host authentication...[t]his authentication method closes security holes due to IP spoofing, DNS spoofing and routing spoofing." and "Ssh automatically maintains and checks a database containing RSA-based identifications for all hosts it has ever been used with...this mechanism is to prevent man-in-the-middle attacks which could otherwise be used to circumvent the encryption."
    Of course, this could be prevented if you know the key of the host you're connecting to beforehand. But how many of us actually verify the authenticity of the host's key the first time we connect to that host? And some hosts do change their keys once in a while (perhaps a reboot, an sshd upgrade, etc.) -- although we *could* verify the new key, I doubt in practice most people bother to.
    Well, this is very true, but there's also a matter of the convenience/security tradeoff, and just plain old probability. For example, the likelihood that my ISP's shell machine has been compromised or is being subjected to a man-in-the-middle attack the very first time I connect is fairly small. The likelihood that a newly installed server at work has been is infinitesimally small. And reboots do not change the host key!

    But your overall point about security is well put. ssh will display the "host identification changed!" string, but a lot of people will probably just ignore it. And if the remote machine has already been compromised, ssh won't make a bit of difference. Everything's about awareness, tradeoffs, and awareness of tradeoffs.

  • As I'm sure you're aware, the telnet debugging flag is unimportant, since the ability to modify that flag implies local administrative access. That same access facilitates a simple keystroke-logging code change to the secure-shell client.

    The significant difference (again, as you're obviously aware since you mentioned the same issue with FTP) between secure-shell and telnet is that the data is encrypted when it travels over the network (as opposed to telnet traffic, which is cleartext and can be sniffed).

    So far I'm supporting your original statement that you are subject to the security of remote systems... but the risk can be reduced dramatically with some forms of token-based authentication. One-time pads might help too.

  • Firewalls only offer perimeter protection. Host security is still extremely important. According to the last information I saw on attack origins, most security compromises still come from inside the victim's own network (from disgruntled employees, snoops, etc.). Those aren't the compromises we hear about, because they rarely involve public things like defacing public web sites, and most organizations aren't very quick to brag that they've been compromised.
  • Of course, you can get in serious trouble for not revealing the key to your encrypted data, as Jamie Zawinski briefly mentions in his really bad attitude [jwz.org] page (where he discusses the fact that he had to release the contents of a private mailing list due to a Netscape legal case).

    (What ever happened to the fifth amendment?)

  • I disagree. Some security solutions can also increase convenience. Single sign-on solutions, for example... sure, you're putting all your eggs in one basket, but that basket is padded and steel-reinforced.

    Kerberos-like authentication also increases both security and convenience to the end-user (for reasons similar to single sign-on).

  • I have not read the referenced article, but this is in response to the tongue-in-cheek comment (in the slashdot blurb) regarding information destruction as a means of securing information. I understand that it's a joke, but it reflects the currently dominant mindset that security is all about confidentiality of information.

    Information confidentiality is not information security. Confidentiality is only one aspect. Integrity and availability of information are also important -- often much more important than confidentiality.

    Look at slashdot itself for an example. How important is data confidentiality? Ok, there are some passwords people could steal, but they're not significant by themselves -- they're only meta-information. Why are the passwords important? They don't protect confidential information -- they guard the integrity of authorship. To slashdot, integrity is more important than confidentiality. (Of course, there's the issue of true identities... but slashdot could only provide clues.)

    How severely would a denial-of-service attack impact slashdot's business? Very completely. An effective denial-of-service attack would completely disrupt slashdot's business.

    As far as I can tell, availability is slashdot's (and many other organizations') most important information security consideration. Confidentiality is the least important of those three information security categories (availability, integrity, and confidentiality).

    Most information is not confidential, so in most cases, integrity and availability of information are the top priorities.

    Many of the severe threats to information security throughout history have been its destruction and/or modification, and many historically significant compromises to confidentiality are now viewed as desirable increases in the availability of the affected information. Confidentiality is (as is illustrated to an extreme in George Orwell's "1984") also extremely important, but it's not the only consideration in information security.

  • I know Mandrake is redhat based, but I assumed it was different enough that Bastille wouldn't work on Mandrake. However, he makes direct reference in his interview to Mandrake - guess it works on Mandrake after all!

    tune

  • by casp_ ( 136507 ) on Monday July 17, 2000 @11:21PM (#926446)
    Ok, let me explain (I'm the person from Mandrake Jay is talking about)...

    Mandrake has its own security system, which was called Msec, and was renamed to Usec (Unix Security) because many people asked for Msec to not only work on Linux-Mandrake, but also on any kind of Unix system...

    Msec was coded too quickly; it was a bunch of shell scripts, hardening your system security and doing some security checks using a cron job. Unfortunately, it was unmaintainable...

    So Usec was coded with maintainability in mind, using two XML databases: one for security points (i.e. questions, with predefined answers for each default security level, etc.), and another database with defined actions for each answer to each question...

    All of that was coded in a library called libbus,
    that can be easily used by frontends.

    Finally, Usec and Bastille-Linux decided to merge into one project called BUS ( Bastille Unix Security );

    The point is that we keep all of the Usec stuff except the backend, and we use the Bastille-Linux perl backend, which many people have put a lot of work into (the Bastille-Linux backend supports transactions, for example, and any change can be backed out).

    All the Bastille-Linux security hardening points will be present in Bastille Unix Security; the security points just need to be rewritten in the XML databases (a lot is already done right now).
  • Don't get the wrong idea. Security is always a good thing, but in my mind, I don't see what the point of this is. Why not just use OpenBSD, with uncompromising security, and then stick your Linux boxes behind a firewall? For a home user, simply following standard procedures and closing off unnecessary ports can achieve a reasonable level of security.

    For a business, a real firewall, protecting the weaker systems can be enough. I'm not saying I'm a BSD zealot, and there isn't anything wrong with having an 'ultrasecure' Linux for its own sake, but I don't really see why this is needed.

    Amber Yuan 2k A.D
  • Actually, IBM does this, and they provide what's called a 'security audit' for companies for a fee. Interestingly, IBM has a lot more luck simply walking into companies and walking out with their systems in their arms than they do with breaking into systems (about 70% success vs. 20% success).

    My high school tried a similar tack when testing out some crappy Mac software; unfortunately, the guy they got to test the software was a huge moron. (The school wasn't really that bright; they only provided 3 gigabytes of storage for 1700 students, and they ran out of space 3 weeks into the first semester.) The major problem wasn't the security itself, but the fact that the software managed disk space and program crashes by deleting random (or all) files.

    Amber Yuan 2k A.D
  • I don't mean to start a huge flamewar here, but I really don't understand this. He is writing scripts that harden Linux (like the harden-suse perl script in suse?) Have these guys ever heard of OpenBSD? I'm sorry, but I'll take an OS that has been/is proactively code-audited for security, for years longer, over some scripts any day. I noticed in the interview that they found a vulnerability before others did. Wow. This can be said many times over for OpenBSD. Just take a look thru the CERT advisories. I think it's silly to reinvent the wheel here. All of the free Unix' have their jobs, and security is _definitely_not_ Linux' domain. - Horis
  • Why is Bastille Redhat only? IMHO Redhat is more of a desktop distribution, which probably doesn't have to be as secure as a server.
    Would it be difficult to port it to other distros like Debian and Slackware?
  • Bastille was written to secure the distribution the UMBC user group gave out to students. Some of the people who helped write it were on the university payroll. So the universities aren't paying people to hack into their systems, but they are paying to make them more secure.
  • First step to securing a system is to secure the admin.

    Do you mean, make sure only a select (one, even) bunch of people have admin rights and that these people know about security? Or do you mean something else (please forgive a newbie if this is a dum kwestion :)? I remember that when apache got "white-hatted" the report the hackers produced criticised the fact that 9 people could su to admin-rights accounts.

  • you're absolutely right!! what better way to secure a distro than by never releasing it
  • I've heard of universities that have hackers on the payroll whose only job is to spend all day trying to hack the system. This is their way of testing its safety.
  • The reason I know is because I was asked to do this for a major university.
  • I bitched when OSM got sued or whatever happened, and now I can't post AC anymore. Is that a "Bitchslapping"?

    Lost all my Karma too. (sigh)
  • And anyone that walks up to your machine has access to all your files, network connections, passwords, browsing history, cached passwords, etc... The firewall won't help there, nor will it keep YOU from deleting system files by mistake.

    Trying to secure Windows 95 is like trying to secure a fart in a mitten.
  • What about Kevin Mitnick? His hard disk is still not released.
  • I noticed, Slashdot is my homepage, I thought my cable was out again!
  • Nope, he got locked up in the Eiffel [eiffel.com] tower, but was eventually cast into the C [gnu.org]...
    (not very funny, I'll post it anyway...)
  • The idea of a one-shot configuration for people too lazy to spend an hour on security makes me think that this sort of thing will only be effective if it does not hit the mainstream.

    Right now, many users install RedHat and then take no security steps. If many users started installing some default Bastille configuration and then taking no more security steps, then holes in the default Bastille configurations will be found and exploited. There's no such thing as a security free lunch, save maybe unplugging your computer.

  • by angry old man ( 211217 ) on Monday July 17, 2000 @05:34PM (#926462)
    Bagh. Everyone knows that CERTs don't have holes in them. Lifesavers have holes, not CERTs.

    In my day, we didn't have any fancy schmancy Bastille scripts to harden our systems. If we wanted a secure VAX system, we wrote our own scripts to do it. If we didn't know how secure it was, then we called our friend Kevin M. to come over and test it out. Nowadays, all you lazy Silicon Valley kids couldn't secure up a computer if it was turned off. You wouldn't know a secure computer from a circuit breaker and my friend Kevin M. can't help you because he can't come within 10' of a computer. The lazy government didn't like him testing the security of their computers, and threw him in prison!

  • After all, the only reason we even know the name Bastille is because the mob was able to batter down its gates rather easily and storm the citadel... But then, the only reason we know the name Crusoe is because he got left stranded on a desert isle -- not the image I'd want my potential customers to be pondering as they considered my product...
