
Red Hat Linux Gets Top Govt. Security Rating

zakeria writes "Red Hat Linux has received a new level of security certification that should make the software more appealing to some government agencies. Earlier this month IBM was able to achieve EAL4 Augmented with ALC_FLR.3 certification for Red Hat Enterprise Linux, putting it on a par with Sun Microsystems Inc.'s Trusted Solaris operating system, said Dan Frye, vice president of open systems with IBM."
  • by davecb ( 6526 ) * on Monday June 18, 2007 @09:11AM (#19549657) Homepage Journal

    This is roughly equivalent to "B" in the well-known U.S. "Orange Book" security standard. Previously all commercial off-the-shelf OSs were rated C or below, and had trouble even getting that (NT 4 got C only if the network was physically removed).

    The letters correspond with school grades: A is excellent, B is ok, and C is barely adequate.


  • Re:CentOS too? (Score:5, Informative)

    by Anonymous Coward on Monday June 18, 2007 @09:14AM (#19549687)
    > So does CentOS get some sort of auto cert then?

    No. CentOS (i.e., the actual binaries built by the CentOS team on the particular set of hardware used by the CentOS team) needs to go through the exact same evaluation process, with documentation and all.
  • Re:CentOS too? (Score:4, Informative)

    by crush ( 19364 ) on Monday June 18, 2007 @09:19AM (#19549733)
    The certification is specific to the combination of RHEL on IBM eServers. So specific hardware and specific version of the OS. That said, practically there'd probably be no functional difference with CentOS on the same hardware ... but you couldn't run it if the certification were mandated.
  • by Anonymous Coward on Monday June 18, 2007 @09:20AM (#19549751)

    The following products have earned EAL 4 Augmented with ALC_FLR.3 certification from NIAP:
    • Microsoft Windows Server(TM) 2003, Standard Edition (32-bit version) with Service Pack 1
    • Microsoft Windows Server 2003, Enterprise Edition (32-bit and 64-bit versions) with Service Pack 1
    • Microsoft Windows Server 2003, Datacenter Edition (32-bit and 64-bit versions) with Service Pack 1
    • Microsoft Windows Server 2003 Certificate Server, Certificate Issuing and Management Components (CIMC) (Security Level 3 Protection Profile, Version 1.0)
    • Microsoft Windows XP Professional with Service Pack 2
    • Microsoft Windows XP Embedded with Service Pack 2

  • by sayfawa ( 1099071 ) on Monday June 18, 2007 @09:24AM (#19549801)
    That was a cut and paste troll.

    They're never on topic, they just show up in random Linux articles.
  • Re:CentOS too? (Score:3, Informative)

    by crush ( 19364 ) on Monday June 18, 2007 @09:44AM (#19550019)
    And it should soon (Jun 21) also be certified to the same level on HP hardware. See entry 10165 in the certified-products list.
  • by CloneRanger ( 122623 ) on Monday June 18, 2007 @09:47AM (#19550045)
    Microsoft is only certified CAPP/EAL4+. That is not LSPP/RBAC, which is much harder and more secure.
  • by crush ( 19364 ) on Monday June 18, 2007 @10:01AM (#19550183)
    I don't think it's a flame. All that this certification means is that a government department tested specific aspects of security on specific hardware. It shouldn't be thought of as anything more; it's just a rubber stamp for administrators who don't want to understand security.
  • by dylan_- ( 1661 ) on Monday June 18, 2007 @10:35AM (#19550529) Homepage

    the EAL4+ Augmented with ALC_FLR.3 rating, which BTW both Windows XP SP2 and Windows 2003 Server SP1 also have, is only equivalent to C2, which is the same rating that NT 4 received.
    Compare the Windows cert with the Red Hat one. Notice that under PP identifiers Windows has CAPP, while Red Hat has CAPP, LSPP and RBACPP.
  • by dylan_- ( 1661 ) on Monday June 18, 2007 @10:39AM (#19550575) Homepage
    It's not the same certification. Windows' is for CAPP only. Red Hat's is for CAPP, LSPP and RBACPP.
  • by cowbutt ( 21077 ) on Monday June 18, 2007 @10:48AM (#19550657) Journal
    Sorry for the naive question in advance, but I was under the impression that some flavors of BSD (OpenBSD?) were extremely secure as well. Is that not so? In that case, wouldn't a BSD version be more suitable for secure/sensitive installations?

    No, because without the certification, secure/sensitive installations aren't allowed to use those flavours of BSD (or any other uncertified product). If there's no other way of performing a function, it might be justifiable, but it'll be a brave sysadmin that pursues such a course.

    The above has no bearing on BSD's relative technical merits and demerits compared with OSs that have achieved CC certification.

  • by lib3rtarian ( 1050840 ) on Monday June 18, 2007 @11:01AM (#19550867)
    I'm going to venture that you don't know much about serious professional level computer systems. I'm going to discuss, point by point, why you are just flat out wrong and not thinking clearly about many things.

    A) Many Linux distributions have binary packaging systems so you don't have to compile anything, Debian and Red Hat being the two most popular (.deb with apt/synaptic, .rpm with yum). The constant upgrade cycle where you discover that your most recent upgrade broke something has nothing to do with compiling software per se, but with interoperability between different software. Microsoft's WSUS updates constantly break applications, and this is even more pronounced in the server market.

    B) The vast majority of the mission-critical infrastructure systems that the internet and all high-level computing run on are driven from the command line. Switches, routers, cores: these are the bread and butter of what makes the internet work, and nobody says that a developer has failed when they produce one of these that works. Frankly, you are just being hyperbolic; failure as a developer means that your application does not work. These devices and applications do work, and as anyone familiar with a command-line interface knows, it is usually far simpler to troubleshoot a problem in an environment you have complete control over (like the command line) than in some harebrained GUI made to pander to people who consider themselves technical users but think command-line interfaces are bad.

    C) Linux documentation is far superior to that of Windows, because the APIs and source code are all available. Learn how to program; don't blame the difficulty of programming on inferior documentation and instructions. There are people who do what they want in Linux; just because you can't doesn't mean there is something wrong with Linux. Rather, it probably means you are not that smart. The entire notion that Linux is an alien environment presupposes a fetish for Windows.

    Your conclusion is complete bunk, because your arguments don't hold any water. Basically, what you've just done is rant. Linux does not suck in the ways you listed. Nothing is perfect and everything can be improved, but you simply haven't made a nuanced point here.

    Besides which, this thread was about Security!
  • by morgan_greywolf ( 835522 ) on Monday June 18, 2007 @11:25AM (#19551197) Homepage Journal
    Not only that, but Windows is only certified on specific hardware, while the same is not true of the RHEL5 cert. Thanks for pointing that out. It shows once again that a solid, stable system like RHEL5 is indeed more secure than Windows, even if only because the military believes it to be so. But I'm guessing that military IT might know a thing or two about good systems security. ;)

  • by Anonymous Coward on Monday June 18, 2007 @11:35AM (#19551323)
    This is actually a complex issue that cannot be summarized as "much harder and more secure".

    EAL4+ refers to the assurance level applied to the software in question. It measures how well the software is implemented - in some sense what the probability of undiscovered holes is.

    EAL4+ is actually a rather low level of assurance. After all, Windows can pass EAL4+.

    CAPP, LSPP, and RBAC are protection profiles that refer to the protection policy enforced by the software. CAPP covers things like access control lists and protection bits; LSPP covers mandatory security policies, like the Bell-LaPadula policy; RBAC covers role-based access control. These are all security features.

    You can have lots of security features and low assurance. You can have few security features and high assurance. You can have lots of security features and high assurance.

    The old Orange Book scheme used a single dimension to rate the level of security. Both features and assurance went up as you went up the scale. This was done to keep the system simple.

    The Common Criteria is vastly more complex and confusing, and it is much harder to compare the level of security of two different products with Common Criteria evaluations.

    Very rough equivalents:

    Orange Book    Common Criteria
    C2             EAL3 and CAPP
    B1             EAL4 and LSPP
    B2             EAL5 and LSPP + covert storage channel protection
    B3             EAL6 and LSPP + covert storage and timing channel protection
    A1             EAL7 and LSPP + covert storage and timing channel protection

    These equivalents are VERY rough, because LSPP doesn't really consider the issues at the higher EAL levels.
  • by IdleTime ( 561841 ) on Monday June 18, 2007 @11:37AM (#19551363) Journal
    When you use Linux for your commercial needs (which this is clearly intended for), you don't recompile the kernel every week. The box stays the way it is unless some major security-related update is needed. You schedule downtime to make any changes, and you are lucky if you get 1 hour of downtime a year.

    This is not about desktops, but huge servers. Many, many times I have tried to get such organizations to apply even one of our patchsets to their servers because they were hitting known bugs, and it may take a couple of months for them to schedule it, and then only after testing on their test servers and getting approval from management. For all of them, this is perfect and poses no problems at all.
  • by mattpalmer1086 ( 707360 ) on Monday June 18, 2007 @11:39AM (#19551411)
    Good news! It's ready!

    A) You don't have to compile anything. But you can if you want to. And you can forget about all those dependency DLL-hell issues too that you get in Windows, if you use a modern distro with good package management. Then you just fire up the GUI, put a "tick" in the box for the software you want, and it gets it for you and installs it. It's easier than having to trawl through someone's web site for the right installer, manually download it, manually run the setup. And then find the installer won't remove the software properly when you want to get rid of it or find it needs some obscure runtime DLL you never heard of and don't know where to get.

    B) I do take exception to the "force me to drop to the command line" bit. Why would you need to drop to the command line to edit a text file, assuming you needed to do such a thing? I do drop down to the command line quite frequently though - it's good for batch operations and scripting things together, but I use a graphical text editor if I need to edit text - I'm not a masochist! Having said that, I haven't had to edit a text file on linux for system administration reasons for nearly a year. It's not a constant occurrence, not anymore. Hardware - all auto detected on installation. All devices I've plugged in have just worked (no need to trawl manufacturer sites for the latest driver). GUIs for all common system administration tasks. As far as windows goes, anytime I have to directly edit the registry, you as a developer... oh, never mind ;)

    C) Help is better these days, but I agree is still patchy in places. And arrogant people do still exist on some of the forums (hopefully getting fewer all the time). But then, Windows help files were never that good either, and I don't recall getting any help from Microsoft unless I paid for it. Can't really think of a system where the help and service has been uniformly excellent.

    The truth is, Linux is not the system you are describing anymore. Maybe 5 years ago; it's come a long way. Why not download a bootable Ubuntu Live CD and give it a go, just so you know what it's like these days.
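    The "good for batch operations and scripting" point in (B) can be sketched in a few lines of shell. A minimal, self-contained example; the temp directory and filenames are invented for illustration:

```shell
# Rename every .txt file in a directory to .bak in one loop --
# the kind of batch job that is tedious in a GUI file manager.
set -e
dir=$(mktemp -d)
touch "$dir/notes.txt" "$dir/todo.txt"

for f in "$dir"/*.txt; do
    mv "$f" "${f%.txt}.bak"    # strip the .txt suffix, append .bak
done

ls "$dir"    # notes.bak  todo.bak
```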
  • by KiltedKnight ( 171132 ) * on Monday June 18, 2007 @11:50AM (#19551585) Homepage Journal
    Perhaps someone needs to inform Mr. Frye that there are things out there that are higher-rated...

    XTS-400 (Wikipedia entry)

    That particular system is rated at EAL 5. IBM's only achieved EAL 4.

  • by Anonymous Coward on Monday June 18, 2007 @11:54AM (#19551649)

    There seems to be a fairly significant amount of ignorance on this topic.
    Some information is available, but it is as complex as the systems which need securing. The basic idea is to give a number that indicates the quality with which security is maintained. ALC_FLR.3 breaks down as:

    1. ALC = class of life cycle support
    2. FLR = family of flaw remediation
    3. 3 = level of ranking

    In particular, ALC_FLR.3 means that you have procedures for reporting flaws, you fix the flaws, and you tell users how to get the fixes. All of that must be verifiable and documented. It is the highest ranking in that category. There are many other categories, however.

    Furthermore, there are additional protection profiles that may be involved. Because of the difficulty of getting certifications, they are typically granted for a specific hardware baseline and a specific configuration. So Sun's Trusted Solaris 8 was EAL4, but also had LSPP on top of it.

    Well, what does all that mean? It means that evaluated products may be configured securely, not that they are innately secure. Effort is required. In practice, having an EAL rating is an initial requirement for an OS. In some cases there is a minimum rating for using the software; sometimes it extends beyond the OS. All of that depends upon the contract. EAL4 is generally considered the minimum rating to be viable in a "low risk" environment.

  • by Mr. Hankey ( 95668 ) on Monday June 18, 2007 @01:08PM (#19552929) Homepage
    Integrity is an RTOS platform, not a general-purpose OS. I've worked with their ARINC 653 product a bit; much standard UNIX functionality would break the guarantees made by an ARINC-compliant OS, so it's just not present. Xen is a close enough approximation if you just want to partition the system off without using ARINC 653, but in order to get the same sort of certifications as Integrity (or VxWorks' ARINC 653 product, for that matter), all the code involved with Linux - kernel, userspace, etc. - would need a line-by-line code review, probable changes, and sign-off.
  • by Anonymous Coward on Monday June 18, 2007 @01:15PM (#19553033)
    Actually, military IT isn't the greatest. Too many young kids with not enough experience. However, the NSA does the accreditation and, contrary to what the above poster states, they are very good at what they do. The testing doesn't prove that the OS is more secure; it demonstrates that it is designed securely and, more importantly, that it has adequately tamper-resistant auditing and adequately rigorous permissions. That's why POSIX-compliant OSes aren't very convenient to certify; the permission systems are very different.
  • by Bishop ( 4500 ) on Monday June 18, 2007 @01:47PM (#19553579)
    These certifications at EAL4 and up are all functional tests; that is, the actual system is run. Software by itself cannot run; it needs the hardware. These types of certifications are designed to eliminate as many unknowns as possible. Any RHEL system should behave the same, but can you guarantee that? Consider the simple case of a bug in a hardware driver that exists in one system but not in the tested system. That said, it is reasonable to expect that x86 hardware similar to the eServers would achieve the same certification.

    Also IBM paid a pretty penny for the certifications. They would rather their competitors pay for their own certifications.
  • by evilviper ( 135110 ) on Monday June 18, 2007 @02:53PM (#19554715) Journal

    Sorry for the naive question in advance, but I was under the impression that some flavors of BSD (OpenBSD?) were extremely secure as well.

    The confusion here is that this certification has nothing to do with exploits or kernel bugs (the form of security most people talk about on a regular basis). We're talking about CIA/NSA levels of security. It's based largely on how finely grained the system permissions are, so that an exploited application can't access any other files, open any other ports, etc., as well as ensuring that a system can have multiple administrators, each with a very limited scope of privileges (no single root account) and overlapping authority. This is known as Mandatory Access Control (MAC).

    Red Hat Linux has MAC mainly because it took the mechanisms from the NSA's SELinux and rolled them into its own OS.
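    As a rough illustration of what those finer-grained controls look like in practice, on an SELinux-enabled RHEL or Fedora box you can inspect the labels that MAC decisions are made against. These commands only exist on systems with SELinux installed, so treat this as a sketch, not a recipe:

```shell
getenforce              # prints Enforcing, Permissive, or Disabled
id -Z                   # the security context of the current user
ls -Z /etc/shadow       # the label on a file; policy checks this, not just rwx bits
ps -eZ | grep httpd     # per-process domains, e.g. httpd_t for Apache
```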

    FreeBSD has a spin-off project called TrustedBSD which has actually been around longer than SELinux and has had much more impact, with some of its features having been integrated into other systems such as NetBSD and OS X. See the TrustedBSD site and the MAC chapter of the FreeBSD Architecture Handbook.

    The difference, though, is that RedHat is a company, which wants to pay for certification so they can use it to market their product. FreeBSD/TrustedBSD isn't run by any large company with a deep financial interest in marketing the OS, so it's unlikely to go through the evaluation and certification process.

    OpenBSD doesn't have any of those security mechanisms, but you can accomplish the application-security part of it through extensive use of systrace. Both methods are difficult to use effectively in practice and require a skilled and dedicated admin... not really cost-effective for 99% of companies.
  • by thommym ( 1059510 ) on Monday June 18, 2007 @03:22PM (#19555213)
    Have a look at the Solaris man pages. Plenty of examples in them...
  • by Anonymous Coward on Monday June 18, 2007 @06:08PM (#19557515)
    If I had a dollar for every "advanced user" who claims to do everything you do yet finds Linux lacking... Here are my responses. I'm a REAL Linux user. I use Linux for everything: games, educational apps for my children, work, document creation, web browsing, you name it. I don't use Windows at all - no dual-boot systems in my house.

    I haven't had to compile anything since I tried to make a Winmodem work about 4 years ago. Compiling is a feature, not a requirement. Ever run into a program that needs XP when you wanted to use 2000? If you run into that kind of thing in Linux, you can often compile the app to make it work. 99% of the time, compiling and installing something takes just three commands: ./configure, make, make install (as root or with sudo).
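    For the record, the three-command dance looks like this. The tarball name "foo-1.0" is a made-up placeholder; real packages and prefixes vary, so this is a sketch rather than something to paste verbatim:

```shell
# Canonical build from a source tarball, as described above.
tar xzf foo-1.0.tar.gz && cd foo-1.0   # unpack and enter the source tree
./configure --prefix=/usr/local        # probe the system, write a Makefile
make                                   # compile
sudo make install                      # copy the results under --prefix (needs root)
```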

    Man pages are sometimes useful, but what I often prefer is Google. For me, an example is better than reading documentation, though I do refer to man pages when I want detail or when I want to tweak a command to my unique specs. What's wrong with man pages in any case? Would you prefer NOT to have them? You can access man pages from the GUI and they work like web pages - did you know? Try "man:bash" in the Konqueror URL field ;) It even turns the "see also" notes into links to other man pages.

    As for the command line, you have it all wrong, buddy. Why do you think they included a better command line with Vista? The command line is a FEATURE. Google "ubuntu unofficial guide". It takes you through a series of steps you can use to make Ubuntu work better, and to add features they couldn't ship themselves for legal reasons, MP3 support and the like. It's a matter of cutting and pasting many commands into the command line, including bits about registering MIME types in some cases. Can you imagine how huge that guide would be if it were loaded up with GUI screenshots and asked the user to jump from menu to menu to menu? Mice didn't replace keyboards, and GUIs didn't replace the command line.

    An advanced user who uses macros, hot-key shortcuts and other advanced time-saving techniques would be WELL SERVED to learn some command-line shortcuts.

    Ya see, that's what the console is for me - a magic short-cut box. I'm lazy and wouldn't use it if I couldn't work FASTER with a shell open.

    It's interesting that you lump the Mac in with Windows. OSX is often referred to as the "best" GUI, yet if you look, you'll find command line programs of significant complexity available for Macs and Linux, news group apps that handle the NZB format and bittorrent apps, to give just two examples. There are advantages to running an app in the console. With VERY low bandwidth you can remote control them and do all kinds of wonderful things with tools such as ssh and screen. And, minus the GUI, they often take far fewer resources.
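    The ssh-plus-screen trick mentioned above, sketched out. The hostname and session name are hypothetical, and this obviously needs a real remote machine, so take it as illustration only:

```shell
ssh user@example.com     # log in over even a very low-bandwidth link
screen -S torrents       # start a named session, then launch your console app in it
# ... detach with Ctrl-A d and log out; the app keeps running on the server ...
screen -r torrents       # reattach later, from this or any other machine
```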

    As for text files, can you give specific examples? I often edit text files in the console because it's faster than loading a GUI tool. Samba is a good example, if I want to quickly add a file share. I CAN do this in KDE by sharing a folder and such. I can't really think of a single major common task that FORCES me to edit a text file. Refer back to that Ubuntu guide you googled above. Note that it has you add entries to some text files, the sources for the package management system, for example. That CAN also be done using the GUI package tools. Try both and tell me which you think is faster ;)

    Finally, it's ironic that you mentioned that you can use VirtualDub to make XviD files. I bet you know more about the details of XviD than I do. I do it too, but I do it from the command line, and I can do it FASTER and get by with knowing LESS about frame rates and such. Do you doubt this? Want proof? Look up a conversion guide for both Linux and Windows. The Windows guide will often have you download and install several apps and show you many screenshots with specific checks here and there.
  • No, EAL-4 is not "TOP". Shame on the press-release writers for spreading untruths.

    Nor is EAL-4 the highest rating an OS product has achieved.

    EAL-5 has been achieved by only one complex product in the world, last I looked (BAE's STOP OS, a Linux look-alike in API/ABI running on an Intel-based platform), and it doesn't lose its security rating when connected to a network.

    The value of the rating system is that it lets everyone see the criteria under which you were judged and the degree of excellence against those criteria determined by independent judges. But the person selecting the product has to know a lot about security to be able to understand the value provided. For example, it is easy to configure most EAL-4 rated OSs in such a way that they void their rating.

    Having been the Product Manager during the STOP evaluation, let me congratulate Red Hat as achieving EAL 4 is a great achievement for their team (and was required of us before we could even submit for an EAL-5). May they now go on and undergo additional time, expense and pain in striving for a higher rating.
  • by deskin ( 1113821 ) on Monday June 18, 2007 @10:26PM (#19559935) Homepage

    Good question. I haven't spent much time with any BSD system, but I've spent enough with SELinux (personal pet peeve: it's not `SE Linux', though `SElinux' or 'selinux' are acceptable) to know a bit about the difference. Pardon me if I wax loquacious...

    In the computing world, the vast majority of security flaws come from bugs: improper handling of untrusted data leads to buffer overflows time and time again. Fix the bugs, and those security flaws go away. However, what about the ones you didn't catch? Someone is perfectly capable of discovering them, and exploiting them, until you discover the same problem and fix it. It's a vicious cycle, and you can never win: there's always another security hole, because there's always another bug. The security holes from bugs you haven't found yet are known as zero-day attacks, since any patches to the bugs have existed for zero days (or something like that).

    The OpenBSD solution to the threat of zero-day attacks is to spend lots of time looking at its code, and reviewing its code, and testing its code, before vetting it to be `secure' enough to use. They do an excellent job: I don't know particulars, but I'd guess that an OpenBSD system out of the box is more secure than even a no-frills Linux distribution. They lock everything down, and generally don't run software that hasn't been tested thoroughly. Note, however, that you can poke holes in your shiny OpenBSD system by downloading and installing buggy code: Try any poorly-written FTP server, for instance, and watch your box get 0wnd.

    The OpenBSD approach shouldn't really be seen as a choice, because every operating system that wants any hope at security needs to go through this process, of reviewing code time and again, and squashing those bugs dead. The deviation from other operating systems is the point where the code is declared to be `good enough', and put into production. OpenBSD developers are just really careful about declaring software to have reached that point. But they aren't perfect. Go to OpenBSD's website, and notice the text that says "Only two remote holes in the default install, in more than ten years!" Pretty good, right? Yup. However, as recently as three months ago, that read "Only one remote hole [...]". What gives? OpenBSD didn't handle some obscure IPv6 stuff right, and it was found that someone could run arbitrary code through this bug.

    Does this mean that OpenBSD is a failure? No, though it does mean that they failed in their (rather lofty) goals at least twice (that we know about; I maintain they should change the banner to read 'Only X remote holes in the default install, in the last Y years, that we've discovered so far!'; but, that's just me). This doesn't (shouldn't) besmirch their reputation, and the OS is still one of the best, I'm sure. But ultimately, things like this will happen again; and inevitably, some cracker one day will write an OpenBSD exploit, and steal millions of credit card records because of an OpenBSD system which had a security hole, while the owner of the system believed it to be secure. In short, it's like most any other publicly available operating system: it tries really hard to be secure; and it is probably more secure than any of them, according to their accepted definition of having no security holes. It is an excellent goal, but it's ultimately impossible.

    SELinux, which is the core of what was required of Red Hat Enterprise Linux 5 to pass this certification, is a very different approach to security. There are tons of things that go into making SELinux, but I'll try to keep things as succinct as possible, at the risk of leaving (hopefully unimportant) things out. SELinux operates on the principle of `domains', which are made far more abstruse than they need to be. A domain is a
