
How the NSA Took Linux To the Next Level 172

Posted by Soulskill
from the not-by-beating-the-end-boss-of-the-previous-level dept.
An anonymous reader brings us IBM developerWorks' recent analysis of how the NSA built SELinux to withstand attacks. The article shows us some of the relevant kernel architecture and compares SELinux to a few other approaches. We've discussed SELinux in the past. Quoting: "If you have a program that responds to socket requests but doesn't need to access the file system, then that program should be able to listen on a given socket but not have access to the file system. That way, if the program is exploited in some way, its access is explicitly minimized. This type of control is called mandatory access control (MAC). Another approach to controlling access is role-based access control (RBAC). In RBAC, permissions are provided based on roles that are granted by the security system. The concept of a role differs from that of a traditional group in that a group represents one or more users. A role can represent multiple users, but it also represents the permissions that a set of users can perform. SELinux adds both MAC and RBAC to the GNU/Linux operating system."
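The socket-but-no-file-system idea in the quote can be sketched as a SELinux type-enforcement fragment. All type names here are hypothetical, invented for illustration; they are not from the reference policy:

```
# Hypothetical type-enforcement sketch: the daemon's domain may create
# and listen on its TCP socket and bind to its assigned port type...
type mydaemon_t;
type mydaemon_port_t;
allow mydaemon_t self:tcp_socket { create bind listen accept };
allow mydaemon_t mydaemon_port_t:tcp_socket name_bind;
# ...and because no "allow mydaemon_t <anything>:file ..." rule exists
# anywhere, an exploited mydaemon_t process still cannot touch the
# file system -- denial is the default under MAC.
```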
  • Re:wrong (Score:3, Interesting)

    by pablomme (1270790) on Sunday May 11, 2008 @12:58PM (#23369862)

    Everyone who matters has always just called the OS "Linux".
    Right. Because none of the packages on this list [fsf.org] matters at all.
  • by zrq (794138) on Sunday May 11, 2008 @01:18PM (#23369976) Journal

    .. hence many Linux admins simply switch it off.

    Fine by me.
    Means that when it becomes mainstream, anyone who is familiar with how to configure and use it will be in high demand.

  • by Score Whore (32328) on Sunday May 11, 2008 @02:10PM (#23370270)
    Forget the pain in the ass nature of the kit. Consider the legality of it. The NSA cannot legally own copyright. Anything they produce is in the public domain. Therefore they cannot legally develop code that is under any license.
  • by jxxx (88447) on Sunday May 11, 2008 @02:38PM (#23370442)
    Maybe I'm missing something here, but fork() and exec() do different things. I don't see how one could be used as a general purpose replacement for the other. Do you mean fork followed by exec instead of system()?
  • by lkcl (517947) <lkcl@lkcl.net> on Sunday May 11, 2008 @02:56PM (#23370558) Homepage
the "NSA" is not developing "your" OS. the NSA is (indirectly) verifying via (indirect) sponsorship and advocacy that an (independent) university-developed scientific security model (FLASK) is (independently) implemented by a company and then (independently) maintained by (independent) people such as stephen smalley.

    look at the web site. it says "POSIX not good enough for proper security. therefore we make it better so that civil services, and other environments where security matters, have someone to go to to ask 'is this secure to level XYZ?' and get a certification"

    the bottom line is: be damn grateful for their involvement because it beefs up linux and allows it to be recommended for deployment in places where it would otherwise be hopelessly outclassed. remember: selinux allows linux to be "certified" as "secure", and mathematically provable as "secure". those certifications are absolutely vital for deployment in certain kinds of environments.

    so be glad that linux is getting a leg-up, thanks to the NSA.
  • by John Whitley (6067) on Sunday May 11, 2008 @04:44PM (#23371362) Homepage
    This was also why a lot of folks prefer the competing grsecurity system [grsecurity.net]. First listed among its features (and this has been available in grsec for years):

    An intelligent and robust Role-Based Access Control (RBAC) system that can generate least privilege policies for your entire system with no configuration
    grsec has a lot of other great features; see the link above for details. IMO, it's somewhat unfortunate that grsec has remained a separate patchset for the Linux kernel. Unusable security is useless security; I'm glad to see some catch-up on the SELinux front.

    Anyone out there who's used both grsec and SELinux + AppArmor want to favor us with a comparison?
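For context, the no-configuration policy generation mentioned above is grsec's learning mode, driven from the gradm tool. A hedged sketch of the workflow, with flags as described in the grsecurity RBAC documentation and paths purely illustrative:

```
# enable full-system learning; accesses are logged rather than denied
gradm -F -L /etc/grsec/learning.logs
# ... exercise the system under a normal workload for a while ...
# generate a least-privilege policy from the collected logs
gradm -F -L /etc/grsec/learning.logs -O /etc/grsec/policy
# enable the RBAC system with the generated policy
gradm -E
```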
  • by mosinu (987941) on Sunday May 11, 2008 @05:40PM (#23371808)

    Means that when it becomes mainstream, anyone who is familiar with how to configure and use it will be in high demand.
    If no one's using it, how will it become mainstream?
    Quite simple, really: government mandate. Some agency will mandate it or make it part of some policy. From there it will spread into the private sector via companies that do business with said agency. The agency I work for is already doing just such a thing for new projects. Any company that is running Linux by contract has to secure their system through multiple methods, including SELinux.
  • by sjames (1099) on Sunday May 11, 2008 @05:49PM (#23371882) Homepage

    None of that is the problem. The problem is in the WAY the access is specified by slicing and dicing the namespace to assign a security context to each object.

    If I write an app that needs to access JUST ONE file in /etc and other apps already access it and a few more under a common context, I have two choices. I can allow my new app carte blanche on /etc (bad) or modify the policy of the other apps that may access the file to grant them the new context. Lather, rinse, and repeat until you've made a hash of the policy source (and the admin rips out his last chunk of hair).

    Then, once you've hacked away and sliced and diced enough to grant everything just what it needs, you do a yum update. I swear, you can actually HEAR Satan laughing maniacally below as you have to either abort the security update (and be insecure), turn off MAC (and be insecure), or accept that half your system will now be broken (I suppose an app that won't even run IS secure, but now the ADMIN feels insecure).

    What's needed is a policy.d directory. Each app would be allowed to drop one file there, to be evaluated in isolation from whatever else is there, granting that particular app what it needs without having to understand the rest of the policy (which may have been modified locally anyway). Directories like that exist, but there's more than one, and the files in them have to be aware of each other and of the global policy to avoid problems (like preventing the policy from compiling at all).

    Most places simply do not wish to pay for the amount of admin time required to make all of that work. In many cases, they're well justified in that, the data on the systems just isn't worth that much.

    When it is used, a common pattern is to run the app and then mindlessly add permissions for whatever was denied until it finally works. The natural result is an overly permissive policy: all the disadvantages of AppArmor's automation, plus entirely unnecessary access to any files related to the files actually needed.

    My post sounds almost entirely negative, but that would be unfair. In the environment SELinux was developed for, where leaked information can be a real disaster, it makes perfect sense to invest the administrative effort required. For the rest of us, it got the kernel code moving in the right direction for MAC and MLS, and that makes follow-on schemes better suited to the rest of us more likely to happen.
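The one-file-in-/etc problem the parent describes can be made concrete with a hypothetical type-enforcement fragment. Every name here is invented for illustration; none comes from a real policy:

```
# Give the single file its own type so apps can be granted exactly it,
# rather than carte blanche on /etc. file_contexts entry (comment):
#   /etc/myapp\.conf   gen_context(system_u:object_r:myapp_etc_conf_t,s0)
type myapp_etc_conf_t;
allow myapp_t myapp_etc_conf_t:file { getattr open read };
# But every OTHER domain that already reads this file now needs the
# same rule added for the new type -- the policy-source churn the
# comment complains about:
allow otherapp_t myapp_etc_conf_t:file { getattr open read };
```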

  • by Score Whore (32328) on Sunday May 11, 2008 @06:05PM (#23372024)

    They can let contractors own it - happens all the time as a form of corporate socialism.


    No they can't. They can contract with a contractor to develop a piece of code, but the government cannot develop something and then give it to a corporation. If government employees are building it, it's public domain. That's the nature of US law.

    They can also release to the public domain and let it be incorporated into the kernel - the GPL is compatible with the public domain.


    Your statement is fallacious because the code is automatically public domain. There is nothing to be done to release it into the public domain, as it's already there. The legal problem comes from the fact that it is a derivative of a GPLed program; therefore, if they want to distribute it, it must be GPLed. However, government employees cannot produce anything that is not in the public domain. See the problem? The GPL requires that they release their changes under the GPL, and the law requires that they release into the public domain.
