Can Open Source Be Trusted?
"I also menitoned OpenBSD to him as an example of a secure system that was open source. I argued that it was exactly because of the OpenBSD/FreeBSD development model (i.e. closely controlled with a top down hierarchy) that it was able to be more secure. Dr. Spafford still felt that OpenBSD did not fit the criteria of a well-trusted system: because it was not designed to a formal spec, and there are no formalized tests or standards being applied to it. Are there some ways in which we can get OpenBSD more trusted by testing it against some infosec standards?
For the rabid reader, I would just like to point out that Dr. Spafford NEVER disagreed with the 'more eyeballs means less bugs' tenet of faith that so many open source advocates preach. He just felt this was irrelevant to his point--how do you judge whether a system is more trusted than another system when there is no design spec or list of goals against which to test the system?
All in all, it was a challenging lecture. I felt myself start to get irritated, but by the end of his lecture, I was convinced he had a good point. How do you think we can address his criticisms?"
Honestly, I would say the same thing about a lot of commercial software as well. Just because you sell something doesn't mean that it's been designed properly, and likewise, just because something is free doesn't mean it's been slapped together with duct tape. Furthermore, I'd trust a program with source more than one without, and many open source developers are always willing to accept a better design. I'd also like to say that just because there aren't many pieces of Open Source today that proceed from fully documented design goals, that doesn't mean it will always be the case for future projects.
Some people have too much time on their hands. (Score:1)
Whats that got to do with it? (Score:2)
What does the way something is developed have to do with the final product (or a given release), and the tests performed on it? You are testing the product, not the development environment, surely?
I agree with him (Score:5)
I have to agree (Score:2)
Commercial software (i.e. closed source) benefits from not being developed in a chaotic way and can be more secure. Techniques such as the Cleanroom approach and software inspection can lead to almost zero-defect software, which is something I don't think Open Source can ever really achieve.
The use of formal methods for specification and verification does achieve secure systems; the only way Open Source can match commercial systems developed in this way is for them to be developed in a similar fashion. Sure, you can still give the source away, but development does need to be centralized for these techniques IMHO.
More than closed source (Score:3)
Open source software can be trusted more than closed source software when it comes to security, for all the reasons that you all know (quicker bugfixes, code open to scrutiny, etc). Closed source software can have hidden APIs, bad implementations and bugs, and the release cycle is slow.
OpenBSD is interesting, as they do audits on software to get rid of the security holes. They can only do this because the source code is available.
Of course, software like Sendmail, various ftpds, POP3 daemons etc. all mess up the security aspect of an OS. The OS can be as secure as it can possibly be whilst still being usable and useful, but if the software being run on it is vulnerable, then backdoors into the system will be found. Having the source code available allows the cracker to find better access methods than having to guess and feel their way into a system.
You just have to remember that there will never be perfect security, and plan accordingly.
it's all in the definition (Score:4)
My assertion is that open source challenges the notion that you need a formal spec to develop trusted software. Much like the submitter of this story, I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec. Perhaps it's time to realize that formal specs help to get things done correctly, and they certainly help get things done quickly (by preventing, in theory at least, feature-creep), but they certainly are a requirement.
open source is orthogonal to trust (Score:2)
Honestly, I would say the same thing about a lot of commercial software as well.
I suspect that Dr Spafford would agree with you. Whether or not a piece of software is open source is orthogonal to whether or not it can be trusted.
Just because you sell something doesn't mean that it's been designed properly, and likewise, just because something is free doesn't mean it's been slapped together with duct tape.
Just because you sell something doesn't mean that it's not open source.
development methodology != design and testing. (Score:2)
A benevolent dictator, acting much like Linus, could accept only code that brings the product closer to the design.
The test suite or testing procedure could be released along with the code. Sure, goals like "ISO 9000 security compliance" are less popular than "a working operating system", but that doesn't mean you have to keep your source closed. And it doesn't mean you can't accept patches that bring you closer to your goal.
Its all about scale... (Score:2)
Trusted, Assured, Safety Critical: these are all areas where IMO Open Source won't work. They require a level of discipline and upfront analysis and design that doesn't mesh well with release early and often. OSS creates great software for large user bases; however, it tends to create products rather than enterprise applications.
The key to trusted, assured or Safety Critical is the specification. You must know in advance what you have to prevent. It's no good fixing the bug that let it happen after you've already lost all of your data.
That's not what he said. (Score:2)
It doesn't matter whether the source is open or closed.
Re:I have to agree (Score:2)
I don't know of many big Open Source projects these days that have just been hacked out from the start; there has always been a lot of planning going on, and surely fewer bugs mean more security?
Is a lot of closed source software really developed in any less chaotic a manner than the Open Source way? If you call distributing the programmers "chaotic" then yes, but I believe you can have more chaos when all the programmers are in the same room and some are doing things they don't really want to be doing, than when the programmers are spread out around the world (and never even see each other!) but actually want to write the software.
Formal specification and verification is all very good, but it takes so long that it is hardly used in commercial companies, except in some kind of loose analogy. Commercial companies want to get their software out of the door quickly, and this does not lead to good, secure, quality software. Open Source can take its time; witness the Linux 2.4 kernel, getting it right before it is released. OTOH, Gnome 1.0 was a mistake: they tried to compete with KDE before the software was ready. Tools such as Rational Rose can be all very good when used correctly, but all too often the designers do not have the time to get familiar with the tools they use, so they underutilise the software to the point that it takes them twice as long to do something that could have been expressed on a sheet of paper.
So you can see how Open Source works: it looks chaotic, distributed, and messy, but the code isn't. The people have an interest in the code beyond making a fast buck, which means that bugs and security flaws will be detected and fixed, and the software will end up being of high quality when it finally gets released. Most commercial companies do not have the luxuries known as time, dedicated programmers, etc. They have money to throw at a problem.
Re:Of course - just look at its proponents (Score:2)
Eric Allman is the sendmail hacker; ESR wrote fetchmail.
And yes, sendmail sucks. run qmail [qmail.org] or postfix [postfix.org] instead.
(I can't believe that I actually reply to such blunt flamebait ... )
MICO? (Score:2)
Trusted systems=??? (Score:2)
Just because something is closed source, that doesn't mean it's developed to a formal specification with formalized testing.
Furthermore, what constitutes a formal specification? Both OpenBSD and Linux derive their security models from the Unix security model. That model *is* a specification. But is it a *formal* specification, and if not, what exactly would make it one?
Finally, as for formalized testing: I don't know that much about what Linux kernel hackers do, but I'm given to understand it's a very chaotic environment. I agree with the professor here: I don't think Linux has had formalized testing for *anything*, let alone the area where it matters most (security). Yes, to a bazillion eyeballs all bugs are shallow, but formalized testing means creating a formal benchmark that puts the program into virtually every conceivable situation. I'm not saying that closed source developers do that (isn't it obvious? look at the number of security holes in the Windows operating systems over the years), but I do think that Linux, and the software development community as a whole, need serious improvement in the area of testing.
From what I can see, the professor's whole gist is this: software development needs to develop an engineering culture. Software needs to follow a systems development life cycle (SDLC) where formal specifications (requirements documents) are written up in advance, the software is developed within an engineering culture, and there is formalized testing and user acceptance. These seemingly superfluous controls, especially in the area of infosec, are vital to controlling the inevitable bugs that crop up and to making sure that software meets the requirements.
I think Linux and *BSD at least need to take a hard look at the requirements-gathering and testing phases and see if there is any room for improvement.
Two variables (Score:5)
Formally designed and reviewed software can be more trusted than chaotically assembled software.
These seem orthogonal to me (and to each other! wakka wakka wakka!). Sure, most all of the software out there is NOT formally designed for security first. A lot of it is open source, a lot of it isn't. Open source obviously doesn't make a programme or suite instantly bulletproof, neither does formal design and review. Nothing is 100% secure, trustworthy, or bug free. Loads of things can help or hurt the process.
Opensource is what we wish it to be.. (Score:2)
Definition of trusted (Score:3)
What we really have to remember is that Open Source OS's simply don't have the features that the trusted system evaluation criteria dictate -- it has nothing to do with whether or not they're secure in practice, but has *everything* to do with if they're secure in theory, such that a poor implementation can't break the security model.
Memory that's segmented in hardware such that even increasing your process privileges doesn't allow access to the memory space of another process? Filesystems that log every transaction (including read/stat operations)? Systems that log every system call reliably, in an untamperable state? These are the features of government critically evaluated trusted systems, and until Open Source OS's support them, we shouldn't gripe. =)
Trusting trust (Score:2)
I think it could be worthwhile to reconsider Ken Thompson's all-time classic Reflections on Trusting Trust [umsl.edu] in this context.
Peer review (one of the strengths of open source) won't alone give you a secure system; there are far too many other factors to be considered. Peer review is, however, one very important factor.
Apologizing.... (Score:5)
Well, yes. But that's not the point. You're playing the ad hominem deflection game: "seeeeee, they're just as bad, toooo!!!"
Boiling this issue down to open versus commercial is completely orthogonal to the actual point: well-designed, specced-out systems are going to be more trustable than ad-hoc slapped together ones.
Where the code comes from should be irrelevant -- if the spec is good and the code actually matches, all is well, regardless of origin. Granted, open source makes bugs easier to find, but also adds an element of chaos into the implementation phase that could be detrimental; the bazaar doesn't necessarily follow specifications. It's a tradeoff....
Trying to cast this issue in the light of open versus closed is really just playing the 'when what you have is a hammer everything looks like a nail' card with the Slashdot open-good-closed-bad mentality.
Not every software development issue can be solved by a free license. Really. I promise.
--
Re:I agree with him (Score:2)
Good chance for some company to make some bucks.
Trusted systems or secure systems? (Score:2)
Trusted systems are usually of the kind where every action is auditable and traceable; system administrators do not have access to delete logs or change audit trails, etc. The 'trustiness' and 'security' of these types of systems is designed in through system architecture, through specification of how, exactly, everything works, and through very strict definitions of security access and permission. Bugs and exploits don't even figure into whether or not the system is 'trusted'. It's a measure of how it works; i.e., your sysadmin can't add a $50,000 bonus to his salary in the accounting system (he probably can't even access that data) and then wipe the logs of having done it, bugs notwithstanding.
Of course, no commercial or free 'standard' Unix lives up to that, simply because of design (some trusted Unix-like systems do, though).
Which, I think, would be his point: no open source OS currently exists that implements something like that, and the usual design methods in open source favour functionality and features over things like total fascistic control and auditing of every action taken in the system.
That is not to say an open source model wouldn't work for a trusted system; it would probably even be better. But bugs are irrelevant to the very question of whether or not a system is considered a Trusted System.
Understand what "trust" means. (Score:5)
To describe this you generally start off with what you don't want it to do. Examples include "kill someone" or "let someone write to a file without authorisation". Then you have to say exactly what you mean by that (e.g. how might the system kill someone, and what constitutes "authorisation"), and before you know where you are you have a document several hundred pages long, much of which should be Z or VDM. Then you need to check that for any holes.
Then you have to prove that the system fulfils these requirements. Now a full formal proof of this is going to be a larger project than the original software by an order of magnitude, even with today's automated support. So the only feasible solution is to write the software very carefully. You have to identify each piece of code that might cause a non-spec event to occur, and then explain how it prevents any execution which might be outside the spec. And since this is important, you have to leave an audit trail behind so that potential clients (who are going to be staking lives and fortunes on your software) can see that you have done this properly. Unless you do all this, your system cannot be trusted.
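To make that concrete, here is a toy version (my own notation, not taken from any real Z or VDM document) of the "no writing to a file without authorisation" requirement, expressed as a single invariant the implementation must be shown to preserve:

    \forall s \in Subjects,\ f \in Files,\ t \in Time:\ writes(s, f, t) \Rightarrow authorised(s, f, t)

A real specification is hundreds of pages of definitions like this, one for every operation the system can perform, plus the argument that each operation preserves every invariant.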
(Aside: you also have messy recursive problems with trusting your development tools and hardware)
Put it this way. We all know Linux is reliable, right? But would you stake your life, or even your house, on keeping your Linux box up continuously for the next 12 months? I sure wouldn't. I wouldn't even do that with BSD. There are a few bits of software I would do that with, but... they were all written to these kinds of standards.
No matter how you slice it, this stuff requires a lot of hard work and bureaucracy. The question of "who will watch the watchers" is particularly germane to the creation of trusted software.
Paul.
"Secure", and "Trusted" are not the same thing. (Score:2)
The problem is that "trusted" within the infosec community means that you can reliably assume in a beyond-mission-critical environment that the system is totally secure from attack.
This means a careful, meticulous design, and very rigid, formal implementation and testing. The aspects involved are far more comprehensive and cohesive than Open Source generally considers...
We look at Open Source software in terms of the safety of a package. Sendmail is not secure, or QMail is secure, and imapd is not secure or... You get the idea... Building a trusted system means looking at the big picture -- from as low a level as how kernel data structures are defined and manipulated to as high a level as individual file permissions, and pretty much everything in between...
IIRC, "trusted" Solaris tends to lag a couple versions behind the general release of Solaris and takes a bit of a performance hit. Why? Because it takes a lot of time to evaluate, and fix an OS to make it "trusted."
A simpler way of looking at the difference is this: A "trusted" system goes to great lengths to meticulously ensure that no edge cases exist. A "secure" Open Source system takes the shotgun approach: With enough monkeys pounding on enough keyboards for a long enough time you'll get the works of Shakespeare... Translation: Open Source counts on enough talented, skilled developers working on a project to cover all the cases without anyone specifically telling them to do so.
In the end, Open Source may or may not come out with a product whose security matches that of a "trusted" system -- but you wouldn't be able to recognize it if it did come out. You couldn't *verify* it.
-JF
If someone made a complex system... (Score:2)
...completely adhering to specs, it probably would be trusted. However, no one has ever done that -- even systems that are supposed to be "trusted" contain parts that aren't. Probably some embedded stuff can be "trusted", but that is the area where security problems matter least and only proper behavior with certain hardware is necessary -- which is much less of a challenge than, say, an HTTP server plus operating system, a desktop environment or even a router. If people had many decades for the development of each version of an operating system, switching from "chaotic" development to formal specs, with proofs that each piece of code adheres to them, probably would improve things. However, this is not the case, and even if it were, the specs might well end up being developed more slowly than anyone could follow them, slowing the pace of development to a halt.
Re:I agree with him (Score:2)
Is there anything preventing building automated test suites? I would think that there are plenty of excellent tools to build the actual software: Expect, perl, etc. The problem is deciding what tests to run, and finding the time for someone to write code that doesn't extend the functionality of the system. It is useful, but until it actually catches some bugs before they are found by other means, there isn't any glory in it.
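To sketch what I mean, here is a minimal harness (Python purely for illustration; Expect or perl would do just as well, and "mytool" and its expected outputs are made up):

    #!/usr/bin/env python3
    # Minimal regression harness sketch: run the program against known inputs
    # and compare its output and exit status to recorded "golden" results.
    import subprocess
    import sys

    CASES = [
        # (arguments, expected stdout, expected exit status)
        (["--version"], "mytool 1.0\n", 0),
        (["--frob", "input.txt"], "frobbed 3 records\n", 0),
        (["--frob", "missing.txt"], "", 1),   # must fail cleanly, not crash
    ]

    failures = 0
    for args, want_out, want_status in CASES:
        proc = subprocess.run(["./mytool"] + args, capture_output=True, text=True)
        if proc.stdout != want_out or proc.returncode != want_status:
            failures += 1
            print("FAIL: mytool %s exited %d" % (" ".join(args), proc.returncode))
    print("%d of %d cases failed" % (failures, len(CASES)))
    sys.exit(1 if failures else 0)

Nothing fancy; the hard part, as I said, is deciding what belongs in CASES and finding someone willing to maintain it.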
Re:Whats that got to do with it? (Score:2)
The "but it's better than Microsoft" arguement is wearing thin, and might soon be rendered meaningless.
You'll have to come up with more meaningful ways of praising Linux and Open Source in the near future.
Re:Trust (Score:2)
Check the code against what? Besides looking for obvious bugs, how do you verify that the code meets the requirements if they have never been written down?
Re:Whats that got to do with it? (Score:2)
It's a well-established tenet of modern 'quality' theory that it's not enough to 'test quality into a product.' One cannot simply ignore the process by which something is produced and just test it as a final step.
I think this article, however, goes further than this, and actually hits Linux and Open Source at one of its weakest spots: the lack of a top-down, well-managed design. Linux has no team of central designers determining the basic structure of the OS. It instead relies on knowledgeable developers using 'Unix' in general as a reference design to produce a 'kinda sorta' Unix clone OS. The severe lack of any real architects at the top overseeing the whole effort is an issue that isn't well addressed by the 'Open Source' ideologues.
Re:I agree with him (Score:2)
But does Open Source mean that there CAN'T be a design spec or any formal methods? Just because the current leading lights don't use them, doesn't mean it CAN'T be done.
There could be design specs and use of more rigorous techniques up to the use of formal methods, but would Open Source developers be prepared to submit to that discipline? There is no reason why the code produced should not be free (both senses), but are there enough developers with both the skill and the inclination to work to that model in an Open Source effort? Perhaps the problem is that those inclined to work on Open Source projects like freedom to use whatever techniques and tools they like, and this makes it hard to verify that the results meet the negative (no unexpected behaviours) as well as the positive aspects of the specification.
There's plenty of closed source stuff that doesn't have any formal methods - does that mean that it can't be done? No. Maybe it's harder to do with open source, because the development cycle tends to be more relaxed, and it's harder to impose methodologies on those working on it, but that doesn't make it impossible.
I think that only a tiny proportion of closed source software has been developed with a serious input from formal methods. I am not sure that detailed and accurate design specs are all that common either.
Being unable to impose methodologies on voluntary contributors is probably the main obstacle to having a typical Open Source project meet the view of "trusted" that Spaf was using.
Attitude to those who write specifications also plays its part. We will never get an Open Source project with a rigorous demonstration that it meets a usefully detailed specification unless the writers of the specification get at least as much respect as those who write the most difficult code.
I don't know if he said it at that infosec seminar, but Spaf has also pointed out that not enough people are being educated in computer security, and how to build more trustworthy systems. The shortage means Universities can't keep faculty with experience in this area, and this leads to the vicious circle of less teaching and research that means the shortage continues.
Trusted Systems - Building Paranoia into the code (Score:3)
The difference in trying to build a trusted system is the formalization. The team which tries to break a system (or certify it), is called upon to review the design, and try to find all weaknesses that might be used to break it. While this could be done with open source, you'd have to find someone to play the role of review team. This could be a serious problem.
I think that it's certainly possible for an open source project to build an operating system to match "C2" specifications, or better. The problem with meeting the Orange Book requirements is that they require trained people to review and test a system. This requires a large commitment of time, effort, and especially money. Then when you are done, you have certified one version of the system and nothing subsequent. The open source model is one of many releases, with bug fixes as we go, which has many strengths, but the rapid obsolescence of what would be the "blessed" version is a big weakness. So it's possible, but I don't think we should try for it.
I do think that there is a better way: if the testing procedures can be automated, so that the code can be covered from every angle by a set of computers pounding on the system, the result could be MORE secure than a "Trusted System", at least at the C2 level. A set of workstations could be set up to pound a system and try to find weaknesses, a la "SATAN", with emphasis on stack overflow weaknesses, etc. If a computer does all the functional tests that a human review team does during C2 certification, this is a good step in the right direction, if nothing else. Add to this a group dedicated to adding test conditions for new flaws as they are found, or hypothesized, and I think you've got the winning combination for a truly secure platform.
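As a rough sketch of the pounding idea (Python for brevity; the host, port and payloads here are placeholders, and a real harness would also restart the daemon and collect core files between probes):

    #!/usr/bin/env python3
    # Throw oversized and malformed input at a test instance of a network
    # service and flag any probe that gets no answer back.
    import socket

    TARGET = ("127.0.0.1", 8025)        # a disposable test instance, never production

    def probe(payload):
        try:
            with socket.create_connection(TARGET, timeout=5) as s:
                s.sendall(payload)
                return s.recv(1024) or None   # empty read = peer went away
        except OSError:
            return None                       # refused, reset or timed out

    payloads = [
        b"HELO test\r\n",                     # sanity check: a normal request
        b"A" * 100000 + b"\r\n",              # oversized line, classic overflow bait
        b"\x00" * 4096,                       # embedded NULs
        b"%s%n%x" * 500 + b"\r\n",            # format-string bait
    ]
    for p in payloads:
        status = "ok" if probe(p) is not None else "NO REPLY -- investigate"
        print("payload of %6d bytes: %s" % (len(p), status))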
What is really needed is a way to make all of the consequences of a line of source code visible. With C, C++, Delphi, Fortran, et al, there is no way to see all of the possible (side) effects of a line of code. That simple iteration could have a bounding condition that causes a major hole in security, or it might be perfect; how can anyone know? I consider this the problem of brittle code.
Object Oriented Programming is a good step in that direction: it does, if properly used, make code testable down to the object level. There need to be better tools for hammering on each object and making sure they work as delivered, but at least you have something which CAN be tested at a unit level. OOP is good, but still not good enough. It goes in the right direction, because cracks in a system can be traced down to a single component, but those components can be brittle. Testing those components to a high degree brings us close to the goal (my goal, actually) of perfect code. Close, but no cigar.
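For instance (a made-up toy component, just to show the unit-level idea): every operation checks the one invariant the object promises to keep, so a crack shows up at the component that caused it.

    # Toy component with a built-in invariant check.
    class BoundedBuffer:
        """Fixed-capacity FIFO buffer that refuses to overflow."""

        def __init__(self, capacity):
            assert capacity > 0
            self.capacity = capacity
            self.items = []

        def _invariant(self):
            # the one property every operation must preserve
            assert 0 <= len(self.items) <= self.capacity

        def push(self, item):
            if len(self.items) >= self.capacity:
                raise OverflowError("buffer full")   # signal failure loudly
            self.items.append(item)
            self._invariant()

        def pop(self):
            if not self.items:
                raise IndexError("buffer empty")
            item = self.items.pop(0)
            self._invariant()
            return item

    # unit-level hammering, independent of any larger system
    buf = BoundedBuffer(2)
    buf.push("a"); buf.push("b")
    try:
        buf.push("c")                            # third push must be rejected
    except OverflowError:
        print("overflow correctly refused")
    assert buf.pop() == "a" and buf.pop() == "b"
    print("BoundedBuffer behaves as specified")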
A platform needs to be built which can make writing a computer program as simple as working with physical objects, such as a hammer and nails. When you pound a nail into a 2x4, you know right away if something goes wrong, and what all the effects are. It's easy to test, and to visually verify the results. Coding needs to be the same way. This would take a lot of the "magic" (and uncertainty) out of the process of coding. What ideas are out there for making this happen? Is a system of individually tested components good enough?
I have a few ideas, which I posted more than a year ago with my own manifesto at basicsoftware.com [basicsoftware.com], ( and I need to review those ideas myself) but really haven't contributed anything myself. So I'm in the same boat... my contribution is a possible pointer in the right direction, and a keen eye on others, and their response.
The goal should be to make software better than any possible real-world analogy. We need components that don't fail and work predictably in all conditions. If that's not possible, it would be good for the component to signal exactly what went wrong. Nails holding things together loosen over time; software doesn't have that weakness. Nails don't signal failure, but wouldn't it be nice if the nails in your roof signaled failure before the disaster struck?
--Mike--
Trust what? (Score:2)
First of all, based on this definition of "trusted", the open source issue is irrelevant. Software could be "trusted" whether or not you could see the source, so long as the thing was designed to a precise formal specification and underwent sufficiently rigorous testing. None of the software I use (directly) is "trusted" in this sense.
That doesn't mean I don't trust my software. I put a certain amount of trust in it -- some more than others. But this is "trust" in the more usual sense. The sense in which it is used in the article above is an unfortunate jargonisation of a word with a well-defined meaning in everyday use. Let us rather refer to these systems as formal systems. That's probably an overloading of the term "formal", but at least there's less chance of getting confused relative to "trusted". Of course, you would expect such a formal system to be highly trustworthy by merit of the rigorous design and testing, and critical systems should almost certainly be constructed in a formal manner.
In general, I think more systems should be handled in a more formal manner. Indeed, in my opinion, lack of formality is one of the major reasons we have crap software. Usually we have specifications written by marketroids or by hackers scratching whatever itch is pestering them most at the moment. This doesn't make for good formality. Even when formality is attempted, it's rarely done well.
What's surprising about open source is that it seems to have a generally higher reliability than many commercial offerings, despite the fact that the development process is unfunded and chaotic. In that sense, it is relatively trustworthy, despite being informal. This isn't an absolute statement: there's a huge spectrum of quality, and it's mostly the top end of the quality that's interesting.
It would be interesting if a formal open source system were to be developed. The code contribution process could still be informal, but the specification and testing would have to be more organised. Infrastructure systems like DNS and TCP/IP stacks would do well from this, I think. Would there be enough interested parties to write the test harnesses? Who knows.
Cultural Dichotomy (Score:2)
Re:it's all in the definition (Score:2)
Untrue. It hasn't been done yet, but it's very possible. However, it wouldn't be a "typical" OSS project:
I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec
Not to make this seem like an attack, but your personal opinion regarding OpenBSD isn't really relevant to the rest of the world. Right now, having a software system be labeled as "trusted" means something: that the system's developers used the best techniques available to make sure that it could fit the regular definition of "trusted". If we abandon it in favour of "I think it's pretty secure, and so does the general consensus, so why not just call it trusted", the word loses its meaning and purpose entirely.
This is a formal concept of trusted (Score:2)
I think that an important part of what you have to understand on this issue is that he's referring to a very formal concept of a trusted system. If you read the government guidelines on building trusted computer systems (e.g. the Orange Book), one of the specific factors involved in designing systems at level B3 (IIRC) and above is that they be formally specified and proven to meet that specification.
While it's easy to gloss over this kind of requirement, there's some reason to think that it's actually a good idea. By the time you get to a class B system, you have to think about things like mandatory access controls, covert channel analysis, and the like. A formal demonstration that A) your system specification succeeds in meeting those goals and B) the system as built successfully implements the specification seems like a reasonable basis for reaching a high state of trust.
It's not impossible that you could build a free software project that could achieve this kind of goal. It certainly seems to be the case that every time someone says that free software can't do this, that, or the other thing, they've been proven wrong. But it's going to be tough to attract people to a project where you have to do stuff like keeping records of what you've done to meet design specifications, which is actually a requirement of high level trusted systems.
OH YEAH? (Score:2)
Next, copy losetup, bash and your kernel to an unencrypted /boot partition. Encrypt everything else. Add an option to lilo specifying that /boot/myscript.sh is init. I don't recall whether you need to specify /boot as the root partition and remount it, so some experimentation may be necessary.
In myscript.sh, put the commands that run losetup and prompt you for your passphrase(s) to mount your partitions. Enter them. The script should then remount root and exec the real init, and the system boots normally.
Dear sir, I humbly ask that you attempt to bypass the login: prompt and access my data on a system so configured. You may use any tool you like.
Open Source not secure my arse...
Old advice, new risks. (Score:2)
"If it has to secured, DON'T put it on a computer".
I used to believe he was joking but now, with a little more experience, I totally agree with this declaration. If it's sensitive, for goodness's sake, don't put it on a computer, and especially on a computer which is connected to ANY form of network.
With all due respect to a senior 'net citizen such as Dr Spafford (who is certainly more intelligent than I am or ever will be), it is true that Linux (or *BSD) evolve in a chaotic and ramshackle way. But we should always remember a few points:
Want to keep something a secret? Remember the advice of that engineer: write it down on a piece of paper, use a one-time-pad, lock the paper in a steel box and put the box in a military-grade safe. Burn all other traces and throw the ashes in a vat of acid. But, for goodness sake, DON'T leave it on a computer!
Of course, this is just my US$0.02...
Re:Whats that got to do with it? (Score:2)
The trust has to begin with a trusted algorithm, then the trust has to follow in an unbroken chain through all the coding and testing. And the entire system has to be trusted -- although you might do this process with a cryptographic library, you also have to trust the system library routines, the program loader, the network library and drivers... At least with environments such as MULTICS the trusted items can be well isolated from the untrusted, but it's still a big job making and keeping it clean.
Fortunately, there is a difference between a provably secure environment and an environment which is secure enough in practice. People have been creating small provably-secure environments at great expense for 30 years, but most people can't use or afford the results.
There have been efforts to blend mathematical algorithm provers with programming tools. Perhaps someone will succeed with something general enough to be able to review existing code.
Bruce Schneier says ... (Score:2)
... Security is not something that can be tested for.
Makes sense if you think about it. And it blows a truck right through the "you need a formal spec to test against" premise.
I think Schneier makes much more sense from a theoretical point of view.
From http://www.counterpane.com/crypto-gram-9911.html [counterpane.com]
The only reasonable way to "test" security is to perform security reviews. This is an expensive, time-consuming, manual process. It's not enough to look at the security protocols and the encryption algorithms. A review must cover specification, design, implementation, source code, operations, and so forth. And just as functional testing cannot prove the absence of bugs, a security review cannot show that the product is in fact secure.
No mention of a formal spec.
Go Bruce!
Spafford's right, but wrong (Score:3)
In fact, the two are orthogonal to one another in most aspects, but they can be aligned. I don't see any dichotomy between open source and the kind of intense process that produces trusted systems, it's just that the open source movement hasn't matured to that level yet.
Spafford's observation is correct today, but he makes the erroneous assumption that the open source community will continue its current practice of doing inadequate planning and testing to ensure real trusted systems. This is one of those areas where the arrival of the commercial backers of open source, like IBM, can offer substantial contributions.
The one thing that has marked the open source movement from the beginning is the ability to respond and change quickly to get the job done. We are in the adolescence of the open source movement, and since trusted systems do require more process than the standard open source method provides, and since people will want trusted open source systems, it's only reasonable to assume that the open source process will morph as required to "scratch the itch".
We need _formal_ documentation! (Score:2)
A few outdated man pages, HOWTOs and pointers to Web URLs just won't cut it in this context. In Trusted systems you will need formal proof of ownership, holistic design, white-box testing/analysis and black-box testing, plus other things I can't come up with now. Documentation on the whole process of conceiving and creating the system would be a great benefit. This would have to be Applied To The Whole Shebang(tm), down through every library, including formalization of every function and good comments on most of their lines.
Now, doing this properly _after_ the actual implementation has always been a bad idea, and it requires much harder work than if it had been done in the first place. Doing it up front, however, is an impossibility given how the Open Source process really works. It would never be as good as it could have been.
If more companies started supporting Open Source solutions, they could perhaps fund this kind of work and release it to the public (either for free or for a fee). It could benefit these companies, because now they can Trust and use free software. Actually, I saw a book that documented the whole Linux kernel once, so I know that has been done successfully.
Of course, "trust" is a word of many meanings. I for one trust many Open Source solutions simply because I know they have stood the test of time again and again. However, I'm always aware that new versions may break things considerably, and the documentation is not always updated. That is why Open Source is not currently a good process for building really Trusted systems. (This has nothing to do wether you release the source or not, which should always be a benefit to trust.)
So to me personally it is good enough, and currently the Open Source process has quite a few benefits over closed source in this context (and many more regarding price and freedom):
1) Peer review (less bloat, great functionality and inter-operability, harder to put in trojan-functionality)
2) Large tech-savvy userbase (quicker bug-find-and-fix cycle; easy to get help and build a community)
3) Ability to find and fix errors or improve the system yourself (although this should never be necessary in a trusted system, and doing so may contaminate the system with e.g. bad binaries; it can, however, be done safely if you follow the right process)
Note that these points are connected to the very fundamentals of how Open Source works, and should be seriously considered by companies that not merely want to ride the "Open Source Wave".
However, if I were to buy a Trusted system from a company, I would make sure there was a contract that held them accountable. That would be one point in favour of proprietary software, I guess. I think it would be hard to find a company that wants to be held accountable for Open Source code (written by others) that it has merely certified.
And lastly, always remember this: There's no such thing as 100% security. You cannot prove security, only prove insecurities or specific lacks thereof.
So don't put your trust arbitrarily.
- Steeltoe
Re:it's all in the definition (Score:5)
My assertion is that open source challenges the notion that you need a formal spec to develop trusted software. Much like the submitter of this story, I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec. Perhaps it's time to realize that formal specs help to get things done correctly, and they certainly help get things done quickly (by preventing, in theory at least, feature-creep), but they certainly are a requirement.
I'm assuming you mean 'aren't' a requirement.
You may trust OpenBSD, but many people won't. The US Government, in particular, won't; nor will any company/agency/whatever that defines 'trusted' in a way that corresponds with TCSEC or ITSEC.
'Trusted' means a lot more than 'look, historically, it's got a great record' or 'we audit all our code'. Certainly, these elements form part of the equation, but there's a lot more to it than that: configuration management systems, proper specifications and proofs of code against those specifications, a structured engineering process, etc.
TCSEC and ITSEC do define themselves, at least partly, in terms of formalisms. The same is true of DEFSTD 055 in the UK, and presumably of similar standards in the US. Quite often, what is required of a trusted system is a proof of the security of that system.
Most of the systems that are developed for high levels of assurance under ITSEC/TCSEC are specified in highly mathematical notations that your typical UNIX hacker doesn't really have much interest in. The Certificate Authority for the Mondex smartcard system is designed to ITSEC E6 (which is roughly equivalent to TCSEC A1, for those more familiar with the Rainbow Books): the formal top level specification runs to 500 pages of Z.
Even once you've got a specification to work to, you still have to implement it. Now, if proof of source code against specification is required, you can throw away your C compiler right now, because proving properties of C programs is a nightmare: you want a programming language with a simplified semantics, with dataflow annotations, and an associated toolset. Something like Spark ADA.
Some open source Unices may have a good record for security, but I doubt they'll ever meet the higher assurance levels. Most of the people who enjoy working on open source don't have the skill set or enthusiasm for the sort of work required here. How many of you wince when I say 'formal axiomatic semantics'?
Moreover, the customers for systems like these want to be able to hold someone accountable. I know that in the context of your typical company, this is an 'old hoary chestnut' and a much debunked myth, but the fact is that when the subject matter becomes sufficiently serious, support becomes a real issue, and the only way that companies _can_ sell is by standing behind their products.
I'm not saying that a trusted system (in the current context) could not be developed open-source, but that there are obstacles:
The unfortunate fact of the matter is that writing trusted code is quite hard, and often requires a different mindset from 'hacking'. OpenBSD may have neat features like encrypted swap, and an audited SSH component, but it doesn't have an FTLS, MAC, or (God forbid) object code MCDC testing.
Cheers, Nick.
I sat in on this presentation Wednesday... (Score:2)
We can definitely learn from this man, listen to his experience and knowledge, steal his ideas, and write more secure software, but leave the obsolete preconceptions behind. Then again, he should know that the only reliable crypto is open/peer-reviewed crypto, and security in general needs to be scrutinized by many people of different talent areas to be of quality.
Your Working Boy,
It's all in the people... (Score:5)
The following is dry, and opinionated, from the POV of an old-timer VV&T/QT/Tester.
I'm big on specifications, and will argue both sides of a contract when a spec is violated. I've even been in a couple of shouting matches over them, fighting for the correct implementation rather than supposed "flexibility", though specs do need to be bent at times.
Fortunately, the shouting matches are rare and, as a Contractor Scum(tm), I never take them personally... only as a bargaining point and to help stiffen the backs of those who are easily swayed. It's a shame when good projects go bad, but that's other people's money!
Good specifications are invaluable in eliminating all sorts of conflicts and allow projects to actually end without different groups wanting to kill each other.
Unfortunately, specifications are by necessity limited in scope. If it's not in the spec, it can't easily be added. If it's in the spec, it can't be modified easily.
On a formal contract, adding in goals like "The system shall be fast" doesn't work well, so more detail is usually specified: "The system shall retrieve a query on the client stations within 4 seconds at all times".
There's always a few details that slip by, and if the people on the project aren't reasonable the details will cause quite a few social and technical problems.
Even relying on an outside specification is a problem... since APIs/protocols/... are usually vague on some level.
The people who implement it and the environment have a much greater impact on the results; there will be good and bad free software / open source projects...as there are good and bad commercial projects.
From what I've seen, I'll trust open source as much or more in most cases...but I'll test it first.
Re:it's all in the definition (Score:5)
OpenBSD is a wonderful, secure system. If I had to trust that an out-of-the-box, off-the-shelf system wouldn't give me a security problem on my hypothetical servers, OpenBSD would be there in a second.
What this guy is talking about is rather different. We're talking trusted functionality. Trusting that this software controlling your nuclear power station does exactly what it says. Trusting that your rocket will fly into space properly. Trusting that your ABS brakes will stop your car.
Now, if we're being at all practical, this requires a tight, formal specification which can be effectively tested. You can't ensure that a system works properly if you don't know what working properly means, and you can't practically ensure that it will work properly if you don't have a tight, complete and agreed definition to work from. Anything else means you'll have to spend a long time chasing down problems, which may well turn out to be fundamental to a design technique used in the implementation of the system.
Current 'open source' development styles simply do not permit this. There isn't any way to get this level of control, or even of proper design. Now, that doesn't mean that it's impossible to implement such software _as_ open source, merely that current methods won't work.
Frankly, though, I'm not sure there's any real point. Open development works very well with consumer applications marketed at computer nerds such as ourselves. We're prepared to put up with problems to get the bleeding edge in a certain respect. Release early and often is clearly sensible, while there are plenty of people who are demonstrably prepared to use this incomplete, unstable software and help the developers make it complete and stable.
Now, let's move this across to the field this guy's talking about. Let's imagine we're talking about the hypothetical ABS brakes on your equally hypothetical car. Release early and often becomes, to be perfectly honest, dangerous as it results in brakes whose functionality isn't certain. You can't be sure they'll stop your car. So, do you drive it? No. How do you find the bugs? Well, you can play with it on test cells, tracks, simulators and the like. But, how many people have them?
Now let's imagine the final release has a bug in it - a major problem, but not totally impossible. Let's suppose you manage to spot what's causing the bug. Should the team running the project take your submission? Well, I can't say I'd recommend it to them. If you're coding a bugfix without total knowledge of the system, its specifications and design parameters - which is inevitable in this environment - the potential for an unseen effect is huge. They're better off getting their own engineers, who know the problem well, to reimplement it. That way, they can know that it won't produce an unforeseen consequence elsewhere due to inadequate domain knowledge.
Releasing the source code for outside inspection may well help others to trust their code performs as they say it will, but it's not going to usefully do much more than that.
We are talking about a problem space which few of us here will ever encounter. It's hugely different, and the same models aren't necessarily true. We aren't talking about trusting your WP not to crash or report your every word back to its makers, we're talking about trusting your nuclear power station not to go into meltdown. And for that, the current 'open source' development methods are wholly inadequate.
Re:Amazing - (Score:4)
It's about OSS not being developed in a careful, thoroughly planned and controlled fashion. It's about the fact that right now, no OSS system out there can be satisfactorily considered "trusted", not even the high-and-mighty, "look ma I'm secure" OpenBSD.
I've pointed out elsewhere why a formal definition of the term "trusted" is important. In short, it's not enough to simply say "the general consensus is that it's secure, so let's just say it's trusted".
(And your absurd overgeneralisation to the effect that all expert reviews are corrupted doesn't help your case one bit. FYI, in the Real World (i.e. outside of Slashdot), ad hominem is frowned upon.)
Methodology is Irrelevant. You will be debugged. (Score:2)
Your testing is all very well-defined. If the pre-conditions are violated, then your input is in error. Your routine should determine this (how is unimportant) and exit safely. To test this, run the routine in a test harness and see what happens when you enter "normal", extreme and erroneous data. (Ok, hands up! How many of you have heard exactly the same thing from Comp Sci lecturers, throughout school and University? Just cos they're lecturers doesn't make them wrong. At least, not all the time.)
The next round of testing is the post-conditions. If your routine exits and violates a post-condition, then the logic in the routine is faulty. The case being tested is not being handled correctly, according to the specification. So you fix the routine. Big deal.
Who defines the pre-conditions and post-conditions? That's simple. For IP-based networking, the IETF RFCs define those very nicely. For device drivers, the specs for the device do the same. Then there are the POSIX and UNIX98 standards docs. Since Linux is a mish-mash of BSD and SysV, the specs for both of these are usable.
All in all, the problem with Linux is NOT that there aren't enough formal specs (there are plenty!) but that Linux doesn't have any widespread test harnesses, or databases of pre/post-conditions.
If someone (anyone!) could come up with a way of plugging ANY Linux module into a test harness, pounding it with different cases, and then checking the responses against the pre/post-condition DB to see if the module violated the specs, you'd have ALL the formal testing you'd ever need and auditing would become a breeze.
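Something along these lines (a deliberately tiny stand-in, in Python; the names are illustrative, and the real thing would wrap kernel interfaces or library calls with conditions pulled from the spec database):

    # Sketch of a pre/post-condition harness around a single unit under test.
    import math

    def isqrt_under_test(n):
        # the module being exercised (placeholder implementation)
        return int(math.sqrt(n))

    def precondition(n):
        # spec: input is a non-negative integer
        return isinstance(n, int) and n >= 0

    def postcondition(n, r):
        # spec: r is the integer square root of n
        return r * r <= n < (r + 1) * (r + 1)

    def run_case(n):
        if not precondition(n):
            return "rejected (precondition violated -- caller error)"
        r = isqrt_under_test(n)
        return "ok" if postcondition(n, r) else "SPEC VIOLATION for input %r" % n

    for n in [-1, 0, 1, 2, 15, 16, 10**6]:
        print(n, "->", run_case(n))

Point the same pattern at a module's entry points, with the conditions pulled from the pre/post-condition DB, and you have the kind of mechanical check described above.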
Dibs Linux for B1 before Windows!
Be All You Can Be (Score:2)
Specifically, it wants all access to all objects on the system to be fully logged, leveled security, and blatant marking, plus whatever else is in there and too dry to read.
However, remote access to systems is a job not only for the operating system of the host, but also for the network it runs on. Given that networks in 1983 were a little different than they are now, I would hope that a system meant to provide access to possibly classified data would rely on more than simply the security of the selected operating system, regardless of the openness of the source code. The system in that case would take into consideration the OS, the firewall, and the network connection from client to server. The possibility probably exists to build a system based on Linux that could be trusted, but it would need to be spec'd out with "system" referring to more than just the OS of a host computer.
When it comes down to it, would you really want the good name of Linux dragged through the mud by some military stiffs who can't secure the Army web servers?
--m
He's setting a specific definition of "trusted" (Score:2)
It is *not* true, however, that an Open Source system could never satisfy this definition. Set your formal specification and get your Open Source coders to work on fulfilling the spec. When they're done, then voila! A "trusted" Open Source system.
Alternatively, one could reject Spafford's definition of "trusted" as not matching the intuitive notion of trust. One could reasonably argue that "trust", in the common and intuitive sense, can only be realistically achieved by extensive peer review. I would furthermore argue that satisfying a formal spec is not enough for trust, because a spec might fail to define a system that most of us would intuitively view as trustworthy.
If all he's saying is that all we need is a spec in order for a system to be "trusted", without saying what such a spec has to be like, then I say he's wrong, because a spec can specify any kind of untrustworthy foolhardiness you can imagine.
To demonstrate this, I hereby give a formal specification of "trust": A system is "trusted" if and only if it crashes at least once for every six hours of operation. The formal test is: Run the system for sixty hours; if it crashes ten times or more, then it's "trusted". There you are, a formal spec and testing process. But hardly anyone could describe such a system as "trusted".
BTW, although I'm disagreeing with him, I remember Gene Spafford with great respect as a major driving force in the early days of Usenet, and the author of an O'Reilly book on security. I'm a bit dismayed that so many Slashdotters don't recognize him.
Re:Apologizing.... (Score:3)
If we approach this from the viewpoint of trusting that our OS will not crash or be hacked (or whatever) then this argument has some merit. But we're not, and this isn't really about commercial vs. community development. Trusted code, after all, normally means that we're talking a custom job for a particular application, rather than off-the-shelf.
A zen story (Score:2)
"Where are you going!" he said in a rude voice to the rival monk.
"I am going wherever my feet take me" said the rival in a mysterious voice.
This flustered the first monk. When the first monk went to his masters and related this, they beat him, and said "You fool! Ask him, 'What if you had no feet?'"
The next day, the monk saw the same rival walking down the same road.
"Where are you going!" he challenged.
"I am going whereever the wind blows," replied the rival solemnly.
This flummoxed the first monk, who was not expecting this. Again he related this to his masters, and they beat him, and said "Idiot! Ask him, 'What if there were no wind?'"
Thinking he had this figured out, he lay in wait for his rival, who in due course came down the road.
"Where are you going?", he challenged.
"I am going to the market to buy vegetables" said the rival.
---
The point of this is that when somebody is thinking on a deeper level and carefully factoring in your own methods of reasoning, he can defeat you. The Morris worm was a perfect example.
I agree that open source is not a panacea, but neither is formal specification and testing. Paranoia suggests multiple and orthogonal methods for enhancing security, not relying exclusively on one strategy.
Re:Whats that got to do with it? (Score:2)
What does the way something is developed have to do with the final product (or a given release), and the tests performed on it? You are testing the product, not the development environment, surely?
Nope; that's not true actually.
Probably more than half of the work that goes into the design and implementation of trusted systems has nothing to do with 'code', or how the final product tests.
When you get to high levels of trust, you cease to regard testing as an adequate assurance. You want to see evidence that the system has been specified carefully and correctly, and that the code meets the specification. Needless to say, if you don't have a specification, your system can't be trusted.
The most that testing can tell you about a program is that you haven't managed to make it Do The Wrong Thing. For high levels of trust, you want a proof that it can't. And, if Doing The Wrong Thing includes disclosing top secret data, missile launch codes or cooking a patient with gamma rays, then I'd bet you'd prefer the latter too!
Cheers, Nick.
Re:I agree with him (Score:2)
Automated testing sucks. Don't think it's going to save you any time or catch defects any more reliably or faster. It's tempting, and can be useful, but it's usually a big waste of time.
To do it right, you need to basically duplicate everything that the target program does, and verify its output. When the code changes, your scripts often break. Guess how many people on a project of 30 people will code that?
Now, how many people will want to write automated test scripts on a project with an early-and-often release schedule?
Having said that, limited automated testing can be very valuable. Usually testing protocols or data manipulation...ideal to catch some security issues.
Re:I agree with him (Score:2)
"Good chance for some company to make some bucks.": Yes, it can be!
Re:MICO? (Score:2)
My (richly illustrated) point is, it's not just about getting tested. There are specific formal procedures involved. It's not as simple as it's made out to be.
Re:it's all in the definition (Score:2)
There are no other conditions which comply with the phrase, and those pre/post conditions completely define all cases.
Thus, OpenBSD's aim to be secure out of the box, where secure is defined as preventing unauthorized access to services or data, IS a very formal spec and one that can be tested against.
The problem comes when managerial types define "formal specification" as something with a number attached to it, ISO 9000 compliant development, a rigidly-defined hierarchy of management, an EULA, Powerpoint slides, and a cheese-and-wine party.
NONE of these have anything whatsoever to do with formally specifying anything. Least of all the ISO 9000 stuff. ISO 9000 is the least formal doc I've read in a LONG time. It's worse than useless.
A formal specification is simply that. A specification of what goes in, and what comes out. THAT IS ALL! Specifications say NOTHING about implementation, they are simply definitions of what a given black box does.
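To make that concrete, here is a minimal sketch (the class and function names are invented, and this is not taken from OpenBSD or any real project) of a pre/post-condition pair written down as executable checks:

    # Minimal sketch: a pre/post-condition pair expressed as executable checks.
    # AccessPolicy and request_service are hypothetical names, used only to
    # illustrate "black box in, black box out" specification.

    class AccessPolicy:
        def __init__(self, grants):
            # grants: set of (user, service) pairs that are explicitly authorized
            self.grants = set(grants)

        def allows(self, user, service):
            return (user, service) in self.grants

    def request_service(policy, user, service):
        # Precondition: the caller supplies an identified user and a named service.
        assert user is not None and service is not None
        granted = policy.allows(user, service)
        # Postcondition: access is granted if and only if (user, service)
        # was explicitly authorized -- nothing more, nothing less.
        assert granted == ((user, service) in policy.grants)
        return granted

    if __name__ == "__main__":
        policy = AccessPolicy({("alice", "sshd")})
        assert request_service(policy, "alice", "sshd") is True
        assert request_service(policy, "mallory", "sshd") is False

Note that nothing in the conditions says how the policy is stored or enforced; that's exactly the point of a spec being a black-box definition.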
Assurance != Trust (Score:2)
ISO certification is a great comfort to purchasing managers who prefer trusting a bureaucratic process to trusting individuals or their own judgment. This is similar to companies that require job candidates to have a certification rather than independently assessing the candidate's skills and knowledge. It's not meant to be an equal-opportunity process; it's meant to be a discriminating process that lets a decision be made without anyone having to exercise their own judgment and thereby risk their own reputation, and without having to trust anyone else's.
The fact is that making judgments of quality or security is difficult, and most people are lazy. Nothing prevents anyone from coming up with a way to measure security and putting OpenBSD to the test. This mentality, however, would make it imperative that the software be retested with every change. And what assurance is there that the test itself, or the person conducting it, can be trusted? ("We'll need to see some certification.")
Certain types of people will always have greater confidence in the legal recourse to hold someone else accountable than they have in their own judgment. It's just too bad that those people are given so much credit.
Re:Bruce Schnier says ... (Score:2)
But what we're discussing here isn't just whether a system is secure: it's whether it's trusted, which has a very specific definition. That's the entire point. Is OpenBSD, by itself, secure? Very much so. Is it trusted? Nope.
Re:formal seems to be the word here (Score:2)
No wonder so much software sucks. I wonder how long a civil engineer would stay out of jail if he went by "if it hasn't fallen down so far, why should we bother making sure that it won't?"
Trusted unrelated to reliable/secure (Score:3)
Remember the NT C1 (now C2) compliance thing? Because NT's design happened to include some of the elements of the C2 definition, they were able to come up with a configuration that could be trusted. Not bug-free or secure, but trusted. (Note: NT's C2-evaluated configuration used to require having no network card installed, but I think they fixed that.)
OpenBSD is really fucking secure, but isn't designed to the spec, doesn't include the ACLs and other stuff needed for DoD compliance, etc.
Neither does FreeBSD, but remember the Trusted FreeBSD [slashdot.org] project? They are trying to make a B1 compliant (trusted) BSD based off FreeBSD.
Also, operating systems are not inherently trustable. It is the entire system that earns a security rating. It largely involves fine-tuned control of file access, not fine-tuned fixing of bugs.
Alex
Yes, I picked NT as my "trusted OS" mostly because it will generate the stupid /. effect of "waah waah waah, NT is horrible, you must be a moron", the required "you are the stupidest ass in the world", and other stuff like that.
Get a grip, kids! OSes are NOT the end-all and be-all of life. Further, drop the prejudices. I am an MCSE; that does not make me a moron, clueless, or a lemming. Indeed, just because all the MCSEs that you know are dumb does not mean that they are dumb BECAUSE they are MCSEs. Some of us happen to be very competent administrators who happen to have a certification. Some of us actually learned our shit to get it, instead of just memorizing study guides.
Quit hurling insults at people who sometimes disagree.
Alex
Problems with his argument. (Score:2)
Secondly, you can use the Bazaar to reduce the cost of developing such software. Take a Bazaar-developed program that does pretty much what you want it to, draw up a formal spec, pay some programmers to audit the program and bring it up to spec, and then go through the formal testing. If you pick wisely, this could greatly reduce the cost of such development, and it should comfortably meet his definition of "Trusted".
Thirdly, I question whether formal specification improves security; granted, that's no consolation if you're working for an organization that requires it. Formal specification merely means that some thought went into design before the program was written (or modified, in the above example). It is very difficult to test a specification for security problems, for obvious reasons. Also, it is easy to write a program that matches a formal specification yet introduces subtle security holes.
Fourth, there is no reason why you can't perform formal testing (for what it's worth) on any piece of Open Source software, Bazaar or not. Then you have a formally tested program. You must refrain from upgrading without running through the testing procedure again; just like proprietary security software.
The bottom line is, if you want formally trusted software, you've got to spend some money. Open Source will not prevent this, Free Software will not prevent this, neither the Cathedral nor the Bazaar will prevent this. That doesn't mean they should be excluded from consideration.
----
problem with specs... (Score:3)
When WYSIWYG interfaces appeared, someone pointed out that not only is What You See what you get, but What You See Is ALL You Get. You can see on the surface the document or page or whatever you're creating, and how it's looking, but that's all you're going to get: nothing else, no more, no less.
Take, for example, a standard graphics program like the GIMP, and compare it to POVRay. The GIMP's WYSIWYG interface is really slick, but with POVRay you can create ray-traced images that would be next to impossible in a WYSIWYG environment; you just don't get to see the exact creation before it's done.
With programming, a formalized, structured process ensures that the program will give you what you want, but it will never provide more than that.
True "Innovation" will never occur. No one may spot the flaw in the security model, no one may realize that 40-bit encryption is a bad way to protect DVD's from being copied, no one may predict that a Record-Industry defined "secure" file will only be effective for a couple of
But setting goals while allowing the programmers large amounts of flexibility lets you deliver not just programs that meet their requirements, but products that truly meet their users' needs.
And trust me, clients' requirements and their needs are almost always two completely separate things.
Use Gnumeric as an example: the author did what? Copied Excel. What did Excel do? Copied Lotus. Lotus? They copied VisiCalc.
But the question remains: these products are meeting requirements, but are they really meeting the NEEDS of the people that use them? Couldn't someone have thought of a better way of setting up a spreadsheet? Making the formulas? Hell, why are there formulas at all?
But I'm sure you get the point.
Linux may not have the strict methodology that modern business management and quality testing require, and that is exactly why it ends up with higher-quality products in a shorter amount of time.
Re:Two variables (Score:2)
Not really. Remember, most of the internet was built on "formally designed and reviewed" "open source" software. Look at the BSD development cycle, for example. It is much more controlled than, say, the release-early-and-often-new-micro-version-every-f
Haiku (Score:2)
Call me chaotic
Formal specs are safe but dull
While I live, I hack
------
Re:I agree with him (Score:2)
The trouble is, you can have something like Lint go through and say 'that looks dodgy'. But you still can't prove that the code is doing what you want.
Our company has an in-house testing tool. We write long (LONG!) lists of test scenarios in Excel, then use the tool to check that the outputs of each function are what we expect. This has two flaws: (1) the tests may themselves be wrong, or (2) the design requirements for the function may be wrong. If the tests are wrong, either a bug can get through or a valid bit of code gets flagged as wrong. If the design is wrong (e.g. the design asks us to add 1 instead of subtracting 1) then it'll be coded wrong too, and we'll never see it.
So the layer above this is to put all the functions of a module together, and then do some more higher-level tests on that. And then put all the modules together, and do some more tests on that. And you've got a pretty good chance that your code is right after all that, but you're still not sure.
As for automatically generating the tests - where from? Do you have an explicit, computer-parsable set of requirements for your project? Have you ever seen, or even heard of such a thing? Ever? I'm afraid it just doesn't exist. The design is done by humans, and the test scenarios are extracted from the design by humans. Humans are fallible. Shit happens. The best you can do is test as best you can. And if you're thinking about getting the test scenarios from the code - well, d'oh! if you're testing the code with tests extracted from the code, nothing's ever going to fail!
We're writing safety-related and safety-critical software for car engine controllers. We're going on to other stuff like drive-by-wire. We can't afford errors, so there's oodles of testing. But shit does happen, so all we can do is say "well we tested it the best we could". Nothing is ever 100% safe, and 100% testing is provably impossible. So the problem is getting folks to make sure the designs are right, then that the code meets the designs. Open-source seems to fulfill that requirement pretty well. The problem is for older things like 'sendmail', where a design doesn't actually exist - newer projects like Apache would (I imagine) have proper design documentation.
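For what it's worth, here is a toy illustration of that "add 1 instead of subtract 1" problem; the function name and the numbers are made up. A test written from the design catches a coding slip, but if the design itself is wrong, the test and the code fail together:

    # Sketch only: adjust_counter and its expected values are hypothetical.
    # The test is written from the design document ("decrement by one"),
    # not from the code, so a coding mistake is caught -- but if the design
    # should have said "add 1", both the code and the test are wrong together.

    import unittest

    def adjust_counter(value):
        # Design says: decrement by one.
        return value - 1

    class AdjustCounterTest(unittest.TestCase):
        def test_decrements_by_one(self):
            # Expected values taken from the design, not derived from the code.
            self.assertEqual(adjust_counter(10), 9)
            self.assertEqual(adjust_counter(0), -1)

    if __name__ == "__main__":
        unittest.main()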
Grab.
Formal Specs? RFCs! (Score:3)
The HTTP RFCs are a good example.
This is what you MUST do:
This is what you SHOULD do:
This is what you CAN do (if you feel like it):
This is what you MUST NOT do:
Pretty cut and dried, and rather effective (the web works, don't it?).
I'd say that most Java implementations (from most companies) would fail full compliance with the Java spec (well, at least some of the Java specs... there are so many these days).
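As a rough sketch of what turning a single RFC "MUST" into an automated check looks like (the parser below is deliberately toy-sized and the function name is mine): HTTP/1.1 requires a Host header, and a server must reject an HTTP/1.1 request that lacks one with a 400.

    # Sketch: one RFC "MUST" turned into a check. Not a real HTTP parser.

    def check_host_header_rule(raw_request):
        lines = raw_request.split("\r\n")
        request_line = lines[0].split(" ")
        version = request_line[-1]
        headers = {}
        for line in lines[1:]:
            if not line:
                break
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
        if version == "HTTP/1.1" and "host" not in headers:
            return "MUST reject with 400 Bad Request"
        return "ok"

    print(check_host_header_rule("GET / HTTP/1.1\r\n\r\n"))             # MUST reject with 400 Bad Request
    print(check_host_header_rule("GET / HTTP/1.1\r\nHost: x\r\n\r\n"))  # ok

The MUST/SHOULD/MAY language is what makes checks like this possible at all: you can only automate a requirement once it is stated unambiguously.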
Re:Trusted unrelated to reliable/secure (Score:2)
Just as an aside, there is significant evidence to the effect that Microsoft paid for their C2. Specifically, the people originally hired to do the C2 evaluation later quit and went public with the information that Microsoft attempted to bribe them into certifying NT. Why did you not hear about this? The same reason you probably didn't know that www.microsoft.com.br was defaced over Memorial Day weekend (that story was only available in Portuguese): Microsoft has considerable weight with the media in this country.
Now on to more important topics... If open source is by definition an untrusted system, then why did the NSA contract for a secure version of Linux? I can't think of the name of the company offhand (it's a firewall company out of California), but I know that the NSA contracted them to build an NSA-certified secure system, for use by the NSA (and later everyone else), based on Linux.
And BTW: Alex, before you start whining that I'm just an anti-NT zealot, I am also an MCSE and while I agree that being an MCSE does not make you a moron, I would hope you realize that Microsoft does NOT test for the administration skills you really need.
Re:Whats that got to do with it? (Score:2)
Trusted != Secure (Score:2)
I can specify a (software) system in a formal way using formal languages like Z or TROLL (TROLL really is a formal specification language, honestly). But this does not mean that it has to be more secure; it only means that I am able to define the system's behavior *exactly*, so that the resulting software system behaves in a predictable way.
This works fine in theory, but to my knowledge there are no working (as in usable) tools to automate the derivation of the software system from the formal specification. So you are still left with several gaps in the design/implementation process.
Proving properties of your specification is conceptually simple: transform your spec into predicate logic clauses and do a resolution proof.
Until this process is fully automated, formal specification and *verification* (!) is too difficult and cumbersome for widespread use.
So this talking about trusted/untrusted systems is completely unrelated to any Open/Closed Source security debate.
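For the curious, here is a toy propositional resolution prover, just to make the "clauses plus resolution" idea concrete; real verification of Z or TROLL specs needs first-order logic, unification, and far more machinery than this sketch. The clause names are invented.

    # Toy propositional resolution. Literals are strings; "~p" negates "p".
    # A clause is a frozenset of literals.

    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        # Return all resolvents of two clauses.
        out = []
        for lit in c1:
            if negate(lit) in c2:
                out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
        return out

    def refutes(clauses):
        # Saturate; return True if the empty clause (a contradiction) is derived.
        clauses = set(clauses)
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    if a == b:
                        continue
                    for r in resolve(a, b):
                        if not r:
                            return True
                        new.add(r)
            if new <= clauses:
                return False
            clauses |= new

    # Hypothetical spec fragment: "authorized implies access", "the user is
    # authorized". To prove "access", add its negation and look for a contradiction.
    kb = [frozenset({"~authorized", "access"}),
          frozenset({"authorized"})]
    assert refutes(kb + [frozenset({"~access"})])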
+++ after all, it's just my thoughts +++
bye,
blurred
Re:it's all in the definition (Score:3)
Re:That's not what he said. (Score:2)
Re:Whats that got to do with it? (Score:2)
Source + Spec + Test
I think that might be even better.
Re:Trusted unrelated to reliable/secure (Score:2)
Free Software - Software available to all, respecting the rights RMS believes are important
Open Source - Source available, can modify source, etc., not as many guarantees as free software
Developed as Open Source - Bazaar approach, many eyes, many contributors, anyone can submit code, roll their own patches, pressure to accept all patches, chaos, etc. Produces some really neat stuff, but it is totally chaotic.
There is a distinction there. The "Open Source Model" of millions of programmers cannot produce something to a strict spec, as that requires careful management, coordination, etc.; the Mythical Man-Month still holds.
Anything can be made Open Source/Free Software; there is a difference. Being Open/Free merely involves licensing, while Open Source Development is the chaotic model that Mozilla, Linux, and a few other projects function with (Apache?). Others have been more controlled (the BSDs, XFree86, GNU, etc.) while still being made available as Free.
On the NT thing: I'm not whining about it, I'm getting sick of the nonsense I see here. Microsoft's tests don't test much; they test a bare minimum. However, I'm sick of being called a moron because I know systems other than Linux. I routinely use Linux and NT, some Solaris, and am learning FreeBSD and OpenBSD; unfortunately I'm just not smart enough to realize that Linux is all you need.
Re:I agree with him (Score:2)
I think you responded to something that wasn't in my original message. Take a look below and feel free to hit me with a clue stick if you have a legitimate gripe not voiced in your reply.
Yes, things should be properly designed prior to coding. Yes, testing should be an early consideration prior to coding. Manual -- and implementation independent -- test scripts will be created before any code on good projects.
- Sidebar: Are you talking about mixing testing and design with coding? That's a bad idea.
Theoretically an automated test could be created prior to coding. In practice, nobody does that unless they have a very limited scope for the tests they want to perform... or if they want to waste an amazing amount of time! Each automated test script I've created, using over a half dozen different tools, has had the same problem:
I haven't been able to create scripts prior to code being delivered unless _I_ wrote the code. Mixing coders and testers in the same group is just a bad idea on multiple levels. If you don't know the objections, I'm not going to tell you...ask a few people you respect.
Remember, you're validating the spec, not how the spec is implemented. How do you, in an automated fashion, track deviations unless you limit yourself to I/O?
The only exceptions are when you focus on data and protocols: does the input x always result in the specified output y? That works for files and data regardless of volume, though how long a test takes can lead to different results.
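A table-driven I/O harness along these lines illustrates the idea; encode_record and the cases are invented stand-ins for whatever data or protocol code would actually be under test.

    # Sketch of table-driven I/O testing: the cases come from the spec, and the
    # harness just compares actual output against the specified output.

    def encode_record(fields):
        # Hypothetical wire format: pipe-separated, with '|' escaped as '\|'.
        return "|".join(f.replace("|", r"\|") for f in fields)

    SPEC_CASES = [
        (["a", "b"], "a|b"),
        (["x|y"], r"x\|y"),
        ([], ""),
    ]

    def run_cases():
        failures = []
        for given, expected in SPEC_CASES:
            actual = encode_record(given)
            if actual != expected:
                failures.append((given, expected, actual))
        return failures

    if __name__ == "__main__":
        bad = run_cases()
        print("all cases pass" if not bad else "failures: %r" % bad)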
Re:Apologizing.... (Score:2)
of components you have to trust, like OS kernels and compilers. No one designs and implements these on a per-contract basis, so everyone depends on the properties of generic tools.
Spafford surely is right: security can't come just from being open source. But I think being able to look at the relevant source code is a very powerful advantage when trying to design secure systems.
Re:Understand what "trust" means. (Score:2)
Re:it's all in the definition (Score:2)
source development model does directly contradict most of the software engineering principles that are called upon in the development of trusted systems'. Do you have a specific contradiction in mind, or are you just making an assertion about hacking culture? The latter, I think, is as irrelevant as an analogous generalisation about most commercial software development would be.
PS. I note your email address is in Oxford: are you a member of Roscoe's group?
Re:it's all in the definition (Score:2)
explicitly authorized to use" is a formal spec.
Does that mean you see no ambiguity in the phrase `explicitly
authorised', or that you think that any way of disambiguating it is
equally good?
Re:As experienced as this gentleman is in security (Score:2)
I agree; my issues are largely related to the academic vs. real-world differences. OSS works pretty well in the real world, but it doesn't fit into a traditional academic perception of security design (though peer review of code and algorithms _is_ a standard tenet of secure design). Perhaps it was unfair to put the caveat at the top of the note; it should not have conveyed that much emphasis. But I still stand by my belief that Dr. Spafford may have his objectivity clouded somewhat by institutional oldthink and, possibly, by a conflict of interest.
However, as an earlier post pointed out, Spafford's lecture seems to be aimed at a controlled, replicable, process (gee...that sounds like software engineering).
Yes, that's why I said (in that earlier post [slashdot.org] I referred to) that while we can agree that software designed by small teams of competent designers and coders makes for stronger, more secure software, that isn't the actual point. The point is, in the real world, we have to deal with all kinds of software from different companies and groups, and often the decision process for selecting software isn't the most logical or objective process. So how do we live in that world? Dr. Spafford puts forth a view of a software world in which we'd like to live, my thing is we don't live in that world, it's not coming anytime soon, so how the hell do we deal with what we have to deal with, while doing our best to improve matters when we can? Again, it's academic vs. real-world. Both can appreciate the Right Thing, but you can't always get it in the Real World, for a variety of reasons. It's a respectful difference of opinion, not a flame.
Aside from the fact that he gives his information out for free (it's paid for by research money but donated to the community at large), it's illegal, immoral, and, from a socio-economic point of view, inappropriate to steal.
Good artists borrow, great artists steal. And smart artists properly attribute.
Your Working Boy,
what is this spec? (Score:2)
It would be nice to know what the requirements of this spec are. If the spec were or is an open spec, it would then be possible to make Linux, OpenBSD, or any other OS abide by it.
I think it is important to understand, as some have pointed out, that trusted does not mean secure. This is one of those English-language semantic things. Trusted means that they have a set of requirements that this OS / program is supposed to meet, and it meets them. Basically, they can trust that if this happens, the software will do what they wanted it to do. Secure means that if someone tries to hack in or exploit a buffer overrun, the system won't let them. OpenBSD is secure; NT is trusted. This does not mean that NT is not secure (it is, in its own right, but I won't get into that). It also does not mean that OpenBSD cannot be made to be trusted.
The advantage to them of Open Source would be that if the system is not trusted, they have the source and can make, or have made, the system into a trusted one.
Wouldn't it make more sense for them to take a secure system and make it trusted as well?
Just my 2 cents, but I think that the government needs to start rethinking some of its policies for doing things.
send flames > /dev/null
If I understand him correctly. . . (Score:3)
However, this kind of formally verified system is extremely costly to develop, extremely difficult to adapt to changing circumstances (and retain the verified properties), and still doesn't guarantee that it does what you want it to do - mistakes in the specification or mistakes in the verification process are just as likely as mistakes in coding.
Frankly, for 99.9% of the software written in the world, this kind of thing is utterly impractical and will remain so. I don't mind consigning the remaining 0.1% to cathedral-style approaches (though open source can still help spot bugs that the verification doesn't catch).
Open or Close is irrelevant (Score:2)
Neither open source nor closed source by itself makes a system secure. There are way too many systems, both open ones and closed ones, that are swiss cheese, to even think that security can derive from either mechanism.
With regard to security, what open source would allow you to do is verify whether or not a given system truly is designed to a rigid trusted specification. None are now, and OpenBSD seems to have the greatest potential. But I'm absolutely NOT going to trust the security of a closed system just because the marketing folks and their hired security consultant say it's perfectly secure.
The question I think is this: If you do have a system you believe is truly secure, does making it open source compromise that security? I believe that if it did, it is flawed in design and can't possibly meet spec.
Where closed systems get an advantage with regard to security is when they are really not secure. That advantage gained is a delay between market introduction and discovery of its insecurity (perhaps by reverse engineering).
It's not open-source, it's UNIX. (Score:2)
Read the decade of CERT advisories [cert.org] for Sendmail and BIND to convince yourself of this.
The notion of "root" is bad enough, but "set-UID to root" is worse. This results in far too much code being trusted. In particular, it should be impossible to run non-trusted code as root. This means no root log-ins, for example. In a secure system, as your privileges go up, the amount of software you're allowed to run goes down. In a sandbox, you can run anything. As administrator, you can only run a few tools that do very specific things with lots of checking. This is completely alien to UNIX.
A serious attacker will find their own holes, and will keep quiet about them until they break in and steal something. Fixing known holes protects against script kiddies.
DoD has a simple security model, which is reasonably enforceable. See the Orange Book. [ncsc.mil] There's Linux support for it. [rsbac.de] You take a performance hit and can't run some popular software. But in that direction lies real security.
With discretionary security, users can turn off security. "chmod 777" is the usual way. With mandatory security, if you're processing SECRET information, nothing you or your programs can do makes it non-SECRET. The problem with discretionary security is that it's extremely difficult to tell if the system is in a secure state, and it's very easy to make a change that opens a security hole.
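Here is a toy model of the mandatory side, in the Bell-LaPadula style that the Orange Book's mandatory labels are based on (no read up, no write down); this is a sketch of the policy, not how RSBAC or any real kernel implements it.

    # Toy Bell-LaPadula-style mandatory check: no read up, no write down.

    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def mac_allows(subject_level, object_level, op):
        s, o = LEVELS[subject_level], LEVELS[object_level]
        if op == "read":
            return s >= o        # no read up
        if op == "write":
            return s <= o        # no write down
        return False

    # A SECRET process cannot "chmod 777" its way out: writing SECRET data to an
    # UNCLASSIFIED file is denied no matter what the file's mode bits say.
    assert mac_allows("SECRET", "TOP SECRET", "read") is False
    assert mac_allows("SECRET", "UNCLASSIFIED", "write") is False
    assert mac_allows("SECRET", "SECRET", "read") is True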
OK, you've got a secure system. Want to run Napster? No way; it's acting as a server and transmits your files. Want to download a game? If it will run in a sandbox, OK, but it can't talk to other players. Running a web browser may be OK, but the browser will be shot down if it tries to launch MS Word to read a
A secure open-source system is quite possible. But it won't be Linux as we know it.
Re:it's all in the definition (Score:2)
When a statement genuinely IS ambiguous, but has been given as the pre/post condition pair, then ALL interpretations which comply EXACTLY with that condition (both in the positive and negative) are equally valid.
Say you have the statement "data must be kept on disk 1". Then the converse of that is "!data must be kept on !(disk 1)". By applying both these, you can conclude EITHER that data is restricted to disk 1, and everything else is restricted to disks other than 1, OR that data is restricted to disk 1 and that everything else can go on any disk. But since your pre/post-condition says nothing about anything other than the data, then as far as that condition is concerned, the "everything else" could be on Mars. In this case, logic would argue that there's no implicit interpretation, so it really doesn't matter.
The way I would always go, though, in ALL cases, is take the most restrictive interpretation of the pre- and post-conditions. You can always loosen up a bit, but it's a royal pain to tighten things afterwards.
Thus, using this axiom, the first conclusion would be the one to use.
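A sketch of the two readings, with made-up names, just to show what "most restrictive interpretation" buys you:

    # The "data must be kept on disk 1" example under its two readings.

    def placement_ok_restrictive(item, disk):
        # Restrictive reading: data only on disk 1, everything else only on
        # disks other than 1.
        if item == "data":
            return disk == 1
        return disk != 1

    def placement_ok_loose(item, disk):
        # Loose reading: data only on disk 1; everything else may go anywhere.
        if item == "data":
            return disk == 1
        return True

    # Both readings satisfy the stated condition; the restrictive one is the
    # safer default, since loosening later is easy and tightening is painful.
    assert placement_ok_restrictive("data", 1) and not placement_ok_restrictive("data", 2)
    assert not placement_ok_restrictive("logs", 1) and placement_ok_loose("logs", 1)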
Re:My experience is quite different (Score:2)
This doesn't sound like the earlier posts. Changing scripts after the specifications are created isn't the same as making scripts once and running them throughout the project. Making updates can suck up an amazing amount of time.
Are you talking about testing at the end of a cycle, ongoing, or both? The projects I tend to do usually have a GUI-intensive part that covers a few hundred forms plus related specialty screens. That part doesn't work well with automated tests. The backend parts do, though, since the interface to them tends to change very little.
For me, testing starts with a formal test plan (from the spec), occurs constantly, and the remaining time is used to plan for the milestone releases and do documentation. Automated testing is time consuming and isn't worth setting up for most rapidly changing projects. In limited parts, yes, across the whole project noooo.
Yes, definitely. Each group talking as early as possible and hashing out the details is definitely beneficial.
I wouldn't dare create a script for something that's changing on a regular basis unless it were small or I had a big staff (roughly half the size of the development group).
If you get away with this kind of thing and don't drive yourself mad, I'd like to know how!
Without access to the source code, it's useless. (Score:2)
Also, empirical tests aren't strong enough to prove anything. You can test the binaries ad nauseam and not find every security flaw.
So basically, this is just another bullshit attack on free software.
Re:Why do spec's matter? (Score:2)
The problem is that you aren't sure that your tests cover all the cases and that you haven't left anything out. More importantly, you aren't sure exactly how the system should react in all situations. For instance, what should the maximum response time of a component of the system be? If you don't specify this, then you can't test that the system responds correctly. BTW, you need specifications like this in order to make sure the system can handle the environment it's in and doesn't, say, start applying the brakes too late.
Basically, what the formal spec does is let you determine how much testing you need to make sure that you've covered all the cases and that the program will respond the way the spec says it will. Personally, I would prefer software that went through this sort of procedure, rather than the first public release of some open source program, controlling the brakes on my car or the controls of the plane I'm flying in.
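For example (the 50 ms budget and the function name below are invented), a timing requirement only becomes testable once the spec pins the number down:

    # Sketch: a timing requirement made testable by a number from the spec.

    import time

    MAX_RESPONSE_SECONDS = 0.050   # from the (hypothetical) spec

    def apply_brakes_controller(sensor_reading):
        # Stand-in for the real control computation.
        return sensor_reading > 0.8

    def test_response_time():
        start = time.monotonic()
        apply_brakes_controller(0.9)
        elapsed = time.monotonic() - start
        assert elapsed <= MAX_RESPONSE_SECONDS, "spec violated: %.3fs" % elapsed

    test_response_time()

A single wall-clock measurement like this is of course far weaker than the worst-case timing analysis a real brake controller needs, but it shows why the number has to be in the spec at all before any test can pass or fail.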
Re:I agree with him (Score:2)
An excellent point. It seems as though many of the other threads grasp at this concept without saying it explicitly. In the case of OSS we have people constantly using viewable code. With so many coders making use of said code day in and day out, if something is wrong, it is eventually found and fixed. The proof of reliability can be found in the number of people who have used the code without problems.
Chaotic? (Score:2)
It may be true that a lot of free software starts its life chaotically. However, claiming that big, successful projects are developed chaotically is complete nonsense. The process of deciding the direction in which free software moves is different from what people working in traditional software development may be used to. Maybe these new organisational schemes are difficult for Dr. Spafford to understand, but claiming that the development of "Apache", "Gnome", "KDE"... is chaotic is complete nonsense.
As for the question whether an open-source project can become "trusted" or not, this depends on only two factors:
If these two are in sync, chances are that the open source project will reach the state where you can trust it much faster than a project coded in the traditional way.
Re:That's not what he said. (Score:2)
Re:OH YEAH? (Score:2)
WOOOOOSH!
That's the sound of my point going over your head. Open Source problems are fixed, fast. They tend to be less severe than in other "secured" products on the whole. I use products that fix their bugs fast and don't produce too many of them in shipping releases. Another "trusted" system, NT 3.51, I wouldn't trust to hold my porn, let alone classified government secrets. Yet, strangely enough, Microsoft managed to get it C2 certified.
Certification != trustworthiness. That's my point that Gene forgot, along with empirical evidence.
Re:it's all in the definition (Score:2)
make decisions about how to channel bureaucratic authorisation into a permissions model, and these kinds of matters can involve subtle security issues.
The kind of disambiguation you describe is very simple-minded: it is simply schematic ambiguity, of which `explicitly authorised' is not an instance. Even so, I don't think that `most restrictive disambiguation' is an effectively applicable criterion.
To put it bluntly, the kind of informal specification you advocate is, I think, likely to reduce the visibility of potential security vulnerabilities.
Re:it's all in the definition (Score:2)
development from the point of view of secure development. I don't think there are any `contradictions', however.
I'm not sure what to make of the Red Book criteria: does `in a trusted facility' mean that if I have some ideas about the design of the code while at home in the shower, the criterion is invalidated? I can't comment since I am not familiar with its trust model, but it smacks of `security through obscurity' to me. I doubt that it could be made to work outside of an organisation like the NSA or GCHQ, which is interesting, but not really the topic under consideration.
I think that the TCSEC criteria are likely consistent with open source development. What standards happen to prevail in `many' open source projects really is irrelevant: of course all of what you describe must take place, and with proper tools. I think I could imagine a plausible such group of open-source developers.
I have a dim idea that we may have met: did you apply for the MSc a few years back (1994/5/6?) and then switch to a law course?
Can modern processors be trusted? (Score:2)
Wow. (Score:2)
Can "Linux" be trusted? What do you mean by "Linux"?
If you mean a particular kernel version, as released by Linus... there you have it. Can you trust it? It depends on your criteria. Do you *need* to trust it, or can you simply take a certain version and stick with it?
Linux is more about a process than the technology. Open source is about lots of developers working together in a scientific (as opposed to market-driven) way to produce better code and better software.
Re:I agree with him (Score:2)
Nothing is stopping a company from making 'Secure Linux' and rigorously applying compliance tests for each new release.
It has absolutely nothing to do with whether something is 'open' or not, it simply has to do with packaging.
Re:Apologizing.... (Score:2)
because there have been so many `out of the box' attacks launched against them. E.g. it is perfectly practicable to crack the codes in most RSA-based smart cards by analysing their power consumption.
That's not really the point you are making, but it shows a problem with pure hardware-based systems. It may well be the case that the highly modular designs in PC systems might be harder to attack with these kinds of attacks due to their complexity, but saying so seems to be anathema to many in the security industry...
Re:I agree with him (Score:2)
In general, any piece of code that has to test some output is relying on the programmer who wrote it knowing the answer in advance. I know from experience that unless there is a spec that spells it out, there is a decent chance that somebody will mess up. Of course, there's a decent chance of bugs in the spec too.
Re:Apologizing.... (Score:2)
The whole point here is a system where you can trust that its functionality will conform precisely to the specifications. It's not security or trusting that there aren't back doors, it's trusting that it will do what it's supposed to. That may include security features, but that's not the point.
I'm sorry if this sounds awkward, but a lot of this topic is showing a basic ignorance of some aspects of software engineering by 'open source' fans and/or developers.
Re:Not quite. (Score:2)
Don Knuth's comment is worth bearing in mind, but it's a distraction in many ways. He's saying - or at least appears to be saying - that he has proved that the fundamental design and architecture is correct, but isn't certain that, in the translation of specification to code, he hasn't made a typo or two. That's still possible, but a very different source of bugs.
The problem with the standard 'open source' development model is that it's too chaotic to tightly control adherence to specs, IMO. That's what our original source here seemed to be talking about when he doubted whether 'open source' development could ever produce trusted code.
Your last two paragraphs seem to miss the point somewhat, though. A trusted system needs to have the specification tightly drawn up before a single line of code is created, if we wish to have any serious prospect of trustworthiness. If you try and draw up a 'de facto' specification after the event, it's inevitably going to contain problems as you're merely documenting behaviour that isn't necessarily trustworthy in the first place. Trusted reimplementation may be possible, but I'd still want a new codebase.
BTW, before anyone gets annoyed, the only reason I've been referring to 'open source' development as opposed to open source development is that, in this context, it implies something about the structure of the development team and that's mostly what's relevant. I could produce a project with a tightly controlled, exclusively internal team which decided to publish its source as it was going along. We'd still have source which was open but it wouldn't be conforming to what's normally accepted as the open source development model.
Re:Old advice, new risks. (Score:2)
Seriously, though, if you are interested in a *web server*, my remarks still stand. I don't know if Apache has been ported to OS/400 or Multics.
Besides, how many companies, these days, can afford a Multics-based computer (does it still exist?) or an AS/400?
My dad (who was a security officer on a big iron system somewhere) used to mention that, even with OS/400, it usually took a good security consultant, who was probably much better than the average script kiddie, less than 30 minutes to gain complete access to a machine.
Compare & contrast with what OpenBSD claims on their website: "Two years without a local root exploit".
I am ready to admit that "Big Iron" means much better security than a PC+your choice of OS. That does not mean *good* security, though. Simply better security.