Linux Foundation: Bugs Can Be Made Shallow With Proper Funding 95
jones_supa writes: The record number of security challenges in 2014 undermined the confidence many had in the high quality of open source software. Jim Zemlin, executive director of the Linux Foundation, addressed the issue head-on during last week's Linux Collaboration Summit. Zemlin quoted the oft-repeated Linus's Law, which states that given enough eyes, all bugs are shallow. "In these cases the eyeballs weren't really looking," Zemlin said. "Modern software security is hard because modern software is very complex," he continued. Such complexity requires dedicated engineers, and thus the solution is to fund projects that need help. To date, the foundation's Core Infrastructure Initiative has helped out the NTP, OpenSSL and GnuPG projects, with more likely to come. The second key initiative is the Core Infrastructure Census, which aims to find the next Heartbleed before it occurs. The census is looking to find underfunded projects and those that may not have enough eyeballs looking at the code today.
The best bug is the one not written (Score:1)
Spending resources on 'finding the next Heartbleed' bug... I fail to see the advantage of finding it through a coordinated search as opposed to someone just stumbling on it (as long as the bugs are reported responsibly, of course).
Software can't be made secure after the fact; security must be the primary goal from the start.
Comment removed (Score:4, Insightful)
Re: (Score:2)
The poster you're replying to wasn't me, you insensitive clod.
Now, the poster was correct. The original Unix was a mess when it came to security. Things evolve, and our understanding of the problem evolves as well. Something that was designed to be secure 5 years ago and passed scrutiny then could very well be Swiss cheese today. You HAVE to be able to add security - the alternative is rewriting from scratch every time.
Re: (Score:1)
The 'original UNIX' was a mess compared to what?
Before Multics, nobody had a fucking clue about security.
Re: (Score:2)
We went to lunch afterward, and I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic, and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"
Still happens ...
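The two philosophies in that anecdote can be sketched side by side. A hypothetical illustration in Python (the function names and behavior are invented, not from any real kernel):

```python
import sys

def panic(msg):
    """Unix-style: don't try to recover, just report and halt."""
    print("panic: " + msg, file=sys.stderr)
    sys.exit(1)  # stand-in for crashing; someone hollers "Hey, reboot it"

def read_block(block_no):
    """Multics-style: return an error indication the caller must handle."""
    if block_no < 0:
        return None  # recoverable error: the caller decides what to do
    return b"data"   # pretend the read succeeded
```

Half of a Multics program ends up being caller-side handling of results like that `None`; the Unix approach simply left all of it out.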
Re: (Score:1)
Which says nothing about security.
Re: (Score:2)
mod parent up, damn. The things people say around here.
Re: (Score:2)
Re: (Score:1)
Firstly, Multics, while good, was only different from Unix in degree, not kind. Multics also included a panic routine and would call it when it was unable to continue.
HODIE NATUS EST RADICI FRATER ring any bells? (Just one example).
Secondly, have you been using any Multics systems lately? I wonder why not.
Thirdly, which meaning of "security" are you using today? You seem to be chopping and changing.
Re: (Score:1)
Except that pretty much no one spends the time or resources to do that. It's more fun to keep adding features to the doomed architecture. Or to start over... again.
If you design software with a certain feature set insecurely, it's often difficult to keep those features when re-engineering for security.
A depressingly large majority of software hasn't been written with the best-known tools and APIs in mind: not even those available at the time of writing, and certainly not those of today!
Linux was better when there was little funding. (Score:4, Interesting)
I've been using Linux for an awfully long time, since the mid 1990s (Yggdrasil, then Debian). Over time, as Linux has gotten more and more funding, it has gotten worse and worse. I initially switched to Linux because it generally just worked, and it worked better than many of the alternatives. But now it's just getting fucking horrible. I mean, look at systemd. Normal users, and especially power users, don't want it. It just causes problem after problem for many people. Yet we have corporate interests and corporate-funded developers forcing it on us, even forcing it into community-oriented distros like Debian. GNOME and Firefox are other great examples of community-based open source projects that got co-opted by money and ruined, with the most recent versions of both being almost totally unusable. On the other hand, we see projects that get less commercial interest, like Slackware and Xfce, producing the most usable and reliable open source software systems around. Linux was better when there wasn't so much money floating around. Back then it was about creating great software, and doing things right. Now it's about everything but that.
Re: (Score:1)
jimmie status=triggered
Re: (Score:1)
As a programmer, I find systemd horrible. It breaks compatibility with Unix.
Even the old GNU programs broke UNIX compatibility a long time ago.
Re: (Score:3)
As a programmer, I find systemd horrible. It breaks compatibility with Unix. It's a nightmare that will shrink the open source landscape to just linux. The rest of us must now reinvent basics like toolkits, web browsers, etc because Linux zealots want to take over the world.
Or you could just bang around on this [menuetos.net].
Pre-emptive multitasking with 1000hz scheduler, multiprocessor, multithreading, ring-3 protection
Responsive GUI with resolutions up to 1920x1080, 16 million colours
Free-form, transparent and skinnable application windows, drag'n drop
SMP multiprocessor support with currently up to 8 cpus
IDE: Editor/Assembler for applications
USB 2.0 HiSpeed Classes: Storage, Printer, Webcam Video and TV/Radio support
USB 1.1 Keyboard and Mouse support
TCP/IP stack with Loopback & Ethernet drivers
Email/ftp/http/chess clients and ftp/mp3/http servers
Hard real-time data fetch
Fits on a single floppy, boots also from CD and USB drives
Since the blurb was written, browser, digital TV, webcam, movies, etc. have been added.
Or you can play around with Plan9 [bell-labs.com] - runs by itself or as an application atop linux, windows, etc.
Re:Linux was better when there was little funding. (Score:5, Insightful)
As Heinlein noted, TANSTAAFL [wikipedia.org], just like there's no such thing as free beer. Everything has a cost. Even free software.
And when you have such a fragmented ecosystem, the attack surface is going to be huge (after all, an OS is more than just a kernel), and the idea that "with enough eyes all bugs are shallow" is patently false. So it turns out that open source has been to a large extent relying on the same "security through obscurity" model. This was fine a decade ago, but the competition have stepped up their game and can afford to throw money and bodies at the job without begging.
The solution would be to do a code freeze for 2-3 years while the developers of the various projects audit their code and the ways other projects interact with their code - not just for security problems, but to get rid of bloat and cruft. That's not going to happen, because it makes too much sense. Everyone wants the newest shiny.
Linux was definitely better when there were fewer distros. What a mess.
Re: (Score:2)
As Heinlein noted, TANSTAAFL, just like there's no such thing as free beer. Everything has a cost. Even free software.
Free software is free as in beer. If you claim it's not free because of the time you put in, then $100 lying on the pavement in front of you isn't free because you have to take the time to go and bend down and pick it up. Making such claims is basically changing the definition of the word "free" to mean something other than what it actually means.
Linux is Free as in Beer because you can get i
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Sadly, the BSDs tend to have hardware compatibility issues.
They are great for servers - better than Linux, IMO. But for desktops and laptops, I don't know if I could recommend a BSD.
Re:Linux was better when there was little funding. (Score:4, Informative)
Yet we have corporate interests and corporate-funded developers forcing it on us, even forcing it into community-oriented distros like Debian.
Debian adopted systemd for reasons outlined here. [slashdot.org] It wasn't a conspiracy. Poettering knew that distros are crucial to the adoption of systemd, so he's made things as easy as possible for them, and given them features they wanted. Essentially systemd makes it easier to write an init script, and since Debian writes a lot of them, they liked that.
Of course, Poettering has been less responsive to other parties (like the kernel devs), but that's another topic.
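To illustrate why init-script authoring gets easier: instead of a hand-written shell script full of PID-file and daemonization boilerplate, a distro ships a short declarative unit file. A hypothetical example (the daemon name exampled and its path are invented):

```ini
[Unit]
Description=Hypothetical example daemon
After=network.target

[Service]
ExecStart=/usr/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

systemd itself handles forking, restarts and dependency ordering, which is exactly the boilerplate a sysvinit script would have to reimplement per package.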
Re: (Score:3)
Re: (Score:2)
I've been using Linux for an awfully long time, since the mid 1990s (Yggdrasil, then Debian).
Darn noobs! I remember having fun making the MCC Interim distribution work...
Re:Linux was better when there was little funding. (Score:4, Insightful)
No, it hasn't gotten worse. It has gotten responsive to user demands.
Back in the 90s when life was simple, users were simple. Unless you used an Amiga or MacOS, if you played a sound, that was it - no one else could play a sound (MacOS and Amiga had software mixers so you could listen to music AND hear application generated sounds - you could use exclusive mode if you needed it, though).
Likewise, you logged in and you rarely had things starting up just for you.
And your networking options were... single. You either had Ethernet, or a modem, and only one IP per host. And rarely did you move - I mean, if you were on Ethernet, it was assumed you were on the same network permanently, or at least changes were rare.
Nowadays, user demands have gone way up. Audio has to be mixed by the OS because the user may listen to tunes, start yakking on VoIP, and have sound effects playing while gaming, all simultaneously. The VoIP call goes over, say, a Bluetooth headset on the communications path, while the music and sound effects play through the main speakers. Oh, and no application is to dare use the HDMI port to send audio, as it's hooked to a monitor with no speakers. A modern PC can easily have 4 or 5 different ways to play audio.
Likewise, when you log in, you probably have a few per-user services you like to have - either from the environment you're using or other services. It would be a shame if logging in again restarted those services (e.g., you log in locally, then log in remotely over ssh) or if those multiple sessions couldn't communicate with each other (e.g., you make a change remotely, and it fails to propagate through the rest of the logins).
And networks... well, an Ethernet port or WiFi? A user may connect to many different networks in a single day, and have more than a few ways to send a packet around. Perhaps they're hooked to their same network multiple ways - either dual Ethernet, or Ethernet plus WiFi. And maybe the next time the connection is re-established, those ports need to be firewalled because it went from private network to public.
Back in the old days, well, audio was simple because your PC couldn't really do multiple things at once. Networks were generally safe so it didn't matter that you didn't bring up the firewall on the public Ethernet connection. And users didn't run too many things in the background because no one could imagine needing to log into the console AND over ssh simultaneously, or they could just remotely kill the session because there wasn't important stuff to save.
And it's perfectly fine on a server that sits in a rack and never moves until it's powered down and retired. But modern users need this complexity just to manage their normal use case. Sure you can force the user to tell you what kind of network is at the other end, or to re-establish the VPN, but users want computers to do stuff automatically - I mean, why should I tell the computer this coffeeshop WiFi is public over and over again - can't it remember?
Or to reconfigure my VoIP app because I attach my Bluetooth headset to my computer so it now uses that - why can't it ask for a communications headset, and if one isn't available right now, use the default audio hardware? Then when one suddenly appears (Bluetooth!), automagically use that? Zero reconfiguration; even the app doesn't have to reopen the audio device because the audio core did it internally.
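That role-based routing idea can be sketched in a few lines. A hypothetical model in Python (class and device names are invented; real audio servers work on streams and policies, not strings):

```python
# Apps ask for a role ("communications"), not a specific device; the
# audio core reroutes automatically when a better device appears.

class AudioCore:
    def __init__(self):
        self.devices = ["speakers"]  # default output, always present

    def attach(self, device):
        # e.g. a Bluetooth headset pairing mid-call
        self.devices.append(device)

    def route(self, role):
        # Prefer a headset for communications; otherwise fall back
        # to the default hardware, with no app reconfiguration.
        if role == "communications" and "bt-headset" in self.devices:
            return "bt-headset"
        return "speakers"
```

The app keeps asking for the "communications" role; only the core's answer changes when the headset shows up.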
It should be telling that the most popular Linux "distribution" in the world is Android, which has its own init system (like systemd, it manages processes, events, and other things), its own audio
Re: (Score:2)
FreeBSD 4.x, also from the 90s, allowed you to play multiple sounds simultaneously. It used the same OSS code that Linux used ... but they enhanced it to support features Linux never did. Unfortunately, Linux devs continued with their NIH syndro
Whose Eyes? (Score:5, Insightful)
Even for non-security bugs, the many-eyes hypothesis contains a large dose of wishful thinking, but at least in that case most eyes are looking with the same purpose. When it comes to security, however, it is a race between black-hat and white-hat eyes, and the former only have to win once.
Re: (Score:3)
Re: (Score:3)
Even for non-security bugs, the many-eyes hypothesis contains a large dose of wishful thinking
Not true; Torvalds' observation wasn't what he wished would happen, it was what he'd observed repeatedly on a large and complex project over a period of many years.
That said, I think your disagreement is because, like many, you misunderstand the hypothesis. What Torvalds said wasn't that given enough eyes all bugs are visible, but that they're shallow, meaning easy to track down and fix. The hypothesis doesn't even come into play until the existence of the bug is known.
And, undoubtedly, there are som
Re:Whose Eyes? (Score:4, Informative)
Torvalds didn't say the "many eyes" thing at all. Eric S. Raymond did.
Re: (Score:2)
Torvalds didn't say the "many eyes" thing at all. Eric S. Raymond did.
Really? Wow. I've had that attribution wrong for years. Thanks.
Re: (Score:2)
Re: (Score:2)
It comes from this article [unterstein.net], which if you haven't seen, you might enjoy reading.
Thanks. I actually read The Cathedral & The Bazaar shortly after ESR published it, and have read it several times since. I'm really not sure how the "many eyes" notion got associated with Torvalds in my head.
Re: (Score:2)
Yeah, WTF?
Re: (Score:2)
The [many-eyes] hypothesis doesn't even come into play until the existence of the bug is known.
If that is so, then it doesn't help much with security, where finding exploitable bugs (and doing so before they are exploited) is usually the hard part.
Re: (Score:2)
The [many-eyes] hypothesis doesn't even come into play until the existence of the bug is known.
If that is so, then it doesn't help much with security, where finding exploitable bugs (and doing so before they are exploited) is usually the hard part.
Precisely. It's not that the hypothesis is wrong, it's just that it doesn't apply.
This doesn't reduce the value of open source for security software, because while it gives both white hats and black hats a great deal of help with finding vulnerabilities, the nature of security research means that the white hat side benefits more. Open source software, developed in public, also makes it more difficult for the likes of the NSA to insert back doors, because it's not just a matter of paying (or threatening) s
Re: (Score:2)
Re: (Score:2)
Shallow, WTF (Score:2)
Bugs can be made shallow?
On Linux, bugs are only skin deep
Why have bugs at all?
Re: (Score:2)
Bugs can be made shallow?
Sure. Just put on your cockroach-killer shoes (you know, the ones with the pointy toes to get 'em in the corners) and start stomping.
Unfortunately, you can't eliminate programmer errors the same way.
(Programmer errors are not "bugs". They didn't mysteriously creep into the code on their own when nobody was looking. Saying "it's a bug" is just a way to avoid responsibility for a mistake, and it leads to a slack attitude and a feeling of non-responsibility.)
Second Linus Law: Curse the bugs out (Score:4, Funny)
Re:Second Linus Law: Curse the bugs out (Score:4, Insightful)
Maybe Linus isn't cursing at the developers with enough frequency or intensity?
It seems the kernel is rarely the problem, so I'd say the amount of cursing is just right. The problem is Linus doesn't run all these other projects.
Re: (Score:1, Troll)
If by "kill" you mean "improve", then yes.
Re: (Score:1)
Re-engineer the OS to include ROMs? (Score:2)
Re: (Score:2)
This has been tried with DRM. I remember those game CDs coming with bad sectors intentionally written to make copying difficult, and software products which came with a specially crafted parallel port dongle to add hardware protection.
None worked.
The solution you're proposing makes life more difficult only for regular users, who would need to order chips and slam them into a motherboard to upgrade their operating system. Not to mention a bug that would creep into the read-only part of the OS. At least now yo
Re: (Score:2)
How many people would be harmed if some basic components of XP had been burned into ROM?
Everyone who had one, because they would be found to have security vulnerabilities (see here for an example of exactly that happening [defcon.org]), and then everyone's system would be vulnerable.
Incidentally, Kaspersky was building an OS that does exactly what you suggest [kaspersky.com], so if it works, then maybe we will see more of what you suggested in the future. I'm doubtful though, for reasons mentioned in the previous paragraph.
Re: (Score:2)
Intriguing suggestion, but perhaps based on a false premise that "data, programs and operating system components are equally vulnerable to writes by viruses." That's most certainly not the case even on a Windows platform. System files and folders usually require an admin to modify, and drivers and other OS components typically must be signed drivers to update. On "trusted computing platforms", there's even more security on what can even boot on the machine. A virus should only have privileges based upo
Re: (Score:2)
That's part of the idea behind http://en.wikipedia.org/wiki/W... [wikipedia.org]
OpenBSD does this even for the kernel on x86-64.
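The invariant being enforced there is simple: no page may be writable and executable at the same time. A hypothetical model in Python (flag values mirror POSIX mprotect(); real enforcement happens on page protections in the kernel):

```python
# W^X invariant: reject any mapping that is both writable and executable.
PROT_READ, PROT_WRITE, PROT_EXEC = 1, 2, 4

def wx_allowed(prot):
    """True if the requested protection respects W^X."""
    return not (prot & PROT_WRITE and prot & PROT_EXEC)

def make_executable(prot):
    """To run freshly generated code under W^X, drop write before adding exec."""
    return (prot & ~PROT_WRITE) | PROT_EXEC
```

This is why JITs on W^X systems write code into a writable page, then flip it to read+execute before jumping into it.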
Modern software... (Score:2)
Modern software security is hard because modern software is complex.
Doesn't that just about say it all? More eyes don't solve complexity issues, only more brains and better architecture.
Re: (Score:3)
Doesn't that just about say it all? More eyes don't solve complexity issues, only more brains and better architecture.
I think that if you do some research - at least if you limit yourself to human subjects - you will find there's a strong correlation between the number of eyes and the number of brains, so "more eyes" implies "more brains". And if you can settle the age-old discussion of whether encapsulation, abstractions and design patterns reduce or increase complexity, you should get the IT Peace Prize.
Too late... (Score:2)
Maybe a more cost-efficient approach to spending the Foundation's money would be to determine how and why the bugs get into the code in the first place, and reduce their occurrence as early in the development cycle as possible.
The earlier in the development cycle a bug is eliminated, the cheaper it is to eliminate the bug.
Re: (Score:3)
By the time something becomes "core infrastructure", it's usually not in a condition where a rewrite is at all advisable. You have an existing code base that's seen lots of real-world usage and presumably works well most of the time; what you need is testing, cleanup, sanity-checking, error handling and formal verification that it performs as intended. And it's particularly important that you review obscure functionality like the heartbeat TLS extension that led to the Heartbleed bug, that you put many eye
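The heartbeat extension is a good illustration of why obscure paths need review: the whole vulnerability was a single missing bounds check on an attacker-supplied length field. A simplified sketch in Python (not the actual OpenSSL code; the buffer layout and function names are invented):

```python
# Heartbleed-class bug: trusting a sender-supplied length field.

def heartbeat_buggy(memory, payload, claimed_len):
    # BUG: echoes claimed_len bytes from wherever the payload sits
    # in memory, so claimed_len > len(payload) leaks adjacent data.
    start = memory.index(payload)
    return memory[start:start + claimed_len]

def heartbeat_fixed(payload, claimed_len):
    # Fix: silently discard requests whose claimed length
    # exceeds the actual payload.
    if claimed_len > len(payload):
        return None
    return payload[:claimed_len]
```

With a 4-byte payload and a claimed length of 9, the buggy version happily returns 9 bytes, including whatever happened to sit next to the payload in memory.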
ooooo (Score:1)
But then it's not FOSS anymore? How will they resolve this massive ethical dilemma?
Fund systemd? So redhat makes more billions? (Score:2)
Tempting offer, but I think I'll pass.
Good testing takes more than 50% of time and res (Score:2)
There is a way to properly test software, but it is insanely expensive. Real mission-critical software (like airborne systems) has standards for code verification that are pretty tough. For example, standard DO-178B [wikipedia.org] requires complete structural coverage analysis, object code analysis, worst-case throughput analysis, stack analysis, etc.
There's no way that volunteer programs can find funding for this or human resources to do this. Although many companies do contribute to various open source programs,
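As a toy illustration of what "complete structural coverage" demands: every branch of even a trivial function must demonstrably be exercised by some test. A hypothetical sketch in Python (real DO-178B tooling instruments object code, not hand-rolled sets):

```python
# Branch-coverage tracking by hand, purely for illustration.
covered = set()

def clamp(x, lo, hi):
    if x < lo:
        covered.add("below"); return lo
    if x > hi:
        covered.add("above"); return hi
    covered.add("inside"); return x
```

A test suite only achieves full branch coverage of clamp if, after running it, covered contains all three of "below", "above" and "inside"; certification requires producing that evidence for every branch in the whole program.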
Re: (Score:2)
Well, finally today they do. LLVM is good for catching unsafe casts and such, which can hide buffer overflows.
Re: (Score:2)
conclusion based on a myth (Score:2)
undermined the confidence many had in high quality of open source
Protip: only naive fanboys with college-level awareness thought that. Everyone else was aware of the illusion of security in Linux and knew it was mostly through obscurity that it was not the victim of attacks. It's not just a lot of eyeballs, btw, but the right eyeballs. And reviewing shipped code for security is usually the last thing FOSS people spend their time on.