New Approach To Malware Modifies Linux Kernel
Hugh Pickens writes "Professor Avishai Wool has unveiled a program to watch for malware on servers with a modification to the Linux kernel. 'We modified the kernel in the system's operating system so that it monitors and tracks the behavior of the programs installed on it,' says Wool. Essentially, Wool says, his software team has built a model that predicts how software running on a server should work (pdf). If the kernel senses abnormal activity, it stops the program from working before malicious actions occur. 'When we see a deviation, we know for sure there's something bad going on,' Wool explains. Wool cites problems with costly anti-virus protection. 'Our methods are much more efficient and don't chew up the computer's resources.'"
Help! (Score:5, Funny)
Re:Help! (Score:4, Funny)
How is this off-topic? The mods must have been infected!
Re:Help! (Score:5, Funny)
Re: (Score:2)
premise to shutdown (Score:5, Interesting)
Is this not the very premise that caused the Amazon cloud outage? A failure to report proper activity was illogically taken as evidence of improper activity?
Re:premise to shutdown (Score:5, Funny)
This has greatly increased online sales of weight-loss products, although mostly from browsers identifying themselves as Internet Explorer.
Linux users were terminated by their modified kernel after it detected that they were exercising ;)
Re: (Score:2)
In the news today
Thousands of System Administrator machines abruptly stopped working when emails with girls' names in them went to women that did not charge by the hour.
Film will be available when the System Administrator is able to access the streaming media server again.
selinux (Score:5, Interesting)
Great, sounds exactly like what people have been doing with selinux and capabilities. But selinux acknowledges we don't always do the same things with our computers as the next guy... Will this approach be as flexible?
I don't want to boohoo his research, it's probably fine, but the article summary just gets my goat. Malware is a lot more complicated than most anti-malware software authors make it sound, and false positives are the biggest/most complicated problem they have to deal with, especially in automated systems that block like this...
Re:selinux (Score:5, Interesting)
These malware programs today try to hide themselves so deep that you just couldn't find them if you don't know what you're looking for. This system here, as I understand it, tries to identify the normal parameters for a certain program to work. If the program doesn't behave like normal software, then there must be something wrong with it, and alarms go off, lights are blinking, and all hell breaks loose.
Oh crap, red lights and i hear noises. Oh it's only the cops.
Re:selinux (Score:5, Informative)
Well, from my basic reading of the paper it sounds like it won't have false positives, but it also will miss many negatives. Essentially, when you build it, it'll make a map of what system calls can be made and in what sequence. If an application makes a system call it never calls, or never can call in that order, because it's been hijacked, then this thing will stop it. If you manage to do your nasty business using the system calls it normally uses, it won't. Think of it as an auto-hardening system turning off any syscalls or combinations that the application doesn't use anyway. One of the downsides is that if you know this system is in place, you can probably add dummy syscall patterns to your exploit to match the application's behavior, unless it's a syscall the application never makes. Still, there's little reason to assume every attacker is perfect, and this is worthwhile protection for the cases where it does work.
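The idea in the comment above can be sketched as a toy model: a per-application automaton of permitted syscall transitions, enforced as the program runs. This is only an illustration of the concept, not Korset's actual implementation; the syscall names and the transition graph below are invented for the example.

```python
# Toy sketch of per-application syscall-order enforcement.
# The transition graph below is invented for illustration; the real
# system derives its automaton from the program's compiled code.

class SyscallMonitor:
    def __init__(self, transitions, start="START"):
        self.transitions = transitions  # state -> {syscall: next_state}
        self.state = start

    def observe(self, syscall):
        """Advance on a permitted syscall; flag the process otherwise."""
        edges = self.transitions.get(self.state, {})
        if syscall not in edges:
            return "KILL"          # deviation: no matching edge in the automaton
        self.state = edges[syscall]
        return "ALLOW"

# A server that only ever does accept -> read -> write, in a loop
profile = {
    "START": {"accept": "CONN"},
    "CONN":  {"read": "GOT"},
    "GOT":   {"write": "START"},
}

mon = SyscallMonitor(profile)
assert [mon.observe(s) for s in ["accept", "read", "write"]] == ["ALLOW"] * 3
assert SyscallMonitor(profile).observe("execve") == "KILL"  # hijacked behaviour
```

Note that, as the comment says, an attacker who sticks to the permitted edges is invisible to this check; only out-of-profile syscalls are caught.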
Re: (Score:3, Insightful)
If an application makes a system call it never calls or never can call in that order because it's been hijacked then this thing will stop it.
The problem is that, for any non-trivial program, it is impossible for a static analyser to decide whether, and in what order, the software will issue a specific trap. For instance, the static analyser is not able to tell you in advance at what time the exit syscall will be called.
Re: (Score:3, Insightful)
It can't catch every case, but it can rule out a lot of system calls. If setuid never appears in a program or library's source, it can't be called. If a program calls a bunch of things in an initializer function and then enters a more restricted main loop, the static analyzer should be able to catch that too.
I can't see how this approach could hurt, though of course it won't catch everything.
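The "rule out calls that never appear" point above can be sketched very naively: scan the program for known syscall names and whitelist only those. Real static analysis works on compiled code and call graphs, not string matching, so treat this purely as an illustration; the syscall list and sample source are made up.

```python
import re

# Naive illustration of deriving a syscall whitelist: any known syscall
# name that never appears in the program can be blocked outright.
# Real analyzers work on compiled code; string matching is only a sketch.

KNOWN_SYSCALLS = {"open", "read", "write", "close", "setuid", "execve"}

def derive_whitelist(source: str) -> set:
    tokens = set(re.findall(r"\b\w+\b", source))
    return KNOWN_SYSCALLS & tokens

source = """
int main(void) {
    int fd = open("log", 0);
    read(fd, buf, 64);
    write(1, buf, 64);
    close(fd);
}
"""
allowed = derive_whitelist(source)
assert "setuid" not in allowed   # never appears, so it can be blocked
assert {"open", "read", "write", "close"} <= allowed
```

As the comment notes, this only rules calls out; it says nothing about calls the program legitimately uses but an attacker could abuse.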
glibc calls setuid(2) (Score:2)
What about plug-in based systems? (Score:3, Insightful)
How does this thing deal with plug-in/add-on based systems like Firefox or Eclipse, where new capabilities get added to the executable through dlls (or java classes, I guess, in the case of Eclipse? - Although, with regards to Java, I wonder if this system would work at all, since I think the kernel never exactly 'sees' Java programs or classes as executables, but only the JRE, which already has all the system calls built into it?)
Re:What about plug-in based systems? (Score:5, Informative)
How does this thing deal with plug-in/add-on based systems like Firefox or Eclipse, where new capabilities get added to the executable through dlls (or java classes, I guess, in the case of Eclipse? - Although, with regards to Java, I wonder if this system would work at all, since I think the kernel never exactly 'sees' Java programs or classes as executables, but only the JRE, which already has all the system calls built into it?)
It's about servers here; I personally think one should really think thrice before installing plug-ins and add-ons on a server, and rather do the browsing on a desktop machine. Regarding Java, I can see your point.
Re:selinux (Score:4, Informative)
Or the application has been legitimately updated to do new things...
Well that part is handled. The call map is made when the application is compiled, so new source equals new map. That's at least one advantage over SELinux, this is completely automatic and it's always correct in that sense. Though if you want to get this with precompiled binaries, you also have to get precompiled maps so the distros have to help you out - it's not something you can set up on existing binaries on your own.
Re: (Score:2)
you also have to get precompiled maps so the distros have to help you out
That was in the GGP post; of course you can get maps for every distribution if this takes off.
Re:selinux (Score:5, Informative)
However, you've probably already spotted the major flaws with this approach: the first is that it only works on compiled programs, which strikes me as a serious problem when you're talking about webservers. The second is that it doesn't work on certain classes of programs whose execution pattern is extremely difficult to predict: self-modifying code, highly dynamic code (longjmp is not allowed), etc. Another limitation is that it only works with statically linked libraries. Finally, it is totally dependent on GCC and friends, which could be a problem for it moving forward, and in groups where the Intel compiler is preferred.
As for false positives, the entire point of this is that inside of these admittedly limited confines, it has no possibility for false positives. This system is not statistical in nature. It depends entirely upon the program itself to determine correct syscall behavior.
All in all, while it is a long way from a practical security model, it does offer the promise of powerful, accurate protection from certain classes of attacks. When combined with selinux and pcap on a system with a slim attack profile it could help to narrow the gap between being a zero-day compromise and having full protection.
Re: (Score:2)
On the other hand, it's strong where selinux is strong (the applications etc. are well known and the context restricted). While I had hoped for an approach that required a kernel mod to work to actually be able to secure that open moat of computing... the shared hosting server...
Seems all the work is going into application servers, where the hardware is dedicated and the resources are already plenty... And almost no work is going into securing the already-at-risk segments... (Think the resold cpanel serv
Re: (Score:3, Informative)
Re: (Score:2)
I think you're talking about "libcap" (a capability library), not "libpcap" (a packet capture library).
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Heuristic scanning v2.0? (Score:2)
I could be wrong, but this sounds like the heuristic scanning features that have been in Norton Antivirus and other A/V utilities for almost a decade now, where it searches for out-of-the-norm items and reports or blocks them, such as a program deciding to write to the MBR, or a program using raw disk I/O to write to the hard disk.
Re:Heuristic scanning v2.0? (Score:5, Interesting)
Wow, those "heuristics" sound like a simple blacklist of "virus-like" activities.
No, what this does is cleverer. It creates (at compile time) an automaton representing the system call activity of the program, and if the program tries to make a syscall that does not have a matching edge in the automaton, it kills it. Basically, if there is not a code path that should lead to execution of a certain syscall, the program gets killed.
Just trying to understand (Score:2)
It creates (at compile time) an automaton representing the system call activity of the program
At compile time of the program? So in addition to a modified kernel you need a modified gcc and to compile everything from source or have a specialised distro? It doesn't surprise me that the summary should be lacking such details, but it would be nice if for once it gave a decent overview.
Re:Just trying to understand (Score:5, Informative)
It creates (at compile time) an automaton representing the system call activity of the program
At compile time of the program? So in addition to a modified kernel you need a modified gcc and to compile everything from source or have a specialised distro? It doesn't surprise me that the summary should be lacking such details, but it would be nice if for once it gave a decent overview.
I agree that this was a poor summary but instead of complaining about the summary you could always do something crazy like read the article.
If we don't complain, they won't improve (Score:2)
you could always do something crazy like read the article.
Just stay by your computer and the men in white coats will be with you shortly.
Re: (Score:2)
Well, if something like this took off programs could come with execution fingerprint files or whatever containing this information. However, that just means that instead of downloading a virus-infected executable you'll just end up downloading the virus-infected executable and its infected fingerprint.
The issue with any scanner like this is that you need to start from a known-good state. If you take the time to maintain it, a system like tripwire is virtually impenetrable - even if there is a break-in
Re: (Score:2)
Well, if something like this took off programs could come with execution fingerprint files or whatever containing this information.
That's what I meant by a specialised distro - unless by "take off" you mean that Linus copies it to trunk.
The issue with any scanner like this is that you need to start from a known-good state.
I could be wrong - I haven't RTFA, just some of the comments on this thread - but my understanding is that it's not anti-virus but anti-buffer-overflow. In other words, the instructions which it's stopping aren't in the executable at all. I'm not sure* why this can't be fixed more easily by making the kernel memory management allocate non-adjacent sections for code and data such that a buffer overflow i
Re:Just trying to understand (Score:4, Informative)
I wonder if a fix for the buffer overflow (besides languages that make it harder) would be to separate the stacks used for local variables and return addresses.
The problem is that when a call is made to a function the compiler pushes the return address onto the stack. Then the function allocates space for its own variables on top of the same stack. If one of those variables overflows it can hit the return address. That essentially is a mixing of code and data. If you had two stacks then the processor could trigger an exception if anything writes to one of them except via a call or return. You could probably accomplish this via changes to the compiler without a processor change - the processor will always use the regular stack but a compiler could be designed to maintain a separate stack for local variables. You wouldn't have that read-only protection on the regular stack, but the two would be in different segments making an overflow impossible.
Other tricks that are used are things like canaries - values written onto the stack and then checked before a return; if there was an overflow, the canary would not be intact. GCC has an option to do this (-fstack-protector) which works most of the time.
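The two-stack idea described above can be simulated in miniature: keep a copy of each return address on a protected shadow stack that data writes can never touch, and compare the two on every return. All the class and address names below are invented for the sketch; this models the mechanism, it is not how a real compiler or CPU implements it.

```python
# Toy model of the shadow-stack idea: return addresses live on a second
# stack that buffer writes can never reach, so a smashed frame is caught
# when the two copies disagree on return. Names and addresses invented.

class ShadowStackVM:
    def __init__(self):
        self.data_stack = []   # frames with locals and buffers, overflowable
        self.shadow = []       # return addresses only, write-protected

    def call(self, return_addr, frame):
        self.data_stack.append({"ret": return_addr, **frame})
        self.shadow.append(return_addr)

    def smash(self, new_ret):
        # Simulate a buffer overflow overwriting the saved return address
        self.data_stack[-1]["ret"] = new_ret

    def ret(self):
        frame = self.data_stack.pop()
        expected = self.shadow.pop()
        if frame["ret"] != expected:
            raise RuntimeError("stack smashing detected")
        return expected

vm = ShadowStackVM()
vm.call(0x400500, {"buf": "A" * 8})
assert vm.ret() == 0x400500          # clean call/return

vm.call(0x400500, {"buf": "A" * 8})
vm.smash(0xDEADBEEF)                 # overflow rewrites the return address
try:
    vm.ret()
    assert False
except RuntimeError:
    pass                             # mismatch with the shadow copy is caught
```

A canary works the same way but stores the sentinel inline on the one stack, which is why, as the comment says, it works "most of the time" rather than always.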
Re: (Score:2, Funny)
Re: (Score:2)
What about threads? They will show a different signature each time.
You have two threads; one of them dies.
Re: (Score:2)
It's more clever than that. Imagine a program always called select(), read(), and write() in that order. Through static analysis at compile time, you guarantee that's all the program should be doing. If you observe the program run select(), setfsid(), write(), it can't be running the code it was compiled with, and so it should be killed.
Oh great. (Score:5, Funny)
Doesn't our economy have enough problems? Do we really need to put Linux anti-virus vendors out of business? Next we'll probably drive the ice vendors in Alaska to bankruptcy!
Re:Oh great. (Score:5, Funny)
Re: (Score:2)
It's Ok, the government will bail them out before they drown.
Re: (Score:2)
Joking aside, there are plenty of anti-virus companies with Linux products. AVG and F-Prot are 2 I use regularly (on servers, to protect the Windows machines connected to them).
At a particular client of mine, about 40% of the yearly anti-virus license payments are related to Linux server licenses (the other 60% is, of course, related to the workstations/desktops).
So yeah, while the home market for Linux anti-virus software might be close to nil, in the corporate segment it is fairly significant.
And yes, I
Re: (Score:2)
Question: why don't you go the F/OSS route and use ClamAV?
Re: (Score:2)
Because ClamAV is SLOW.
I have tested Samba servers running both ClamAV and F-Prot. The access times on files, when running f-prot.so, were pretty much the same. However, when using clamav.so, it became so slow (relatively) that some programs started having problems.
I still use clamav on non speed critical environments (like e-mail servers). But for file servers, it is simply not an option.
Re: (Score:2)
>Next we'll probably drive the ice vendors in Alaska to bankruptcy!
Naww, there's always a demand for governors lying in their tanning beds sipping iced tea while keeping an eye on those pesky Russians.
How it works (Score:4, Interesting)
From the paper: "The resulting model is an automaton that represents the legitimate order of system calls that an application may issue. This automaton is then enforced by Korset's monitoring agent, which is built into the Linux kernel, by simulating every emitted system call."
This is not likely to work for scriptable applications (Apache, Java-based servers, etc.) The order of calls is determined by the script, not the underlying executable.
Re: (Score:2)
From the paper: "The resulting model is an automaton that represents the legitimate order of system calls that an application may issue. This automaton is then enforced by Korset's monitoring agent, which is built into the Linux kernel, by simulating every emitted system call."
Why couldn't malware makers then just insure their programs emulate this profile?
Re: (Score:2)
No, because the whole point of malware is getting a program to do something other than what it's supposed to do, and the profile describes, well, what the program's supposed to do.
Re: (Score:2)
I see now is not a good time use the word insure around Americans!
On topic,
Why couldn't malware makers then just insure their programs emulate this profile?
They could but this
1) makes it trickier to write the malware
2) limits the scope of damage the malware can do to what the program is doing (much like selinux)
Re: (Score:2)
It will work to a degree: the scripts can be perverted at the script level - in theory java could be caused to run arbitrary java. But this would still protect against lower level exploitation via scripting languages. Arbitrary script but no arbitrary code (unless it can be achieved through legal script).
Also, it would block anything in a scripting language that would allow constructing lower-level calls.
Mod parent up (Score:2)
This could be a serious problem! (Score:1)
If I stop surfing pr0n will it detect that anomaly and halt my browser?
Will that crash my gnome desktop too?
Oh NO!
Re: (Score:1, Funny)
If I stop surfing pr0n ...
Why deal with hypotheticals that we know will never occur in real life?
Ummmm..... (Score:3, Informative)
....I thought that was the philosophy behind AppArmor (http://en.opensuse.org/Apparmor).
It's been deployed in SuSE products for years.
Re: (Score:2)
Also the BSDs have something similar, I've heard (and forgot the name)
Re:Ummmm..... (Score:4, Informative)
....I thought that was the philosophy behind AppArmor (http://en.opensuse.org/Apparmor).
It's been deployed in SuSE products for years.
Apparmor seems to be a relatively sophisticated least-privilege system, i.e. the idea that if a BIND DNS server should never need to (for example) modify the routing table, then it also should not be able to modify the routing table. That way, if an attacker compromises said DNS server, he won't be able to do very much with it that isn't directly related to serving DNS requests (this is why I would personally refer to such a system as damage control, useful for containing/limiting an attacker who has already compromised something). The system discussed in the article is different in that it seems to be less concerned with what specific tasks a program should or should not be doing and more concerned with whether the code that is executed and the way that it is executed is what you would expect from the program's source. That way, if someone exploits i.e. a buffer overflow and inserts their own shellcode, it would deviate from the pattern that you would have expected from the exploited program and this deviation would be detected.
Both can be compared to systems like PaX [grsecurity.net] (kernel) and SSP [ibm.com] (userspace) which are intended to make sure that an attacker will fail to exploit an existing vulnerability, such as an unpatched buffer overflow, in the first place.
I'm definitely not an expert... (Score:5, Interesting)
Linux?? (Score:2, Informative)
I really don't think Linux has problems with malware. I think there is another operating system having more trouble.
As far as I know virus scanners are used on servers mostly to check data that goes through it (example: email server); this data will however not be executed on the server.
Re: (Score:2)
I really don't think Linux has problems with malware. I think there is another operating system having more trouble.
Maybe, but we still need to work hard to ensure that that remains the case; why do you think SELinux and stack randomisation have been developed? Remember: pride comes before a fall!
Re: (Score:2)
I really don't think Linux has problems with malware.
Yet.
Not a lot of /. readers seem to know this, but (whispers) most modern Windows malware doesn't depend on the user having administrative privileges.
IOW, a program which, when executed, deletes all your documents then emails itself to all your friends can exist just as easily on Linux as it can on Windows. About the only thing preventing that right now is that nobody's got around to it, and it would still require some effort on the user's part because AFAIK no Linux mail client will perform file(1) on attach
Re: (Score:2)
Nothing can stop the dancing bunnies, NOTHING.
So what? (Score:1)
Re: (Score:2)
I think they want to prevent running programs that have had some kind of overflow and their code rewritten to be able to do things they were never intended to do. And by using source-code analysis, this might be an easier way to construct a list of what a program is allowed to do.
Not a good article, but an interesting paper. (Score:5, Insightful)
OK, what this is doing is watching for code injection attacks (buffer overflows, stack smashing, etcetera) by building a model of how each specific application is going to operate, and blocking system calls that the model of the application would never make. It seems like an interesting approach, though it may not be as useful on Windows where there's not such a formal distinction between system calls and other kinds of calls.
It won't do anything about interpreter code injection (eg, SQL injection or shell code injection) or script privilege escalation attacks (eg, ActiveX and other "cross zone" attacks in Internet Explorer), or attacks that involve complete executable code drops.
Still, this is useful and not nearly as dodgy as the article made it sound.
Re: (Score:3, Informative)
if my exchange server is compromised and starts to serve webpages by binding to a port other than port 25, this method would catch it and kill the process in its tracks.
On the other hand, if your exchange server is compromised and starts shipping spam, it won't do a damn thing to stop it. And if your exchange server is compromised and loads a DLL using its normal API for loading modules, then it can do anything that module does.
Your application, which normally never spawns a process while in a certain
Re: (Score:2)
That's true, but as far as I can see, there is no reason the same concept couldn't be applied to dynamic linking for most programs. The exception being the programs that construct library names at run-time, which should amount to 1 or 2 executables on an entire system.
Re: (Score:2)
That's true, but as far as I can see, there is no reason the same concept couldn't be applied to dynamic linking for most programs. The exception being the programs that construct library names at run-time, which should amount to 1 or 2 executables on an entire system.
That exception includes every program that uses or provides a plugin architecture. On Windows, that includes Exchange, IIS, and every program that uses COM and ActiveX, like Microsoft Office and Internet Explorer.
Re: (Score:2)
Re: (Score:2)
Oh yes, it's definitely limited to a subset of applications on any platform. Windows is just particularly hard because things like COM are so pervasive.
Again, that doesn't mean it's useless. It does seem to nail executable code injection attacks pretty effectively.
Re: (Score:2)
The day Microsoft does anything truly smart to prevent malware is the day they switch to a linux kernel!
You say this like you think it's something that will never happen. Google David Cutler, VMS, and NT. Microsoft has learned/copied from other solid OS's before, and I think it's a safe conclusion that they probably will again. Linux's success is not lost on the boys from Redmond.
It bears repeating, MSFT's primary problem with malware is simply that they are the biggest target.
how is this better than grsec or selinux? (Score:2)
Will fork bomb do work still? (Score:2, Interesting)
Re: (Score:2)
It's actually completely useless against a shell script, because if you're running a valid shell script
I might give it a spin (Score:3, Interesting)
Sounds like a good idea to me, I just want to see what the Linux kernel pros think of it...
Re: (Score:3, Interesting)
Re: (Score:2)
Not too specialized, since running servers is a very common use of Linux. Also, all the overhead goes away if it is activated by a flag at compile time, like lots of other Linux functionalities.
Its biggest roadblocks seem to be the usual ones: very recent code that may have bugs, no maintenance history (the Linux developers may want to wait a bit more to see if it turns out to be a bomb), politics...
It's kind of like Vista (Score:4, Interesting)
Sounds from the summary at least (hey, it's slashdot, I haven't read the article) that it's similar in some ways to the service profiling in Vista. The service profiling means that the dev looked at what the service needed to do to be able to run and gave it only those permissions, restricting the damage it could do if it were compromised. This seems to extend that to give the kernel the intelligence to baseline the services itself, and then restrict activity when the baseline activity changes.
Re: (Score:2, Interesting)
These guys make a point of avoiding the labour involved in manually building the profile. (FWIW, I don't know anything about service profiling in Vista -- in fact, I had never heard of it -- but your description also sounds somewhat reminiscent of AppArmor.)
"the kernel in the system's operating system"? (Score:2)
WTF?
Systrace (Score:2)
This actually isn't new. Systrace [umich.edu] has been doing this for years. And it runs on more than just Linux.
Tron (Score:3, Interesting)
Somehow, this technique reminds me of the (obviously rather simplistic) description of the functionality of the Tron program from the movie of the same name. From the script [imsdb.com]:
DILLINGER
[...]
What's the thing you're working on?
ALAN
It's called Tron. It's a security
program itself, actually. Monitors
all the contacts between our system
and other systems... If it finds
anything going on that's not scheduled,
it shuts it down. I sent you a memo
on it.
DILLINGER
Mmm. Part of the Master Control Program?
ALAN
No, it'll run independently.
It can watchdog the MCP as well.
As I understand it (Score:3, Insightful)
this isn't anything specifically to do with malware.
As far as I can see, this verifies that the binary currently running is the same binary that was compiled from a (trusted) source.
When you compile it, it knows (from the source) what the program will and won't do. If the program deviates from that, it dies (as it's been replaced by malware, presumably)
If I'm wrong, please correct me...
Pre-Cybercrime? (Score:2)
The consequences of course have a different quality of impact, this isn't dealing with human lives, but there still might be a lot at stake.
Re: (Score:2)
False negatives would be a much bigger problem, but it's as good an automated technique as anything I've seen before. If the profiles can be distributed with binaries it might make it mainstream.
Re: (Score:2, Insightful)
This is drivel - it assumes that a static binary analysis can be used to predict the dynamic behavior of a non-trivial application, with zero false positives.
Which is why I tagged this with "haltingproblem". If it's impossible to write a terminating program which simply determines whether or not another program terminates, just how likely is it that a program can correctly predict far more complex behavior of another program? Since the halting problem is noncomputable, we know it can't do it with 100% rel
Re: (Score:2, Informative)
The halting problem is a nice thing, but don't throw it at everything that looks like static analysis.
You can take any program and build a directed graph. The nodes are instructions. You add an arrow a=(x,y) to it if, after executing x, the instruction pointer can be at y. No halting problem here. To be more precise, the graph is built in O(n), with n being the number of instructions in the program.
Now imagine you leave out most instructions, and consider system calls only. Still no halting problem.
Now if you
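The construction described above can be sketched directly: build a successor graph over instructions, then project it so that only syscall nodes survive, with an edge wherever one syscall can follow another without an intervening syscall. The instruction listing and opcode names below are invented for the example; real tools would work on actual machine code.

```python
# Sketch of the graph construction above: instruction-level CFG, then a
# projection keeping only syscall nodes. Opcodes and program invented.

def build_cfg(program):
    """program: list of (op, arg); returns index -> successor indices."""
    cfg = {}
    for i, (op, arg) in enumerate(program):
        if op == "jmp":
            cfg[i] = [arg]
        elif op == "jcc":                    # conditional: taken or fall-through
            cfg[i] = [arg, i + 1]
        elif i + 1 < len(program):
            cfg[i] = [i + 1]
        else:
            cfg[i] = []
    return cfg

def project_syscalls(program, cfg):
    """Edge (a, b) iff syscall b is reachable from syscall a without
    passing through another syscall in between."""
    sys_nodes = {i for i, (op, _) in enumerate(program) if op == "syscall"}
    edges = set()
    for a in sys_nodes:
        stack, seen = list(cfg[a]), set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            if n in sys_nodes:
                edges.add((program[a][1], program[n][1]))
            else:
                stack.extend(cfg[n])
    return edges

prog = [("syscall", "open"),   # 0
        ("mov", None),         # 1
        ("syscall", "read"),   # 2
        ("jcc", 2),            # 3: loop back to read, or fall through
        ("syscall", "close")]  # 4

edges = project_syscalls(prog, build_cfg(prog))
assert edges == {("open", "read"), ("read", "read"), ("read", "close")}
```

No halting problem arises because we never ask whether a path is actually taken, only whether one exists; the price, as the replies note, is over-approximation in the presence of function pointers and dynamically loaded code.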
Re: (Score:2)
You can take any program and build a directed graph. The nodes are instructions. You add an arrow a=(x,y) to it if, after executing x, the instruction pointer can be at y. No halting problem here. To be more precise, the graph is built in O(n), with n being the number of instructions in the program.
Nice idea but there are some things that can throw a spanner in the works. Function pointers are one, inheritance with dynamically loadable classes is another.
Re:Completely incorrect basic assumptions (Score:5, Interesting)
You're right. You can't exactly predict the behavior of a program without running it.
But that's not what this package is trying to do. Instead, it's trying to rule out large swaths of the behavior space of a program based on static analysis. Of course there will be false negatives -- i.e., malicious actions that remain undetected. But I don't really see how false positives would be a danger, modulo bugs in the static analyzer.
I imagine this package would be nearly useless for something like firefox, which does many varied tasks. But for programs like exim, or bind, or vsftp -- which do one task over and over again -- the degree of protection should be pretty good because there's a lot these programs don't do.
Re: (Score:2)
Firefox might open /dev/{sounddevice}. And Firefox can't call chroot anyway: it's not running as root.
Your comment illustrates perfectly why application profiles should be created by static analysis and not humans.
Re: (Score:2)
1. Linux has no viruses. Why would we need something like this?
Because Linux has vulnerabilities which can be (and have been in the past) used to take control of a system, and this is an attempt to fight that. All viruses are malware but not all malware are viruses.
2. If they're going to make a list of the behaviour of _every_ program, then that list would be HUGE, and take petabytes. A blacklist is a lot easier to make and keep up.
Unless I'm mistaken, this only needs a whitelist for the software installed on the system, so it won't be that big, and a whitelist is safer from a security point of view (uptime may be problematic, though, depending on the number of false positives).
3. This still wouldn't protect the weakest link: the user. If a virus asks the user to "Add IAmATrojan.exe to the whitelist", a lot of people would do it. Heck, if you even need a normal anti-virus, you're too stupid to work with computers.
Because this is primarily aimed at servers, and if someone w
Re: (Score:2)
"Malware" is an unfortunate choice of words here.
While desktop Linux doesn't have the same malware problems as Windows, we still have problems with random server programs being compromised. This approach is actually, I think, more effective on the server than the desktop.
Firefox, say, has a much larger variety of behavior than bind. Firefox can do anything; bind does the same thing over and over and over.
Since Firefox does more varied stuff, this system call profiling approach would see more of its behavior
Re: (Score:2)
if you even need a normal anti-virus, you're too stupid to work with computers.
What if you want to download a freeware program to perform a task, but want to know if it's infected? What if your system has a zero day exploit and has been infected without you knowing? Anti-virus scanners are unfortunately a necessity when it comes to using pre-compiled binaries.
If you are never going to connect to the net or removable storage, and only use software that you have written yourself, then yes - anti-virus is unnecessary
Re: (Score:2)
Run it under a dummy user account.
Anti-virus doesn't protect you from zero-days either. If you want to check for infections, your best bet is to use some kind of tripwire software (with signed hashes stored and checked offline).
Re: (Score:2)
You link to some random asshat (probably yourself) who links to a dead link. Nice try.
Sorry to blow your illusion, but that's just pathetic.
Re: (Score:2)
While it's true that there are viruses for Linux and it is a smaller target, they are not JUST as vulnerable; the entire UNIX base (small programs that do little, and user privilege restrictions) makes UNIX systems much more secure from the start. It's also pretty much impossible to infect a well-secured system (SELinux + PaX + hardened toolchain), and this seems like an extra layer to provide automated selinux-like functionality.
Re: (Score:2)
face it, there aren't any gnu/linux viruses in the wild, despite the fact that gnu/linux has a majority share of webservers
Re: (Score:2)
Uhhh... no. A monolithic kernel still uses a user space/kernel space separation. They're monitoring stuff that's happening in the user space by what system calls those userland applications make (to the kernel).
If you considered a Linux kernel to be an entire operating system, you wouldn't be able to get a whole lot done on your computer. Think system libraries etc.
Re: (Score:2)
The Operating System is not there to do lots of things for you. It is there to allow you to run all the other applications and libraries etc. on your computer hardware. And if you even try to mention that there is no difference between monolithic kernel and microkernel structures, I say you should first say what is wrong with all these links.
"This is a crucial, but subtle, point. The operating system is that portion of the software that runs in kernel mode or supervisor mode. It is protected from user tampering by the ha
Re: (Score:2)
Hmm... you call me an idiot because I did not mention that I know the monolithic kernel has the kernel space and user space (kernel mode, userland) separation.
But at least these speak against your own comment.
http://tinyurl.com/532kb8 [tinyurl.com]
http://tinyurl.com/mum9x [tinyurl.com]
http://tinyurl.com/qhuhg [tinyurl.com]
http://tinyurl.com/3uaq48 [tinyurl.com]
I am not sure, but if a few computer science professors speak against your "OS = kernel + userland, with no distinction between a microkernel and a monolithic kernel" I s
Re: (Score:2)
I was wrong. Your original comment marked you as being ignorant.
It's your reply that reveals your abject stupidity. Even if your links supported your ridiculous assertion, you'd be arguing by appeals to authority. Since they don't, you're arguing by appeals to a lack of reading comprehension.
Re: (Score:2)
Thanks for your comment. :-)
Re: (Score:2)
I think the difference is that CyberSecure's model is generated from usage, while this package's model is generated with static analysis.
I'm leery about usage-based profiling. What if there's a perfectly legitimate code path that the program just doesn't take while you're generating the profile? The program could be killed for something innocuous. I don't want that danger.
The nice thing about static analysis is that it's always correct!
Re: (Score:2, Informative)
Their method is to automagically profile the software when it's compiled. That's obviously something antivirus software doesn't do.
Antivirus software generally doesn't know how a particular piece of legitimate software is supposed to behave. The idea here is that the approach these guys are taking is exactly to try to build a profile for the normal behaviour of the application (or some parts of it, namely the pattern of syscalls). The analysis would be done at compile time, and when the application is being