
New Operating System Seeks To Replace Linux In the Cloud

New submitter urdak writes "At CloudOpen in New Orleans, KVM veterans Avi Kivity and Dor Laor revealed their latest venture, a new open-source (BSD license) operating system named OSv. OSv can run existing Linux programs and runtime environments such as a JVM, but unlike Linux, OSv was designed from the ground up to run efficiently on virtual machines. For example, OSv avoids the traditional (but slow) userspace-kernel isolation, since cloud VMs normally run a single application. OSv is also much smaller than Linux, and breaks away from tradition by being written in C++11 (the language choice is explained in this post)."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Tuesday September 17, 2013 @04:21PM (#44877817)

    No security either.

    • by jedidiah ( 1196 ) on Tuesday September 17, 2013 @04:30PM (#44877933) Homepage

      It boggles the mind that anyone would suggest something like this and then use the excuse of "well, we only run one app on a box". That's such amateur hour nonsense. It's like running your cloud apps on classic MacOS or an Amiga.

      Just because you've only got "one app", it doesn't mean that you've only got one process.

      It sounds like running your 2013 server apps on an OS from 1985 but "in the cloud".

      [shake head]

      • by gl4ss ( 559668 )

        Well, only that app should be accessible from outside. If you're worried about breaking out from that app, what are you going to get into that you wouldn't get into on Linux? Unlimited access to /dev? Who cares, when it's sandboxed? I mean, I don't see any need to run anything else on it - it could _really_ be running just one app; if you had your email inbox in there you would be doing it wrong. To get something you wouldn't get on a normal Linux box you would need to break out of the VM, no? And that so

      • by vidnet ( 580068 )

        Just because you've only got "one app", it doesn't mean that you've only got one process.

        If you have multiple, semi-related tools, you currently wouldn't run them as different threads in the same process. Why put all your eggs in one basket, having to restart them all at once, letting one rewrite the memory of another, when starting a new process is so cheap?

        Now, if you have multiple, semi-related tools, you wouldn't run them as different processes in the same VM. Why put all your eggs in one basket, having

      • Re: (Score:2, Informative)

        by Anonymous Coward

        John C. Dvorak stated in 1996: "The AmigaOS remains one of the great operating systems of the past 20 years, incorporating a small kernel and tremendous multitasking capabilities the likes of which have only recently been developed in OS/2 and Windows NT."

        So be careful when linking Amiga with classic Mac...

        • The important thing that the AmigaOS lacked, compared to a more "modern" operating system, was memory protection, simply because it lacked the necessary hardware to enforce it.

          Sadly, there was no provision for implementing any separation even when the necessary hardware was available, which it was on some of the later/high-end models.

          • It was also single-user, was it not?

            • by znark ( 77857 ) on Tuesday September 17, 2013 @08:08PM (#44879799) Homepage

              It was also single-user, was it not?

              That is correct. Single-user designs were the norm with personal computers of the era. There are some ways around this, but they're sort of limited. [cs.tut.fi]

              The lack of memory protection is due to the first models being designed around the plain Motorola 68000 CPU, which lacks a memory-management unit (MMU). Later models were available with beefier and more feature-rich processors from the 680x0 series, some of them including an MMU. You could also buy add-on “turbo cards” (processor cards taking over the functions of the main CPU, effectively replacing it with a faster one). But by then it was too late: the OS relies heavily on shared libraries and message passing in a flat, shared, unprotected memory space.

              Otherwise, the Amiga hardware platform and AmigaOS – the first model/version having been released in 1985 – included concepts such as preemptive multitasking, windowed GUI toolkit in the system ROM (no “text mode” at all), overlapping “screens” in different resolutions and bit depths, hardware blitter and DMA-based I/O (including multichannel sampled stereo sound), drivers for devices and filesystems, the “AutoConfig” system for add-on devices (fulfilling the same role as PnP did later in the Wintel world), 8-bit ISO Latin-1 character encoding as standard, windowed command-line shells, shell scripting, inter-process scripting (ARexx), an OS-provided framework of multimedia “datatypes” (handlers/decoders/players for common file types), scalable fonts, clipboard, speech synthesizer as a standard OS feature, etc.

              Ignoring Linux and OS/2 for a moment, in some ways it felt as if the Wintel camp only caught up ten years later when Windows 95 was released to the masses, and even at that point, both the OS and the “IBM-compatible” PC hardware platform were still missing some key features and novel ideas that had made the AmigaOS so great and elegant in its day.

      • It boggles the mind that anyone would suggest something like this and then use the excuse of "well we only run on app on a box". That's such amateur hour nonsense. It's like running your cloud apps on classic MacOS or an Amiga.

        Their premise is that you'll write your app as a series of separate servers. Then you'll deal with inter-server security instead of inter-process security. If two processes are basically parts of the same program anyway, you can run them on the same server so they can share memory.

        I think it's a silly idea, but it's not an inherently bad one. It might well make sense for some kinds of workloads. Until we get back single system image clustering on Linux (there was OpenMOSIX) this might help some people.

      • by Nyder ( 754090 )

        It boggles the mind that anyone would suggest something like this and then use the excuse of "well we only run on app on a box". That's such amateur hour nonsense. It's like running your cloud apps on classic MacOS or an Amiga.

        Just because you've only got "one app", it doesn't mean that you've only got one process.

        It sounds like running your 2013 server apps on an OS from 1985 but "in the cloud".

        [shake head]

        http://jsmess.textfiles.com/ [textfiles.com]

        That emulates a bunch of old computers, even 1985 ones. Runs on the cloud. Just need some server apps for any of them now...

      • If you read through the slides, the core OS is only capable of running a single thread. What this looks like to me is a highly specialized OS that relies entirely on the hypervisor to do everything for it. It's basically a specialized version of Java with enough glue and support to run on a bare hypervisor. They started with Java and just implemented everything it needs to run on a hypervisor. Assuming the JVM is fully implemented, it should greatly increase a Java app's performance because it's gotten rid of al
  • by evanh ( 627108 ) on Tuesday September 17, 2013 @04:26PM (#44877873)

    If the BSD licence was as useful as GPL then Linux would never have grown in the first place.

    • by mcl630 ( 1839996 )

      Correct me if I'm wrong, but isn't BSD more permissive than GPL?

      • by meta-monkey ( 321000 ) on Tuesday September 17, 2013 @04:50PM (#44878181) Journal

        Yes, but not in a way that promotes growth. The BSD license is a trap for the grumpy kids who don't want to share their toys at recess.

        BSD: You can do whatever you want with your modifications to this code, including close them.
        GPL: You can do whatever you want with your modifications to this code, except close them.

        One of these creates a positive feedback loop in which small, incremental improvements from coders who share increase exponentially. The other creates a negative feedback loop in which the improvements from those who don't share are locked away and lost. I'll leave it to you to figure out which is which.

        • by Anonymous Coward on Tuesday September 17, 2013 @05:29PM (#44878581)

          One of these creates a positive feedback loop in which small, incremental improvements from coders who share increase exponentially. The other creates a negative feedback loop in which the improvements from those who don't share are locked away and lost. I'll leave it to you to figure out which is which.

          The problem with this claim is that you're simply lying by omission. (Well, there's another problem: hyperbole. "Exponentially"? Hah.)

          GPL: the positive feedback loop is damped by the unattractiveness of the license to many potential contributors, particularly GPLv3. Fewer participants means fewer resources spent developing the project.

          BSD: the claimed negative feedback loop almost doesn't exist. Many of the entities (the same ones who have issues with GPLv3) whom you GPL zealots assume would just take everything private actually tend not to. Why? Because the reason they're using open source in the first place is to reduce their own workload, and maintaining a private fork of a public codebase turns out to be a lot of work. If you want to take changes from the public version, you're in permanent merge hell (because nobody in the outside world knows or cares about your local changes). If you want to fully fork and ignore the public version, now you're responsible for maintaining everything on your own. In most cases it's substantially less work to contribute your changes back to the public version.

          Basically the only time this actually happens in the BSD-licensed world is when someone decides "to hell with it, we don't care how much we have to spend, we're going to go all the way private".

          (All of the above is equally true of GPL-licensed code. GPL zealots are only assuming that their preferred license is required to create sharing. In reality, productive sharing is always an outcome of shared interests between all the parties involved, not the license.)

          • by Microlith ( 54737 ) on Tuesday September 17, 2013 @06:28PM (#44879117)

            And you've set your entire argument up in a biased fashion too.

            GPL: the positive feedback loop is damped by the unattractiveness of the license to many potential contributors, particularly GPLv3. Fewer participants equals less resources spent developing the project.

            A claim which is possible but largely lacking in evidence.

            BSD: the claimed negative feedback loop almost doesn't exist.

            Additionally unfounded. Given that BSD sources can be downloaded and modified, and the changes never see the light of day, the loss of information is virtually guaranteed. Not to say it doesn't happen with the GPL, but there it's actually a legal risk to allow it to happen.

            Because the reason they're using open source in the first place is to reduce their own workload, and maintaining a private fork of a public codebase turns out to be a lot of work.

            And yet there are plenty of vendors who would do just that if it suited their purposes. Your argument for staying upstream is entirely logical, but then, many corporations are not run in a logical fashion.

            Basically the only time this actually happens in the BSD-licensed world is when someone decides

            Would you ever know?

            GPL zealots

            Nothing worse than responding to a reasonable post with an invective.

            In reality, productive sharing is always an outcome of shared interests between all the parties involved, not the license.)

            But only so far as it suits their business interests. The GPLv3 was created to further advance the FSF's goals, which have always been to ensure the software is free and that the recipient of the software is never encumbered by whomever they receive it from.

        • Oh, the irony. Let's conveniently ignore the fact that OSv is based on FreeBSD.

        • Not accurate. You can do whatever you want with GPL code, and you can keep your mods closed, as long as you do not distribute your 'closed mods'. If you distribute a binary, you must provide the source.
  • Zing (Score:3, Interesting)

    by abies ( 607076 ) on Tuesday September 17, 2013 @04:28PM (#44877895)

    I wonder how well it will fare against Zing (http://www.azulsystems.com/products/zing/faq)
    Azul decided to go the route of extending vanilla Linux with some kernel modules to provide extensions for the most critical things, rather than replacing the entire system, and making a custom JVM to utilize those extensions. I have a feeling that is a lot better approach than using a custom OS with a plain JVM which is not aware of any extra capabilities (if there are any...).

  • by stox ( 131684 ) on Tuesday September 17, 2013 @04:29PM (#44877905) Homepage

    pretty much accomplishes the same thing with even less overhead and without adding yet another layer of software.

    • I was just thinking the same thing... If you're running Linux with KVM as your hypervisor, why is this better than Red Hat's OpenShift? That way you get to have your cake and eat it too.
    • With a container you still have the kernel-user boundary to cross on each syscall. I guess with OSv you could potentially optimise the whole OS, libc and application down into a single binary. Yes, it's interesting how this is developing -- Docker as well. Having lightweight containers or OSes like this is really taking off. I guess there may also be a bit more guaranteed isolation running it with OSv, less risk of a kernel bug leaving an exploit path, i.e. the isolation is at a different layer.
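      The per-syscall crossing cost mentioned above can be observed with a rough micro-benchmark. This is an illustrative, Linux-specific sketch (numbers vary wildly by machine and kernel), not a rigorous measurement:

      ```cpp
      #include <chrono>
      #include <cstdio>
      #include <sys/syscall.h>
      #include <unistd.h>

      // Compare a near-empty real syscall against a plain function call.
      // syscall(SYS_getpid) forces an actual kernel entry (plain getpid()
      // may be cached or vDSO-served on some systems).
      static volatile long sink;
      static long plain_function() { return 42; }  // never leaves userspace

      static long time_ns(void (*fn)(), int iters) {
          using clk = std::chrono::steady_clock;
          auto t0 = clk::now();
          for (int i = 0; i < iters; ++i) fn();
          auto t1 = clk::now();
          return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
      }

      int main() {
          const int iters = 200000;
          long sys_ns = time_ns([] { syscall(SYS_getpid); }, iters);  // kernel entry each time
          long fn_ns  = time_ns([] { sink = plain_function(); }, iters);
          std::printf("syscall loop:  %ld ns\n", sys_ns);
          std::printf("function loop: %ld ns\n", fn_ns);
          return 0;
      }
      ```

      Compiled with optimization, the syscall loop is typically far slower per iteration than the plain call; OSv's pitch is essentially that collapsing the boundary turns the former into the latter.
      
      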

  • a C++ kernel (Score:4, Interesting)

    by OrangeTide ( 124937 ) on Tuesday September 17, 2013 @04:32PM (#44877959) Homepage Journal

    Another refreshing feature of OSv is that it is written in C++. It's been 40 years since Unix was (re)written in C, and the time has come for something better. C++ is not about writing super-complex type hierarchies (as some people might have you believe). Rather, it allowed us to write shorter code with less boiler-plate repetition and fewer chances for bugs. It allowed us to more easily reuse quality code and data structures. And using newly standardized C++11 features, we were able to write safe concurrent code with standard language features instead of processor-specific hacks. And all of this with zero performance overhead - most of C++'s features, most notably templates, are compile-time features which result in no run-time overhead compared to C code.
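    As a small sketch of what the quoted post means by "safe concurrent code with standard language features" (illustrative only, not OSv code): before C++11 a portable atomic counter required compiler builtins or inline assembly; std::atomic and std::thread express it in standard C++.

    ```cpp
    #include <atomic>
    #include <thread>
    #include <vector>

    // Increment a shared counter from several threads using only
    // standard C++11 primitives; no __sync_* builtins or asm needed.
    int parallel_count(int nthreads, int per_thread) {
        std::atomic<int> counter{0};
        std::vector<std::thread> workers;
        for (int t = 0; t < nthreads; ++t)
            workers.emplace_back([&counter, per_thread] {
                for (int i = 0; i < per_thread; ++i)
                    counter.fetch_add(1, std::memory_order_relaxed);  // lock-free increment
            });
        for (auto& w : workers) w.join();
        return counter.load();  // no lost updates, guaranteed by the standard
    }

    int main() { return parallel_count(4, 1000) == 4000 ? 0 : 1; }
    ```
    
    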

    You end up taking the bad with the good. And some features in C++ are worth avoiding when you're outside of a nice big userspace runtime. Like exception handling, especially for classes that use multiple inheritance.

    L4 is a microkernel and hypervisor designed specifically to run an OS lib in a virtual environment with very little overhead. It seems to me that some things about L4 are very compatible with virtualization, given that most drivers in L4 operate in a virtualized environment rather than as a process.

    • Re:a C++ kernel (Score:5, Interesting)

      by Stele ( 9443 ) on Tuesday September 17, 2013 @04:47PM (#44878145) Homepage

      Fortunately, with C++ you aren't required to use any particular feature, and don't pay a penalty for anything you don't use.

      Furthermore, the alleged performance penalties that a lot of C programmers think exist in C++ actually don't.

  • by Animats ( 122034 ) on Tuesday September 17, 2013 @04:37PM (#44878015) Homepage

    This is a language-support library. It replaces the C runtime system, and the bottom levels of the Java runtime system. For environments where a virtual machine is running one program, or a family of tightly related programs, that's all you really need. The real operating system is the hypervisor underneath and the remote management tools that run the cluster.

    Linux, with millions of lines of code, just isn't doing much inside a VM. It's not managing the memory. It's not handling real devices. It's not handling real interrupts. It may not even be managing any file systems - in cloud environments, those are usually out on some storage area network. It's just a big fat pig of an OS that needs to be fed patches and attention to keep it going, while not doing any useful work.

    Within the virtual machine, there are no security boundaries. This may be a problem if more than one application is running in the VM. But if you only have one big program with many threads running, the OS isn't doing anything for you in security anyway.

    • by TechyImmigrant ( 175943 ) on Tuesday September 17, 2013 @04:42PM (#44878077) Homepage Journal

      So it's like DOS running on a VM. Yay!

      • But it's "in the cloud"!

        "I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror, and were suddenly silenced. I fear something terrible has happened."

      • Except it's multitasking, and can handle more than 640kB of memory without arcane magic.

    • by Dimwit ( 36756 )

      A very large number of cloud servers out there are running a bunch of Java applications inside a single application server inside a single JVM. The entire Linux kernel, virtual filesystem, daemons, user commands, etc, are just along for the ride. Having a barebones operating system that is just enough to run a JVM application server would fill a need for a lot of people. It's not a panacea and it's not the right choice for most virtual servers out there, but for some it makes a whole lot of sense.

      I find it

      • by TopSpin ( 753 )

        The entire Linux kernel, virtual filesystem, daemons, user commands, etc, are just along for the ride.

        A JVM relies heavily on a kernel for scheduling, storage (journalling, RAID, LVM, etc.), network stack (IP stack, filtering, bridging, etc.) and virtual memory management, at least. All of those capabilities must exist; they weren't created because someone was naive. Either they land in some library used by the guest JVM or they land in the hypervisor.

        This isn't to say the now 40 year old IBM LPAR model is wrong. It clearly works, and VMWare is independently evolving into the same thing. But there are

    • by tftp ( 111690 ) on Tuesday September 17, 2013 @05:14PM (#44878435) Homepage

      The thing they have "invented" is called an RTOS. Typically, an RTOS is a simple kernel that does not use any memory protection features. That's how they started, at least - but over time some RTOSes gained separation between user space and kernel space. VxWorks, for example, offered that option back in 2000.

      Separation of the two is not just needed to protect from hackers. It creates a stable, reliable supervisor ring (hello, SVC instruction from the IBM System/360!) that can do whatever it wants to userspace, whereas userspace can't do anything to the supervisor. This allows the kernel to start, stop and monitor userspace applications, guaranteeing system integrity. If you don't have that, any bug, anywhere, can create unpredictable and undetectable faults within the system - and you will never know until the thing crashes horribly, which will eventually happen.

  • by MetricT ( 128876 ) on Tuesday September 17, 2013 @04:47PM (#44878137)
  • Since the NSA is so good at cracking encryption and likes to spy on everybody, and the US Govt is fascist in nature, all the good ideas for new products will be stolen and given to government cronies in the private sector.
  • If you're having performance issues, then C++ would offer a more efficient solution. Why jump through all these hoops to boost Java performance? Just use C++ and get twice the performance instantly with Linux. I tend to agree with the AC that the language issue is overblown. With practice, programming is about the same level of difficulty in most languages. C++ does a pretty good job at compile-time checking, interfaces directly with the system calls and offers nearly all the performance you can get f

    • If you go around telling your clients that they are idiots for using Java and should just switch to C++, you'll be out of business pretty quickly, regardless of whether it's true or not.

      As to why they would want to use Java... it's because there are far more reasonably good Java developers out there, and they are cheaper, too.

    • by radish ( 98371 )

      If you're having performance issues, then C++ would offer a more efficient solution. Why jump through all these hoops to boost Java performance? Just use C++ and get twice the performance instantly with Linux.

      That isn't true (in the general case) and it hasn't been for many, many years - can't tell if you're trolling or just ignorant. There are plenty of operations for which a JVM is actually faster than a C process (for example, Java new() is faster than malloc() [ibm.com]), and Hotspot runtime optimization has acce

      • Java new() is faster than malloc because it's caching memory. A C object cache is faster than Java new(), and if you extend C or C++ with garbage collection / memory caching instead of making slow kernel calls to allocate more memory, then C and C++ win over Java. You picked the WORST possible speed comparison.

        Further, runtime optimization DOES NOT have more information about how the code will be used. Many hosting environments have GCC (even crappy shared hosts), with which you can build your program with -march=
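        The object-caching idea alluded to above can be sketched as a toy free-list pool (hypothetical illustrative code, not obstack or any real slab allocator): freed nodes are kept on a free list and recycled, so steady-state allocation is a pointer swap rather than a trip into malloc or the kernel.

        ```cpp
        #include <cassert>

        struct Node {
            int value;
            Node* next;  // free-list link while the node sits in the cache
        };

        // Toy pool: release() parks a node on the free list; acquire()
        // prefers a cached node and only touches the heap when empty.
        class NodePool {
            Node* free_ = nullptr;
        public:
            Node* acquire(int v) {
                Node* n;
                if (free_) {          // hot path: reuse a cached node
                    n = free_;
                    free_ = n->next;
                } else {
                    n = new Node;     // cold path: fall back to the heap
                }
                n->value = v;
                n->next = nullptr;
                return n;
            }
            void release(Node* n) {   // return the node to the cache
                n->next = free_;
                free_ = n;
            }
            ~NodePool() {
                while (free_) { Node* n = free_; free_ = n->next; delete n; }
            }
        };

        int main() {
            NodePool pool;
            Node* a = pool.acquire(1);
            pool.release(a);
            Node* b = pool.acquire(2);  // recycles the node just released
            assert(a == b);
            assert(b->value == 2);
            pool.release(b);
            return 0;
        }
        ```
        
        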

      • There are plenty of operations for which a JVM is actually faster than a C process ... and Hotspot runtime optimization has access to a lot more information about how code is actually being used than static compile time optimization - the difference that makes can be remarkable.

        Very interesting, and oft cited as a defense of Java's speed. Now show me the benchmarks.

        there are a great many applications out there for which converting to C++ (for example) would not give any kind of performance boost (and may even be slower)

        Actual experiments and measurements?

    • Why jump through all these hoops to boost Java performance?

      Because Sun's marketing department was better than AT&T's marketing department.

  • This or something like this will be huge. I remember arguing against Linux. Why would anyone want to install that? It's crap written by some college kid. I was extremely wrong, but I learned quickly after Solaris kept getting crappier and crappier.
