
Docker Moves Beyond Containers With Unikernel Systems Purchase (thenewstack.io)

joabj writes: Earlier today, Docker announced that it had purchased the Cambridge, U.K.-based Unikernel Systems, makers of the OCaml-based MirageOS, a unikernel or "virtual library-based operating system." Unikernels go beyond containers in stripping virtualization down to the bare essentials: they include only the specific OS functionality that the application actually needs. They build on decades of research into modular OS design. Although unikernels can be complex for developers to deploy, Docker aims to make the process as standardized as possible, for easier deployment.
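For a rough idea of what "only the specific OS functionality that the application actually needs" means in practice, this is approximately what a MirageOS config.ml looks like in its hello-world style examples. The DSL names (foreign, register, console) have shifted between MirageOS releases, so treat this as an illustrative sketch rather than the definitive API:

    (* config.ml: declares the OS components this unikernel needs.
       Only the devices listed here (just a console, in this sketch)
       get compiled and linked into the final image. *)
    open Mirage

    (* Unikernel.Main is the application's entry functor; it consumes
       a console device and produces a job to run. *)
    let main = foreign "Unikernel.Main" (console @-> job)

    let () =
      register "hello" [ main $ default_console ]

Ask for a network stack here and the build pulls in MirageOS's TCP/IP library; leave it out and no networking code ends up in the image at all.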
  • Could we make things a bit more complicated? Virtualization I understand. I don't understand the need for Docker. It is just another layer that will break. Maybe I am just getting old, but this seems very complicated and prone to breakage with all the layers.
    • Re:Complicated (Score:5, Insightful)

      by jiriw ( 444695 ) on Thursday January 21, 2016 @09:21PM (#51347945) Homepage

Virtualization is 'expensive', as each virtual server running on the host operating system has its own operating system: each runs its own kernel, has its own generic support libraries, and does its own memory management, hardware access, and interrupt handling (on virtual devices emulated by the host operating system), etc.

      Chroot is 'inexpensive', but it only offers a thin veneer of file system separation.

Docker lies somewhere in between. A container has its own self-contained file system with all the generic support libraries (userland) the application needs, but hardware resources are managed by the single kernel of the host operating system. This gives rise to a restriction not present for true virtual machines: all Docker containers on one host must use the same kernel interfaces, namely those of the host kernel. The kernel does have some special interfaces to make sure applications in a Docker container can't access data or processes of other containers (unless permitted) or of the host operating system, so to those applications it still 'feels' like they are running their own copy of the operating system. But, for example, all processes running in all Docker containers on a host appear in the host kernel's process list, there is only one memory manager (the host's), etc.

Now there is that newfangled unikernel kid... What I understand of it is that, compared with a Docker container, the support libraries / userland are stripped bare so that only the symbols/functions actually used by the applications running in it remain. But the 'kernel' bit in unikernel suggests that parts of kernel functionality are transferred into the container as well, and I would suspect the parts not used by the actual applications in there are left out. The question is, how much of the host kernel can you transfer to the containers? Certain things should still be done 'at the top', if only to prevent containers from hogging critical system resources, and to keep some system diagnostics possible at the host OS level...

      I should read more about it. It seems to be interesting stuff.

      • by Anonymous Coward

        "Expensive" is a loaded word. Unused CPU cycles and memory bytes are worthless, i.e. a system running at 5% load costs the same as a system running at 10% load.

        • If your server is running at 5% load under peak conditions, then you wasted money buying a lot more server than you needed.

Reducing overhead is relevant precisely when system demand is at its peak - for a large operation, a reduction in overhead may translate fairly directly to a similar reduction in the number of physical servers needed, offering proportional cost savings.

          • by Znork ( 31774 )

            And if docker means you're going to spend more time managing OS level instances, then any savings on hardware are often eaten within weeks of deployment. Shared systems are a PITA if there's any need to coordinate between multiple users of those systems at all.

            The fact is, I suspect the only way to avoid triggering manpower costs while implementing docker currently in a large company is basically to deploy it on a one-container-per-OS-image basis. Basically as an application packaging method. Which of cours

      • by swb ( 14022 )

        It sounds almost like someone used a smart linker to link in the operating system with the application to create a standalone, appliance-like application with its own integrated kernel.

It kind of makes some sense, if you think about the general trend of "appliance" servers like FreeNAS or pfSense, which are actually customized OSs with applications on top but have functionally complete user interfaces that don't really expose the underlying operating system.

        It would be great if there was a way to auto

        • FreeNAS has that. It's jails. I run almost everything in its own jail on my FreeNAS box.

          I still haven't seen what Docker brings to the table other than "Hey, I don't like that wheel, look at mine".

          JID IP Address Hostname Path
          1 - armory_1 /mnt/keg/jails/armory_1
          2 - couchpotato_1 /mnt/keg/jails/couchpotato_1
          3 - gogs_1 /mnt/keg/ja

          • Re: (Score:1, Interesting)

            by Anonymous Coward

            Docker allows you to run an executable in a container with no setup and provides resource allocation, two features which jails don't support. Like jails, docker can also use an existing container as an overlay on the base system. Docker can dynamically limit CPU utilization and memory, and it supports prioritization including disk IO. You have full control over the container including stopping execution, freezing the contents, and restarting later.

            At this point, jail is a poor man's docker, and FreeBSD suff

        • Re:Complicated (Score:5, Informative)

          by dns_server ( 696283 ) on Friday January 22, 2016 @03:34AM (#51348903)

This is something different.
Take the Linux kernel and split it into modules, such as the TCP/IP stack and so on.
Now you can build a binary that includes just the parts of the OS that you need.
You then link those kernel pieces with your application, so you have one binary with the bits of the OS you need, your program, and nothing else.
No init system, no other processes, just the OS and your code.
You can then run this inside Xen as the target, so you don't need the hardware support.
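For what it's worth, the application side of that looks roughly like this in MirageOS (the OCaml library OS Docker just bought). The module signature V1_LWT.CONSOLE is from the circa-2016 API and has since been renamed, so this is a sketch of the idea, not gospel:

    (* unikernel.ml: the entire "userland" of the image. Main is
       instantiated with whatever console implementation the build
       selected, and start is the only thing that runs: no init,
       no shell, no other processes. *)
    open Lwt.Infix

    module Main (C : V1_LWT.CONSOLE) = struct
      let start c =
        C.log_s c "hello from a unikernel" >>= fun () ->
        Lwt.return_unit
    end

Something along the lines of mirage configure --xen && make (flags vary by release) then links this, the console library, and a minimal runtime into a single bootable Xen image.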

You can then run this inside Xen as the target, so you don't need the hardware support.

Indeed! Of course you can just write it in something with a C API underneath and make calls to socket() and connect(). No need to link in the whole of Chrome, for example, just to get an editor...

            It's just funny that the OS is being reinvented one level down and will inevitably grow back up to where it is now, but looking different.
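To make that concrete, here is what "just making calls to socket() and connect()" looks like through OCaml's Unix module; the address and request string are made up for illustration, and you would link against the unix library (e.g. ocamlfind ocamlopt -package unix -linkpkg):

    (* Plain OCaml program using the C socket API via the Unix module:
       socket(), connect(), write(), close(). The kernel underneath
       provides the TCP/IP stack, so none of it is linked into the binary. *)
    let () =
      let sock = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
      let addr = Unix.ADDR_INET (Unix.inet_addr_of_string "127.0.0.1", 80) in
      Unix.connect sock addr;
      let req = Bytes.of_string "GET / HTTP/1.0\r\n\r\n" in
      ignore (Unix.write sock req 0 (Bytes.length req));
      Unix.close sock

A unikernel flips this around: instead of the binary calling into a shared kernel that already contains a network stack, the MirageOS TCP/IP library is compiled and linked into the binary itself.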

          • by swb ( 14022 )

I think that's mostly what I described as "linking the executable with the operating system," although perhaps less succinctly. It's what I meant to say. :-)

The problem with this, though, is the Unix nature of not having a single application plus OS, but having functionality spread over many programs. That may make this type of system more complicated to use unless it can somehow accommodate multiple applications in a single binary.

            It goes against Unix religion, but I always kind of wished applicatio

All of our commonly used, general-purpose, end-user operating systems suck. Separation and compartmentalization simply aren't there. Everyone knows the advantages of proper compartmentalization: easy portability, unparalleled security, a mighty robustness against system-crashing errors and reboots, configuration flexibility and avoidance of dependency conflicts, the ability to easily prioritize system resource usage, the ability to roll back snapshots without rebooting the entire system, sharing or cloning bas
        • by swb ( 14022 )

I think HVM will be here to stay and continue to be widely adopted because it's one of the few ways to solve the problems associated with dependencies on the crappy OS underneath. And sometimes the dependency on the crappy OS underneath is really about supporting the crappy application on top, which doesn't support a newer version of the OS.

I almost never see a heterogeneous OS environment; at best, variation within a given OS family's versions (e.g., Windows 2003, 2008, 2012) and quite often multiple operating

      • So it's a poorly implemented jail.

    • The layers add abstraction, compartmentalization, portability and the ability to roll back snapshots without rebooting. If you understand virtualization then I'm sure you can understand how this can greatly increase security and robustness in general, at least in principle.

Now, if you don't understand why someone would use an LXC-based solution like Docker instead of a fully virtualized HVM machine, then you probably haven't seen them in practice. They run at native speed, and you don't have to mess around w
    • by Anonymous Coward

      It's BSD jails or Solaris containers all over again. Imagine a chroot with its own IP address and you're basically there.

Docker also has its own system for autogenerating the content of a container - you can write something vaguely related to a Makefile saying, e.g., "start with a minimal Debian stable, copy these files into it, and run these commands as root inside". The result of that can be stored as an image; those images can optionally be used as the base for yet more specialized images. What you use the

So what about it couldn't have been accomplished with a tcsh script and a jail, or even better, PC-BSD plugins?

        • by Junta ( 36770 )

          Mainly convenience and networking effects.

The instructions to start a new instance of a Docker image someone else has shared are one line: docker pull ; docker run
The instructions to create a new stock Docker image are relatively short, like a simple makefile.

Those short instructions, the existence of Docker Hub, and hype combine to produce a lot of sample images to run.

Of course, that image infrastructure is pretty fast and loose, with limited ability to verify that the image arrived intact from the uploader, a

Actually, it's not complicated; it's just the cycle of reinvention and calling old concepts by new names.

If you fire up a VM to do one task, why bother to run a full OS in that VM? Why not run a single program? And, well, if you're running one program on the OS, why bother with a whole kernel? Why not just run the program on the "bare metal"?

And, er, well, it's not really bare metal. It sort of looks like bare metal, except the hypervisors provide a uniform interface to pseudo-hardware on the whole so you don't

    • Could we make things a bit more complicated? Virtualization I understand. I don't understand the need for Docker.

Docker allows you to "package up" your applications into their own secure containers in a standard format, with standard lifecycles. It makes it easier to distribute and deploy systems. And when you want to provision environments, it gives you greater granularity.

In terms of provisioning virtualized environments (say, a server that colocates systems A, B, and C), without something like Docker you have to provision an entire system for each combination/version of A, B, and C. That creates a burden

To add to what I said, think of BSD jails or Solaris "zones". If you see the reasoning behind either of those two, you get the reasoning behind Docker. Again, it all depends on the work requirements.
  • by Required Snark ( 1702878 ) on Thursday January 21, 2016 @11:29PM (#51348387)
Yep, unikernels will solve all your system problems. It will be just as wonderful as living in the Emerald City in the Land of Oz.

    A unikernel, therefore, is an indivisible unit of computing logic. As a microservice, it carries the promise of unlimited scalability. ... And as an architecture, there’s a certain elegance in unity.

Not only will it solve your deployment problems, provide scalability for free, and eliminate all system security issues, it will do your shopping, wash your car, clothes and dishes, pay your bills and taxes, balance your checkbook, and walk the dog, even if you don't have a dog!

    Overhype much?

  • Docker aims to make the process as standardized as possible, for easier deployment.

You know what we actually need? Tools to make using SELinux easier, "for easier deployment". I'm given to understand that there are tools that are supposed to watch activity and build you a profile, but I couldn't even get the tools to work.

  • The dusty file-cabinet creaks open revealing the secret plan.

    Shadowy figure one: "It's taken a few more years than we originally expected, but the day is drawing near. All this virtualization, exo-kernel, uni-kernel crap is just garbage compared to this baby!"
    Shadowy figure two: "But we will need to make it 64 bits..."

    Shadowy figure one: "No problem, that'll take a few weeks, plus we can fit ALL of it on one die with plenty of cache. GDP, IP, plus we always planned the IO Processor to be x86 compatible, s

"No matter where you go, there you are..." -- Buckaroo Banzai

Working...