Docker Turns 1: What's the Future For Open Source Container Tech?

darthcamaro (735685) writes "Docker has become one of the most hyped open-source projects in recent years, making it hard to believe the project only started one year ago. In that single year, Docker has gained the support of Red Hat and other major Linux vendors. What does the future hold for Docker? Will it overtake other forms of virtualization, or will it remain just a curiosity?"
  • Re:Container tech (Score:3, Informative)

    by Anonymous Coward on Friday March 21, 2014 @09:56PM (#46548539)

    Yes, but it makes it much easier to use. It also adds an API and event model, as well as the ability to push and pull container images to and from a public or private registry. Add to that a growing ecosystem and you have a very interesting building block.
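
    For instance, the event model and registry workflow look roughly like this (the registry host and image names here are made up for illustration; commands per the 0.x-era CLI):

      # stream the daemon's event model (container creates, starts, stops, ...)
      docker events &
      # pull a public image, re-tag it against a private registry, and push it
      docker pull ubuntu
      docker tag ubuntu registry.internal:5000/base/ubuntu
      docker push registry.internal:5000/base/ubuntu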

  • by Anonymous Coward on Friday March 21, 2014 @10:02PM (#46548571)

    The idea of Docker is cool, but the implementation needs work. It's pretty complicated to understand compared to, say, VMware or VirtualBox, and the versioning stuff especially is really annoying. It's like combining git or svn with virtual machines: you get the obscure architecture of a version control system on top of the configuration complexity of a VM. It's pretty confusing even for seasoned professionals.

  • Subjects suck. (Score:5, Informative)

    by aardvarkjoe ( 156801 ) on Friday March 21, 2014 @10:30PM (#46548705)

    Since nobody else is commenting, I guess I'm not the only one who had never heard of Docker.

    The story doesn't bother to summarize what Docker is, or even give a link to an explanation. That may not be completely unreasonable, because it's hard to find any understandable information on the main website either. Apparently a "container" is a method of delivering an application that is geared towards VMs and cloud computing, but that's about all I got out of it.

  • by gmuslera ( 3436 ) on Friday March 21, 2014 @10:47PM (#46548785) Homepage Journal

    ... but rationalizing it. Sometimes you just need to run more-or-less isolated single apps, not a full-blown OS. In a lot of usage scenarios it is far more efficient (in disk/memory/CPU usage and in app density) and probably more flexible. In others, full OS virtualization or running on dedicated hardware may be the best option.

    It also brings a virtualization-like approach to apps in the cloud. You can have containerized apps on AWS, Google's cloud, and many others: something like having a VM inside a VM.

    It's not the only solution of its kind. Google is heavily using containers in Omega [theregister.co.uk] (you can try their container stack with lmctfy [github.com]), and you can use OpenVZ, LXC, Solaris Zones, or BSD jails. But the way Docker mixes containers (not just LXC as of 0.9) with a union filesystem, making images portable and giving them inheritance, is a touch of genius.
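
    A rough illustration of that layering/inheritance (the container ID and repository name are placeholders):

      # start a container from a base image and change its filesystem
      docker run -i -t ubuntu /bin/bash
      # after exiting, commit just the filesystem diff as a new layer
      docker commit <container-id> myuser/mybase
      # new containers started from myuser/mybase inherit all those layers
      docker run -i -t myuser/mybase /bin/bash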

    The missing pieces are being added by different projects: CoreOS [coreos.com] as a dedicated OS for containers (which, coupled with etcd and fleet, could become a big player in the near future), OpenStack/OpenShift bringing manageability, and maybe someone will bring to the table what Omega does with Google's containers.

  • Re:Subjects suck. (Score:5, Informative)

    by subreality ( 157447 ) on Saturday March 22, 2014 @01:28AM (#46549307)

    It's a high-level interface to LXC (similar to Solaris Containers or FreeBSD Jails). If you're not familiar with those, think of it as a combination of:
      chroot (virtualized filesystem root)
      git (version control where a hash-id guarantees an exact environment)
      virtual machines (virtualized networking, process tables)
      make (you write a config file describing an image to start from, then all the things to do to set up your application / build environment / whatever)

    If you are building a complex product you can write a short Dockerfile (sketched in full below) which will:
      Start with 8dbd9e392a96 - a bare-bones Ubuntu 12.04 image
      apt-get install git gcc make libc6-dev
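
    Concretely, that might look like the following (the image ID is the one above; the "mybuilder" tag is made up, and apt-get needs an update and -y to run unattended):

      # Dockerfile:
      FROM 8dbd9e392a96
      RUN apt-get update
      RUN apt-get install -y git gcc make libc6-dev

      # build it from the directory containing that Dockerfile:
      docker build -t mybuilder .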

    You now have a completely reproducible build machine - Docker builds it and gives you back a hashref. You run it with the right arguments (basically: a path to where your source code is, plus a command to run) and it builds your project reliably (you always have a clean container exactly the way it was when you built it) and quickly (unlike a snapshotted VM there's no need to boot it - in a split second the container comes up and it's running your makefile). More importantly, everyone else working on your project can clone that tag and get /exactly/ your environment, and two years from now people won't be scratching their heads trying to reproduce the build server.
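
    The run step might look something like this (the source path and image tag are made up):

      # bind-mount your source tree into the container and run the build
      docker run -v /home/me/project:/src mybuilder make -C /src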

    Now let's say you're shipping your product - you're a web company, so you have to package it up for the operations guys to deploy. It used to be you would give a long list of dependencies (unreliable, and kind of a pain for the user); more recently you'd ship a VM image (big, resource-heavy, but at least it escapes dependency hell); with Docker you build an image, publish it on an internal server and give the hashref to the ops guys. They clone it (moderate-sized, resource-friendly) and they get your app with everything required to run it correctly exactly the way QA was running it.
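
    Hypothetically (the internal registry host here is made up):

      # dev side: tag the QA-tested image by its hashref and push it
      docker tag <hashref> registry.internal:5000/myapp
      docker push registry.internal:5000/myapp
      # ops side: pull that exact image and run it detached
      docker pull registry.internal:5000/myapp
      docker run -d registry.internal:5000/myapp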

    As it's being run they can periodically checkpoint the filesystem state, much like snapshotting a VM. If something goes wrong it's easy to roll back and start up the previous version.
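
    With the stock tooling that could look like this (names are placeholders):

      # checkpoint: commit the running container's filesystem as a new image
      docker commit <container-id> myapp-known-good
      # roll back: stop the broken container and restart from the checkpoint
      docker stop <container-id>
      docker run -d myapp-known-good <start command>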

    It's a young project and there are still some rough edges, but the benefits are significant. I think in a few years doing builds without a container will be looked at the same way as coding without source control.
