
Software Linux

Docker 1.0 Released

Posted by Soulskill
from the it's-done-for-arbitrary-values-of-done dept.
Graculus writes: "Docker, the company that sponsors the Docker open source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Linux containers are a way of packaging up applications and related software for movement over the network or Internet. Once at their destination, they launch in a standard way and enable multiple containers to run under a single host operating system. 15 months and 8,741 commits after the earliest version was made public, Docker 1.0 has been released."
This discussion has been archived. No new comments can be posted.

  • I thought this was about pants ... which should be at LEAST 2.0.

    • by freeze128 (544774)
      I don't know if I would want to wear open-source pants....
      • I cannot help but wonder what the Levi Strauss folks are going to think about their product now being Open Source, and versioned.
  • I can download a file from the internet and it will install and run on my computer!?

    Why haven't I heard about this before??

    Seriously, maybe explain why this is important for the old ones among us?

    (grabs bifocals and oatmeal)

    • by Darinbob (1142669)

      I went to the web site to learn more. I still don't know what it is. I suspect it's a venture capital extraction method.

      • I went to the web site to learn more. I still don't know what it is. I suspect it's a venture capital extraction method.

        Nothing wrong with that. I'd like to extract some myself.

        However, the short of it is that Docker containers are a lot like Solaris Zones. They give much the same freedom as having lots of VMs, but without the overhead that a normal VM requires in terms of memory or filesystem space. Plus they allow resource load-balancing. So it's a fairly trivial thing using Docker to run 25 Apache servers on the same box without them interfering with each other.
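        To put the "25 Apache servers on the same box" idea in CLI terms, a rough sketch (assuming Docker is installed and an Apache image such as the stock `httpd` one is available; names and ports are illustrative):

        ```shell
        # Launch 25 isolated Apache containers, each published on its own host port.
        # Each gets its own filesystem, process table and network stack,
        # but all of them share the single host kernel -- no per-VM overhead.
        for i in $(seq 1 25); do
            docker run -d --name "apache$i" -p "$((8000 + i)):80" httpd
        done
        docker ps    # lists the running containers
        ```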

    • by gajop (1285284)

      Containers also offer security.
      I've used it to run tests safely on student-submitted code (server: https://bitbucket.org/gajop/au... [bitbucket.org], docker images: https://github.com/gajop/gradi... [github.com] and https://github.com/gajop/gradi... [github.com]).
      It's done automatically for practice tests (for when students would submit their solutions online), so I don't even look at the source.
      I know it's not guaranteed to offer 100% security, as they could potentially break out of the container, but it takes care of most attempts, or just honest mistakes.

  • This is the second time Docker has appeared on Slashdot and, as before, nobody knows what it is. Is this news for nerds or a sales pitch?
    • Re: What is this? (Score:2, Informative)

      by Anonymous Coward

      All the real nerds know about it.

    • From what I understand, it creates a VM that can be sent to, and consume the resources of, any machine that's also running the Docker software. You can control this remotely. It's an isolated environment, so the application cannot interact with the host system, which secures the hardware. So, let's say you have a bitcoin mining app (random example) and hundreds of computers all over. Rather than installing it on each one, you can just send your application over to each one using this Docker thing.

      • Re:What is this? (Score:5, Informative)

        by gmuslera (3436) on Tuesday June 10, 2014 @09:33PM (#47208499) Homepage Journal

        The point is that it doesn't create a VM. Containers run applications in their own isolated environment (as in filesystem, memory, processes, network, users, etc.), but with just one kernel and no hard reservation of memory or disk, so they consume resources pretty much like native apps. Another difference is that it just needs the Linux kernel; it runs wherever a Linux kernel (modern enough, 2.6.38+) runs, including inside VMs, so you can run them on Amazon, Google App Engine, Linode and a lot more.

        What Docker adds over LXC (Linux Containers) is a copy-on-write filesystem (so if one app pulls the filesystem for, say, Ubuntu, and another application also uses that Ubuntu filesystem, the extra disk use is just what each of them changed, and the disk cache works for both), cgroups to limit what resources a container can use, and a whole management system for deploying, managing, sharing, packaging and constructing containers. It enables you to, for example, build a container for some service (with all the servers it needs to run, with the filesystem of the distribution you need, exposing just the ports you want to give services on), pack it, and use it as a single unit, deploying it on as many servers as you want without worrying about conflicting libraries, required packages, or having the right distribution.
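        As a concrete sketch of that "build a container for some service, exposing just the ports you want" idea, a minimal Dockerfile might look like this (the base image, package and port are illustrative, not from the comment):

        ```dockerfile
        # Start from a stock distribution filesystem; with copy-on-write,
        # only the changes layered on top of it consume extra disk.
        FROM ubuntu:14.04

        # Install the service and its dependencies into the image.
        RUN apt-get update && apt-get install -y nginx

        # Expose only the port the service gives access on.
        EXPOSE 80

        # What runs when the container starts.
        CMD ["nginx", "-g", "daemon off;"]
        ```

        Built once with `docker build`, the resulting image can then be deployed unchanged, as a single unit, to any host with a compatible kernel.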

        If you think that is something academic: Google heavily uses containers in its cloud, creating 2 billion containers per week. They have their own container technology (LMCTFY, "Let Me Contain That For You") but have lately been adopting Docker, contributing not just code but also a lot of tools for managing containers in a cloud.

    • Re:What is this? (Score:5, Informative)

      by Anonymous Coward on Tuesday June 10, 2014 @08:39PM (#47208253)

      What is Docker?
      Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.

      How is this different from Virtual Machines?
      Virtual Machines
      Each virtualized application includes not only the application - which may be only 10s of MB - and the necessary binaries and libraries, but also an entire guest operating system - which may weigh 10s of GB.
      The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

      https://www.docker.com/whatisdocker/ [docker.com]

    • by Threni (635302)

      Only because that write-up above is so poor. There's no reason it couldn't have been explained properly, is there? I know we don't have proper writers working at Slashdot, but surely there is some sort of functioning brain between "first-time submitter #239402394032" and the "publish this story" button; otherwise we might as well just be reading whatever pops up on Twitter with a #0-day-news tag.

  • "Linux containers are a way of packaging up applications and related software for movement over the network or Internet."

    Rewritten not to be shitty:

    "Linux containers are a way of packaging up applications and related software."

    • "Linux containers are a way of packaging up applications and related software for movement over the network or Internet."

      Rewritten not to be shitty:

      "Linux containers are a way of packaging up applications and related software."

      For movement over the network or Internet.

      One of the key attributes of a Docker image is that it's a commodity. Their logo resembles a container freight vessel for a very good reason.

      We've had the ability to package applications for years. That's what things like debs and RPMs are all about. A Docker instance isn't merely a package, it's a complete ready-to-run filesystem image with resource mapping that allows it to be shipped and/or replicated over a wide number of container hosts, then launched without further installation.

  • by Omegaman (703) on Tuesday June 10, 2014 @08:49PM (#47208297)

    Docker is a lot of things, all rolled up into one so it is difficult to describe without leaving out some detail. What is important to one devops person might be unimportant to another. I have been testing docker for the past few months and there are a couple of things about it that I like quite a bit.

    I have to explain a couple of things that I like about it before I get to the one that I really like.

    1) It has a repository of very bare bones images for ubuntu, redhat, busybox. Super bare bones, because docker only runs the bare minimum to start with and you build from that.

    2) You pull down what you want to work with, and then you figuratively jump into that running image and you can set up that container with what you want it to do.

    3) (this is what I really like) That working copy becomes a "diff" of the original base image. You can then save that working image back to the repository. You can then jump on another machine and pull down that "diff" image (but you don't even really have to think of it as a "diff"; you can just think of it as your new container, and Docker handles all the magic behind the scenes). So if you are familiar with git, it provides a git-like interface to managing your server images.

    It does a lot more than what I describe above, but it is one of the things I was most impressed with.
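    The pull / modify / save workflow described above might look roughly like this with the Docker CLI (the repository and image names are made up for illustration):

    ```shell
    docker pull ubuntu                        # 1) grab a bare-bones base image
    docker run -it --name work ubuntu bash    # 2) jump into a running container
    #    ...set the container up inside that shell, then exit...
    docker commit work myrepo/myimage         # 3) save the working copy as a "diff" image
    docker push myrepo/myimage                #    push it to the repository
    docker pull myrepo/myimage                # 4) pull it down on another machine
    ```

    Each `commit` stores only the changes layered on top of the base image, which is what makes the git comparison apt.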

    • by ArsonSmith (13997)

      You can almost think of it as a new compiler system that outputs a self contained application that needs to know almost nothing about the underlying system. Similar to a virtual machine appliance, but designed to be the way it is and not an addition to platform.

      You can compile software and create a container that includes everything needed to run that app as part of your continuous delivery environment, then deploy the Docker artifact to integration testing, QA testing and then to production as the exact same artifact.

  • How is it different?

    • by siDDis (961791) on Wednesday June 11, 2014 @06:45AM (#47210909)

      It's much the same thing as BSD Jails, but there is one big difference with Docker: a container can be shipped to another system running a different kernel version or distribution. This means you can create an Ubuntu 10.04 container and run it on an Ubuntu 14.04 host or a RHEL 7 host.
      With BSD Jails, you can only ship your jails to the same system unless you spend enough time fiddling around that you can basically do the same thing. Luckily, the Docker team is already adding BSD Jails support.

    • by jon3k (691256)
      It's similar; it's a Linux container technology. It also uses a couple of newish features in the kernel to give you a little more control over your containers (namely, cgroups and namespaces). But if you're familiar with containers then you already know what Docker is, at the basic level.
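      The cgroups part shows up directly in the CLI: resource limits can be set per container at launch (flags as in Docker 1.0's `run` command; the values here are just illustrative):

      ```shell
      # Cap the container at 512 MB of RAM and give it a reduced CPU share;
      # the limits are enforced by the kernel's cgroups, not by a hypervisor.
      docker run -d -m 512m -c 512 --name limited ubuntu sleep 1000
      ```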
  • From the Docker site [docker.com]:

    Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

  • Try clicking "Try it!" on their web page. Your container is lost at sea :/.
  • From the summary, this seems like most OS X software: simply an icon with everything inside it that you only need to drag to your Applications folder (or, in the case of the OS X App Store, the icon that is downloaded). I've always liked this ultra-intuitive installation process.
  • The quality of comments here is further proof of how far downhill /. has fallen. It's just depressing.

    A couple questions pop to mind:

    1. Security--how do containers, whether LXC/Docker, Jails, etc., compare to true virtualization? For example, pfSense strongly argues against using virtualization on production machines, not only for being slower but for possible security risks--and a container would be even less secure than that. As an extreme scenario, what's to keep one Docker program from messing with another container, or with the host?

    • by jon3k (691256)

      The quality of comments here is further proof of how far downhill /. has fallen. It's just depressing.

      Ironically, this is exactly what your post made me think. It's 2014 and someone on Slashdot is asking what the performance and security considerations of virtualization are? Really? Every one of your questions is answered within about 30 seconds of Googling and reading.

    • by GrahamJ (241784)

      The quality of comments here is further proof of how far downhill /. has fallen. It's just depressing.


      "zomg it sounds kinda sorta like something I've heard of before, it must suck! Thousands of devs who actually know something about it, including Google, are all wrong!!!!!1!!one"

    • by Lennie (16154)

      RedHat added SELinux support.

      Pretty certain they'll make it nice and secure.

  • So, it bundles up a binary and all of the shared libraries necessary for that binary, so that you don't end up in dependency hell. Great, except for what happens when the next OpenSSL vulnerability is announced, and suddenly you need to replace every container which has its own copy of OpenSSL, instead of the one shared system copy.
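    To make that maintenance cost concrete, the patch-day difference might look something like this (the image names are hypothetical):

    ```shell
    # Shared system copy: one command patches OpenSSL for everything on the host.
    apt-get update && apt-get install --only-upgrade openssl

    # Bundled copies: every image carrying its own OpenSSL must be rebuilt
    # against a patched base, and every container redeployed from it.
    for img in app1 app2 app3; do
        docker build -t "myrepo/$img" "./$img"
    done
    ```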

  • or a centrally managed JVM. It's a little run-time environment that works on any OS. This is not a new idea but a different language. They don't specify what the app in the container is. A better platform independent solution would be very useful.

UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things. -- Doug Gwyn