docker-api

This gem provides an object-oriented interface to the Docker Engine API. Every method listed in that API is implemented. At the time of this writing, docker-api is meant to interface with Docker version 1.4.*

If you're interested in using Docker to package your apps, we recommend the dockly gem. Dockly provides a simple DSL for describing Docker containers that install as Debian packages and are controlled by upstart scripts.

Installation

Add this line to your application's Gemfile:

gem 'docker-api'

And then run:

$ bundle install

Alternatively, if you wish to just use the gem in a script, you can run:

$ gem install docker-api

Finally, just add require 'docker' to the top of the file using this gem.

Usage

docker-api is designed to be very lightweight. Almost no state is cached (aside from ids, which are immutable) to ensure that each method call's information is up to date. As such, just about every external method represents an API call.

At this time, basic podman support has been added via the podman docker-compatible API socket.
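A minimal sketch of pointing the gem at podman; the socket path below is an assumption about a typical rootful setup and may differ on your system (rootless setups usually expose $XDG_RUNTIME_DIR/podman/podman.sock):

require 'docker'

# Assumed location of podman's docker-compatible API socket.
Docker.url = 'unix:///run/podman/podman.sock'

Docker.version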

Starting up

Follow the installation instructions, and then run:

$ sudo docker -d

This will daemonize Docker so that it can be used for the remote API calls.

Host

If you're running Docker locally as a socket, there is no setup to do in Ruby. If you're not using a socket or have changed the path of the socket, you'll have to point the gem to your socket or local/remote port. For example:

Docker.url = 'tcp://example.com:5422'

Two things to note here. The first is that this gem uses excon, so any of the options that are valid for Excon.new are also valid for Docker.options. Second, by default Docker runs on a socket. The gem will assume you want to connect to the socket unless you specify otherwise.
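For example, a sketch that passes a couple of standard Excon options through Docker.options (the values here are arbitrary):

Docker.url = 'tcp://example.com:5422'

# read_timeout and connect_timeout are ordinary Excon options and are
# handed straight to the underlying Excon connection.
Docker.options = { :read_timeout => 300, :connect_timeout => 5 }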

Also, you may set the above variables via ENV variables. For example:

$ DOCKER_URL=unix:///var/docker.sock irb
irb(main):001:0> require 'docker'
=> true
irb(main):002:0> Docker.url
=> "unix:///var/docker.sock"
irb(main):003:0> Docker.options
=> {}
$ DOCKER_URL=tcp://example.com:1000 irb
irb(main):001:0> require 'docker'
=> true
irb(main):003:0> Docker.url
=> "tcp://example.com:1000"
irb(main):004:0> Docker.options
=> {}

SSL

When running Docker with SSL, setting the DOCKER_CERT_PATH environment variable will configure docker-api to use SSL. The cert path is a folder that contains the cert, key, and cacert files. docker-api expects the files to be named cert.pem, key.pem, and ca.pem. If your files are named differently, you'll want to set your options explicitly:

Docker.options = {
    client_cert: File.join(cert_path, 'cert.pem'),
    client_key: File.join(cert_path, 'key.pem'),
    ssl_ca_file: File.join(cert_path, 'ca.pem'),
    scheme: 'https'
}

If you want to load the cert files from a variable, e.g. loading them from ENV as needed on Heroku:

cert_store = OpenSSL::X509::Store.new
certificate = OpenSSL::X509::Certificate.new ENV["DOCKER_CA"]
cert_store.add_cert certificate

Docker.options = {
  client_cert_data: ENV["DOCKER_CERT"],
  client_key_data: ENV["DOCKER_KEY"],
  ssl_cert_store: cert_store,
  scheme: 'https'
}

If you need to disable SSL verification, set the DOCKER_SSL_VERIFY variable to 'false'.
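For example, launching a script like this should connect over SSL without verifying the server certificate (the cert path is a placeholder):

$ DOCKER_CERT_PATH=/path/to/certs DOCKER_SSL_VERIFY=false ruby my_script.rb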

Global calls

All of the following examples require a connection to a Docker server. See the Starting up section above for more information.

require 'docker'
# => true

# docker command for reference: docker version
Docker.version
# => { 'Version' => '0.5.2', 'GoVersion' => 'go1.1' }

# docker command for reference: docker info
Docker.info
# => { "Debug" => false, "Containers" => 187, "Images" => 196, "NFd" => 10, "NGoroutines" => 9, "MemoryLimit" => true }

# docker command for reference: docker login
Docker.authenticate!('username' => 'docker-fan-boi', 'password' => 'i<3docker', 'email' => '[email protected]')
# => true

# docker command for reference: docker login registry.gitlab.com
Docker.authenticate!('username' => 'docker-fan-boi', 'password' => 'i<3docker', 'email' => '[email protected]', 'serveraddress' => 'https://registry.gitlab.com/v1/')
# => true

Images

Just about every method here has a one-to-one mapping with the Images section of the API. If an API call accepts query parameters, these can be passed as a Hash to its corresponding method. Also, note that Docker::Image.new is a private method, so you must use .create, .build, .build_from_dir, .build_from_tar, or .import to make an instance.

require 'docker'
# => true

# Pull an Image.
# docker command for reference: docker pull ubuntu:14.04
image = Docker::Image.create('fromImage' => 'ubuntu:14.04')
# => Docker::Image { :id => ae7ffbcd1, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Insert a local file into an Image.
image.insert_local('localPath' => 'Gemfile', 'outputPath' => '/')
# => Docker::Image { :id => 682ea192f, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Insert multiple local files into an Image.
image.insert_local('localPath' => [ 'Gemfile', 'Rakefile' ], 'outputPath' => '/')
# => Docker::Image { :id => eb693ec80, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Add a repo name to Image.
# docker command for reference: docker tag <IMAGE.ID> base2
image.tag('repo' => 'base2', 'force' => true)
# => ["base2"]

# Add a repo name and tag an Image.
# docker command for reference: docker tag <IMAGE.ID> base2:latest
image.tag('repo' => 'base2', 'tag' => 'latest', force: true)
# => ["base2:latest"]

# Get more information about the Image.
# docker command for reference: docker inspect <IMAGE.ID>
image.json
# => {"id"=>"67859327bf22ef8b5b9b4a6781f72b2015acd894fa03ce07e0db7af170ba468c", "comment"=>"Imported from -", "created"=>"2013-06-19T18:42:58.287944526-04:00", "container_config"=>{"Hostname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>nil, "Cmd"=>nil, "Dns"=>nil, "Image"=>"", "Volumes"=>nil, "VolumesFrom"=>""}, "docker_version"=>"0.4.0", "architecture"=>"x86_64"}

# View the history of the Image.
image.history
# => [{"Id"=>"67859327bf22", "Created"=>1371681778}]

# Push the Image to the Docker registry. Note that you have to login using
# `Docker.authenticate!` and tag the Image first.
# docker command for reference: docker push <IMAGE.ID>
image.push
# => Docker::Image { @connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} }, @info = { "id" => eb693ec80, "RepoTags" => ["base2", "base2/latest"]} }

# Push individual tag to the Docker registry.
image.push(nil, tag: "tag_name")
image.push(nil, repo_tag: 'registry/repo_name:tag_name')

# Given a command, create a new Container to run that command in the Image.
# docker command for reference: docker run -ti <IMAGE.ID> ls -l
image.run('ls -l')
# => Docker::Container { id => aaef712eda, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Remove the Image from the server.
# docker command for reference: docker rmi -f <IMAGE.ID>
image.remove(:force => true)
# => true

# Export a single Docker Image to a file
# docker command for reference: docker save <IMAGE.ID> my_export.tar
image.save('my_export.tar')
# => Docker::Image { :id => 66b712aef, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Return the raw image binary data
image.save
# => "abiglongbinarystring"

# Stream the contents of the image to a block:
image.save_stream { |chunk| puts chunk }
# => nil

# Given a Container's export, creates a new Image.
# docker command for reference: docker import some-export.tar
Docker::Image.import('some-export.tar')
# => Docker::Image { :id => 66b712aef, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# `Docker::Image.import` can also import from a URI
Docker::Image.import('http://some-site.net/my-image.tar')
# => Docker::Image { :id => 6b462b2d2, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# For a lower-level interface for importing tars, `Docker::Image.import_stream` may be used.
# It accepts a block, and will call that block until it returns an empty `String`.
File.open('my-export.tar') do |file|
  Docker::Image.import_stream { file.read(1000).to_s }
end

# Create an Image from a Dockerfile as a String.
Docker::Image.build("from base\nrun touch /test")
# => Docker::Image { :id => b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Create an Image from a Dockerfile.
# docker command for reference: docker build .
Docker::Image.build_from_dir('.')
# => Docker::Image { :id => 1266dc19e, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Create an Image from a file other than Dockerfile.
# docker command for reference: docker build -f Dockerfile.Centos .
Docker::Image.build_from_dir('.', { 'dockerfile' => 'Dockerfile.Centos' })
# => Docker::Image { :id => 1266dc19e, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Create an Image from a Dockerfile and stream the logs
Docker::Image.build_from_dir('.') do |v|
  if (log = JSON.parse(v)) && log.has_key?("stream")
    $stdout.puts log["stream"]
  end
end

# Create an Image from a tar file.
# docker command for reference: docker build - < docker_image.tar
Docker::Image.build_from_tar(File.open('docker_image.tar', 'r'))
# => Docker::Image { :id => 1266dc19e, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Load all Images on your Docker server.
# docker command for reference: docker images
Docker::Image.all
# => [Docker::Image { :id => b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => 8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

# Get Image from the server, with id
# docker command for reference: docker images <IMAGE.ID>
Docker::Image.get('df4f1bdecf40')
# => Docker::Image { :id => eb693ec80, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Check if an image with a given id exists on the server.
Docker::Image.exist?('ef723dcdac09')
# => true

# Load an image from the file system
Docker::Image.load('./my-image.tar')
# => ""

# An IO object may also be specified for loading
File.open('./my-image.tar', 'rb') do |file|
  Docker::Image.load(file)
end
# => ""

# Export multiple images to a single tarball
# docker command for reference: docker save my_image1 my_image2:not_latest > my_export.tar
names = %w( my_image1 my_image2:not_latest )
Docker::Image.save(names, 'my_export.tar')
# => nil

# Return the raw image binary data
names = %w( my_image1 my_image2:not_latest )
Docker::Image.save(names)
# => "abiglongbinarystring"

# Stream the raw binary data
names = %w( my_image1 my_image2:not_latest )
Docker::Image.save_stream(names) { |chunk| puts chunk }
# => nil

# Search the Docker registry.
# docker command for reference: docker search sshd
Docker::Image.search('term' => 'sshd')
# => [Docker::Image { :id => cespare/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => johnfuller/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => dhrp/mongodb-sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => rayang2004/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => dhrp/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => toorop/daemontools-sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => toorop/daemontools-sshd-nginx, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => toorop/daemontools-sshd-nginx-php-fpm, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => mbkan/lamp, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => toorop/golang, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => wma55/u1210sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => jdswinbank/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }, Docker::Image { :id => vgauthier/sshd, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

Containers

Much like the Images, this object also has a one-to-one mapping with the Containers section of the API. Also like Images, .new is a private method, so you must use .create to make an instance.

require 'docker'

# Create a Container.
container = Docker::Container.create('Cmd' => ['ls'], 'Image' => 'base')
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Get more information about the Container.
container.json
# => {"ID"=>"492510dd38e4da7703f36dfccd013de672b8250f57f59d1555ced647766b5e82", "Created"=>"2013-06-20T10:46:02.897548-04:00", "Path"=>"ls", "Args"=>[], "Config"=>{"Hostname"=>"492510dd38e4", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>nil, "Cmd"=>["ls"], "Dns"=>nil, "Image"=>"base", "Volumes"=>nil, "VolumesFrom"=>""}, "State"=>{"Running"=>false, "Pid"=>0, "ExitCode"=>0, "StartedAt"=>"0001-01-01T00:00:00Z", "Ghost"=>false}, "Image"=>"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", "NetworkSettings"=>{"IpAddress"=>"", "IpPrefixLen"=>0, "Gateway"=>"", "Bridge"=>"", "PortMapping"=>nil}, "SysInitPath"=>"/usr/bin/docker", "ResolvConfPath"=>"/etc/resolv.conf", "Volumes"=>nil}

# Start running the Container.
container.start
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Stop running the Container.
container.stop
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Restart the Container.
container.restart
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Pause the running Container processes.
container.pause
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Unpause the running Container processes.
container.unpause
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Kill the command running in the Container.
container.kill
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Kill the Container specifying the kill signal.
container.kill(:signal => "SIGHUP")
# => Docker::Container { :id => 492510dd38e4, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Return the currently executing processes in a Container.
container.top
# => [{"PID"=>"4851", "TTY"=>"pts/0", "TIME"=>"00:00:00", "CMD"=>"lxc-start"}]

# Same as above, but uses the original format
container.top(format: :hash)
# => {
#      "Titles" => ["PID", "TTY", "TIME", "CMD"],
#      "Processes" => [["4851", "pts/0", "00:00:00", "lxc-start"]]
#    }

# To expose 1234 to bridge
# In Dockerfile: EXPOSE 1234/tcp
# docker run resulting-image-name
Docker::Container.create(
  'Image' => 'image-name',
  'HostConfig' => {
    'PortBindings' => {
      '1234/tcp' => [{}]
    }
  }
)

# To expose 1234 to host with any port
# docker run -p 1234 image-name
Docker::Container.create(
  'Image' => 'image-name',
  'ExposedPorts' => { '1234/tcp' => {} },
  'HostConfig' => {
    'PortBindings' => {
      '1234/tcp' => [{}]
    }
  }
)

# To expose 1234 to host with a specified host port
# docker run -p 1234:1234 image-name
Docker::Container.create(
  'Image' => 'image-name',
  'ExposedPorts' => { '1234/tcp' => {} },
  'HostConfig' => {
    'PortBindings' => {
      '1234/tcp' => [{ 'HostPort' => '1234' }]
    }
  }
)

# To expose 1234 to host with a specified host port and host IP
# docker run -p 192.168.99.100:1234:1234 image-name
Docker::Container.create(
  'Image' => 'image-name',
  'ExposedPorts' => { '1234/tcp' => {} },
  'HostConfig' => {
    'PortBindings' => {
      '1234/tcp' => [{ 'HostPort' => '1234', 'HostIp' => '192.168.99.100' }]
    }
  }
)

# To set container name pass `name` key to options
Docker::Container.create(
  'name' => 'my-new-container',
  'Image' => 'image-name'
)

# Stores a file with the given content in the container
container.store_file("/test", "Hello world")

# Reads a file from the container
container.read_file("/test")
# => "Hello world"

# Export a Container. Since an export is typically at least 300M, chunks of the
# export are yielded instead of just returning the whole thing.
File.open('export.tar', 'w') do |file|
  container.export { |chunk| file.write(chunk) }
end
# => nil

# Inspect a Container's changes to the file system.
container.changes
# => [{'Path'=>'/dev', 'Kind'=>0}, {'Path'=>'/dev/kmsg', 'Kind'=>1}]

# Copy files/directories from the Container. Note that these are exported as tars.
container.copy('/etc/hosts') { |chunk| puts chunk }

hosts0000644000000000000000000000023412100405636007023 0ustar
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
# => Docker::Container { :id => a1759f3e2873, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Wait for the current command to finish executing. If an argument is given,
# will timeout after that number of seconds. The default is one minute.
container.wait(15)
# => {'StatusCode'=>0}

# Attach to the Container. Currently, the below options are the only valid ones.
# By default, :stream, :stdout, and :stderr are set.
container.attach(:stream => true, :stdin => nil, :stdout => true, :stderr => true, :logs => true, :tty => false)
# => [["bin\nboot\ndev\netc\nhome\nlib\nlib64\nmedia\nmnt\nopt\nproc\nroot\nrun\nsbin\nselinux\nsrv\nsys\ntmp\nusr\nvar", []]

# If you wish to stream the attach method, a block may be supplied.
container = Docker::Container.create('Image' => 'base', 'Cmd' => ['find / -name *'])
container.tap(&:start).attach { |stream, chunk| puts "#{stream}: #{chunk}" }
stderr: 2013/10/30 17:16:24 Unable to locate find / -name *
# => [[], ["2013/10/30 17:16:24 Unable to locate find / -name *\n"]]

# If you want to attach to stdin of the container, supply an IO-like object:
container = Docker::Container.create('Image' => 'base', 'Cmd' => ['cat'], 'OpenStdin' => true, 'StdinOnce' => true)
container.tap(&:start).attach(stdin: StringIO.new("foo\nbar\n"))
# => [["foo\nbar\n"], []]

# Similar to the stdout/stderr attach method, there is logs and streaming_logs

# logs will only return after the container has exited. The output will be the raw output from the logs stream.
# streaming_logs will collect the messages out of the multiplexed form and also execute a block on each line that comes in (block takes a stream and a chunk as arguments)

# Raw logs from a TTY-enabled container after exit
container.logs(stdout: true)
# => "\e]0;root@8866c76564e8: /\aroot@8866c76564e8:/# echo 'i\b \bdocker-api'\r\ndocker-api\r\n\e]0;root@8866c76564e8: /\aroot@8866c76564e8:/# exit\r\n"

# Logs from a non-TTY container with multiplex prefix
container.logs(stdout: true)
# => "\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00021\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00022\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00023\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00024\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00025\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00026\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00027\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00028\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u00029\n\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u000310\n"

# Streaming logs from non-TTY container removing multiplex prefix with a block printing out each line (block not possible with Container#logs)
container.streaming_logs(stdout: true) { |stream, chunk| puts "#{stream}: #{chunk}" }
stdout: 1
stdout: 2
stdout: 3
stdout: 4
stdout: 5
stdout: 6
stdout: 7
stdout: 8
stdout: 9
stdout: 10
# => "1\n\n2\n\n3\n\n4\n\n5\n\n6\n\n7\n\n8\n\n9\n\n10\n"

# If the container has TTY enabled, set `tty => true` to get the raw stream:
command = ["bash", "-c", "if [ -t 1 ]; then echo -n \"I'm a TTY!\"; fi"]
container = Docker::Container.create('Image' => 'ubuntu', 'Cmd' => command, 'Tty' => true)
container.tap(&:start).attach(:tty => true)
# => [["I'm a TTY!"], []]

# Obtaining the current statistics of a container
container.stats
# => {"read"=>"2016-02-29T20:47:05.221608695Z", "precpu_stats"=>{"cpu_usage"=> ... }

# Create an Image from a Container's changes.
container.commit
# => Docker::Image { :id => eaeb8d00efdf, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Commit the Container and run a new command. The second argument is the number
# of seconds the Container should wait before stopping its current command.
container.run('pwd', 10)
# => Docker::Image { :id => 4427be4199ac, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Run an Exec instance inside the container and capture its output and exit status
container.exec(['date'])
# => [["Wed Nov 26 11:10:30 CST 2014\n"], [], 0]

# Launch an Exec instance without capturing its output or status
container.exec(['./my_service'], detach: true)
# => Docker::Exec { :id => be4eaeb8d28a, :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Parse the output of an Exec instance
container.exec(['find', '/', '-name *']) { |stream, chunk| puts "#{stream}: #{chunk}" }
stderr: 2013/10/30 17:16:24 Unable to locate find / -name *
# => [[], ["2013/10/30 17:16:24 Unable to locate find / -name *\n"], 1]

# Run an Exec instance, grabbing only the STDOUT output
container.exec(['date'], stderr: false)
# => [["Wed Nov 26 11:10:30 CST 2014\n"], [], 0]

# Pass input to an Exec instance command via Stdin
container.exec(['cat'], stdin: StringIO.new("foo\nbar\n"))
# => [["foo\nbar\n"], [], 0]

# Get the raw stream of data from an Exec instance
command = ["bash", "-c", "if [ -t 1 ]; then echo -n \"I'm a TTY!\"; fi"]
container.exec(command, tty: true)
# => [["I'm a TTY!"], [], 0]

# Wait for the current command to finish executing. If an argument is given,
# will timeout after that number of seconds. The default is one minute.
command = ["bash", "-c", "if [ -t 1 ]; then echo -n \"Set max seconds for exec!!\"; fi"]
container.exec(command, wait: 120)
# => [["Set max seconds for exec!"], [], 0]

# Delete a Container.
container.delete(:force => true)
# => nil

# Update the container.
container.update("CpuShares" => 50000)

# Request a Container by ID or name.
Docker::Container.get('500f53b25e6e')
# => Docker::Container { :id => , :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }

# Request all of the Containers. By default, will only return the running Containers.
Docker::Container.all(:all => true)
# => [Docker::Container { :id => , :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

JSON encoded values

For JSON encoded values, nothing is done implicitly, meaning you need to explicitly call to_json on your parameter before the call. For example, to request all of the Containers using a filter:

require 'docker'

# Request all of the Containers, filtering by status exited.
Docker::Container.all(all: true, filters: { status: ["exited"] }.to_json)
# => [Docker::Container { :id => , :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

# Request all of the Containers, filtering by the label label_name.
Docker::Container.all(all: true, filters: { label: [ "label_name"  ]  }.to_json)
# => [Docker::Container { :id => , :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

# Request all of the Containers, filtering by the label label_name having the value label_value.
Docker::Container.all(all: true, filters: { label: [ "label_name=label_value"  ]  }.to_json)
# => [Docker::Container { :id => , :connection => Docker::Connection { :url => tcp://localhost, :options => {:port=>2375} } }]

This applies to all parameters that the Docker API requires to be JSON encoded.

Events

require 'docker'

# Action on a stream of events as they come in
Docker::Event.stream { |event| puts event; break }
Docker::Event { :status => create, :id => aeb8b55726df63bdd69d41e1b2650131d7ce32ca0d2fa5cbc75f24d0df34c7b0, :from => base:latest, :time => 1416958554 }
# => nil

# Action on all events after a given time (will execute the block for all events up to the current time, and wait to execute on any new events after)
Docker::Event.since(1416958763) { |event| puts event; puts Time.now.to_i; break }
Docker::Event { :status => die, :id => 663005cdeb56f50177c395a817dbc8bdcfbdfbdaef329043b409ecb97fb68d7e, :from => base:latest, :time => 1416958764 }
1416959041
# => nil

These methods are prone to read timeouts. Docker.options[:read_timeout] will need to be made higher than 60 seconds if expecting a long time between events.
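For example, a sketch that raises the read timeout before streaming (600 seconds is an arbitrary value; assigning through Docker.options= should also refresh the default connection):

Docker.options = Docker.options.merge(:read_timeout => 600)
Docker::Event.stream { |event| puts event }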

Connecting to Multiple Servers

By default, each object connects to the connection specified by Docker.connection. If you need to connect to multiple servers, you can do so by specifying the connection on #new or in the class method you are using. For example:

require 'docker'

Docker::Container.all({}, Docker::Connection.new('tcp://example.com:2375', {}))
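The same pattern applies to the other class methods; a short sketch with two connections (both URLs are placeholders):

local  = Docker::Connection.new('unix:///var/run/docker.sock', {})
remote = Docker::Connection.new('tcp://example.com:2375', {})

# Each call goes to whichever server its connection points at.
Docker::Image.all({}, local)
Docker::Container.get('500f53b25e6e', {}, remote)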

Rake Task

To create images through rake, a DSL task is provided. For example:

require 'rake'
require 'docker'

image 'repo:tag' do
  image = Docker::Image.create('fromImage' => 'repo', 'tag' => 'old_tag')
  image = image.run('rm -rf /etc').commit
  image.tag('repo' => 'repo', 'tag' => 'tag')
end

image 'repo:new_tag' => 'repo:tag' do
  image = Docker::Image.create('fromImage' => 'repo', 'tag' => 'tag')
  image = image.insert_local('localPath' => 'some-file.tar.gz', 'outputPath' => '/')
  image.tag('repo' => 'repo', 'tag' => 'new_tag')
end

Not supported (yet)

License

This program is licensed under the MIT license. See LICENSE for details.

docker-api's Issues

Errors using build in version 1.10.1

Starting in version 1.10.1 I see the following in docker running in debug mode when trying to perform a build command against Docker version 0.8.1:

2014/03/17 19:28:06 POST /v1.10/build?rm=false
[error] api.go:998 Error: Multipart upload for build is no longer supported. Please upgrade your docker client.
[error] api.go:105 HTTP Error: statusCode=500 Multipart upload for build is no longer supported. Please upgrade your docker client.

If I revert to 1.9.x of the docker-api client this problem stops.

This error doesn't seem to occur when operating against Docker 0.9 however given the minor version increment of the gem from 1.9.x to 1.10.x I would have expected it to be backward compatible. This makes moving forward to 1.10.x difficult as it means anyone using the new version has to have docker >= 0.9

Thoughts on why this may be?

hangs if docker not running at port

This script hangs on the call to get():

require 'docker'
Docker.url = "http://google.com:4243"
cnts = Docker::Util.parse_json(Docker.connection.get('/containers/json', {}))
puts cnts

Image::Create from registry: "Create response did not contain an Id"

With docker-api 1.7.4 and docker 0.7.1

Docker::Image.create fromImage: "image" , tag: "latest"

Results in an exception

Docker::Error::UnexpectedResponseError: Create response did not contain an Id
from /usr/lib/ruby/gems/1.9.1/gems/docker-api-1.7.4/lib/docker/image.rb:119:in `create'

Allow search of local images

Since .search queries only the Docker Index, it would be helpful to have a method that will also search through local images. You can retrieve all local images via Docker::Images.all and match a query from that data set, but it would be nice to push that functionality up into the API rather than rolling our own.
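Until such a method exists, a rough workaround is to filter Docker::Image.all by each image's RepoTags (a sketch; 'sshd' is just an example term):

require 'docker'

term = 'sshd'
# Keep only local images whose repo tags mention the term.
matches = Docker::Image.all.select do |image|
  (image.info['RepoTags'] || []).any? { |tag| tag.include?(term) }
end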

Inconsistent container/image delete/remove methods

To delete a container the method is Docker::Container#delete. To delete an image it's Docker::Image#remove. The docker API HTTP method for both operations is "DELETE", and the documentation describes both as a remove operation. While I don't think it matters much what the method name is, I do think it should be consistent between both containers and images.

add support for using the same docker host ENV variable

Currently the docker-api gem (this project) uses DOCKER_URL to find the Docker host. If I am already using Docker on OS X, this information is already set in a different variable, and it seems redundant to have DOCKER_URL when DOCKER_HOST would do the same thing.

Can you update the code to use DOCKER_URL by default and fall back to DOCKER_HOST if DOCKER_URL is not present?

DOCKER_HOST=tcp://localhost:4243 (http://docs.docker.io/en/latest/installation/mac/)
DOCKER_URL=tcp://localhost:4243
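Until that lands, a one-line fallback in application code is straightforward (a sketch):

require 'docker'

# Prefer DOCKER_URL, otherwise fall back to DOCKER_HOST if it is set.
Docker.url = ENV['DOCKER_HOST'] if ENV['DOCKER_URL'].nil? && ENV['DOCKER_HOST']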

Update excon dependency

I am unable to use this gem alongside fog v1.22, as they rely on incompatible versions of excon. Can you update your excon dependency to >= 0.32?

How to use in "detached mode"

Hello,
I am testing with this library and cannot seem to find a way to use the -d mode (CLI: docker run -d) to detach, so that the command run within the container can be a daemon (web server, application server, etc.). Any assistance would be greatly appreciated!

Example code:
server = Docker::Container.create(
:Hostname => name,
:Image => image,
:Cmd => ['/usr/sbin/sshd -D'],
:PortSpecs => ['22']
)
server.start

Thanks!
Greg
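For reference, a sketch of one plausible approach: since every method maps to a single API call, creating and then starting the container without attaching leaves it running in the background, much like docker run -d (the image name is a placeholder, and Cmd is split into an argument array):

require 'docker'

server = Docker::Container.create(
  'Hostname'     => 'name',
  'Image'        => 'my-sshd-image',          # placeholder image name
  'Cmd'          => ['/usr/sbin/sshd', '-D'], # argv array rather than one string
  'ExposedPorts' => { '22/tcp' => {} }
)
server.start

# The process keeps running in the container; inspect it later if needed.
server.json['State']['Running']
# => true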

Why is the 'name' option a string when most (all?) others are symbols?

When trying to name a container you have to pass the 'name' parameter into the opts{} hash as a string, but all other options are symbols, is there a reason this couldn't be made consistent? I'm happy to submit a PR & you could even make it support both for a period of time.

in lib/docker/container.rb:

  def self.create(opts = {}, conn = Docker.connection)
    name = opts.delete('name')
    query = {}
    query['name'] = name if name
    resp = conn.post('/containers/create', query, :body => opts.to_json)
    hash = Docker::Util.parse_json(resp) || {}
    new(conn, hash)
  end

Confusion with some container methods

I am certain I am missing a crucial piece of documentation somewhere, but when I do something like:

Docker::Container.all(:all => true)

Which returns:

Docker::Container { :id => f9557322025a6b470dbad7d73f5e3fa7edb882c0b8e420a2ba821a8afa55de75, :connection => Docker::Connection { :url => unix:///, :options => {:socket=>"/var/run/docker.sock"} } }

I get back an array with what looks like individual hash elements, but strangely they are string representations of hashes.

I saw the to_s method you set in containers.rb, and am curious how one goes about getting back actual JSON or direct hash objects from the API. Additionally, I seem to only get back the container id and socket url info, not what one would normally see if they ran 'docker ps -a' from the shell command line on a docker host.

I know I am missing something important here, but need some help finding this magical piece of info :)
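For reference, a sketch of getting plain hashes back out of those objects, using only methods that appear elsewhere on this page:

require 'docker'

containers = Docker::Container.all(:all => true)

# #info returns the hash supplied by the list endpoint (roughly the
# `docker ps -a` columns); #json performs the full inspect call.
summaries = containers.map(&:info)
details   = containers.first.json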

Trying to build an image with hostname set

I know, I know... but my Dockerfile's RUN command (not shown) actually depends on the hostname. So... trying this:

image = Docker::Image.build "from my_base_image", "Hostname" => "foobox"

and yet, the container's hostname appears to still be auto-generated:

image.json
 => {"id"=>"0f4170b06c150aa1000d70f913f56a67fedc26d05b5724b6e034e4225a420d97", "parent"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "created"=>"2014-04-07T20:10:33.437269935Z", "container"=>"5c052c022bcce24fdcdc7fd797540e4d902e8151f21ab6050c9fa2895f315b53", "container_config"=>{"Hostname"=>"a4df64ba4be2", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>["HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DEBIAN_FRONTEND=noninteractive"], "Cmd"=>["/bin/sh", "-c", "apt-get -qq -y install puppet"], "Dns"=>nil, "Image"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "OnBuild"=>[]}, "docker_version"=>"0.9.1", "author"=>"", "config"=>{"Hostname"=>"a4df64ba4be2", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>["HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DEBIAN_FRONTEND=noninteractive"], "Cmd"=>["/bin/bash"], "Dns"=>nil, "Image"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "OnBuild"=>[]}, "architecture"=>"amd64", "os"=>"linux", "Size"=>37092125}

Support Docker running as a socket

In Docker 0.6 the default mode is no longer daemonized. It'd be great if the client library supported specifying a socket as well as the server.

License missing from gemspec

Some companies will only use gems with a certain license.
The canonical and easy way to check is via the gemspec,

via e.g.

spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']

Even for projects that already specify a license, including a license in your gemspec is a good practice, since it is easily
discoverable there without having to check the readme or for a license file. For example, it is the field that rubygems.org uses to display a gem's license.

For example, there is a License Finder gem to help companies ensure all gems they use
meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough
issue that even Bundler now generates gems with a default 'MIT' license.

If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), github has created a license picker tool.

In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata,too, and make issues for gemspecs not specifying a license as a public service :).

I hope you'll consider specifying a license in your gemspec. If not, please just close the issue and let me know. In either case, I'll follow up. Thanks!

p.s. I've written a blog post about this project

Docker::Image#push does not allow pushing tags other than the first one listed in RepoTags

I would expect to be able to push multiple tags such as 0.12.4 and latest to a private registry. So if I have an image object I would do

image.push(nil, tag: 'latest')
image.push(nil, tag: '0.12.4')

The push method always uses the first tag found in RepoTags. You cannot pass in the :tag parameter because it is always overwritten by the line

opts = options.merge(:tag => tag)

Perhaps that line should be changed to

opts = {:tag => tag}.merge(options)

pull images or starting containers from registry hosted images

Hello,

I've been able to build images and push them over to quay.io

Now I'd like to start containers from those images on another Docker instance. I'd think my approach would be

  1. Redefine Docker.url (point to the new Docker instance)
  2. pull images down from registry
  3. Start containers

If my approach is sane, how would #pull be used via the API?
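A sketch of that flow using methods shown in this README (registry, image, and command names are placeholders); for a private registry, Docker.authenticate! may be needed first:

require 'docker'

# 1. Point the gem at the other Docker instance.
Docker.url = 'tcp://other-docker-host:2375'

# 2. Pull the image from the registry (the `docker pull` equivalent).
Docker::Image.create('fromImage' => 'quay.io/myorg/myapp:latest')

# 3. Create and start a container from it.
container = Docker::Container.create(
  'Image' => 'quay.io/myorg/myapp:latest',
  'Cmd'   => ['./start.sh']
)
container.start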

Run tests against live Docker

Especially now that OSX is supported 😸, it's desirable to be able to run tests against a live version of Docker, see #83.

We should use an env var to drive the decision, and experiment with adding Travis CI build matrix support for a couple versions of Docker that aren't in the vcr cassettes.

Search result data is lost when Docker::Image objects are materialized

It's nice that the library creates Docker::Image objects for me when executing Image.search, but apart from the name of the image, the actual search result JSON is essentially discarded.

hashes.map { |hash| new(connection, hash['name']) }

As the constructor defines a third parameter, info, it's possible to retain that search result data by passing the entire hash as the argument. I've reopened the class in my own project and done so with success.

hashes.map { |hash| new(connection, hash['name'], hash) }

This makes it possible to do

star_count = image.info['star_count']
description = image.info['description']
is_official = image.info['is_official']
is_trusted = image.info['is_trusted']

It doesn't appear there are ramifications to this elsewhere, and I'd happily submit a patch if I'm not being short-sighted.

Destroy an image by its repository

puts image.info["Repository"], image.info["Tag"]
# repo1/app latest
image.tag repo: "repo2/app"
image.delete
# => 409 conflict

My issue is the following: I want to rename an image. To do that, I will retag it and then ask to delete the old tag.

If an image has two tags (as in the example above), when you do

DELETE /images/repo1%2Fapp%3Alatest
=> [{"Untagged":"39e0e4c41dceeae8b6c7fa41339ebc00e36e6ac92701ff77945e305cf2d401d2"}]

it only destroys the tag; currently this can only be done by hand, which is not really pretty.

Docker.connection.delete "/images/repo1%2Fapp%3Alatest"

"Authentication is required." when using image.build_from_dir with private registry

Hi,

I have been trying to use the Docker::Image.build_from_dir to build an image from a dockerfile on an ubuntu virtual machine that resides on my mac. The thing is, the dockerfile starts with a FROM command that pulls an image stored in a private registry.

I have tried adding a .dockercfg file with the registry's credentials to the ubuntu guest box, but it did not make any difference, the weird thing is, when I tried to build the image from the dockerfile inside the virtual machine it worked as expected (without asking for authentication, but only after adding .dockercfg).

However, using the build_from_dir function I always get the following:

Couldn't find id: {"stream":"Step 0 : FROM registry.example.com/repo:latest\n"}
{"status":"Pulling repository registry.example.com/repo”}
{"errorDetail":{"message":"Authentication is required."},"error":"Authentication is required."}

I have also tried using Docker.creds as suggested in #55 with no luck.

What do you think could be the issue? I mean, it seems to build the image without issues when used directly from the docker-cli, so I think it might have something to do with docker-api in the end.

The docker-api gem version is 1.10.9. And the Docker version(s) are as follows:

Client version: 0.10.0
Client API version: 1.10
Go version (client): go1.2.1
Git commit (client): dc9c28f
Server version: 0.10.0
Server API version: 1.10
Git commit (server): dc9c28f
Go version (server): go1.2.1
Last stable version: 0.10.0

Docker::Image.push broken with 0.8

After building an image and tagging it into a repository, push fails with

rake aborted!
Docker::Error::ArgumentError
.../lib/ruby/gems/2.0.0/gems/docker-api-1.8.0/lib/docker/image.rb:40:in `push'

because RepoTags is not used anymore.

container.stop doesn't accept timeout option

Hi,

I've tried to let container.stop wait for a timeout before killing it, but it's not being accepted.

container.stop({'timeout' => '10'})

Am I doing something wrong, or is it a bug? (Gem version 1.9.1, btw)

How to delete the temporary container made by `insert_local`?

Sorry to be pestering you guys with so many questions, but I'm not sure where else I could inquire.

It seems as though insert_local creates a temporary container, presumably used to insert the files into the new image. However, only the resulting image is ever returned to the user, as far as I can tell -- how do I get that temporary container?

In case what I'm saying isn't very clear, hopefully this irb session speaks for itself:

vagrant@precise64:/vagrant$ irb
2.0.0-p353 :001 > require 'docker'
true
2.0.0-p353 :002 > Docker::Image.all.count
11
2.0.0-p353 :003 > Docker::Container.all(all: true).count
0
2.0.0-p353 :004 > Docker::Image.all.first.insert_local('localPath' => 'Dockerfile', 'outputPath' => '/')
#<Docker::Image:0x00000002bd3da8 @connection=#<Docker::Connection:0x00000002901580 @url="unix:///", @options={:socket=>"/var/run/docker.sock"}>, @id="f5cb6cd8488c", @info={}>
2.0.0-p353 :005 > Docker::Image.all.count
12
2.0.0-p353 :006 > Docker::Container.all(all: true).count
1

(insert_local increments the image count by one (and returns that new image), but it also increments the container count -- this discrepant container is what I'd like to delete)

I'm asking because I'd rather not have the result of docker ps -a be totally cluttered with these temp containers, which I create quite a few of for a little project I'm working on.

timeouts

Hi,

lately I'm running into this issue a lot:

/home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.9.1/lib/docker/connection.rb:52:in `rescue in request': read timeout reached (Docker::Error::TimeoutError)

Any ideas on increasing the timeout value?

Cheers

InternalServerError (500) on simultaneous requests

I'm getting the following error when I try to create 2 or more containers at the same time using https://github.com/mperham/sidekiq:

Expected(200..204) <=> Actual(500 InternalServerError)

The sidekiq worker looks like this:

class TestWorker
  include Sidekiq::Worker
  sidekiq_options :retry => false

  def perform(index)
    container = Docker::Container.create(
      'Image' => 'busybox',
      'Cmd' => ["date"]
    )

    logger.info { container.json }
  end
end

and the stacktrace looks like this:

2013-11-06T00:38:06Z 22330 TID-owmegh960 TestWorker JID-2963d74016c3cb695f4f6997 INFO: start
2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: start
2013-11-06T00:38:06Z 22330 TID-owmegh960 TestWorker JID-2963d74016c3cb695f4f6997 INFO: fail: 0.158 sec
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: {"retry"=>false, "queue"=>"default", "class"=>"TestWorker", "args"=>[1], "jid"=>"2963d74016c3cb695f4f6997", "enqueued_at"=>1383698286.6713398}
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: Expected(200..204) <=> Actual(500 InternalServerError)
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: /home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:42:in `rescue in request'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:36:in `request'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:49:in `block (2 levels) in <class:Connection>'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/container.rb:125:in `create'
/app/app/workers/test_worker.rb:13:in `perform'

[...]

2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: {"ID"=>"7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4", "Created"=>"2013-11-06T00:38:06.87068055Z", "Path"=>"date", "Args"=>[], "Config"=>{"Hostname"=>"7f83868a600d", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>nil, "Cmd"=>["date"], "Dns"=>nil, "Image"=>"busybox", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "Privileged"=>false}, "State"=>{"Running"=>false, "Pid"=>0, "ExitCode"=>0, "StartedAt"=>"0001-01-01T00:00:00Z", "FinishedAt"=>"0001-01-01T00:00:00Z", "Ghost"=>false}, "Image"=>"e9aa60c60128cad1", "NetworkSettings"=>{"IPAddress"=>"", "IPPrefixLen"=>0, "Gateway"=>"", "Bridge"=>"", "PortMapping"=>nil, "Ports"=>nil}, "SysInitPath"=>"/usr/bin/docker", "ResolvConfPath"=>"/etc/resolv.conf", "HostnamePath"=>"/var/lib/docker/containers/7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4/hostname", "HostsPath"=>"/var/lib/docker/containers/7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4/hosts", "Name"=>"/gray_cow0", "Volumes"=>nil, "VolumesRW"=>nil}
2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: done: 0.2 sec

Stuff I observed while fiddling around with this problem for the past 2 hours:

  • One worker runs without a problem, 2 or more cause errors
  • One worker always finishes (see last line in the stacktrace)
  • Adding sleep rand() at the beginning of the worker causes different start times when running multiple workers... resulting in perfectly fine execution

I'm really stuck and not quite sure how to debug the problem.
Any help is appreciated.

Regression in #build_from_dir method

After updating, I get the following exception:

No such file or directory @ unlink_internal - /tmp/out20140512-14799-3tjzjc
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1454:in `unlink'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1454:in `block in remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1459:in `platform_support'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1453:in `remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:791:in `remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:569:in `block in rm'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:568:in `each'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:568:in `rm'
/home/mikdiet/.rvm/gems/ruby-2.1.1/gems/docker-api-1.10.10/lib/docker/image.rb:192:in `build_from_dir'

Image build fails

Hi guys,

Currently using docker-api 1.10.9 against docker 0.10.0 and I'm having some serious issues lately using it.
The main issues occur when I invoke the image build action based on a dockerfile using
image = ::Docker::Image.build(dockerfile_for(host), { :rm => true })

Where the dockerfile_for(host) function creates a dockerfile string.

The error I'm getting now is:

/home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.10.6/lib/docker/util.rb:52:in `extract_id': Couldn't find id: {"stream":"Step 0 : FROM jordansissel/system:centos-6.4\n"}
 (Docker::Error::UnexpectedResponseError)
{"errorDetail":{"message":"invalid character 'u' after top-level value"},"error":"invalid character 'u' after top-level value"}
    from /home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.10.6/lib/docker/image.rb:168:in `build'

Any ideas?

I also sometimes get the error that the ID can't be extracted because docker-api sent malformed JSON data to Docker.

Grabbing the output of Image#build_from_dir

In the case of a long running build of an image it would be really useful to have access to the standard out stream.

Looking at the code I couldn't see a way of doing this. Is this possible?

Have #insert_local allow for multiple files?

Though as it is I can just call #insert_local a few times to insert files into an image, it'd be nice if #insert_local could take an array of filenames too, so I could do

image.insert_local('localPath' => [ 'Gemfile', 'Rakefile' ], 'outputPath' => '/')

instead of something like

image.insert_local('localPath' => 'Gemfile', 'outputPath' => '/').insert_local('localPath' => 'Rakefile', 'outputPath' => '/')

Which will also create an unnecessary image.

Also, and this is somewhat unrelated, why does #insert_local strictly take strings for argument keys? I feel as though using symbols would be more Ruby-ish, but I'm sure there's a reason this wasn't done.

Exposing a port to the host machine using docker 0.6.6

Hi. The following code worked fine on Docker 0.6.4, but seems to be breaking on 0.6.6:

c = Docker::Container.create({
   'Cmd' => ['/root/pynb/start.sh'], 
   'Image' => msg["image"], 
   'PortSpecs' => ['8888']
})
c.start   # Note that network settings aren't established until the container is started
...    
container_port = c.json["NetworkSettings"]["PortMapping"]["Tcp"]["8888"]

When I run "docker ps", I can see the command worked, but there is no longer a port mapped to the host:

     PORTS               NAMES
23499b81401f        odewahn/learning-data-science:latest   /root/pynb/start.sh    40 seconds ago      Up 37 seconds       8888/tcp            olive_deer    

Basically, it seems docker-api is no longer forwarding the ports to the host machine in this new version (unless I'm missing something).

Also, thanks for this great gem.

Allow to push in a private registry.

Currently, the Docker::Image#push method only allows pushing to the public Docker Index. It would be interesting to be able to push to any registry.

Should #insert_local respect 'rm' => true

The following pseudo-code may show that my use of 'rm' => true is not working as intended:

build_image = Docker::Image.build('from base\n', {rm: true})
puts build_image.inspect
files_image = build_image.insert_local 'localPath' => '/tmp/foo_dir', 'outputPath' => '/tmp/foo_dir', 'rm' => true
puts files_image.inspect

The output:

Docker::Image { :id => 5541feba8708, :info => {"id"=>"5541feba8708"}, :connection => Docker::Connection { :url => http://relvpc22:4243, :options => {} } }
Docker::Image { :id => 28e9765330bb, :info => {"id"=>"28e9765330bb"}, :connection => Docker::Connection { :url => http://relvpc22:4243, :options => {} } }

If my use of 'rm' => true with #insert_local were working, I don't think the 5541feba8708 image would still exist. However, docker images shows that it does.

Am I onto something or perhaps I'm confused?

Escaping json incorrectly

Hello: your code is escaping the JSON, which sends malformed JSON to the Docker API.

We are trying to set the ExposedPorts in the create, which you need to set now with 0.6.5 in order to bind to a publicly exposed port on the real host:

container=Docker::Container.create('Cmd' => ['/usr/sbin/sshd','-D'], 'Image' => 'dillera/centos-sshd',  'ExposedPorts' => '{"22/tcp": {}}')

But when we send this, something is escaping that "22/tcp":

{"Cmd"=>["/usr/sbin/sshd", "-D"], "Image"=>"dillera/centos-sshd", "ExposedPorts"=>"{\"22/tcp\": {}}"}

and that is being rejected by the API as invalid JSON, which it is.

How can we stop this from being escaped?
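The escaping happens because ExposedPorts is being supplied as an already-serialized JSON string; the create call serializes the whole options hash itself, so nested values should be plain Ruby hashes (a sketch):

container = Docker::Container.create(
  'Cmd'          => ['/usr/sbin/sshd', '-D'],
  'Image'        => 'dillera/centos-sshd',
  # A Hash, not the string '{"22/tcp": {}}', so it nests cleanly
  # when the whole body is converted to JSON.
  'ExposedPorts' => { '22/tcp' => {} }
)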

How to insert directories into an image?

I've been able to use #insert_local in order to add files into my image. Now I'm looking for a way to copy complete directories over. I tried refactoring this method to use Docker::Util#create_dir_tar when localPath was detected as a directory; otherwise, use Docker::Util#create_tar for a file. That attempt has not resulted in anything usable but I wanted to share the idea in order to hear opinions.

Also...

The build_from_dir method does not appear to actually transfer the directory over despite its use of Docker::Util#create_dir_tar. Perhaps I am misunderstanding the intended use of the build_from_dir method...

Is anyone using the API to copy complete directories?

Timeout creating an image from a remote registry

Sometimes it takes a while to pull an image from a registry. How can the timeout duration be extended or disabled?

If it's not possible, it would be interesting to implement it, to give the user the choice.

Thanks

Image push, tag "latest".

Hello,

I'm encountering a small issue which obliges me to use the docker command line for something.

These two lines don't have the same result.

@image.push
`docker push #{@image.info["Repository"]}`

The API requests are, respectively:

POST /v1.6/images/<image_id>/push
POST /v1.6/images/<image_repo>/push

My use case is this one. I already have an image of "myrepo" tagged "latest".
When I push my newly created image of "myrepo" (which is locally the latest) with docker-api, the latest tag is not updated in the repository. As a result, another server pulling myrepo:latest from the registry still gets the old version.

However, by using the docker command line with the repo name in the URL, the tag is correctly updated and the other server is then able to get the newly tagged latest.

This issue is not specifically linked to docker-api (I think the Docker API itself could be designed better), but it may be useful to update the gem to handle this?

Thank you !

Getting info from images

In #43 it was remarked that Docker only gives specific information about images if they're accessed through Image.all. I think it'd be great to have methods #repository, #tag, #created, #size, and #virtual_size, or at the very least a consistent way of getting a complete set of the information stored in info.

To find the information, we could just do something like:

image = # ...
Docker::Image.all.find { |img| img.id == image.id }.info

Are there any serious performance costs to .all? If not, I don't see why this couldn't be added.

Support POST body for Docker::Container.commit

The "latest" docs are wrong, but the "master" docs have been fixed to show that to specify the "runtime config" of an image during container commit, it should be supplied in the POST body: http://docs.docker.io/en/master/api/docker_remote_api_v1.6/#create-a-new-image-from-a-container-s-changes

There isn't currently a way to do so via docker-api, and I can't seem to figure out how to get connection.post to include a POST body (or this'd be a PR instead of an issue). :)

Docker::Image broken on docker v0.8.0

The latest Docker version seems to ship a broken v1.6 API: /images/json returns empty ID values.

Using v1.7 seems to fix this, at least for Docker::Image.

Maybe it would make sense to test against different Docker versions on Travis CI?

$ docker -v
Docker version 0.8.0, build cc3a8c8

$ curl http://localhost:4243/v1.6/images/json
[{"Created":1384460541,"ID":"","Repository":"travis","Size":0,"Tag":"ruby","VirtualSize":4847453689}]

Get output of #attach in sequential order

The current version of Container#attach returns output as an array containing standard out and standard error separately. I'm not sure if this is possible to do with the way the Docker API is set up, but is there a way to still be able to get standard out and standard error in the order they were printed as a single string?
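One workaround is the block form of #attach, appending every chunk to a single buffer in arrival order (a sketch; the interleaving is only as precise as the order in which Docker delivers the chunks):

output = ""
container.attach { |stream, chunk| output << chunk }
puts output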

Outgoing API requests logger

Is there a way to log all HTTP requests that hit the Docker API? Not sure if it's possible to do in the latest release.

Ideally functionality like this would be great:

require "docker"
require "logger"

# Setup logger
Docker.logger = Logger.new(STDOUT)

# Create container
Docker::Container.create(options)

Logger then will print something like this:

I, [2013-12-19T20:07:10.512166 #60271]  INFO -- : [:post, "/containers/create", {}, {:body=>"{\"Cmd\":[\"sleep\",\"10\"],\"Image\":\"base\"}"}]

Container#attach with logs: true prevents #attach from returning nothing?

I can't currently reproduce this issue, but I was hoping someone else may have experienced this and can explain to me what's going on.

In my app, tests that related to using Docker would occasionally fail; I had code like this:

messages = container.attach

stdout = messages[0].join
stderr = messages[1].join
output = stdout + stderr

On occasion output would be an empty string, though most of the time it was the expected value. This was quite the heisenbug -- when I re-ran the specs the bug would go away.

I've now changed my call to #attach to

messages = container.attach(logs: true)

And now output is always the correct value.

I've looked at this for awhile and I'm pretty sure it's not a coincidence. Is there any reason why logs: true would be doing anything? Is it possible that logs: true is immediately producing output, thus preventing Excon or docker-api from closing the stream? That's the only tentative explanation I can offer.

I'll try to create a reproducible example as soon as I have the time.

Port mappings while creating a new container

Hi ,

I am trying to run a container as follows :

c = Docker::Container.create('Cmd' => ['service supervisord start'] , 'Image' => 'base_image','name' => "foo", "HostConfig" => {"PortBindings"=>{"8080/tcp"=>[{"HostIp"=>"0.0.0.0", "HostPort"=>"8080"}]}})

but it throws the following error:
Docker::Error::ServerError: Expected(200..204) <=> Actual(500 InternalServerError)
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:44:in `rescue in request'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:36:in `request'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:51:in `block (2 levels) in <class:Connection>'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/container.rb:128:in `create'
from (irb):66

What's the right way of doing this?
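For comparison, the port-binding examples earlier on this page also set ExposedPorts and pass Cmd as an argument array; a sketch along those lines (whether it avoids the 500 depends on the daemon version):

c = Docker::Container.create(
  'name'         => 'foo',
  'Image'        => 'base_image',
  'Cmd'          => ['service', 'supervisord', 'start'],
  'ExposedPorts' => { '8080/tcp' => {} },
  'HostConfig'   => {
    'PortBindings' => {
      '8080/tcp' => [{ 'HostIp' => '0.0.0.0', 'HostPort' => '8080' }]
    }
  }
)
c.start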
