
containerd's Introduction

containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.

containerd is a member of CNCF with 'graduated' status.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.

(containerd architecture diagram)

Announcements

Now Recruiting

We are a large, inclusive OSS project that is welcoming help of any kind, shape, or form:

  • Documentation help is needed to make the product easier to consume and extend.
  • We need OSS community outreach/organizing help to get the word out; manage and create messaging and educational content; and help with social media, community forums/groups, and Google Groups.
  • We are actively inviting new security advisors to join the team.
  • New subprojects are being created, core and non-core that could use additional development help.
  • Each of the containerd projects has a list of issues currently being worked on or that need help resolving.
    • If the issue has not already been assigned to someone or has not made recent progress, and you are interested, please inquire.
    • If you are interested in starting with a smaller/beginner-level issue, look for issues with an exp/beginner tag, for example containerd/containerd beginner issues.

Getting Started

See our documentation on containerd.io.

To get started contributing to containerd, see CONTRIBUTING.

If you are interested in trying out containerd, see our example at Getting Started.

Nightly builds

There are nightly builds available for download here. Binaries are generated from the main branch every night for Linux and Windows.

Please be aware that nightly builds may contain critical bugs; they are not recommended for production use, and no support is provided.

Kubernetes (k8s) CI Dashboard Group

The k8s CI dashboard group for containerd contains test results regarding the health of Kubernetes when run against main and a number of containerd release branches.

Runtime Requirements

Runtime requirements for containerd are very minimal. Most interactions with the Linux and Windows container feature sets are handled via runc and/or OS-specific libraries (e.g. hcsshim for Microsoft). The current required version of runc is described in RUNC.md.

There are specific features used by containerd core code and snapshotters that will require a minimum kernel version on Linux. With the understood caveat of distro kernel versioning, a reasonable starting point for Linux is a minimum 4.x kernel version.

The overlay filesystem snapshotter, used by default, uses features that were finalized in the 4.x kernel series. If you choose to use btrfs, there may be more flexibility in kernel version (the minimum recommended is 3.18), but it will require the btrfs kernel module and btrfs tools to be installed on your Linux distribution.

To use Linux checkpoint and restore features, you will need criu installed on your system. See more details in Checkpoint and Restore.

Build requirements for developers are listed in BUILDING.

Supported Registries

Any registry which is compliant with the OCI Distribution Specification is supported by containerd.

For configuring registries, see the registry host configuration documentation.
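
For example, an image hosted on any compliant registry can be pulled by its fully qualified reference. A minimal sketch, using a hypothetical registry host purely for illustration:

// pull from a non-default, OCI-compliant registry using a fully qualified reference
// (registry.example.com is a hypothetical host used only for illustration)
image, err := client.Pull(context, "registry.example.com/library/redis:latest")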

Features

Client

containerd offers a full client package to help you integrate containerd into your platform.

import (
  "context"
  "log"

  containerd "github.com/containerd/containerd/v2/client"
  "github.com/containerd/containerd/v2/pkg/cio"        // used by the task examples below
  "github.com/containerd/containerd/v2/pkg/namespaces" // used by the namespace examples below
)


func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}

Namespaces

Namespaces allow multiple consumers to use the same containerd daemon without conflicting with each other. They have the benefit of sharing content while maintaining separation between containers and images.

To set a namespace for requests to the API:

ctx := context.Background()
// create a context for docker
docker := namespaces.WithNamespace(ctx, "docker")

container, err := client.NewContainer(docker, "id")

To set a default namespace on the client:

client, err := containerd.New(address, containerd.WithDefaultNamespace("docker"))

Distribution

// pull an image
image, err := client.Pull(context, "docker.io/library/redis:latest")

// push an image
err := client.Push(context, "docker.io/library/redis:latest", image.Target())

Containers

In containerd, a container is a metadata object. Resources such as an OCI runtime specification, image, root filesystem, and other metadata can be attached to a container.

redis, err := client.NewContainer(context, "redis-master")
defer redis.Delete(context)

OCI Runtime Specification

containerd fully supports the OCI runtime specification for running containers. We have built-in functions to help you generate runtime specifications based on images as well as custom parameters.

You can specify options when creating a container about how to modify the specification.

redis, err := client.NewContainer(context, "redis-master", containerd.WithNewSpec(oci.WithImageConfig(image)))
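
Multiple spec options can be combined in a single WithNewSpec call. A minimal sketch layering additional options from containerd's oci package (WithEnv and WithProcessArgs) on top of the image config; the specific environment value and arguments are illustrative only:

// generate a spec from the image config, then layer on custom environment and arguments
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithNewSpec(
		oci.WithImageConfig(image),
		oci.WithEnv([]string{"EXAMPLE_VAR=1"}), // illustrative environment variable
		oci.WithProcessArgs("redis-server", "--appendonly", "yes"),
	),
)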

Root Filesystems

containerd allows you to use overlay or snapshot filesystems with your containers. It comes with built-in support for overlayfs and btrfs.

// pull an image and unpack it into the configured snapshotter
image, err := client.Pull(context, "docker.io/library/redis:latest", containerd.WithPullUnpack)

// allocate a new RW root filesystem for a container based on the image
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)

// use a readonly filesystem with multiple containers
for i := 0; i < 10; i++ {
	id := fmt.Sprintf("id-%d", i)
	container, err := client.NewContainer(ctx, id,
		containerd.WithNewSnapshotView(id, image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
}

Tasks

Taking a container object and turning it into a runnable process on a system is done by creating a new Task from the container. A task represents the runnable object within containerd.

// create a new task
task, err := redis.NewTask(context, cio.NewCreator(cio.WithStdio))
defer task.Delete(context)

// the task is now running and has a pid that can be used to setup networking
// or other runtime settings outside of containerd
pid := task.Pid()

// start the redis-server process inside the container
err := task.Start(context)

// wait for the task to exit and get the exit status
status, err := task.Wait(context)
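
Wait returns a channel that receives the exit status once the task exits. A minimal sketch of reading the exit code from it, assuming the v2 client's ExitStatus type:

// the channel returned by Wait delivers the exit status when the task exits
exitStatus := <-status
code, exitedAt, err := exitStatus.Result()
if err != nil {
	// handle error
}
fmt.Printf("redis-server exited with status %d at %v\n", code, exitedAt)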

Checkpoint and Restore

If you have criu installed on your machine you can checkpoint and restore containers and their tasks. This allows you to clone and/or live migrate containers to other machines.

// checkpoint the task then push it to a registry
checkpoint, err := task.Checkpoint(context)

err := client.Push(context, "myregistry/checkpoints/redis:master", checkpoint)

// on a new machine pull the checkpoint and restore the redis container
checkpoint, err := client.Pull(context, "myregistry/checkpoints/redis:master")

redis, err = client.NewContainer(context, "redis-master", containerd.WithNewSnapshot("redis-rootfs", checkpoint))
defer redis.Delete(context)

task, err = redis.NewTask(context, cio.NewCreator(cio.WithStdio), containerd.WithTaskCheckpoint(checkpoint))
defer task.Delete(context)

err := task.Start(context)

Snapshot Plugins

In addition to the built-in snapshot plugins in containerd, additional external plugins can be configured using gRPC. An external plugin is made available using the configured name and appears as a plugin alongside the built-in ones.

To add an external snapshot plugin, add the plugin to containerd's config file (by default at /etc/containerd/config.toml). The string following proxy_plugins. will be used as the name of the snapshotter, and the address should refer to a socket with a gRPC listener serving containerd's snapshot gRPC API. Remember to restart containerd for any configuration changes to take effect.

[proxy_plugins]
  [proxy_plugins.customsnapshot]
    type = "snapshot"
    address = "/var/run/mysnapshotter.sock"
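
Once registered, the external snapshotter can be selected by name from the client. A minimal sketch, assuming the client package's WithPullSnapshotter and WithSnapshotter options and the customsnapshot name configured above:

// pull and unpack the image using the external snapshotter
image, err := client.Pull(context, "docker.io/library/redis:latest",
	containerd.WithPullUnpack,
	containerd.WithPullSnapshotter("customsnapshot"),
)

// create a container whose root filesystem is managed by the external snapshotter
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithSnapshotter("customsnapshot"),
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)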

See PLUGINS.md for how to create plugins.

Releases and API Stability

Please see RELEASES.md for details on versioning and stability of containerd components.

Downloadable 64-bit Intel/AMD binaries of all official releases are available on our releases page.

For other architectures and distribution support, you will find that many Linux distributions package their own containerd and provide it across several architectures, such as Canonical's Ubuntu packaging.

Enabling command auto-completion

Starting with containerd 1.4, the urfave/cli feature for auto-creation of bash and zsh autocompletion data is enabled. To use the autocomplete feature in a bash shell, for example, source the autocomplete/ctr file in your .bashrc, or source it manually:

$ source ./contrib/autocomplete/ctr

Distribution of ctr autocomplete for bash and zsh

For bash, copy the contrib/autocomplete/ctr script into /etc/bash_completion.d/ and rename it to ctr. The zsh_autocomplete file is also available and can be used similarly for zsh users.

If you do not place the autocomplete file in a location where it is automatically loaded by the user's shell environment, provide documentation telling users to source the file into their shell.

CRI

cri is a containerd plugin implementation of the Kubernetes container runtime interface (CRI). With it, you are able to use containerd as the container runtime for a Kubernetes cluster.

CRI Status

cri is a native plugin of containerd. Since containerd 1.1, the cri plugin is built into the release binaries and enabled by default.

The cri plugin has reached GA (general availability) status.

See results on the containerd k8s test dashboard.

Validating Your cri Setup

A Kubernetes incubator project, cri-tools, includes programs for exercising CRI implementations. More importantly, cri-tools includes the program critest, which is used for running CRI validation testing.

CRI Guides

Communication

For async communication and long-running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.

For sync communication, catch us in the #containerd and #containerd-dev Slack channels on Cloud Native Computing Foundation's (CNCF) Slack - cloud-native.slack.com. Everyone is welcome to join and chat. Get Invite to CNCF Slack.

Security audit

Security audits for the containerd project are hosted on our website. Please see the security page at containerd.io for more information.

Reporting security issues

Please follow the instructions at containerd/project.

Licenses

The containerd codebase is released under the Apache 2.0 license. The README.md file and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

Project details

containerd is the primary open source project within the broader containerd GitHub organization. However, all projects within the organization have common maintainership, governance, and contributing guidelines, which are stored in a project repository common to all containerd projects.

Please find all of these core project documents in our containerd/project repository.

Adoption

Interested in seeing who is using containerd? Are you using containerd in a project? Please add yourself via pull request to our ADOPTERS.md file.

containerd's Issues

Doesn't build on armhf

I get the following when trying to build on ODROID/Raspberry Pi, Go version 1.5.1:

cd containerd && go build -tags "libcontainer" -o ../bin/containerd

github.com/docker/containerd/linux

../linux/linux.go:629: cannot convert r.Memory.Limit (type uint64) to type int64
../linux/linux.go:630: cannot convert r.Memory.Reservation (type *uint64) to type int64
../linux/linux.go:631: cannot convert r.Memory.Swap (type *uint64) to type int64
../linux/linux.go:632: cannot convert r.Memory.Kernel (type *uint64) to type int64
../linux/linux.go:633: cannot convert r.Memory.Swappiness (type *uint64) to type int64
../linux/linux.go:634: cannot convert r.CPU.Shares (type *uint64) to type int64
../linux/linux.go:635: cannot convert r.CPU.Quota (type *uint64) to type int64
../linux/linux.go:636: cannot convert r.CPU.Period (type *uint64) to type int64
../linux/linux.go:637: cannot convert r.CPU.RealtimeRuntime (type *uint64) to type int64
../linux/linux.go:638: cannot convert r.CPU.RealtimePeriod (type *uint64) to type int64
../linux/linux.go:638: too many errors
make: *** [daemon] Error 2

ctr: build fails on darwin

It would be useful to keep the CLI building and working on darwin from day 0. It might have a slight effect on how the code should be organized and what additional abstractions are required.

make client
mkdir -p bin/
cd ctr && go build -o ../bin/ctr
/Users/jbd/src/github.com/docker/containerd/ctr
# github.com/opencontainers/specs
../vendor/src/github.com/opencontainers/specs/config.go:26: undefined: User
# github.com/opencontainers/runc/libcontainer
../vendor/src/github.com/opencontainers/runc/libcontainer/container.go:86: undefined: State
../vendor/src/github.com/opencontainers/runc/libcontainer/container.go:106: undefined: Stats
../vendor/src/github.com/opencontainers/runc/libcontainer/factory.go:24: undefined: Container
../vendor/src/github.com/opencontainers/runc/libcontainer/factory.go:33: undefined: Container
../vendor/src/github.com/opencontainers/runc/libcontainer/process.go:90: undefined: NewConsole
make: *** [client] Error 2

build is broken

$ go build -tags=libcontainer github.com/docker/containerd/linux
# github.com/docker/containerd/linux
./linux.go:705: cannot use r.Network.ClassID (type *uint32) as type string in assignment

make chanotify to work with interface{} keys

It would be a lot more useful to have interface{} keys rather than string keys in Add. It allows users to export a symbol and use it as an identifier without worrying about uniqueness.

package a

var Key = struct{}{}

package main
func main() {
    // ...
    n.Add(ch, a.Key)
}

A key exported from package a is guaranteed to be unique. We use a similar pattern in x/net/context.

- func (s *Notifier) Add(ch <-chan struct{}, id string) 
+ func (s *Notifier) Add(ch <-chan struct{}, id interface{}) 

Can't start container with containerd

I have tried a lot but still can't start a container with containerd, so I came here to ask for help.

Reproduce step:

1. Start containerd as the root user:

$ containerd --debug

2. Make a rootfs following the runc instructions with the redis image. Testing with runc, it starts successfully.

root@darcy-HP:/containers/redis# ls
config.json  rootfs  runtime.json
# runc start
# ps
  PID TTY          TIME CMD
    1 ?        00:00:00 sh
    5 ?        00:00:00 ps
# exit
root@darcy-HP:/containers/redis# runc -v
runc version 0.3

3. Try to start the container with ctr; it fails silently:

root@darcy-HP:/containers/redis# ctr containers start -a redis /containers/redis
root@darcy-HP:/containers/redis# ctr containers
ID                  PATH                STATUS              PROCESSES
root@darcy-HP:/containers/redis# 

4. containerd error log:

ERRO[0227] containerd: get exit status                   error=containerd: process has not exited
DEBU[0227] containerd: process exited                    pid=init status=-1

Containerd version (containerd -v && ctr -v)

0.0.5

Platform

Ubuntu 15.04 with kernel: Linux darcy-HP 3.19.0-23-generic #24-Ubuntu SMP Tue Jul 7 18:52:55 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Containerd should pass debug flags to runc

runc has a --debug option. When the docker daemon is invoked in debug mode, containerd should pass this option through to runc. Also, the log stream for runc (which is different from the daemon logs) should be specified using --log. This can be a log file with a PID suffix in /tmp.

This is essential for debugging containers.

cc @tonistiigi

containerd modes

Checkpoint and restore containers, where supported, so that containerd can be upgraded while containers keep running.

Shutdown mode and other maintenance modes should be added to handle state changes for the daemon.

cgroups resources need update

Running a container with the default {runtime,config}.json from the runc spec fails because swappiness is set by default to -1:

$ sudo ./bin/ctr containers start test2 `pwd`
[ctr] rpc error: code = 2 desc = "unmarshal /home/runcom/src/github.com/docker/containerd/runtime.json: json: cannot unmarshal number -1 into Go value of type uint64"

The fix depends on opencontainers/runtime-spec#233, which will have nullable values for other cgroup settings as well.

supervisor: document thread safety

The supervisor package needs godoc comments noting which methods are safe for concurrent use and which are not. Currently the user has to read the code to obtain this information, which is tedious and error-prone.

daemon crash

I have been spawning 10k containers in a bash loop (the command was just top); at around 2.5k I got:

2015/12/18 10:39:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport: write unix /run/containerd/containerd.sock->@: write: bad file descriptor"

containerd crashed, and the container list is empty after the crash; I don't know whether it's related to the code or to my system.

supervisor: race in (*worker).Start

  1. Start containerd
  2. go run hack/benchmark.go

containerd crashes with the following:

fatal error: concurrent map read and map write

goroutine 25 [running]:
runtime.throw(0xb41cc0, 0x21)
    /home/jbd/go/src/runtime/panic.go:530 +0x90 fp=0xc82045fda0 sp=0xc82045fd88
runtime.mapaccess1_faststr(0x8d52c0, 0xc820216030, 0xc8205d8158, 0x3, 0x0)
    /home/jbd/go/src/runtime/hashmap_fast.go:202 +0x5b fp=0xc82045fe00 sp=0xc82045fda0
github.com/docker/containerd/supervisor.(*worker).Start(0xc820127a70)
    /home/jbd/src/github.com/docker/containerd/supervisor/worker.go:50 +0x345 fp=0xc82045ff98 sp=0xc82045fe00
runtime.goexit()
    /home/jbd/go/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc82045ffa0 sp=0xc82045ff98
created by main.daemon
    /home/jbd/src/github.com/docker/containerd/containerd/main.go:193 +0x237

Allow configuring containerd timeout when waiting for a container to start

The OCI spec allows start hooks to be specified. Since those are arbitrary binaries, they could take a significant amount of time, rendering the current timeout used to wait for the pid file to appear (i.e. 15) too short.

I'd propose adding an option to allow a user to set this timeout either daemon-wide (i.e. for all containers) or per container. The second option would require a change in the gRPC protocol though.

Update repo description

"Standalone Container Daemon" is a bit vague...

I rather like the description in the the readme: "A daemon to control runC". It's concrete and instantly makes you understand what it is.

containerd modes

If we run multi-tenant, we need to add daemon states that handle different modes, such as stopping all new container start events while we migrate containers to another containerd instance after an upgrade.

Compilation errors on linux 64 bit

I am facing compilation errors with the latest code. Am I missing something?

$ make      
mkdir -p bin/
cd ctr && go build -o ../bin/ctr
cd containerd && go build -tags "libcontainer" -o ../bin/containerd
# github.com/docker/containerd/linux
../linux/linux.go:629: cannot convert r.Memory.Limit (type *uint64) to type int64
../linux/linux.go:630: cannot convert r.Memory.Reservation (type *uint64) to type int64
../linux/linux.go:631: cannot convert r.Memory.Swap (type *uint64) to type int64
../linux/linux.go:632: cannot convert r.Memory.Kernel (type *uint64) to type int64
../linux/linux.go:633: cannot convert r.Memory.Swappiness (type *uint64) to type int64
../linux/linux.go:634: cannot convert r.CPU.Shares (type *uint64) to type int64
../linux/linux.go:635: cannot convert r.CPU.Quota (type *uint64) to type int64
../linux/linux.go:636: cannot convert r.CPU.Period (type *uint64) to type int64
../linux/linux.go:637: cannot convert r.CPU.RealtimeRuntime (type *uint64) to type int64
../linux/linux.go:638: cannot convert r.CPU.RealtimePeriod (type *uint64) to type int64
../linux/linux.go:638: too many errors
make: *** [daemon] Error 2

Go and kernel version

$ go version
go version go1.5.1 linux/amd64
$ uname -r
3.19.0-42-generic

containerd: panic on container start

Env: linux/arm, Raspbian

The bundle to reproduce the panic is at https://github.com/rakyll/ocibundles/tree/master/armv7/blink

Start the bundle.

ctr containers start blink /containers/armv7/blink

containerd panics with the following:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x24c50c]

goroutine 35 [running, locked to thread]:
github.com/docker/containerd/linux.(*libcontainerRuntime).createCgroupConfig(0x109e6cb0, 0x109e3920, 0x5, 0x10b060b0, 0x10a056e0, 0x7, 0x8, 0x1, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/linux/linux.go:691 +0x61c
github.com/docker/containerd/linux.(*libcontainerRuntime).createLibcontainerConfig(0x109e6cb0, 0x109e3920, 0x5, 0x10a05180, 0x1f, 0x109f2b00, 0x10b060b0, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/linux/linux.go:585 +0xcec
github.com/docker/containerd/linux.(*libcontainerRuntime).Create(0x109e6cb0, 0x109e3920, 0x5, 0x10a05180, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x109e9470, ...)
    /home/pi/src/github.com/docker/containerd/linux/linux.go:396 +0x210
github.com/docker/containerd/supervisor.(*StartEvent).Handle(0x109e6cd8, 0x109f2a80, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/supervisor/create.go:11 +0xc0
github.com/docker/containerd/supervisor.(*commonEvent).Handle(0x109de720)
    /home/pi/src/github.com/docker/containerd/supervisor/event.go:78 +0xa8
github.com/docker/containerd/eventloop.(*ChanLoop).Start.func1()
    /home/pi/src/github.com/docker/containerd/eventloop/eventloop.go:41 +0xd8
sync.(*Once).Do(0x10a41774, 0x109e6da0)
    /home/pi/go/src/sync/once.go:44 +0x118
created by github.com/docker/containerd/eventloop.(*ChanLoop).Start
    /home/pi/src/github.com/docker/containerd/eventloop/eventloop.go:43 +0x8c

goroutine 1 [IO wait]:
net.runtime_pollWait(0x75b89018, 0x72, 0x109140b0)
    /home/pi/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x109e94b8, 0x72, 0x0, 0x0)
    /home/pi/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x109e94b8, 0x0, 0x0)
    /home/pi/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x109e9480, 0x0, 0x75b89110, 0x10915e70)
    /home/pi/go/src/net/fd_unix.go:408 +0x21c
net.(*UnixListener).AcceptUnix(0x10a417c0, 0x109519e8, 0x0, 0x0)
    /home/pi/go/src/net/unixsock_posix.go:304 +0x4c
net.(*UnixListener).Accept(0x10a417c0, 0x0, 0x0, 0x0, 0x0)
    /home/pi/go/src/net/unixsock_posix.go:314 +0x3c
google.golang.org/grpc.(*Server).Serve(0x109eba10, 0x75b88040, 0x10a417c0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:224 +0x18c
main.daemon(0x10a41320, 0xb, 0x10a445c0, 0x1f, 0x10a41330, 0xf, 0xa, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/containerd/main.go:211 +0x640
main.main.func2(0x10a22a20)
    /home/pi/src/github.com/docker/containerd/containerd/main.go:90 +0x118
github.com/codegangsta/cli.(*App).Run(0x10a2c140, 0x1090a0f0, 0x1, 0x1, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/github.com/codegangsta/cli/app.go:181 +0xe00
main.main()
    /home/pi/src/github.com/docker/containerd/containerd/main.go:100 +0xec

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /home/pi/go/src/runtime/asm_arm.s:1036 +0x4

goroutine 7 [chan receive]:
github.com/docker/containerd/api/grpc/server.(*apiServer).Events(0x109e6dc0, 0x77e3ac, 0x763cb4b8, 0x1090a7c8, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/server/server.go:227 +0x98
github.com/docker/containerd/api/grpc/types._API_Events_Handler(0x4e8d40, 0x109e6dc0, 0x763cb428, 0x10956c90, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/types/api.pb.go:1001 +0x148
google.golang.org/grpc.(*Server).processStreamingRPC(0x109eba10, 0x75b47110, 0x109f0780, 0x1090c300, 0x10a41800, 0x751e58, 0x109fcea0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:421 +0x2f4
google.golang.org/grpc.(*Server).handleStream(0x109eba10, 0x75b47110, 0x109f0780, 0x1090c300, 0x109fcea0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:493 +0xdb4
google.golang.org/grpc.(*Server).Serve.func2.1.1(0x109eba10, 0x75b47110, 0x109f0780, 0x1090c300, 0x109fcea0, 0x10acb6c0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:278 +0x3c
created by google.golang.org/grpc.(*Server).Serve.func2.1
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:280 +0x4b4

goroutine 50 [select, locked to thread]:
runtime.gopark(0x5e3a40, 0x1091a78c, 0x50b5c8, 0x6, 0x45c18, 0x2)
    /home/pi/go/src/runtime/proc.go:185 +0x148
runtime.selectgoImpl(0x1091a78c, 0x0, 0xc)
    /home/pi/go/src/runtime/select.go:392 +0x14d4
runtime.selectgo(0x1091a78c)
    /home/pi/go/src/runtime/select.go:212 +0x10
runtime.ensureSigM.func1()
    /home/pi/go/src/runtime/signal1_unix.go:227 +0x428
runtime.goexit()
    /home/pi/go/src/runtime/asm_arm.s:1036 +0x4

goroutine 21 [syscall]:
os/signal.loop()
    /home/pi/go/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.1
    /home/pi/go/src/os/signal/signal_unix.go:28 +0x30

goroutine 22 [chan receive]:
github.com/rcrowley/go-metrics.(*meterArbiter).tick(0x7699d0)
    /home/pi/src/github.com/docker/containerd/vendor/src/github.com/rcrowley/go-metrics/meter.go:221 +0x48
created by github.com/rcrowley/go-metrics.NewMeter
    /home/pi/src/github.com/docker/containerd/vendor/src/github.com/rcrowley/go-metrics/meter.go:40 +0x1ac

goroutine 23 [chan receive]:
github.com/docker/containerd/supervisor.(*statsCollector).run(0x109eb980)
    /home/pi/src/github.com/docker/containerd/supervisor/stats_collector.go:167 +0x84
created by github.com/docker/containerd/supervisor.newStatsCollector
    /home/pi/src/github.com/docker/containerd/supervisor/stats_collector.go:105 +0x160

goroutine 24 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d30)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 25 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d38)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 26 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d40)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 27 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d48)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 28 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d50)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 29 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d58)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 30 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d60)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 31 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d68)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 32 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d70)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 33 [chan receive]:
github.com/docker/containerd/supervisor.(*worker).Start(0x109e6d78)
    /home/pi/src/github.com/docker/containerd/supervisor/worker.go:40 +0x8c
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:185 +0x1f8

goroutine 34 [chan receive]:
main.startSignalHandler(0x109ee780)
    /home/pi/src/github.com/docker/containerd/containerd/reap_linux.go:24 +0x1ec
created by main.daemon
    /home/pi/src/github.com/docker/containerd/containerd/main.go:197 +0x3e4

goroutine 52 [semacquire]:
sync.runtime_Semacquire(0x109e28ec)
    /home/pi/go/src/runtime/sema.go:43 +0x24
sync.(*WaitGroup).Wait(0x109e28e0)
    /home/pi/go/src/sync/waitgroup.go:126 +0xc0
google.golang.org/grpc.(*Server).Serve.func2(0x75b47110, 0x10a020a0, 0x109eba10)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:282 +0xc4
created by google.golang.org/grpc.(*Server).Serve
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:286 +0xa2c

goroutine 57 [chan receive]:
github.com/docker/containerd/api/grpc/server.(*apiServer).Events(0x109e6dc0, 0x77e3ac, 0x763cb4b8, 0x109de6e0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/server/server.go:227 +0x98
github.com/docker/containerd/api/grpc/types._API_Events_Handler(0x4e8d40, 0x109e6dc0, 0x763cb428, 0x109e4f60, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/types/api.pb.go:1001 +0x148
google.golang.org/grpc.(*Server).processStreamingRPC(0x109eba10, 0x75b47110, 0x10912460, 0x109f2880, 0x10a41800, 0x751e58, 0x10a04ee0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:421 +0x2f4
google.golang.org/grpc.(*Server).handleStream(0x109eba10, 0x75b47110, 0x10912460, 0x109f2880, 0x10a04ee0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:493 +0xdb4
google.golang.org/grpc.(*Server).Serve.func2.1.1(0x109eba10, 0x75b47110, 0x10912460, 0x109f2880, 0x10a04ee0, 0x10915e80)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:278 +0x3c
created by google.golang.org/grpc.(*Server).Serve.func2.1
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:280 +0x4b4

goroutine 37 [chan receive]:
github.com/docker/containerd/api/grpc/server.(*apiServer).Events(0x109e6dc0, 0x77e3ac, 0x763cb4b8, 0x1090a790, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/server/server.go:227 +0x98
github.com/docker/containerd/api/grpc/types._API_Events_Handler(0x4e8d40, 0x109e6dc0, 0x763cb428, 0x10956ba0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/types/api.pb.go:1001 +0x148
google.golang.org/grpc.(*Server).processStreamingRPC(0x109eba10, 0x75b47110, 0x10a020a0, 0x109eea80, 0x10a41800, 0x751e58, 0x10a44fe0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:421 +0x2f4
google.golang.org/grpc.(*Server).handleStream(0x109eba10, 0x75b47110, 0x10a020a0, 0x109eea80, 0x10a44fe0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:493 +0xdb4
google.golang.org/grpc.(*Server).Serve.func2.1.1(0x109eba10, 0x75b47110, 0x10a020a0, 0x109eea80, 0x10a44fe0, 0x109e28e0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:278 +0x3c
created by google.golang.org/grpc.(*Server).Serve.func2.1
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:280 +0x4b4

goroutine 9 [select]:
google.golang.org/grpc/transport.(*http2Server).controller(0x10912460)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/transport/http2_server.go:613 +0x578
created by google.golang.org/grpc/transport.newHTTP2Server
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/transport/http2_server.go:133 +0x6f4

goroutine 40 [semacquire]:
sync.runtime_Semacquire(0x10acb6cc)
    /home/pi/go/src/runtime/sema.go:43 +0x24
sync.(*WaitGroup).Wait(0x10acb6c0)
    /home/pi/go/src/sync/waitgroup.go:126 +0xc0
google.golang.org/grpc.(*Server).Serve.func2(0x75b47110, 0x109f0780, 0x109eba10)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:282 +0xc4
created by google.golang.org/grpc.(*Server).Serve
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:286 +0xa2c

goroutine 10 [IO wait]:
net.runtime_pollWait(0x75b88fa0, 0x72, 0x109140b0)
    /home/pi/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x109549b8, 0x72, 0x0, 0x0)
    /home/pi/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x109549b8, 0x0, 0x0)
    /home/pi/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10954980, 0x109c6860, 0x9, 0x9, 0x0, 0x763c7030, 0x109140b0)
    /home/pi/go/src/net/fd_unix.go:232 +0x1c4
net.(*conn).Read(0x1090a890, 0x109c6860, 0x9, 0x9, 0x0, 0x0, 0x0)
    /home/pi/go/src/net/net.go:172 +0xc8
io.ReadAtLeast(0x75b470c8, 0x1090a890, 0x109c6860, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
    /home/pi/go/src/io/io.go:298 +0xdc
io.ReadFull(0x75b470c8, 0x1090a890, 0x109c6860, 0x9, 0x9, 0x75b89268, 0x0, 0x0)
    /home/pi/go/src/io/io.go:316 +0x5c
golang.org/x/net/http2.readFrameHeader(0x109c6860, 0x9, 0x9, 0x75b470c8, 0x1090a890, 0x0, 0x0, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/golang.org/x/net/http2/frame.go:227 +0x80
golang.org/x/net/http2.(*Framer).ReadFrame(0x109c6840, 0x0, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/golang.org/x/net/http2/frame.go:395 +0xbc
google.golang.org/grpc/transport.(*framer).readFrame(0x109fd4c0, 0x0, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/transport/http_util.go:450 +0x38
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0x10912460, 0x109fd540)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/transport/http2_server.go:242 +0x4e4
google.golang.org/grpc.(*Server).Serve.func2(0x75b47110, 0x10912460, 0x109eba10)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:281 +0xb8
created by google.golang.org/grpc.(*Server).Serve
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:286 +0xa2c

goroutine 54 [semacquire]:
sync.runtime_Semacquire(0x109e2a4c)
    /home/pi/go/src/runtime/sema.go:43 +0x24
sync.(*WaitGroup).Wait(0x109e2a40)
    /home/pi/go/src/sync/waitgroup.go:126 +0xc0
google.golang.org/grpc.(*Server).Serve.func2(0x75b47110, 0x10a02410, 0x109eba10)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:282 +0xc4
created by google.golang.org/grpc.(*Server).Serve
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:286 +0xa2c

goroutine 55 [chan receive]:
github.com/docker/containerd/api/grpc/server.(*apiServer).Events(0x109e6dc0, 0x77e3ac, 0x763cb4b8, 0x1090a888, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/server/server.go:227 +0x98
github.com/docker/containerd/api/grpc/types._API_Events_Handler(0x4e8d40, 0x109e6dc0, 0x763cb428, 0x10957c50, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/types/api.pb.go:1001 +0x148
google.golang.org/grpc.(*Server).processStreamingRPC(0x109eba10, 0x75b47110, 0x10a02410, 0x109f2480, 0x10a41800, 0x751e58, 0x10a04800, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:421 +0x2f4
google.golang.org/grpc.(*Server).handleStream(0x109eba10, 0x75b47110, 0x10a02410, 0x109f2480, 0x10a04800)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:493 +0xdb4
google.golang.org/grpc.(*Server).Serve.func2.1.1(0x109eba10, 0x75b47110, 0x10a02410, 0x109f2480, 0x10a04800, 0x109e2a40)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:278 +0x3c
created by google.golang.org/grpc.(*Server).Serve.func2.1
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:280 +0x4b4

goroutine 58 [chan receive]:
github.com/docker/containerd/api/grpc/server.(*apiServer).CreateContainer(0x109e6dc0, 0x75b891d8, 0x10a050c0, 0x109ecd80, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/server/server.go:47 +0x344
github.com/docker/containerd/api/grpc/types._API_CreateContainer_Handler(0x4e8d40, 0x109e6dc0, 0x75b891d8, 0x10a050c0, 0x10a05160, 0x0, 0x0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/api/grpc/types/api.pb.go:905 +0xe8
google.golang.org/grpc.(*Server).processUnaryRPC(0x109eba10, 0x75b47110, 0x10912460, 0x109f2980, 0x10a41800, 0x7527f0, 0x10a050a0, 0x0, 0x0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:350 +0x6d0
google.golang.org/grpc.(*Server).handleStream(0x109eba10, 0x75b47110, 0x10912460, 0x109f2980, 0x10a050a0)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:489 +0xd34
google.golang.org/grpc.(*Server).Serve.func2.1.1(0x109eba10, 0x75b47110, 0x10912460, 0x109f2980, 0x10a050a0, 0x10915e80)
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:278 +0x3c
created by google.golang.org/grpc.(*Server).Serve.func2.1
    /home/pi/src/github.com/docker/containerd/vendor/src/google.golang.org/grpc/server.go:280 +0x4b4

ctr: wrong relative path

$ pwd
/home/pi/ocibundles/armv7/blink

$ sudo ctr containers start . blink
[ctr] rpc error: code = 2 desc = "JSON specification file at /home/pi/ocibundles/armv7/blink/blink/config.json not found"

What is expected is that the bundle is read from /home/pi/ocibundles/armv7/blink/config.json, not from /home/pi/ocibundles/armv7/blink/blink/config.json.

Related to docker-archive@facfce3.

Non-streaming API for stats

With the current streaming API it is hard for a client to produce a stream of stats for many containers, because containerd produces a separate stats stream per container and the client would then need to synchronize these streams.

A simpler API would be:

rpc GetStats(StatsRequest) returns (StatsResponse) {}
message StatsRequest {
    repeated string id = 1;
}
