fussybeaver / bollard
Docker daemon API in Rust
License: Apache License 2.0
Hey all, not sure if this is the right place to ask, but the maintainers of this crate seem familiar with the Docker engine API, so I figured I'd ask :)
(If there is somewhere else I should go to find the answer to this question, I'm happy to RTFM, but I figured y'all would be impacted by similar questions.)
I find myself frequently supplying arguments to the Docker CLI, e.g. docker run --gpus all, and wondering "what arguments to bollard would trigger the equivalent operation?" I'm aware that this is essentially a "CLI vs. Docker engine API" question, but other than reverse-engineering the Docker CLI source, is there a quick way y'all have found to translate from one to the other?
I'd be happy to add some documentation about this for debugging purposes, once I have a path forward.
As an example, uncovering that docker run --rm is equivalent to create_container with HostConfig.auto_remove = true followed by start_container was actually non-trivial, but having a quick way to see how one maps to the other is really useful for navigating the bollard API.
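For the record, that particular mapping can be sketched like this against a recent bollard async API (an untested sketch; exact option-struct fields vary across the bollard versions discussed in these issues, and the image and container names are placeholders):

```rust
use bollard::container::{Config, CreateContainerOptions, StartContainerOptions};
use bollard::models::HostConfig;
use bollard::Docker;

// Sketch of `docker run --rm <image>`: create the container with
// HostConfig.auto_remove = true, then start it.
async fn run_rm(docker: &Docker) -> Result<(), bollard::errors::Error> {
    let config = Config {
        image: Some("busybox"), // placeholder image
        host_config: Some(HostConfig {
            auto_remove: Some(true), // the --rm part
            ..Default::default()
        }),
        ..Default::default()
    };

    docker
        .create_container(
            Some(CreateContainerOptions { name: "demo", ..Default::default() }),
            config,
        )
        .await?;
    docker
        .start_container("demo", None::<StartContainerOptions<String>>)
        .await?;
    Ok(())
}
```

A table of such CLI-flag-to-field mappings in the docs would make this kind of translation much less painful.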
Pretty much the title: I don't see why it needs to be a stream instead of a standard future.
Bollard seems to be serializing the label filter in ListImagesOptions differently from what the Docker API requires.
let mut filters = HashMap::new();
filters.insert("label".to_string(), "maintainer=some_maintainer".to_string());

docker // type: bollard::Docker
    .list_images(Some(ListImagesOptions::<String> {
        filters,
        ..Default::default()
    }))

produces:
Parsing uri: unix://---/images/json?all=false&filters=%7B%22label%22%3A%22maintainer%3Dsome_maintainer%22%7D&digests=false, client_type: Unix, socket: /var/run/docker.sock
thread 'test_run' panicked at 'called `Result::unwrap()` on an `Err` value: DockerResponseServerError { status_code: 500, message: "{\"message\":\"json: cannot unmarshal string into Go value of type map[string]bool\"}\n" }', libcore/result.rs:1009:5
After URL-decoding, the filters query is:
{"label":"maintainer=some_maintainer"}
However, the query is expected to be the following:
{"label":["maintainer=some_maintainer"]}
Trying with fixed query:
# Before fixing query
❯ curl --unix-socket /var/run/docker.sock "http:/images/json?all=false&filters=%7B%22label%22%3A%22maintainer%3Dsome_maintainer%22%7D&digests=false"
{"message":"json: cannot unmarshal string into Go value of type map[string]bool"}
# After fixing query
❯ curl --unix-socket /var/run/docker.sock "http:/images/json?all=false&filters=%7B%22label%22%3A%5B%22maintainer%3Dsome_maintainer%22%5D%7D&digests=false"
[]
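The client-side shape the daemon accepts gives each filter key a list of values; I believe newer bollard releases type the filters field as HashMap<T, Vec<T>> for exactly this reason (worth double-checking against the version you use). A minimal stdlib-only sketch of the shape:

```rust
use std::collections::HashMap;

// The daemon unmarshals `filters` into map[string][]string, so every
// key must map to a *list* of values: {"label":["maintainer=..."]}.
fn label_filters() -> HashMap<String, Vec<String>> {
    let mut filters = HashMap::new();
    filters.insert(
        "label".to_string(),
        vec!["maintainer=some_maintainer".to_string()],
    );
    filters
}

fn main() {
    let filters = label_filters();
    // One key, holding a single-element list rather than a bare string.
    assert_eq!(filters["label"], vec!["maintainer=some_maintainer"]);
    println!("{:?}", filters);
}
```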
In version v0.5, Config<T> derived serde Deserialize; after upgrading to v0.6, Config<T> only derives Serialize.
Other structs like NetworkingConfig, HealthConfig, etc. all seem to derive Deserialize.
Would it be possible to add Deserialize back for Config<T>?
It's a breaking change after all. (At least it broke my code 😄)
I am trying to create a container with some exposed ports.
When running the program I get this error: Error("key must be a string", line: 0, column: 0)
This is associated with exposing the ports in the Config that is passed to create_container:
let mut exposed_ports = HashMap::new();
let mut empty = HashMap::new();
empty.insert((), ());
exposed_ports.insert("3000/tcp", empty);

let config = Config {
    image: Some("preview-my-app"),
    exposed_ports: Some(exposed_ports), // If I remove this line, it works fine
    ..Default::default()
};
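I suspect (this is my reading, not a confirmed diagnosis) the error comes from the inner map's () key: JSON object keys must be strings, and the daemon expects the inner object to be empty anyway ({"3000/tcp": {}}). A stdlib-only sketch of the shape that should serialize cleanly:

```rust
use std::collections::HashMap;

// ExposedPorts on the wire is {"<port>/<proto>": {}} — the inner object
// is empty. Leaving the inner HashMap<(), ()> empty means serde never
// has to serialize a unit key, avoiding "key must be a string".
fn exposed_ports() -> HashMap<&'static str, HashMap<(), ()>> {
    let mut exposed_ports = HashMap::new();
    exposed_ports.insert("3000/tcp", HashMap::new());
    exposed_ports
}

fn main() {
    let ports = exposed_ports();
    assert!(ports.contains_key("3000/tcp"));
    assert!(ports["3000/tcp"].is_empty());
}
```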
stable-x86_64-unknown-linux-gnu (default)
rustc 1.36.0 (a53f9df32 2019-07-03)
My main.rs i test with:
extern crate bollard;
extern crate failure;
extern crate futures;
extern crate pretty_env_logger;
extern crate serde;
extern crate tokio;
use bollard::container::{
Config, CreateContainerOptions, LogOutput, LogsOptions, StartContainerOptions,
};
use bollard::{Docker, DockerChain};
use failure::Error;
use futures::{Future, Stream};
use serde::ser::Serialize;
use std::cmp::Eq;
use std::collections::HashMap;
use std::hash::Hash;
use tokio::prelude::*;
use tokio::runtime::Runtime;
fn create_and_logs<T>(
    docker: DockerChain,
    name: &'static str,
    config: Config<T>,
) -> impl Stream<Item = LogOutput, Error = Error>
where
    T: AsRef<str> + Eq + Hash + Serialize,
{
    docker
        .create_container(Some(CreateContainerOptions { name: name }), config)
        .and_then(move |(docker, _)| {
            docker.start_container(name, None::<StartContainerOptions<String>>)
        })
        .and_then(move |(docker, _)| {
            docker.logs(
                name,
                Some(LogsOptions {
                    follow: true,
                    stdout: true,
                    stderr: false,
                    ..Default::default()
                }),
            )
        })
        .map(|(_, stream)| stream)
        .into_stream()
        .flatten()
}
fn main() {
    pretty_env_logger::init();
    let mut rt = Runtime::new().unwrap();
    let docker = Docker::connect_with_unix_defaults().unwrap();

    let mut exposed_ports = HashMap::new();
    let mut empty = HashMap::new();
    empty.insert((), ());
    exposed_ports.insert("3000/tcp", empty);

    let config = Config {
        image: Some("preview-my-app"),
        exposed_ports: Some(exposed_ports),
        ..Default::default()
    };

    let stream = create_and_logs(docker.chain(), "preview-my-app-container", config);
    let future = stream
        .map_err(|e| eprintln!("{:?}", e))
        .for_each(|x| Ok(println!("{:?}", x)));

    rt.spawn(future);
    rt.shutdown_on_idle().wait().unwrap();
}
Complete output of RUST_LOG=debug cargo run
Running `target/debug/rbot-discord`
Error("key must be a string", line: 0, column: 0)
I tried to find a solution by myself for hours without any luck; I hope you can help me.
The official documentation for the Docker engine SDK point-blank states that, when creating a container, the ExposedPorts field should be:
An object mapping ports to an empty object in the form
{"<port>/<tcp|udp|sctp>": {}}
However, this seems to be a bold-faced lie, since after inspecting what the Docker CLI does on docker run with a -p 0.0.0.0:5050:5050 flag and looking at the HTTP requests made using strace (thanks to @smklein for the idea), you can clearly see that that's not what they're passing (JSON prettified):
POST /v1.40/containers/create HTTP/1.1
Host: docker
User-Agent: Docker-Client/19.03.13 (linux)
Content-Length: 1607
Content-Type: application/json

{
  "Hostname": "",
  [...]
  "HostConfig": {
    [...]
    "PortBindings": {
      "5050/tcp": [
        {
          "HostIp": "0.0.0.0",
          "HostPort": "5050"
        }
      ]
    }
    [...]
  }
  [...]
}
It seems these are undocumented optional parameters that are used by official tools. Should these be added in? I'd be more than glad to help with this and implement it.
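To mirror what the CLI sends, HostConfig's PortBindings maps a container port to a list of host bindings. A stdlib-only sketch of the shape, using a local stand-in struct for illustration (the real PortBinding type lives in the bollard crate and its field names may differ):

```rust
use std::collections::HashMap;

// Local stand-in for a port-binding struct, for illustration only.
#[derive(Debug, PartialEq)]
struct PortBinding {
    host_ip: String,
    host_port: String,
}

// -p 0.0.0.0:5050:5050 becomes, inside HostConfig:
// "PortBindings": {"5050/tcp": [{"HostIp": "0.0.0.0", "HostPort": "5050"}]}
fn port_bindings() -> HashMap<String, Vec<PortBinding>> {
    let mut bindings = HashMap::new();
    bindings.insert(
        "5050/tcp".to_string(),
        vec![PortBinding {
            host_ip: "0.0.0.0".to_string(),
            host_port: "5050".to_string(),
        }],
    );
    bindings
}

fn main() {
    let bindings = port_bindings();
    assert_eq!(bindings["5050/tcp"][0].host_port, "5050");
    assert_eq!(bindings["5050/tcp"][0].host_ip, "0.0.0.0");
}
```

Note that ExposedPorts (the empty-object map) and PortBindings (the list of host bindings) are two different fields serving two different purposes, which is likely the source of the confusion.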
👋 I maintain a project called Docuum, which has been using Bollard since Docuum 1.12. The program listens for Docker events using Bollard's Docker::events.
During some testing tonight, I noticed that Docuum no longer seems to actually receive any events from Docker (I can see events using the docker events command, but Bollard doesn't seem to pick them up), at least on my machine. From bisecting I have learned that Docuum 1.12+ does not receive Docker events, but previous versions work fine. That's the version when we switched to Bollard.
The weird thing is: I remember this working. I bet @mdonoughe does too. I'm having a hard time believing Docuum hasn't been working since August (when we released 1.12). This makes me think a recent Docker upgrade may have caused the issue, and not a change in Bollard or Docuum.
Does Bollard have any automated integration testing for Docker events? If so, can we run them now with the latest version of Docker and see if they pass?
Bollard version: 0.8.0
Docker version: 19.03.13 (2.4.0.0), build 4484c46d9d
macOS 10.15.6 (19G2021)
The failure::Error type is not compatible with std::error::Error.
When receiving output from Docker::build_image
, I noticed that deserialisation of responses occasionally fails. It appears that bollard does not put these responses together properly, instead trying to deserialise them when they are incomplete.
The code below produces the output at the very bottom, and is how I had to work around the issue:
let mut stream = docker.build_image(options, None, Some(tarball.into()));
while let Some(update) = stream.next().await {
    let update = match update {
        Ok(u) => Ok(u),
        Err(e) => {
            use bollard::errors::ErrorKind;
            match e.kind() {
                ErrorKind::JsonDeserializeError { .. } => {
                    warn!(
                        "Failed to deserialize Docker response: {:?}. Trying to keep going...",
                        e.kind()
                    );
                    continue;
                }
                ErrorKind::JsonDataError { .. } => {
                    warn!(
                        "Failed to deserialize Docker response: {} Trying to keep going...",
                        e.kind(),
                    );
                    continue;
                }
                _ => {
                    error!("Other Docker error: {}", e);
                    Err(e)
                }
            }
        }
    };

    // a little later...
    debug!("Importing {}: {:?}", info, update);
    // --snip--
}
[2020-04-06T15:57:23Z DEBUG laps::web::admin] Importing laps-test:0.1.0: BuildImageStatus { status: "Extracting", progress_detail: Some(BuildImageProgressDetail { current: Some(32768), total: Some(2749033) }), progress: Some("[> ] 32.77kB/2.749MB"), id: Some("62b0f1bf7919") }
[2020-04-06T15:57:23Z WARN laps::web::admin] Failed to deserialize Docker response: JsonDeserializeError { content: "{\"status\":\"Extracting\",\"pr", err: Error("EOF while parsing a string", line: 1, column: 26) }. Trying to keep going...
[2020-04-06T15:57:23Z WARN laps::web::admin] Failed to deserialize Docker response: JsonDeserializeError { content: "gressDetail\":{\"current\":2749033,\"total\":2749033},\"progress\":\"[==================================================\\u003e] 2.749MB/2.749MB\",\"id\":\"62b0f1bf7919\"}\r", err: Error("expected value", line: 1, column: 1) }. Trying to keep going...
[2020-04-06T15:57:23Z DEBUG laps::web::admin] Importing laps-test:0.1.0: BuildImageStatus { status: "Extracting", progress_detail: Some(BuildImageProgressDetail { current: Some(2749033), total: Some(2749033) }), progress: Some("[==================================================>] 2.749MB/2.749MB"), id: Some("62b0f1bf7919") }
Hello! Looks like a cool library!
Is there any work being put into supporting the newer api version? Is there a list of work that needs to be done in order to do so?
Any idea why this might happen?
API queried with a bad parameter: {"message":"file with no instructions."}
pub fn save(&mut self, container: &str, repo: &str) -> Result<(), Error> {
    self.validate_container_exists(container)?;

    let options = CommitContainerOptions {
        container: container.to_owned(),
        author: "tensorman".into(),
        comment: "automated image creation by tensorman".into(),
        pause: true,
        repo: ["tensorman/", repo].concat(),
        ..Default::default()
    };

    let config = ContainerConfig::<String> { ..Default::default() };
    let future = self.docker.commit_container(options, config);

    self.tokio.block_on(future).map_err(|failure| failure.compat()).map_err(|source| {
        Error::Commit { container: container.into(), repo: repo.into(), source }
    })?;

    Ok(())
}
First of all, thanks for patching the fixes triggered by Cargo audit, and generally maintaining this crate. It has been super useful for me, and I appreciate all the work you've done in keeping it up-to-date!
When fixes for "cargo audit" are patched into bollard without a corresponding minor release, projects which depend on bollard are still triggering audit issues. As an example, a "needs-bollard" binary that uses version 0.8.0 (released Aug 23, 2020) will still trigger RUSTSEC-2020-0036 and RUSTSEC-2020-0053, so although "cargo audit" would succeed for "bollard", it would fail for "needs-bollard".
This is admittedly a low-priority issue - our workaround is to pass the "--ignore=ISSUE" flag to audit, and wait for a new release of Bollard, but just figured I'd mention it - if a new minor release of bollard came out, this would help us remove these workarounds.
Identifiers beginning with underscores conventionally signal values that are not used. Bollard uses _type, presumably to avoid clashing with the reserved keyword type. This triggers the Clippy lint used_underscore_binding in consuming code.
Unfortunately, changing the name of this field is obviously a breaking change.
If I locally build an image named "myimage" and try to use the API, it results in the following:
DockerResponseNotFoundError {
    message: "{\"message\":\"pull access denied for myimage, repository does not exist or may require \'docker login\': denied: requested access to the resource is denied\"}\n"
}
I'm using the CreateImageOptions { from_image: ... } field.
I expect the result to be the same as if I call create_image on a pulled image. Am I missing something?
Is there currently anything planned regarding the docker service api?
If not, would a PR for that be welcome?
I'd be willing to implement it in the next few weeks.
Awesome project btw.
Does this library support the Client.ContainerAttach method to read stdout and write to stdin?
It seems that Bollard does not support the ImageBuild API yet.
Is there any plan for this feature?
In some cases, the HealthcheckResults returned from a Docker inspection call may include negative ExitCode values, which currently don't deserialize to the u16 type defined in container.rs.
An example case was found when launching postgres:alpine in Docker (API version 1.40) with a healthcheck command of CMD /bin/su - postgres -c /usr/local/bin/pg_isready. Until the result of this command is successful, the exit code received is -1, causing a panic:
Failed to deserialize JSON: invalid value: integer `-1`, expected u16 at ...
If the LogStateHealth struct is modified to use an i16 exit_code type, the inspection is successful:
/// Log of the health of a running container.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
#[allow(missing_docs)]
pub struct LogStateHealth {
    pub start: DateTime<Utc>,
    pub end: DateTime<Utc>,
    pub exit_code: i16,
    pub output: String,
}
According to the API spec, there is a limited set of known values and a catch-all "other values" in the ExitCode description:
ExitCode:
  description: |
    ExitCode meanings:

    - `0` healthy
    - `1` unhealthy
    - `2` reserved (considered unhealthy)
    - other values: error running probe
  type: "integer"
  example: 0
A -1 value has been found to be one of those other values, but I don't know whether an i16 type would restrict bollard from catching all possible values. In theory it may only need to cover the 8-bit range if following unix-like conventions. An i8 works for this issue, but an i16 would give a more tolerant range.
Cargo.toml specifies Apache License 2.0, but the repo itself is missing a LICENSE file for some reason.
https://help.github.com/en/articles/adding-a-license-to-a-repository
#[get("/docker/version")]
pub async fn docker_version() -> Result<Json<Option<String>>, String> {
    #[cfg(unix)]
    let docker =
        Docker::connect_with_unix_defaults().expect("Impossible to connect to Docker");

    let version: Option<String> = match docker.version().await {
        Ok(v) => v.version,
        Err(_err) => None,
    };

    Ok(Json(version))
}
When I call the route, I get:
thread 'rocket-worker-thread' panicked at 'there is no timer running, must be called from the context of a Tokio 0.2.x runtime', /home/williamdes/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.25/src/time/driver/handle.rs:24:32
In order to create a container you need to use the Config struct like so:
let create_container_config: Config<String> = Config {
    image: Some("some_image".to_string()),
    ..Default::default()
};

let res = docker
    .create_container(
        Some(CreateContainerOptions { name: "test" }),
        create_container_config,
    )
    .await
    .unwrap();
The issue I had was when I tried to use:
let create_container_config: Config<String> = Config {
    ..Default::default()
};
I tried to create a container with the default initialization, but the create_container function failed with the error:
thread 'main' panicked at 'called `Result::unwrap()` on an
`Err` value: DockerResponseBadParameterError {
    message: "{\"message\":\"Config cannot be empty in order to create a container\"}"
}'
So I assumed this to mean that the image name is required when configuring a container. Maybe I'm missing other ways to create a container with Config, but perhaps the image field should not be Option<T> and should instead be required.
From reading the documentation, it seems that if an image isn't present on the host, the library will pull it from Docker Hub, but this doesn't seem to be happening. Could I be missing something in the example below?
pub fn docker_run_slave(&mut self) {
    let mut rt = Runtime::new().unwrap();
    let docker = Docker::connect_with_local_defaults().unwrap();
    #[cfg(feature = "tls")]
    let docker = Docker::connect_with_tls_defaults().unwrap();

    let stream = docker
        .chain()
        .create_image(
            Some(CreateImageOptions {
                from_image: "confluentinc/cp-kafka:5.0.1",
                ..Default::default()
            }),
            None,
        )
        .and_then(move |(docker, _)| {
            docker.create_container(
                Some(CreateContainerOptions { name: "mc" }),
                Config {
                    image: Some("confluentinc/cp-kafka:5.0.1"),
                    host_config: Some(HostConfig {
                        ..Default::default()
                    }),
                    ..Default::default()
                },
            )
        })
        .and_then(move |(docker, _)| {
            docker.start_container("mc", None::<StartContainerOptions<String>>)
        })
        .into_stream();

    let future = stream
        .map_err(|e| println!("{:?}", e))
        .for_each(|x| Ok(println!("{:?}", x)));

    rt.spawn(future);
    rt.shutdown_on_idle().wait().unwrap();
}
Currently there doesn't seem to be any way to handle stdin when running docker exec.
Currently, functions like BollardDockerApi::connect_with_unix_defaults return Result<Docker<impl Connect>, Error>. This has some downsides from the API viewpoint: users of the API are unable to store the returned value in a struct, because Rust doesn't allow using impl Connect in struct fields yet. It would be better to just return a concrete type for the time being.
Hi,
I use an image from an Azure container registry, which I can pull locally via docker pull.
In my application using bollard, I get a DockerResponseServerError 500 though:
DockerResponseServerError {
    status_code: 500,
    message: "{\"message\":\"Head https://xxx.azurecr.io/v2/xxx/xxx/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.\"}\n",
})
Error: Kind(Other)
My ~/.docker/config.json:
{
  "auths": {
    "xxx.azurecr.io": {
      "auth": "xxx==",
      "identitytoken": "xxx"
    }
  }
}
My dev environment is Windows WSL2 with Docker for Windows.
Does bollard use this Docker configuration file, or am I supposed to parse it and provide it via AuthConfig?
The info route can be accessed by executing...
# On macOS, but should work on Linux too...
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/info
If I use the Docker API directly for /events, then besides the fields in SystemEventsResponse there are also: status, id, from
$ curl --unix-socket /var/run/docker.sock http:/v1.40/events
{"status":"create","id":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","from":"busybox","Type":"container","Action":"create","Actor":{"ID":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","Attributes":{"image":"busybox","name":"practical_vaughan"}},"scope":"local","time":1593772755,"timeNano":1593772755732072556}
{"status":"attach","id":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","from":"busybox","Type":"container","Action":"attach","Actor":{"ID":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","Attributes":{"image":"busybox","name":"practical_vaughan"}},"scope":"local","time":1593772755,"timeNano":1593772755733017549}
{"Type":"network","Action":"connect","Actor":{"ID":"8a16dc0362440fc8810815f6fa61aec18f639d9541ce59945cbdbbc50dcdd644","Attributes":{"container":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","name":"bridge","type":"bridge"}},"scope":"local","time":1593772755,"timeNano":1593772755791087643}
{"status":"start","id":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","from":"busybox","Type":"container","Action":"start","Actor":{"ID":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","Attributes":{"image":"busybox","name":"practical_vaughan"}},"scope":"local","time":1593772756,"timeNano":1593772756196716487}
{"status":"die","id":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","from":"busybox","Type":"container","Action":"die","Actor":{"ID":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","Attributes":{"exitCode":"0","image":"busybox","name":"practical_vaughan"}},"scope":"local","time":1593772756,"timeNano":1593772756238929210}
{"Type":"network","Action":"disconnect","Actor":{"ID":"8a16dc0362440fc8810815f6fa61aec18f639d9541ce59945cbdbbc50dcdd644","Attributes":{"container":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","name":"bridge","type":"bridge"}},"scope":"local","time":1593772756,"timeNano":1593772756295640842}
{"status":"destroy","id":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","from":"busybox","Type":"container","Action":"destroy","Actor":{"ID":"082f82a954ba8450a2f5c46345be9e20e19ba68793c1c67fbb3ff1f8cb1264be","Attributes":{"image":"busybox","name":"practical_vaughan"}},"scope":"local","time":1593772756,"timeNano":1593772756328079737}
^C
This is what I receive for: docker run -it --rm busybox echo 'hi'.
I see that the Docker docs list no such fields, and I understand that this is why they are not in bollard: https://docs.docker.com/engine/api/v1.40/#operation/SystemEvents
Any ideas how the values of these fields can be received?
To increase discoverability on GitHub I think it would be good to migrate the repository to a standalone repository so that it will pop up in GitHub searches. I almost didn't find it because I found the repository that it was forked from instead.
You have to contact GitHub support to have them migrate the repo: https://help.github.com/en/articles/why-are-my-contributions-not-showing-up-on-my-profile#commit-was-made-in-a-fork.
Lines 498 to 500 in a37730e
The example uses http://, when it should either be tcp://, or have no scheme.
Maybe change it to: https://docs.rs/bollard/0.10.1/bollard/
The documentation mentions this being unsupported, but it would be great to have a function for using the /images/load endpoint easily. Currently, my workaround is to create an image from a filesystem tarball instead, which has a number of drawbacks.
Hello! I'm the maintainer of boondock, a barely-maintained Docker client for Rust. I'm hoping to migrate away from it at some point, and bollard is on my shortlist.
There's one piece of potentially interesting code in boondock: a hyperlocal + rustls connection routine, which you can find here. It completely removes the dependency on OpenSSL, and it makes it easy to use the same code to talk to either unix:// sockets or https:// endpoints. It implements Docker-compatible certificate and key management.
If you'd be interested in switching to this at some point, I'd be happy to answer questions and maybe help with the integration.
The signature of the bollard::Docker::start_exec() function is misleading: container_name is actually the id field of a created Docker exec instance.
While using the Docker exec API, I want to use the start_exec function.
I noticed that the signature of this function is as follows:
Lines 165 to 167 in c720ee7
Lines 183 to 187 in c720ee7
I tried to pass a predefined container_name to this function, but I got a 404 error.
Fortunately, I have found the real usage of this function in your tests:
Lines 18 to 19 in c720ee7
Lines 38 to 39 in c720ee7
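Putting that together, the flow (an untested sketch against a recent bollard async API; field and type names may differ slightly between versions) is: create_exec is called on the container and returns an exec instance, and start_exec takes that instance's id, not the container name:

```rust
use bollard::exec::{CreateExecOptions, StartExecOptions};
use bollard::Docker;

// Sketch: create an exec instance on a container, then start it.
async fn run_exec(docker: &Docker, container_name: &str) -> Result<(), bollard::errors::Error> {
    let exec = docker
        .create_exec(
            container_name, // the container name/id goes here...
            CreateExecOptions {
                cmd: Some(vec!["echo", "hello"]),
                attach_stdout: Some(true),
                ..Default::default()
            },
        )
        .await?;

    // ...and the exec *instance* id it returned goes here.
    docker.start_exec(&exec.id, None::<StartExecOptions>).await?;
    Ok(())
}
```

Renaming the parameter from container_name to something like exec_id in the signature and docs would make this much harder to get wrong.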
Hi,
we've got problems with this change. We are running API 1.41.
While everything else works fine, we get errors streaming stats with the new variants.
The JSON result from the stream looks exactly like ContainerStats, which does not know about cgroups.
{
"read": "2021-05-02T05:10:29.5953243Z",
"preread": "2021-05-02T05:10:28.5614296Z",
"pids_stats": { "current": 18 },
"blkio_stats": {
"io_service_bytes_recursive": [],
"io_serviced_recursive": [],
"io_queue_recursive": [],
"io_service_time_recursive": [],
"io_wait_time_recursive": [],
"io_merged_recursive": [],
"io_time_recursive": [],
"sectors_recursive": []
},
"num_procs": 0,
"storage_stats": {},
"cpu_stats": {
"cpu_usage": {
"total_usage": 193674787300,
"percpu_usage": [
38839858700,
7732528900,
40562396900,
9015874200,
38276766600,
11291387200,
38560103400,
9395871400
],
"usage_in_kernelmode": 26410000000,
"usage_in_usermode": 78130000000
},
"system_cpu_usage": 4438290140000000,
"online_cpus": 8,
"throttling_data": {
"periods": 0,
"throttled_periods": 0,
"throttled_time": 0
}
},
"precpu_stats": {
"cpu_usage": {
"total_usage": 193673934900,
"percpu_usage": [
38839686400,
7732409700,
40562396900,
9015874200,
38276205700,
11291387200,
38560103400,
9395871400
],
"usage_in_kernelmode": 26410000000,
"usage_in_usermode": 78130000000
},
"system_cpu_usage": 4438281850000000,
"online_cpus": 8,
"throttling_data": {
"periods": 0,
"throttled_periods": 0,
"throttled_time": 0
}
},
"memory_stats": {
"usage": 196747264,
"max_usage": 199966720,
"stats": {
"active_anon": 89210880,
"active_file": 946176,
"cache": 4653056,
"dirty": 405504,
"hierarchical_memory_limit": 9223372036854771712,
"hierarchical_memsw_limit": 9223372036854771712,
"inactive_anon": 92180480,
"inactive_file": 3108864,
"mapped_file": 135168,
"pgfault": 366630,
"pgmajfault": 66,
"pgpgin": 205755,
"pgpgout": 185872,
"rss": 180195328,
"rss_huge": 100663296,
"total_active_anon": 89210880,
"total_active_file": 946176,
"total_cache": 4653056,
"total_dirty": 405504,
"total_inactive_anon": 92180480,
"total_inactive_file": 3108864,
"total_mapped_file": 135168,
"total_pgfault": 366630,
"total_pgmajfault": 66,
"total_pgpgin": 205755,
"total_pgpgout": 185872,
"total_rss": 180195328,
"total_rss_huge": 100663296,
"total_unevictable": 0,
"total_writeback": 0,
"unevictable": 0,
"writeback": 0
},
"limit": 13355659264
},
"name": "/home-assistant",
"id": "2a7054d569ebeb767d6d4eead95833a765c5fa3c443330fdb94366bd7c0cc913",
"networks": {
"eth0": {
"rx_bytes": 739081,
"rx_packets": 6900,
"rx_errors": 0,
"rx_dropped": 0,
"tx_bytes": 2196039,
"tx_packets": 21172,
"tx_errors": 0,
"tx_dropped": 0
}
}
}
Am I missing something? If you need further information or investigation, just give me a ping :)
Kind regards
Alexander
Originally posted by @aserowy in #143 (comment)
OS: Windows 10 Pro 2004
Docker: 19.03.12; Docker Desktop 2.3.0.5 (stable) and 2.3.7.0 (edge)
The Docker::build_image function returns a stream of empty structs, like this:
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
Ok(CreateImageInfo { id: None, error: None, status: None, progress: None, progress_detail: None })
...
It seems that the object Docker returns is not properly deserialized.
I then tested with the official Python low-level client, which returned this:
{"stream":"Step 1/3 : FROM busybox:buildroot-2014.02"}
{"stream":"\n"}
{"stream":" ---\u003e 9875fb006e07\n"}
{"stream":"Step 2/3 : VOLUME /data"}
{"stream":"\n"}
{"stream":" ---\u003e Using cache\n"}
{"stream":" ---\u003e 9d806142a52c\n"}
{"stream":"Step 3/3 : CMD [\"/bin/sh\"]"}
{"stream":"\n"}
{"stream":" ---\u003e Using cache\n"}
{"stream":" ---\u003e f95df1e61acc\n"}
{"aux":{"ID":"sha256:f95df1e61accc87736b6625495179e07d3e19b31f35691c5e57d84cefcbe4623"}}
{"stream":"Successfully built f95df1e61acc\n"}
{"stream":"Successfully tagged test:test\n"}
The number of results returned from the Python client is the same as that returned from Bollard, so I suspect these JSON objects are deserialized into the wrong type in Bollard.
From the Python API, it seems the result is a tagged union of stream or aux, similar to this in Rust:
enum BuildResult {
    Stream(String),
    Aux { ID: String },
}
The code and related file I used to test the API:
https://gist.github.com/01010101lzy/59b3ef618a5d96965b0db12730cb7c33
Convert documentation links to intra-doc links (instead of linking by HTML file paths), stabilized with Rust 1.48.0.
See more:
I'm unable to execute inspect_container on a running container. It looks like .HostConfig.Ulimits.Hard is causing the parser to die.
loggerv::init_with_verbosity(args.logging_opts.verbose).unwrap();
let docker_api = Docker::connect_with_local_defaults().expect("docker was not accessable");
println!("Docker version: {:?}", docker_api.version().await.unwrap());
let container_details = docker_api.inspect_container(&args.id, None::<InspectContainerOptions>).await;
println!("Container Details: {:?}", container_details);
bollard::docker: Decoded into string: {"Id":"70083c70c606594c455a415b1632a798e8c8025672828b058bea9388ecf6dc77","Created":"2020-05-10T05:14:49.249089609Z","Path":"sleep","Args":["100000"],"State":{"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":2136,"ExitCode":0,"Error":"","StartedAt":"2020-05-10T05:14:51.373748234Z","FinishedAt":"0001-01-01T00:00:00Z"},"Image":"sha256:1d622ef86b138c7e96d4f797bf5e4baca3249f030c575b9337638594f2b63f01","ResolvConfPath":"/var/lib/docker/containers/70083c70c606594c455a415b1632a798e8c8025672828b058bea9388ecf6dc77/resolv.conf","HostnamePath":"/var/lib/docker/containers/70083c70c606594c455a415b1632a798e8c8025672828b058bea9388ecf6dc77/hostname","HostsPath":"/var/lib/docker/containers/70083c70c606594c455a415b1632a798e8c8025672828b058bea9388ecf6dc77/hosts","LogPath":"","Name":"/elastic_pare","RestartCount":0,"Driver":"overlay2","Platform":"linux","MountLabel":"","ProcessLabel":"","AppArmorProfile":"","ExecIDs":null,"HostConfig":{"Binds":null,"ContainerIDFile":"","LogConfig":{"Type":"journald","Config":{}},"NetworkMode":"default","PortBindings":{},"RestartPolicy":{"Name":"no","MaximumRetryCount":0},"AutoRemove":false,"VolumeDriver":"","VolumesFrom":null,"CapAdd":null,"CapDrop":null,"Capabilities":null,"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IpcMode":"private","Cgroup":"","Links":null,"OomScoreAdj":0,"PidMode":"","Privileged":false,"PublishAllPorts":false,"ReadonlyRootfs":false,"SecurityOpt":null,"UTSMode":"","UsernsMode":"","ShmSize":67108864,"Runtime":"runc","ConsoleSize":[0,0],"Isolation":"","CpuShares":0,"Memory":0,"NanoCpus":0,"CgroupParent":"","BlkioWeight":0,"BlkioWeightDevice":[],"BlkioDeviceReadBps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteIOps":null,"CpuPeriod":0,"CpuQuota":0,"CpuRealtimePeriod":0,"CpuRealtimeRuntime":0,"CpusetCpus":"","CpusetMems":"","Devices":[],"DeviceCgroupRules":null,"DeviceRequests":null,
"KernelMemory":0,"KernelMemoryTCP":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":null,"OomKillDisable":false,"PidsLimit":null,"Ulimits":[{"Name":"nofile","Hard":1024,"Soft":1024}],"CpuCount":0,"CpuPercent":0,"IOMaximumIOps":0,"IOMaximumBandwidth":0,"MaskedPaths":["/proc/asound","/proc/acpi","/proc/kcore","/proc/keys","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug","/proc/scsi","/sys/firmware"],"ReadonlyPaths":["/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]},"GraphDriver":{"Data":{"LowerDir":"/var/lib/docker/overlay2/32d373bbbc849287aec79911bae50f6e0eaec64af878590f5e3ff4b5947f7f65-init/diff:/var/lib/docker/overlay2/95a3cdf7a08ed394bd2b835214d9acd2879979d442407c5c69d8c55372567dcb/diff:/var/lib/docker/overlay2/e9983786de3b0c376fd1acfea25c5a0a81f514edd73861bcdee12028c12e25b5/diff:/var/lib/docker/overlay2/aa6fdcf323dbdf20b1df4e4f0c5d033aca06fcb51602d8b7e143cea3e990424e/diff:/var/lib/docker/overlay2/d6e5a061f41694fa4e4da65348a1a12df4386d739647096521fce5d0c5d86794/diff","MergedDir":"/var/lib/docker/overlay2/32d373bbbc849287aec79911bae50f6e0eaec64af878590f5e3ff4b5947f7f65/merged","UpperDir":"/var/lib/docker/overlay2/32d373bbbc849287aec79911bae50f6e0eaec64af878590f5e3ff4b5947f7f65/diff","WorkDir":"/var/lib/docker/overlay2/32d373bbbc849287aec79911bae50f6e0eaec64af878590f5e3ff4b5947f7f65/work"},"Name":"overlay2"},"Mounts":[],"Config":{"Hostname":"70083c70c606","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["sleep","100000"],"Image":"ubuntu","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":{}},"NetworkSettings":{"Bridge":"","SandboxID":"872cb1de038af77b8a3878d4d04ec2fd9fe530478ff298a8d382875235d6bb62","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Ports":{},"SandboxKey":"/var/run/docker/netns
/872cb1de038a","SecondaryIPAddresses":null,"SecondaryIPv6Addresses":null,"EndpointID":"5842a5b3c4d4e6b767e26d32ccff9bcd8ad8f446ef39005f13e3e14b2294dc17","Gateway":"172.17.0.1","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","MacAddress":"02:42:ac:11:00:02","Networks":{"bridge":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"5aeda22df1f53fe440e2e401802c27f87a51ce57fbba5f01f71e96a19f6c022d","EndpointID":"5842a5b3c4d4e6b767e26d32ccff9bcd8ad8f446ef39005f13e3e14b2294dc17","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02","DriverOpts":null}}}}
Container Details: Err(Error { inner: Failed to deserialize JSON: invalid type: integer `1024`, expected a string at line 1 column 2139 })
The `Default` implementation for some of the options structs, like `CreateExecOptions` or `ListContainersOptions`, requires the type parameter `T` to also implement `Default`, even though this is unnecessary: every field wraps `T` in a container such as `Option` or `Vec`, so no `Default` implementation is actually required for `T` itself.
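A minimal sketch of the suggested fix, using illustrative stand-in types (not bollard's real definitions): a hand-written `Default` impl lifts the `T: Default` bound that `#[derive(Default)]` would otherwise add.

```rust
use std::collections::HashMap;

// Illustrative stand-ins only -- not bollard's real definitions.
#[allow(dead_code)]
#[derive(PartialEq, Eq, Hash)]
struct NoDefault(u8); // a key type that deliberately lacks `Default`

pub struct ListContainersOptions<T: Eq + std::hash::Hash> {
    pub all: bool,
    pub filters: HashMap<T, Vec<T>>,
}

// Hand-written impl: no `T: Default` bound is needed, because the default
// value never constructs a `T` -- every field starts as an empty container.
impl<T: Eq + std::hash::Hash> Default for ListContainersOptions<T> {
    fn default() -> Self {
        ListContainersOptions {
            all: false,
            filters: HashMap::new(),
        }
    }
}

fn main() {
    // Compiles even though `NoDefault` has no `Default` impl.
    let opts = ListContainersOptions::<NoDefault>::default();
    assert!(!opts.all);
    assert!(opts.filters.is_empty());
    println!("default constructed without a T: Default bound");
}
```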
Hey, I'm having a bit of trouble understanding how to attach a volume to a container. The volumes field takes a `HashMap<T, HashMap<(), ()>>`, and I don't understand what the key/value pairs actually mean. I'm guessing `T` is the path the volume will be mounted at inside the container, but it's the other `HashMap` that gets me: why a map whose keys and values are both `()`? Does the API only allow anonymous volumes?
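For what it's worth, the shape mirrors the Docker engine API, where `Volumes` is a JSON object mapping a container path to an empty object (`"Volumes": {"/data": {}}`): the key is the mount point, and the inner map carries no information at all. A sketch of building that map (the `/data` path is just an example):

```rust
use std::collections::HashMap;

fn main() {
    // The outer key is the mount point inside the container; the inner map
    // stays empty because the engine API expects `"Volumes": {"/data": {}}`.
    let mut volumes: HashMap<String, HashMap<(), ()>> = HashMap::new();
    volumes.insert("/data".to_string(), HashMap::new());

    // On its own this map creates an anonymous volume at the given path;
    // named volumes and host paths are wired up elsewhere in the engine API
    // (e.g. `HostConfig.Binds` entries like "myvolume:/data").
    assert!(volumes["/data"].is_empty());
    println!("mount points: {:?}", volumes.keys().collect::<Vec<_>>());
}
```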
Apparently, I got this error:
JsonDataError { message: "unknown variant `loki`, expected one of ``, `json-file`, `syslog`, `journald`, `gelf`, `fluentd`, `awslogs`, `splunk`, `etwlogs`, `none`
because `loki` is not part of the Docker OpenAPI spec. I'm not sure how well that spec is maintained, because as you can see here, there is also `gcplogs`, which is likewise missing from the OpenAPI spec.
So, what is the best option that I have right now?
I'm using bollard to connect to a Docker daemon over a unix socket. However, it still depends on `rustls` even though I don't need it. It'd be nice if I could disable the TLS feature when it isn't needed.
This is the current definition of the `BuildImageXXXDetail` structs.
/// Subtype for the [Build Image Results](struct.BuildImageResults.html) type.
#[derive(Debug, Clone, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct BuildImageAuxDetail {
#[serde(rename = "ID")]
id: String,
}
/// Subtype for the [Build Image Results](struct.BuildImageResults.html) type.
#[derive(Debug, Clone, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct BuildImageErrorDetail {
code: Option<u64>,
message: String,
}
/// Subtype for the [Build Image Results](struct.BuildImageResults.html) type.
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct BuildImageProgressDetail {
current: Option<u64>,
total: Option<u64>,
}
I was trying to show error messages to users for a build failure. However, the current definition prevents the code below because the fields are private.
match build_image_result {
...
BuildImageError { error, error_detail } => {
// this fails because the message field is private
eprintln!("{}", &error_detail.message);
}
...
}
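One possible shape for the fix, sketched with a standalone stand-in struct: making the fields `pub` (or adding accessor methods) would allow the destructuring above to compile.

```rust
// Stand-in with `pub` fields -- the shape this issue is asking for, not
// bollard's actual definition.
#[derive(Debug, Clone)]
pub struct BuildImageErrorDetail {
    pub code: Option<u64>,
    pub message: String,
}

fn main() {
    let detail = BuildImageErrorDetail {
        code: Some(1),
        message: "COPY failed: no such file or directory".to_string(),
    };
    // With public fields, the caller can surface the build error to users.
    eprintln!("{}", &detail.message);
    assert_eq!(detail.code, Some(1));
}
```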
Is it possible to change the type of the string parameters from `&str` to `String`, or to use `Into<String>`, since passing futures to `tokio::spawn` requires any references they hold to have a `'static` lifetime?
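A sketch of the requested signature change (the function names here are illustrative, not bollard's API): accepting `impl Into<String>` lets the callee own its data, so the resulting future holds no borrowed `&str` and can satisfy `tokio::spawn`'s `'static` bound.

```rust
// Illustrative API shapes -- not bollard's actual functions.
fn inspect_borrowed(name: &str) -> String {
    // A `&str` parameter ties the returned future (in the async version)
    // to the caller's borrow, which blocks `tokio::spawn`.
    format!("inspect {name}")
}

fn inspect_owned(name: impl Into<String>) -> String {
    // The callee takes ownership here, so nothing borrowed escapes and an
    // async equivalent could be moved into a `'static` task.
    let name: String = name.into();
    format!("inspect {name}")
}

fn main() {
    let id = String::from("my-container");
    // Both `&str` and `String` callers work with the `Into<String>` version.
    assert_eq!(inspect_owned(id.as_str()), inspect_borrowed("my-container"));
    assert_eq!(inspect_owned(id), "inspect my-container");
}
```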
I'm using the stats function (with streaming) but always get the following error:
Err(JsonDataError { message: "missing field `cache` at line 1 column 1542", contents: "{\"read\":\"2021-03-01T23:18:29.592917695Z\",\"preread\":\"0001-01-01T00:00:00Z\",\"pids_stats\":{\"current\":5,\"limit\":4915},\"blkio_stats\":{\"io_service_bytes_recursive\":[{\"major\":8,\"minor\":16,\"op\":\"read\",\"value\":0},{\"major\":8,\"minor\":16,\"op\":\"write\",\"value\":4096},{\"major\":254,\"minor\":0,\"op\":\"read\",\"value\":0},{\"major\":254,\"minor\":0,\"op\":\"write\",\"value\":4096}],\"io_serviced_recursive\":null,\"io_queue_recursive\":null,\"io_service_time_recursive\":null,\"io_wait_time_recursive\":null,\"io_merged_recursive\":null,\"io_time_recursive\":null,\"sectors_recursive\":null},\"num_procs\":0,\"storage_stats\":{},\"cpu_stats\":{\"cpu_usage\":{\"total_usage\":51898000,\"usage_in_kernelmode\":24360000,\"usage_in_usermode\":27538000},\"system_cpu_usage\":174945770000000,\"online_cpus\":8,\"throttling_data\":{\"periods\":0,\"throttled_periods\":0,\"throttled_time\":0}},\"precpu_stats\":{\"cpu_usage\":{\"total_usage\":0,\"usage_in_kernelmode\":0,\"usage_in_usermode\":0},\"throttling_data\":{\"periods\":0,\"throttled_periods\":0,\"throttled_time\":0}},\"memory_stats\":{\"usage\":3047424,\"stats\":{\"active_anon\":0,\"active_file\":0,\"anon\":98304,\"anon_thp\":0,\"file\":0,\"file_dirty\":0,\"file_mapped\":0,\"file_writeback\":0,\"inactive_anon\":0,\"inactive_file\":0,\"kernel_stack\":0,\"pgactivate\":0,\"pgdeactivate\":0,\"pgfault\":2673,\"pglazyfree\":0,\"pglazyfreed\":0,\"pgmajfault\":0,\"pgrefill\":0,\"pgscan\":0,\"pgsteal\":0,\"shmem\":0,\"slab\":1179648,\"slab_reclaimable\":147456,\"slab_unreclaimable\":1032192,\"sock\":0,\"thp_collapse_alloc\":0,\"thp_fault_alloc\":0,\"unevictable\":0,\"workingset_activate\":0,\"workingset_nodereclaim\":0,\"workingset_refault\":0},\"limit\":33354317824},\"name\":\"/exp-test-1\",\"id\":\"b3fd9103f3aa4a0146bebf56687e07d936a2c0b9bbaa28ed9ea63c927ed34b09\",\"networks\":{\"eth0\":{\"rx_bytes\":366,\"rx_packets\":3,\"rx
_errors\":0,\"rx_dropped\":0,\"tx_bytes\":0,\"tx_packets\":0,\"tx_errors\":0,\"tx_dropped\":0}}}", column: 1542 })
I assume it is this `cache` field that is missing (potentially along with some other fields).
docker version:
Client:
Version: 20.10.2
API version: 1.41
Go version: go1.15.7
Git commit: v20.10.2
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.15.7
Git commit: v20.10.2
Built: Tue Jan 1 00:00:00 1980
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.4.3
GitCommit: v1.4.3
runc:
Version: 1.0.0-rc92
GitCommit:
docker-init:
Version: 0.18.0
GitCommit:
bollard version: 0.10.0
I think I'm on a newer Docker version than the one this library generates bindings from, but they still seem to have the `cache` field here. If nothing is actually wrong, then maybe some `serde` annotations could be added to supply defaults?
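A fragment sketching what that suggestion could look like (field names are taken from the error above; this is not bollard's actual model). As an assumption worth checking: daemons running on cgroup v2 are known to report a different set of `memory_stats.stats` keys, which would explain the absent `cache`.

```rust
// Sketch only: tolerate an absent `cache` instead of failing deserialization.
#[derive(serde::Deserialize)]
pub struct MemoryStats {
    // Option 1: fall back to the type's default (0) when the field is missing.
    #[serde(default)]
    pub cache: u64,

    // Option 2 (alternative): make absence observable to the caller.
    // pub cache: Option<u64>,
}
```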
The `docker run` command accepts a `--publish-all` argument that publishes all ports exposed by the container. It would be very useful if this could be passed as part of the `Config` argument to the `create_container()` function.
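For reference, the engine API side of `docker run --publish-all` is the `HostConfig.PublishAllPorts` flag, so if bollard's generated `HostConfig` exposes it (an assumption worth verifying against the crate docs; the snake_case field name below is a guess at the mapping), the sketch would be:

```rust
// Sketch, assuming the generated HostConfig carries the engine API's
// `PublishAllPorts` flag (field name is an assumption, not confirmed).
let config = Config {
    image: Some("nginx"),
    host_config: Some(HostConfig {
        publish_all_ports: Some(true), // equivalent of `docker run --publish-all`
        ..Default::default()
    }),
    ..Default::default()
};
```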
I contributed some code to stepchowfun/docuum to use Bollard instead of invoking the Docker CLI, and almost immediately after releasing this version it was discovered that Bollard cannot parse the output of /system/df if the user has been using BuildKit (stepchowfun/docuum#78).
Bollard follows the Docker documentation exactly, which was apparently wrong in this case (moby/moby#41264). The documentation was fixed in moby/moby#41267, but the URL Bollard pulls the API specification from possibly won't be updated until Docker 20.03 ships (I just checked, and it hasn't been), and even then Bollard will need a rebuild because the generated code lives in codegen/target/generated-sources/src/models.rs.
I don't know if you want to pick up the new spec from the repo now or wait until the spec is updated on the website. In the meantime, Docuum has gone back to using the Docker CLI.
`cargo audit` fails for bollard because of a transitive dependency on `failure`, which is deprecated: https://github.com/RustSec/advisory-db/blob/master/crates/failure/RUSTSEC-2020-0036.toml
The transitive dependency comes from https://github.com/akshayknarayan/hyper-unix-connector. I've made a PR to move that dependency to `anyhow`, but in the meantime audit will keep failing for bollard.
Posting this issue as a heads-up.
There is a stray `println!` in src/read.rs:56. I suppose that's not a feature.