
python-on-whales's Introduction



A Docker client for Python, designed to be fun and intuitive!

Works on Linux, macOS and Windows, for Python 3.8 and above.


How to install?

pip install python-on-whales

Some cool examples

Start by doing

from python_on_whales import docker

and then use it just like the CLI we all know and love. You get the idea 🙂 Here are some examples:

>>> from python_on_whales import docker

>>> output = docker.run("hello-world")
>>> print(output)

Hello from Docker!
This message shows that your installation appears to be working correctly.

...
>>> from python_on_whales import docker
>>> print(docker.run("nvidia/cuda:11.0-base", ["nvidia-smi"], gpus="all"))
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
>>> from python_on_whales import docker
>>> my_docker_image = docker.pull("ubuntu:20.04")
20.04: Pulling from library/ubuntu
e6ca3592b144: Downloading [=============>                                     ]  7.965MB/28.56MB
534a5505201d: Download complete
990916bd23bb: Download complete

>>> print(my_docker_image.repo_tags)
['ubuntu:20.04']

>>> docker.image.list()
[python_on_whales.Image(id='sha256:1a437e363abfa', tags=['ubuntu:20.04'])]

>>> my_docker_image.remove()
>>> from python_on_whales import docker
>>> my_image = docker.build(".", tags="some_name")  # uses Buildx/buildkit by default
[+] Building 1.6s (17/17) FINISHED
 => [internal] load build definition from Dockerfile                                                            0.0s
 => => transferring dockerfile: 32B                                                                             0.0s
 => [internal] load .dockerignore                                                                               0.0s
 => => transferring context: 2B                                                                                 0.0s
 => [internal] load metadata for docker.io/library/python:3.6                                                   1.4s
 => [python_dependencies 1/5] FROM docker.io/library/python:3.6@sha256:29328c59adb9ee6acc7bea8eb86d0cb14033c85  0.0s
 => [internal] load build context                                                                               0.1s
 => => transferring context: 72.86kB                                                                            0.0s
 => CACHED [python_dependencies 2/5] RUN pip install typeguard pydantic requests tqdm                           0.0s
 => CACHED [python_dependencies 3/5] COPY tests/test-requirements.txt /tmp/                                     0.0s
 => CACHED [python_dependencies 4/5] COPY requirements.txt /tmp/                                                0.0s
 => CACHED [python_dependencies 5/5] RUN pip install -r /tmp/test-requirements.txt -r /tmp/requirements.txt     0.0s
 => CACHED [tests_ubuntu_install_without_buildx 1/7] RUN apt-get update &&     apt-get install -y       apt-tr  0.0s
 => CACHED [tests_ubuntu_install_without_buildx 2/7] RUN curl -fsSL https://download.docker.com/linux/ubuntu/g  0.0s
 => CACHED [tests_ubuntu_install_without_buildx 3/7] RUN add-apt-repository    "deb [arch=amd64] https://downl  0.0s
 => CACHED [tests_ubuntu_install_without_buildx 4/7] RUN  apt-get update &&      apt-get install -y docker-ce-  0.0s
 => CACHED [tests_ubuntu_install_without_buildx 5/7] WORKDIR /python-on-whales                                  0.0s
 => CACHED [tests_ubuntu_install_without_buildx 6/7] COPY . .                                                   0.0s
 => CACHED [tests_ubuntu_install_without_buildx 7/7] RUN pip install -e .                                       0.0s
 => exporting to image                                                                                          0.1s
 => => exporting layers                                                                                         0.0s
 => => writing image sha256:e1c2382d515b097ebdac4ed189012ca3b34ab6be65ba0c650421ebcac8b70a4d                    0.0s
 => => naming to docker.io/library/some_image_name

Some more advanced docker.run() examples with Postgres

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

becomes

from python_on_whales import docker

docker.run(
    "postgres:9.6",
    name="some-postgres",
    envs={"POSTGRES_PASSWORD": "mysecretpassword"},
    detach=True,
)
print(docker.ps())
# [python_on_whales.Container(id='f5fb939c409d', name='some-postgres')]

docker run -it --rm --network some-network postgres psql -h some-postgres -U postgres

becomes

from python_on_whales import docker

# since it's interactive, you'll be dropped into the psql shell. The python code
# will continue only after you exit the shell.
docker.run(
    "postgres:9.6",
    ["psql", "-h", "some-postgres", "-U", "postgres"],
    networks=["some-network"],
    interactive=True,
    tty=True,
    remove=True,
)

docker run -d --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -e PGDATA=/var/lib/postgresql/data/pgdata -v /custom/mount:/var/lib/postgresql/data -v myvolume:/tmp/myvolume postgres -c shared_buffers=256MB -c max_connections=200

becomes

from python_on_whales import docker

docker.run(
    "postgres:9.6",
    ["-c", "shared_buffers=256MB", "-c", "max_connections=200"],
    name="some-postgres",
    envs={"POSTGRES_PASSWORD": "mysecretpassword", "PGDATA": "/var/lib/postgresql/data/pgdata"},
    volumes=[("/custom/mount", "/var/lib/postgresql/data"), ("myvolume", "/tmp/myvolume")],
    detach=True,
)

Any Docker object can be used as a context manager to ensure it's removed even if an exception occurs:

from python_on_whales import docker

with docker.volume.create("random_name") as some_volume:
    docker.run(
        "postgres:9.6",
        ["-c", "shared_buffers=256MB", "-c", "max_connections=200"],
        name="some-postgres",
        envs={"POSTGRES_PASSWORD": "mysecretpassword", "PGDATA": "/var/lib/postgresql/data/pgdata"},
        volumes=[(some_volume, "/var/lib/postgresql/data"), ("myvolume", "/tmp/myvolume")],
        detach=True,
    )
    # do some stuff here
    
# here we are out of the context manager, so the volume has been removed, even if there was an exception.
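Containers work the same way. Below is a minimal sketch, assuming (as stated in the feature list below) that the Container returned by docker.run(..., detach=True) also implements the context-manager protocol and is removed on exit:

from python_on_whales import docker

# Sketch: the detached container is removed when leaving the `with` block,
# even if an exception is raised inside it.
with docker.run(
    "postgres:9.6",
    name="some-postgres",
    envs={"POSTGRES_PASSWORD": "mysecretpassword"},
    detach=True,
) as container:
    print(container.name)
    # do some stuff with the running container here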

Main features

  • 1-to-1 mapping between the CLI interface and the Python API. No need to search the docs for the name of the function/argument you need.
  • Support for the latest Docker features: Docker Buildx/BuildKit, docker run --gpus=all ...
  • Support for Docker stack, services and Swarm (same API as the command line).
  • Progress bars and progressive outputs when pulling, pushing, loading, building...
  • Support for some other CLI commands that are not in Docker-py: docker cp, docker run --cpus ... and more.
  • Nice SSH support for remote daemons.
  • Docker objects as Python objects: Containers, Images, Volumes, Services... and their attributes are updated in real time!
  • Each Docker object can be used as a context manager. When leaving the context, the Docker object is removed automatically, even if an exception occurs.
  • A fully typed API (Mypy- and IDE-friendly), compatible with pathlib and os.path.
  • All Docker objects and the Docker client are safe to use with multithreading and multiprocessing.
  • Display the commands called and the environment variables used by setting the environment variable PYTHON_ON_WHALES_DEBUG=1. A minimal sketch follows this list.
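For the debug flag, here is a minimal sketch. It assumes the variable is read each time a command is executed, so setting it from Python before the first call is enough; exporting it in the shell before starting Python works as well.

import os

# Assumption: PYTHON_ON_WHALES_DEBUG is read when a command runs.
os.environ["PYTHON_ON_WHALES_DEBUG"] = "1"

from python_on_whales import docker

docker.run("hello-world")  # the exact docker command line used is now displayed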

Why another project? Why not build on Docker-py?

In a sense this project is built on top of Docker-py, because the implementation, the organisation and the API are inspired by that project, but the codebases could not be the same.

Two major differences prevent that:

  1. The API is quite different. The aim of Python on Whales is to provide a 1-to-1 mapping between the Docker command line and Python, so that users don't even have to open the docs to write code.

  2. While Docker-py is a complete re-implementation of the Docker client binary (written in Go), Python on Whales sits on top of the Docker client binary, which makes implementing new features much easier and safer. For example, it's unlikely that Docker-py will support Buildx/BuildKit anytime soon, because rewriting a large Go codebase in Python is hard work.

Should I use Docker-py or Python on Whales?

Well, it's written in each project's description!

  • Docker-py: A Python library for the Docker Engine API
  • Python on whales: An awesome Python wrapper for an awesome Docker CLI

If you need to talk to the Docker engine directly or do low-level operations, use Docker-py. Good examples would be writing code to control Docker from an IDE, or cases where the speed of Docker calls is very important. If you don't want to depend on the Docker CLI binary (~50MB), use Docker-py.

If you want to call the Docker command line from Python and do high-level operations, use Python on Whales, for example if you want to write your CI logic in Python rather than in bash (a very good choice 😉). Some commands are only available in Python on Whales too: docker.buildx.build(...), docker.stack.deploy(...)...

Use the right tool for the right job 🙂

Alternatives to Docker: Podman, nerdctl...

Support for Docker-compatible clients like Podman and nerdctl was introduced in python-on-whales version 0.44.0.

You can use an arbitrary binary to execute Docker commands by using the argument client_call of python_on_whales.DockerClient. Here is an example:

>>> from python_on_whales import DockerClient

>>> nerdctl = DockerClient(client_call=["nerdctl"])

>>> nerdctl.pull("python:3.9")
docker.io/library/python:3.9:                                                     resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:a83c0aa6471527636d7331c30704d0f88e0ab3331bbc460d4ae2e53bbae64dca:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:8ccef93ff3c9e1bb9562d394526cdc6834033a0498073d41baa8b309f4fac20e: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:f033692e2c5abe1e0ee34bcca759a3e4432b10b0031174b08d48bcc90d14d68b:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:9952b1051adaff513c99f86765361450af108b12b0073d0ba40255c4e419b481:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c766e27afb21eddf9ab3e4349700ebe697c32a4c6ada6af4f08282277a291a28:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1535e3c1181a81ea66d5bacb16564e4da2ba96304506598be39afe9c82b21c5c:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6de7cb7bdc8f9b4c4d6539233fe87304aa1a6427c3238183265c9f02d831eddb:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:967757d5652770cfa81b6cc7577d65e06d336173da116d1fb5b2d349d5d44127:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c357e2c68cb3bf1e98dcb3eb6ceb16837253db71535921d6993c594588bffe04:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:26787c68cf0c92a778db814d327e283fe1da4434a7fea1f0232dae8002e38f33:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6aefca2dc61dcbcd268b8a9861e552f9cdb69e57242faec64ac120d2355a9c1a:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:32a180f5cf85702e7680719c40c39c07972b1176355df5a621de9eb87ad07ce2:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 35.9s                                                                    total:  333.5  (9.3 MiB/s)

python_on_whales.Image(id='sha256:f033692e2c5ab', tags=['python:3.9'])

You can do something similar with podman:

from python_on_whales import DockerClient

podman = DockerClient(client_call=["podman"])

podman.pull("hello-world")
podman.run("hello-world")
print(podman.ps())
...

Contributing

Any and all PRs are welcome. Please see this documentation.

What about the license?

It's an MIT license, so quite permissive.

The license can be found in the git repository.


python-on-whales's Issues

Capturing docker build logs in memory

Hello,

I would like to use the package, but I need to capture the stdout and stderr build logs in my context instead of outputting them to the console. Is that possible?
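A possible workaround sketch while this is not supported, calling the docker CLI directly with the standard library rather than python-on-whales, so that both streams end up in memory:

import subprocess

# Workaround sketch (not python-on-whales API): run the build through the CLI
# and capture its output in memory instead of printing it.
result = subprocess.run(
    ["docker", "build", "."],
    capture_output=True,  # stdout and stderr are captured, not printed
    text=True,
)
build_logs = result.stdout + result.stderr
print(build_logs)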

Incompatibility with compose-cli 2.0.0-rc1

Hello,
after upgrading compose-cli 2.0.0 from beta6 to rc1 I started to get the following error:

* 'Deploy.Resources.Reservations.cpus' expected type 'string', got unconvertible type 'float64', value: '0.25'

So I changed my .yml by replacing:

services:
  myservice:
    deploy:
      resources:
        reservations:
          cpus: 0.25

with:

services:
  myservice:
    deploy:
      resources:
        reservations:
          cpus: "0.25"

This silenced the error in compose-cli, but now it is Python on Whales that complains about the expected type:

pydantic.error_wrappers.ValidationError: 18 validation errors for ComposeConfig
services -> myservice -> deploy -> resources -> reservations -> cpus
  value is not a valid float (type=type_error.float) 

I didn't find any reference to this type change in the changelog of compose-cli rc1, so I'm not totally sure whether this is a feature that should be supported by Python on Whales or a bug that should be reported to compose-cli.
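A minimal sketch (a hypothetical model, not the actual python-on-whales code) of how the cpus field could tolerate both the float emitted by older compose-cli versions and the string emitted by rc1:

from typing import Optional, Union

from pydantic import BaseModel


# Hypothetical reservations model: Union[float, str] accepts both payloads.
class ReservationsSketch(BaseModel):
    cpus: Optional[Union[float, str]] = None
    memory: Optional[str] = None


print(ReservationsSketch.parse_obj({"cpus": 0.25}))    # older compose-cli payload
print(ReservationsSketch.parse_obj({"cpus": "0.25"}))  # rc1 payload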

Error when running container with detach=True

Thanks for making this package available, really useful! I'm trying to run a container in detached mode but I'm getting an error. As the same code works with detach=False, I think this is likely a bug. Here's an example with detach=False and then detach=True:

>>> from python_on_whales import docker
>>> docker.run("tensorflow/tensorflow:2.3.2-gpu", gpus="all", detach=False)
''
>>> docker.run("tensorflow/tensorflow:2.3.2-gpu", gpus="all", detach=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/pip/pipenvs/ml-UCgrIzmY/lib/python3.7/site-packages/python_on_whales/components/container.py", line 1625, in run
    return Container(self.client_config, run(full_cmd))
  File "/pip/pipenvs/ml-UCgrIzmY/lib/python3.7/site-packages/python_on_whales/components/container.py", line 338, in __init__
    super().__init__(client_config, "id", reference, is_immutable_id)
  File "/pip/pipenvs/ml-UCgrIzmY/lib/python3.7/site-packages/python_on_whales/client_config.py", line 149, in __init__
    self._fetch_and_parse_inspect_result(reference_or_id)
  File "/pip/pipenvs/ml-UCgrIzmY/lib/python3.7/site-packages/python_on_whales/client_config.py", line 204, in _fetch_and_parse_inspect_result
    return self._parse_json_object(json_object)
  File "/pip/pipenvs/ml-UCgrIzmY/lib/python3.7/site-packages/python_on_whales/components/container.py", line 355, in _parse_json_object
    return ContainerInspectResult.parse_obj(json_object)
  File "pydantic/main.py", line 520, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 362, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for ContainerInspectResult
HostConfig -> DeviceRequests -> 0 -> DeviceIDs
  none is not an allowed value (type=type_error.none.not_allowed)
HostConfig -> DeviceRequests -> 0 -> Capabilities -> 0
  str type expected (type=type_error.str)

Docker container exec --interactive

Hello again,
I was trying to implement a wrapper of docker exec, but I found that some options are missing in the interface currently implemented:

docker.container.execute(container, command, detach=False)

The target command that I would implement in python is:

docker exec --interactive --tty --user myuser mycontainer mycommand

In particular, the options --interactive, --tty and --user are missing, and I would like to ask whether there are any plans to integrate them.

One of the use cases would be running bash/ash to enter the container. A sketch of the requested interface follows.
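Below is a hypothetical call sketching the requested interface; the interactive, tty and user parameters do not exist in docker.container.execute at the time of writing:

from python_on_whales import docker

# Hypothetical parameters (interactive, tty, user) illustrating the request above.
docker.container.execute(
    "mycontainer",
    ["mycommand"],
    interactive=True,
    tty=True,
    user="myuser",
)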

Thank you again for your work

ParsingError: missing ImageRootFS.layers field

Building a Dockerfile that contains only a scratch layer fails due to the missing ImageRootFS.layers field:

$ cat Dockerfile 
FROM scratch
$ docker buildx build .
#1 [internal] load build definition from Dockerfile
#1 sha256:b19ad5a1751f726b260f534e1d3de4a59532e3a651aea6e375283e9fc3fc80cc
#1 transferring dockerfile: 50B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:cb562fd7e2ec07a3840a2c40f30e3e12af3e99ff7559d143c1ed611444828437
#2 transferring context: 2B done
#2 DONE 0.1s

#3 exporting to image
#3 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#3 writing image sha256:71de1148337f4d1845be01eb4caf15d78e4eb15a1ab96030809826698a5b7e30 done
#3 DONE 0.0s
$ pex python-on-whales==0.17.0 -- -c 'from python_on_whales import docker; docker.build(".")'
#1 [internal] load build definition from Dockerfile
#1 sha256:8272615fc5af2f67dd620e365814fe0c53ab0af6a1d4602be14b3233456af35d
#1 transferring dockerfile: 31B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:6a04879656730f99d6d88f3bcf00e7c638462cee6e1c0319a3e7d4224e8bd6eb
#2 transferring context: 2B done
#2 DONE 0.0s

#3 exporting to image
#3 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#3 writing image sha256:71de1148337f4d1845be01eb4caf15d78e4eb15a1ab96030809826698a5b7e30 done
#3 DONE 0.0s
Traceback (most recent call last):
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/client_config.py", line 212, in _fetch_and_parse_inspect_result
    return self._parse_json_object(json_object)
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/components/image/cli_wrapper.py", line 46, in _parse_json_object
    return ImageInspectResult.parse_obj(json_object)
  File "pydantic/main.py", line 572, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 400, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ImageInspectResult
RootFS -> Layers
  field required (type=value_error.missing)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 483, in execute
    exit_value = self._wrap_coverage(self._wrap_profiling, self._execute)
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 400, in _wrap_coverage
    return runner(*args)
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 431, in _wrap_profiling
    return runner(*args)
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 542, in _execute
    return self.execute_interpreter()
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 582, in execute_interpreter
    return self.execute_content("-c <cmd>", content, argv0="-c")
  File "/tmp/tmpng3_93uf/.bootstrap/pex/pex.py", line 649, in execute_content
    exec_function(ast, globals_map)
  File "/tmp/tmpng3_93uf/.bootstrap/pex/compatibility.py", line 93, in exec_function
    exec (ast, globals_map, locals_map)
  File "-c <cmd>", line 1, in <module>
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/components/buildx/cli_wrapper.py", line 329, in build
    return docker_image.inspect(image_id)
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/components/image/cli_wrapper.py", line 234, in inspect
    return Image(self.client_config, x)
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/components/image/cli_wrapper.py", line 34, in __init__
    super().__init__(client_config, "id", reference, is_immutable_id)
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/client_config.py", line 156, in __init__
    self._fetch_and_parse_inspect_result(reference_or_id)
  File "/home/socrates/.pex/installed_wheels/3319b96494d66102f31ee3f9b49bac3b6bbd0cd9/python_on_whales-0.17.0-py3-none-any.whl/python_on_whales/client_config.py", line 218, in _fetch_and_parse_inspect_result
    raise ParsingError(
python_on_whales.client_config.ParsingError: There was an error parsing the json response from the Docker daemon. 
This is a bug with python-on-whales itself. Please head to 
https://github.com/gabrieldemarmiesse/python-on-whales/issues 
and open an issue. You should copy this error message and 
the json response from the Docker daemon. The json response was put 
in /tmp/tmp5umqoql4.json because it's a bit too big to be printed 
on the screen. Make sure that there are no sensitive data in the 
json file before copying it in the github issue.
$ cat /tmp/tmp5umqoql4.json 
[
    {
        "Id": "sha256:71de1148337f4d1845be01eb4caf15d78e4eb15a1ab96030809826698a5b7e30",
        "RepoTags": [],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "0001-01-01T00:00:00Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "DockerVersion": "",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 0,
        "VirtualSize": 0,
        "GraphDriver": {
            "Data": null,
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers"
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]

Docker compose scale will never exist

Hello,
I was looking at the scale command and found some ambiguities

Here:
https://gabrieldemarmiesse.github.io/python-on-whales/sub-commands/compose/#scale

docker compose scale is reported as "Not yet implemented", but this command will never exist in compose-cli because it is deprecated in docker-compose.

Here it is in compose-cli (command not found):

$ docker compose scale

Usage:  docker compose [OPTIONS] COMMAND
[...]
unknown docker command: "compose scale"

And here in docker-compose (deprecated in favor of the --scale flag on up):

$ docker-compose scale --help
Set number of containers to run for a service.

Numbers are specified in the form service=num as arguments.
For example:

    $ docker-compose scale web=2 worker=3

This command is deprecated. Use the up command with the --scale flag
instead.

Usage: scale [options] [SERVICE=NUM...]

Options:
  -t, --timeout TIMEOUT      Specify a shutdown timeout in seconds.
                             (default: 10)

The lack of support of the scale command in compose-cli is also confirmed here: docker-archive/compose-cli#1366

So I think that compose scale should be removed from the docs of python on whales instead of being reported as not implemented yet.

In addition, the scale option should be added to compose up; a hypothetical call is sketched below.
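A hypothetical call for the up side of this (the scales parameter name is my assumption, mirroring docker compose up --scale service=num):

from python_on_whales import docker

# Hypothetical `scales` parameter, mirroring
# `docker compose up --detach --scale web=2 --scale worker=3`.
docker.compose.up(detach=True, scales={"web": 2, "worker": 3})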

Make all fields optional when parsing the docker engine json responses

It's not written in the docs, but many fields in the json response can be omitted. As such, missing fields break the pydantic models made to wrap the json output.

I made a simple wrapper in python_on_whales/utils.py that can be used as a decorator (@all_fields_optional). It should be put on all pydantic classes to make all fields optional, so that we don't get errors when the daemon doesn't include a specific field in the json.
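A minimal sketch of the idea, relying on pydantic v1 internals (not the project's actual implementation):

import pydantic


def all_fields_optional(model_cls):
    # Mark every field as not required and allow None, so a key missing from
    # the daemon's json no longer raises a ValidationError.
    for field in model_cls.__fields__.values():
        field.required = False
        field.allow_none = True
    return model_cls


@all_fields_optional
class ImageRootFS(pydantic.BaseModel):
    Type: str
    Layers: list


# The daemon omitted "Layers": parsing now succeeds, with Layers=None.
print(ImageRootFS.parse_obj({"Type": "layers"}))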

Problem with buildx when using python-on-whales in docker

Hi, thanks for this great tool! With the newer version, 0.19.x, I ran into issues that appear to be related to buildx. The current solution I use consists of installing the docker CLI in the docker image from the Debian repository. Otherwise, when using the python-on-whales download-cli command, buildx is somehow not available.

I'll add more details on how to reproduce the problem as soon as possible.

Specialized DockerExceptions

Hello again!

While working on the integration of Python on Whales into my applications, I found that the error management is mostly designed for human use.

Currently all errors raise a DockerException that contains the exact error output produced by the CLI. That's great when a human uses the package, since it provides enough information to debug the problem. But when the package is used to automate processes, this approach is not very helpful: to react properly, a parser of the exception message is needed.

Just a practical example:

docker.service.inspect("my_service")

This command can fail for a number of reasons, for example:

the swarm is not initialized:

python_on_whales.utils.DockerException: The docker command executed was `/usr/local/bin/docker service inspect myservice`.
It returned with code 1
The content of stdout is '[]
'
The content of stderr is 'Status: Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again., Code: 1
'

or the service does not exist:

python_on_whales.utils.DockerException: The docker command executed was `/usr/local/bin/docker service inspect myservice`.
It returned with code 1
The content of stdout is '[]
'
The content of stderr is 'Status: Error: no such service: myservice, Code: 1
'

There are probably many more possible reasons, but two are enough for this example.

To implement specific reactions to the different problems, a parser of the exception message is needed:

try:
    docker.service.inspect("my_service")
except python_on_whales.utils.DockerException as e:
    if "This node is not a swarm manager" is str(e):
        initialized_my_node()
    elif "no such service" in str(e):
        log.error("This service does not exist")
    else:
        log.critical("Unexpected error")

I think that raising specific exceptions (which would extend the parent DockerException, of course) would be very useful for the user experience:

try:
    docker.service.inspect("my_service")
except python_on_whales.utils.SwarmNotInitialized:
    initialized_my_node()
except python_on_whales.utils.NoSuchService:
    log.error("This service does not exist")
except Exception:
    log.critical("Unexpected error")

In practical terms, that means moving the parsing directly into the package, to centralize that operation and make it transparent to the final user.

I understand this is a rather tedious issue, but I'm pretty sure it would greatly improve adoption... and of course I'm 100% available to help with this task as much as I can. A sketch of the idea follows.
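A self-contained sketch of the proposal (all names are hypothetical, not the library's API): the stderr parsing lives in one place inside the package, and callers catch dedicated exception types.

class DockerError(Exception):
    """Stand-in for the library's DockerException in this sketch."""


class SwarmNotInitialized(DockerError):
    pass


class NoSuchService(DockerError):
    pass


def raise_specialized_error(stderr: str) -> None:
    # Centralized parsing: map known CLI error messages to dedicated exceptions.
    if "This node is not a swarm manager" in stderr:
        raise SwarmNotInitialized(stderr)
    if "no such service" in stderr:
        raise NoSuchService(stderr)
    raise DockerError(stderr)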

Error related to host network usage

I was trying to use the host network with a stack of mine and ran into an error. Without using the host network, the stack can be deployed.
I am using:

> python3 -m pip show python_on_whales
Name: python-on-whales
Version: 0.20.2       
Summary: A Docker client for Python, designed to be fun and intuitive!
Home-page: UNKNOWN
Author: None
Author-email: None
License: MIT
Location: /home/bruno/.local/lib/python3.8/site-packages
Requires: tqdm, typer, pydantic, requests
Required-by:
There was an error parsing the json response from the Docker daemon.
This is a bug with python-on-whales itself. Please head to
https://github.com/gabrieldemarmiesse/python-on-whales/issues
and open an issue. You should copy this error message and
the json response from the Docker daemon. The json response was put
in /tmp/tmpqmi1g8fd.json because it's a bit too big to be printed
on the screen. Make sure that there are no sensitive data in the
json file before copying it in the github issue.
Traceback (most recent call last):
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/client_config.py", line 214, in _fetch_and_parse_inspect_result
    return self._parse_json_object(json_object)
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/components/task/cli_wrapper.py", line 32, in _parse_json_object
    return TaskInspectResult.parse_obj(json_object)
  File "pydantic/main.py", line 520, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 362, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for TaskInspectResult
Spec -> Networks -> 0 -> Aliases
  field required (type=value_error.missing)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run.py", line 700, in <module>
    master[0].status.state ) )
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/components/task/cli_wrapper.py", line 84, in status
    return self._get_inspect_result().status
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/components/task/cli_wrapper.py", line 36, in _get_inspect_result
    return super()._get_inspect_result()
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/client_config.py", line 186, in _get_inspect_result
    self.reload()
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/client_config.py", line 178, in reload
    self._fetch_and_parse_inspect_result(self._immutable_id)
  File "/home/bruno/.local/lib/python3.8/site-packages/python_on_whales/client_config.py", line 220, in _fetch_and_parse_inspect_result
    raise ParsingError(
python_on_whales.client_config.ParsingError: There was an error parsing the json response from the Docker daemon.
This is a bug with python-on-whales itself. Please head to
https://github.com/gabrieldemarmiesse/python-on-whales/issues
and open an issue. You should copy this error message and
the json response from the Docker daemon. The json response was put
in /tmp/tmpqmi1g8fd.json because it's a bit too big to be printed
on the screen. Make sure that there are no sensitive data in the
json file before copying it in the github issue.

The Docker compose file I try to deploy as a stack with Python on Whales:

# https://github.com/compose-spec/compose-spec/blob/master/spec.md
version: "3.7"

secrets:
  aaa:
    name: aaa
    file: aaa

networks:
  hostnet:
    external: true
    name: host

services:
  # sunpot master -------------------------------------------------------------
  sunpot-master:
    image: aaa
    command: [
      "-h",aaa,
      "-p", aaa,
      "-d", aaa,
      "-u", "aaa,
      "-pw", aaa,
      "-t", "aaa",
      "-r", aaa
    ]
    secrets:
      - sunpotConfig
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          cpus: "14"
          # memory: 128M
        reservations:
          cpus: "14"
         # memory: 64M
      restart_policy:
        condition: none
        # delay: 10s
        # max_attempts: 3
        # window: 120s
    networks:
      hostnet: {}
    extra_hosts:
      - "localhost:127.0.0.1"

Docker daemon json response:

[    
    {
        "ID": "z9382z35bvw0earue1tvorel5",
        "Version": {
            "Index": 3370
        },
        "CreatedAt": "2021-07-09T17:34:59.2002062Z",
        "UpdatedAt": "2021-07-09T17:35:00.676241Z",
        "Labels": {},
        "Spec": {
            "ContainerSpec": {
                "Image": "xxxx/xxxx:1.0.0@sha256:57d28200bd23ff51aab203498ffc6eb79a32be560f1d2ea102761eca29136abc",
                "Labels": {
                    "com.docker.stack.namespace": "xxxx"
                },
                "Args": [
                    "-h",
                    "aaa",
                    "-p",
                    "6666",
                    "-d",
                    "aa",
                    "-u",
                    "aa",
                    "-pw",
                    "aa",
                    "-t",
                    "aa",
                    "-r",
                    "aa"
                ],
                "Privileges": {
                    "CredentialSpec": null,
                    "SELinuxContext": null
                },
                "Hosts": [
                    "127.0.0.1 localhost"
                ],
                "Secrets": [
                    {
                        "File": {
                            "Name": "aaa",
                            "UID": "0",
                            "GID": "0",
                            "Mode": 292
                        },
                        "SecretID": "aaa",
                        "SecretName": "aaa"
                    }
                ],
                "Isolation": "default"
            },
            "Resources": {
                "Limits": {
                    "NanoCPUs": 14000000000
                },
                "Reservations": {
                    "NanoCPUs": 14000000000
                }
            },
            "RestartPolicy": {
                "Condition": "none",
                "MaxAttempts": 0
            },
            "Placement": {
                "Constraints": [
                    "node.role == manager"
                ],
                "Platforms": [
                    {
                        "Architecture": "amd64",
                        "OS": "linux"
                    }
                ]
            },
            "Networks": [
                {
                    "Target": "xgkebiqoh7rhb27irhgr7dnev"
                }
            ],
            "ForceUpdate": 0
        },
        "ServiceID": "4kb4zu9du8b0pui4lyt820gia",
        "Slot": 1,
        "NodeID": "o6lvbcj08gbweofyg6hjzysgx",
        "Status": {
            "Timestamp": "2021-07-09T17:35:00.3918131Z",
            "State": "failed",
            "Message": "started",
            "Err": "task: non-zero exit (1)",
            "ContainerStatus": {
                "ContainerID": "ee2cfb7ebcf80298238ff41af8871529cc494af223054cd43233640ec02e245f",
                "PID": 0,
                "ExitCode": 1
            },
            "PortStatus": {}
        },
        "DesiredState": "shutdown",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "xgkebiqoh7rhb27irhgr7dnev",
                    "Version": {
                        "Index": 2916
                    },
                    "CreatedAt": "2021-03-30T11:42:28.9523662Z",
                    "UpdatedAt": "2021-07-09T09:41:36.1615405Z",
                    "Spec": {
                        "Name": "host",
                        "Labels": {
                            "com.docker.swarm.predefined": "true"
                        },
                        "DriverConfiguration": {
                            "Name": "host"
                        },
                        "Scope": "swarm"
                    },
                    "DriverState": {
                        "Name": "host"
                    },
                    "IPAMOptions": {
                        "Driver": {}
                    }
                }
            }
        ]
    }
]

activate CI tests

If you claim your library works on macOS, Windows and Linux, you might want to test things.

As of https://github.com/gabrieldemarmiesse/python-on-whales/actions/runs/931356479, quite a few tests are skipped:

tests/python_on_whales/components/test_compose.py::test_docker_compose_build SKIPPED [  3%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_down SKIPPED [  3%]
tests/python_on_whales/components/test_compose.py::test_no_containers SKIPPED [  4%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_detach_down SKIPPED [  4%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_pause_unpause SKIPPED [  5%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_create_down SKIPPED [  5%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_config SKIPPED [  6%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_create_extra_options_down SKIPPED [  6%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_detach_down_extra_options SKIPPED [  6%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_build SKIPPED [  7%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_stop_rm SKIPPED [  7%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_rm SKIPPED [  8%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_down_some_services SKIPPED [  8%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_ps SKIPPED [  9%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_kill SKIPPED [  9%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_pull SKIPPED [  9%]
tests/python_on_whales/components/test_compose.py::test_docker_compose_up_abort_on_container_exit SKIPPED [ 10%]

It forces me to download the docker client binary file on Windows

I ran this code.

from python_on_whales import DockerClient
docker = DockerClient(host="ssh://[email protected]")
docker.ps()

and it suddenly started downloading docker and, to make things worse, it crashed. I have downloaded this 3 times. Also, I ran the equivalent with docker-py and it worked perfectly. I am using Windows.

C:/Users/USER/AppData/Local/Programs/Python/Python37/python.exe c:/Users/USER/Desktop/cvbn/eyede
/pywale.py
C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\client_config.py:65: UserWarning: The docker client binary file was not found on your system.
Docker on whales will try to download it for you.
Don't worry, it won't be in the PATH and won't have anything to do with the package manager of your system.
Note: We are not installing the docker daemon, which is a lot heavier and harder to install. We're just downloading a single standalone binary file.
If you want to trigger the download of the client binary file manually (for example if you want to do it in a Dockerfile), you can run the following command:
 $ python-on-whales download-cli

  "The docker client binary file was not found on your system. \n"
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 65.4M/65.4M [00:25<00:00, 2.53MiB/s]
Traceback (most recent call last):
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\shutil.py", line 932, in _unpack_tarfile
    tarobj = tarfile.open(filename)
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\tarfile.py", line 1578, in open
    raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/USER/Desktop/cvbn/eyede/pywale.py", line 4, in <module>
    docker.ps()
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\components\container\cli_wrapper.py", line 947,
in list
    full_cmd = self.docker_cmd
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\client_config.py", line 134, in docker_cmd
    return self.client_config.docker_cmd
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\client_config.py", line 85, in docker_cmd
    result = Command([self.get_docker_path()])
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\client_config.py", line 78, in get_docker_path
    download_docker_cli()
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\download_binaries.py", line 39, in download_docker_cli
    shutil.unpack_archive(str(tar_file), str(extract_dir))
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\shutil.py", line 1002, in unpack_archive
    func(filename, extract_dir, **kwargs)
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\shutil.py", line 935, in _unpack_tarfile
    "%s is not a compressed or uncompressed tar file" % filename)
shutil.ReadError: C:\Users\USER\AppData\Local\Temp\tmpv86v5tuj\docker.tgz is not a compressed or uncompressed tar file

docker.stack.ps: ValidationError

Version: 0.14.0

Hello! One more bug found.

So I have a stack and one of its services is broken:

 ➜  0.3.0118 git:(master) ✗ docker stack ps test1
ID             NAME                    IMAGE                                              NODE             DESIRED STATE   CURRENT STATE             ERROR                              PORTS
jmzs1ntvjxd6   test1_name1.1   name1:2.3.0131                    Running         Pending 43 seconds ago    "no suitable node (scheduling …"
...

I've tried the following:

for l in docker.stack.ps("test1"):
    print(l.id)  # jmzs1ntvjxd6...
    print(' l', l.labels)  # ERROR!

Error:

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
~/.../updater/venv/lib/python3.7/site-packages/python_on_whales/client_config.py in _fetch_and_parse_inspect_result(self, reference)
    211         try:
--> 212             return self._parse_json_object(json_object)
    213         except pydantic.error_wrappers.ValidationError as err:

~/.../updater/venv/lib/python3.7/site-packages/python_on_whales/components/task.py in _parse_json_object(self, json_object)
    178     def _parse_json_object(self, json_object: Dict[str, Any]) -> TaskInspectResult:
--> 179         return TaskInspectResult.parse_obj(json_object)
    180 

~/.../updater/venv/lib/python3.7/site-packages/pydantic/main.cpython-37m-darwin.so in pydantic.main.BaseModel.parse_obj()

~/.../updater/venv/lib/python3.7/site-packages/pydantic/main.cpython-37m-darwin.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for TaskInspectResult
NodeID
  field required (type=value_error.missing)

Container logs: stdout and stderr in random order

Followup of #219

Same Postgres-based example:

$ docker run -e POSTGRES_PASSWORD=password --detach postgres
8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304

$ docker logs --tail 8 8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304
PostgreSQL init process complete; ready for start up.

2021-07-18 16:03:21.262 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-07-18 16:03:21.262 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-07-18 16:03:21.262 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-07-18 16:03:21.268 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-07-18 16:03:21.274 UTC [65] LOG:  database system was shut down at 2021-07-18 16:03:21 UTC
2021-07-18 16:03:21.281 UTC [1] LOG:  database system is ready to accept connections

Now docker.container.logs reports all the expected lines, but stdout and stderr are not properly ordered.

The correct order is the line "PostgreSQL init process complete; ready for start up." (which comes from stdout), then a blank line (presumably from stderr) and finally the six LOG lines (from stderr). But when printed with Python on Whales, the order is randomly modified:

Here are three consecutive attempts:

>>> print(docker.container.logs("780ff84d6772c93c8d21e3e37d51fa77b5fafd0cae22bda757535ac4dc61248c", tail=8))
PostgreSQL init process complete; ready for start up.

2021-07-20 04:54:43.927 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-07-20 04:54:43.932 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-07-20 04:54:43.940 UTC [66] LOG:  database system was shut down at 2021-07-20 04:54:43 UTC
2021-07-20 04:54:43.948 UTC [1] LOG:  database system is ready to accept connections

>>> print(docker.container.logs("780ff84d6772c93c8d21e3e37d51fa77b5fafd0cae22bda757535ac4dc61248c", tail=8))
PostgreSQL init process complete; ready for start up.
2021-07-20 04:54:43.927 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-07-20 04:54:43.932 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-07-20 04:54:43.940 UTC [66] LOG:  database system was shut down at 2021-07-20 04:54:43 UTC
2021-07-20 04:54:43.948 UTC [1] LOG:  database system is ready to accept connections


>>> print(docker.container.logs("780ff84d6772c93c8d21e3e37d51fa77b5fafd0cae22bda757535ac4dc61248c", tail=8))
2021-07-20 04:54:43.927 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-07-20 04:54:43.928 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-07-20 04:54:43.932 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-07-20 04:54:43.940 UTC [66] LOG:  database system was shut down at 2021-07-20 04:54:43 UTC
2021-07-20 04:54:43.948 UTC [1] LOG:  database system is ready to accept connections
PostgreSQL init process complete; ready for start up.


The first is correct; the second is missing the blank line (moved to the end, since there are two blanks?); in the third, both the "PostgreSQL init process complete; ready for start up." line and the blank line are at the end.

docker-compose instead of Compose V2

I am going to start by saying that this project is awesome, thank you for making this project.
I am using this project to run docker containers in GitHub Actions as part of a test suite. The problem is that I can't run docker compose in CI, as Compose V2 needs to be installed and doesn't come pre-installed with GitHub Actions Linux images, so my current code is failing on CI. As of right now, my only two options are:

  1. Install Docker Compose V2.
  2. Use docker-compose instead if this project supports it.

So my question is: does this project support docker-compose?

Thank you.

Can I run it inside a Kubernetes cluster and create a buildx builder with the kubernetes driver to build an image?

Hello, @gabrieldemarmiesse.

First, thank you for this library. It looks awesome!

I have a use case in which I need to build a docker image from Python code, inside a Kubernetes cluster. I tried using DinD, but the node crashes, and I found in the AKS documentation that they don't support DinD and suggest using buildx instead. So I found your library and thought about using it to build the image with buildx. I also noticed that it downloads the docker binary, which is great. I was wondering if I could, within the Kubernetes cluster, call docker.buildx.create() passing driver="kubernetes" and build the image from there. I guess it would spawn a pod within the Kubernetes cluster that would handle the build, right?
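Here is a sketch of the flow I have in mind, assuming the buildx subcommands mirror the CLI flags one-to-one; the driver, use and builder parameter names are my assumption, so the exact signatures should be checked in the docs:

from python_on_whales import docker

# Assumed parameters mirroring `docker buildx create --driver kubernetes --use`
# and `docker buildx build --builder ...`; the registry/tag is hypothetical.
with docker.buildx.create(driver="kubernetes", use=True) as builder:
    docker.buildx.build(
        ".",
        tags=["myregistry/myimage:latest"],
        push=True,
        builder=builder,
    )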

I'm planning to try to do it soon, but I tried asking here first to see if I'm understanding things correctly. Thanks in advance.

Docker container list error

When I try to get the list of containers in my docker on my Windows machine using the latest python_on_whales:

from python_on_whales import DockerClient
docker = DockerClient(host="ssh://[email protected]")
docker.container.list(all=True)

I get this error:

Traceback (most recent call last):
File "pywale.py", line 3, in
docker.container.list(all=True)
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\components\container\cli_wrapper.py", line 955,
in list
for x in run(full_cmd).splitlines()
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\python_on_whales\utils.py", line 149, in run
completed_process.stderr,
python_on_whales.exceptions.DockerException: The docker command executed was C:\Users\USER\.cache\python-on-whales\docker-cli\20.10.5\docker --host ssh://[email protected] container list -q --no-trunc --all.
It returned with code 2
The content of stdout is ''
The content of stderr is 'panic: Invalid standard handle identifier: 4294967286

goroutine 1 [running]:
github.com/docker/cli/vendor/github.com/Azure/go-ansiterm/winterm.GetStdFile(0xfffffff6, 0x2f4, 0xc0004a5dec)
C:/gopath/src/github.com/docker/cli/vendor/github.com/Azure/go-ansiterm/winterm/ansi.go:173 +0x1f3
github.com/docker/cli/vendor/github.com/moby/term/windows.NewAnsiReader(0xfffffff6, 0xc0004a5dec, 0x20d5a60)
C:/gopath/src/github.com/docker/cli/vendor/github.com/moby/term/windows/ansi_reader.go:34 +0x36
github.com/docker/cli/vendor/github.com/moby/term.StdStreams(0xc0003e36c0, 0xc0004a5eb8, 0x1, 0x1, 0x0, 0x0)
C:/gopath/src/github.com/docker/cli/vendor/github.com/moby/term/term_windows.go:75 +0x1eb
github.com/docker/cli/cli/command.NewDockerCli(0x0, 0x0, 0x0, 0x19ba60c, 0x1b500e0, 0xc0002421b0)
C:/gopath/src/github.com/docker/cli/cli/command/cli.go:473 +0x1a7
main.main()
C:/gopath/src/github.com/docker/cli/cmd/docker/docker.go:291 +0x4b

'

Incorrect stack env file parsing

If the value of a variable in the env file contains the symbol =, the parsing of the line ends with an error.
Affected version: 0.23.0

This affects the stack component only:

env = read_env_files([Path(x) for x in env_files])

How to reproduce:

# envfile
MYVAR="value"
MYARGS="--tls=true"
Python 3.9.6 (default, Jul 19 2021, 16:40:44) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from python_on_whales import docker
>>> docker.stack.deploy(name='mystack', env_files=['envfile'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/src/example/lib/python3.9/site-packages/python_on_whales/components/stack/cli_wrapper.py", line 83, in deploy
    env = read_env_files([Path(x) for x in env_files])
  File "/home/user/src/example/lib/python3.9/site-packages/python_on_whales/utils.py", line 246, in read_env_files
    result_dict.update(read_env_file(file))
  File "/home/user/src/example/lib/python3.9/site-packages/python_on_whales/utils.py", line 238, in read_env_file
    key, value = line.split("=")
ValueError: too many values to unpack (expected 2)

How to fix:

key, value = line.split("=")

Split the line into key and value once only:

key, value = line.split("=", 1)
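A sketch of what the fixed helper could look like (not the library's exact code):

from pathlib import Path
from typing import Dict


def read_env_file(path: Path) -> Dict[str, str]:
    # Split on the first "=" only, so values like --tls=true are preserved.
    result = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, value = line.split("=", 1)
        result[key] = value
    return result


print(read_env_file(Path("envfile")))
# {'MYVAR': '"value"', 'MYARGS': '"--tls=true"'}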

Missing information in ComposeConfigService

Hello,
I was trying to extract some information from compose.config(return_json=False), but found that ComposeConfigService is missing some information that is available in the json.

For example, the ports, networks and volumes fields are missing.

In addition, deploy (i.e. ServiceDeployConfig) is missing the replicas field. Just as an example, here is a comparison between the config obtained as json and as ComposeConfigService:

print(compose.config(return_json=False).services["myservice"].deploy)

labels=None resources={'reservations': {'cpus': '0.25', 'memory': '20971520'}} placement={}

print(compose.config(return_json=True)["services"]["myservice"]["deploy"])
{'replicas': 1, 'resources': {'reservations': {'cpus': '0.25', 'memory': '20971520'}}, 'placement': {}}

Is it enough to extend the interfaces to support the missing information, or is some additional parsing/work needed?

Thank you very much for your support!

ComposeConfigService should include entrypoint

It seems necessary/useful to represent how a service is invoked by including both the entrypoint and the command.

Reading components.compose.models, I see:

class ComposeConfigService(BaseModel):
    deploy: Optional[ServiceDeployConfig]
    blkio_config: Any
    cpu_count: Optional[float]
    cpu_percent: Optional[float]
    cpu_shares: Optional[int]
    cpuset: Optional[str]
    build: Any
    cap_add: List[str] = Field(default_factory=list)
    cap_drop: List[str] = Field(default_factory=list)
    cgroup_parent: Optional[str]
    command: Optional[List[str]]
    configs: Any
    container_name: Optional[str]
    depends_on: Dict[str, DependencyCondition] = Field(default_factory=dict)
    device_cgroup_rules: List[str] = Field(default_factory=list)
    devices: Any
    environment: Optional[Dict[str, Optional[str]]]
    image: Optional[str]

command is there, but no entrypoint. entrypoint is included in the json output from compose cli but not picked up by this model (info is lost).

In the meantime, for my use case it was imperative that I read the entrypoint specified by the compose file. I've monkeypatched my code with this file:
monkeypatched_whales.py:

from typing import Optional, Any, List, Dict
from pydantic import BaseModel, Field
from python_on_whales.components.compose import models
from python_on_whales.components.compose.models import ServiceDeployConfig, DependencyCondition, \
    ComposeConfigNetwork, ComposeConfigVolume


class ComposeConfigServiceWithEntrypoint(BaseModel):
    deploy: Optional[ServiceDeployConfig]
    blkio_config: Any
    cpu_count: Optional[float]
    cpu_percent: Optional[float]
    cpu_shares: Optional[int]
    cpuset: Optional[str]
    build: Any
    cap_add: List[str] = Field(default_factory=list)
    cap_drop: List[str] = Field(default_factory=list)
    cgroup_parent: Optional[str]
    command: Optional[List[str]]
    configs: Any
    container_name: Optional[str]
    depends_on: Dict[str, DependencyCondition] = Field(default_factory=dict)
    device_cgroup_rules: List[str] = Field(default_factory=list)
    devices: Any
    entrypoint: Optional[List[str]]
    environment: Optional[Dict[str, Optional[str]]]
    image: Optional[str]


class ComposeConfigWithEntrypoint(BaseModel):
    services: Dict[str, ComposeConfigServiceWithEntrypoint]
    networks: Dict[str, ComposeConfigNetwork] = Field(default_factory=dict)
    volumes: Dict[str, ComposeConfigVolume] = Field(default_factory=dict)
    configs: Any
    secrets: Any

# Replace the library's models with the extended versions, then point the
# compose cli_wrapper at the extended top-level config.
models.ComposeConfigService = ComposeConfigServiceWithEntrypoint
models.ComposeConfig = ComposeConfigWithEntrypoint

from python_on_whales.components.compose import cli_wrapper

cli_wrapper.ComposeConfig = ComposeConfigWithEntrypoint

I'm aware that it'll be a cat-and-mouse game keeping up with the compose spec but this particular case seems like an oversight.

docker.image.prune hangs

Everything seems to work well, but I found that docker.image.prune just hangs and does nothing until Ctrl+C.
However, getting the image through docker.image.inspect and then removing it works:

# doesn't work: hangs indefinitely
docker.image.prune(all=True, filter=some_label)

# workaround: inspect the image and remove it directly
image = docker.image.inspect(name_of_the_image)
image.remove(prune=True)

Any insights on why the prune command isn't working correctly?

Docker service logs

Hello,
I was trying to use docker service logs but it is not implemented yet

I made some tests with my local copy by (mostly) copying the implementation of docker.container.logs, and it works fine, including the follow and stream options.

The only three differences that I found between container.logs and service.logs are:

  1. service logs has some additional flags, but they are very easy to include:
      --no-resolve     Do not map IDs to Names in output
      --no-task-ids    Do not include task IDs in output
      --no-trunc       Do not truncate output
      --raw            Do not neatly format logs
  2. the --until option is only available for container.logs (which is strange considering that the --since option is available on both... but this is another story)

  3. service.logs has multiple targets. While container.logs only targets containers, service.logs works on both services and tasks. That can be tricky for the initial inspection:

self.inspect(service_or_task)

the inspect method already implemented in components/service/cli_wrapper.py works fine with services, but not with tasks:

logs("a_service") -> ok
logs("a_task") -> python_on_whales.exceptions.NoSuchService The content of stderr is 'Status: Error: no such service: xyz.1.s1rsmzocziba8rbjpjo4vbw9w, Code: 1

This would require some extra work, for example a method to guess whether the target is a service or a task (a task name should contain a .{slot}.{id} suffix) to properly validate it; see the sketch below.
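A minimal sketch of such a guess, assuming the {service}.{slot}.{id} naming convention mentioned above (this is my own heuristic, not something the library provides):

import re

# A swarm task is usually named like "myservice.1.s1rsmzocziba8rbjpjo4vbw9w",
# i.e. the service name followed by ".{slot}.{id}".
TASK_NAME_PATTERN = re.compile(r"^.+\.\d+\.[a-z0-9]+$")

def looks_like_a_task(target: str) -> bool:
    return TASK_NAME_PATTERN.match(target) is not None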

In that case the raised exception should be a python_on_whales.exceptions.NoSuchServiceOrTask based on the corresponding error raised from docker:

The docker command executed was `/usr/local/bin/docker service logs --tail 1 invalid`.
It returned with code 1
The content of stdout can be found above the stacktrace (it wasn't captured).
The content of stderr is 'no such task or service: invalid

Final thought: the implementation of service.logs would basically duplicate some utilities, so it may be worth moving them into a utils package; for example, the format_time_arg method from components/container and the task inspect from components/task.

Error parsing extra_hosts

When wrapping a container that has extra_hosts defined, a parsing error is thrown because the model expects extra_hosts: Optional[Dict[str, str]] while the actual value is a list of "host:ip" strings.
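For illustration, a small sketch of how the list form could be normalised into the expected dict (my own helper, assuming the "host:ip" format described above):

from typing import Dict, List, Optional, Union

def normalize_extra_hosts(value: Union[Dict[str, str], List[str], None]) -> Optional[Dict[str, str]]:
    # Accept both the dict form the model expects and the list of
    # "host:ip" strings that the inspect output actually contains.
    if value is None or isinstance(value, dict):
        return value
    return dict(entry.split(":", 1) for entry in value)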

Missing lines from logs output

When using docker.container.logs, some lines are missing in the output

For example with postgres:

$ docker run -e POSTGRES_PASSWORD=password --detach postgres
8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304

$ docker logs --tail 8 8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304
PostgreSQL init process complete; ready for start up.

2021-07-18 16:03:21.262 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-07-18 16:03:21.262 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-07-18 16:03:21.262 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-07-18 16:03:21.268 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-07-18 16:03:21.274 UTC [65] LOG:  database system was shut down at 2021-07-18 16:03:21 UTC
2021-07-18 16:03:21.281 UTC [1] LOG:  database system is ready to accept connections

The same command through python-on-whales only prints one line; all the LOG lines after "ready for start up" are missing:

>>> print(docker.container.logs("8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304", tail=8))
PostgreSQL init process complete; ready for start up.

Maybe the missing lines come from stderr and a return_stderr option is needed? (it defaults to False in run and is not exposed by the logs command)
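One way to check that hypothesis with plain subprocess (a sketch outside python-on-whales, using the container from the example above):

import subprocess

# docker logs forwards the container's stdout to the client's stdout and the
# container's stderr to the client's stderr; postgres writes its LOG lines to
# stderr, which would explain why they disappear when only stdout is captured.
completed = subprocess.run(
    ["docker", "logs", "--tail", "8",
     "8fb3fd4da45f0294ed183a988b27bc728407e10955dfa04fdea9869083e16304"],
    capture_output=True,
    text=True,
)
print("stdout:", completed.stdout)
print("stderr:", completed.stderr)  # the missing LOG lines should appear here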

There was an error parsing the json response from the Docker daemon.

Traceback (most recent call last):
  File "/home/wf/.local/lib/python3.8/site-packages/python_on_whales/client_config.py", line 214, in _fetch_and_parse_inspect_result
    return self._parse_json_object(json_object)
  File "/home/wf/.local/lib/python3.8/site-packages/python_on_whales/components/container/cli_wrapper.py", line 58, in _parse_json_object
    return ContainerInspectResult.parse_obj(json_object)
  File "pydantic/main.py", line 520, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 362, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ContainerInspectResult
HostConfig -> Mounts -> 0 -> Source
  field required (type=value_error.missing)

    raise ParsingError(

python_on_whales.client_config.ParsingError: There was an error parsing the json response from the Docker daemon.
This is a bug with python-on-whales itself. Please head to
https://github.com/gabrieldemarmiesse/python-on-whales/issues
and open an issue. You should copy this error message and
the json response from the Docker daemon.
The json response was put
in /tmp/tmphd1it3er.json because it's a bit too big to be printed
on the screen. Make sure that there are no sensitive data in the
json file before copying it in the github issue.

ssh argument not working as expected

I have been working on porting a bash script which I use to build some of my docker images to python, and your package has been a great help! I appreciate how easy and straightforward it is to port my docker run commands directly to python.

I just ran into an issue with the ssh arg, though. When setting ssh="default" in the docker.build() call, I see an error:

error: invalid empty ssh agent socket, make sure SSH_AUTH_SOCK is set

I checked, and that variable is set in my shell environment. Any suggestions? Am I using the argument incorrectly? The same bash script with --ssh default is working without problems.

Let me know if I can provide anything to help debug. Thanks!

    raise DockerException(
python_on_whales.utils.DockerException: The docker command executed was `/usr/bin/docker buildx build --build-arg FROM_TAG=gui-support --ssh default --file docker/ros/kinetic/autoyard-core/Dockerfile --tag ros-kinetic-autoyard-core:test --iidfile /tmp/tmpc9q55l8w/id_file.txt .`.
It returned with code 1

I logged the variable with print("SSH socket {}".format(os.environ['SSH_AUTH_SOCK'])) and can see that it is set correctly, but the docker build command still fails:

SSH socket /run/user/1001/keyring/ssh
error: invalid empty ssh agent socket, make sure SSH_AUTH_SOCK is set

Nvidia-docker

Is there any way to deploy a stack with nvidia as the runtime?

Basically, the equivalent of:
nvidia-docker stack deploy --compose-file /path/to/composefile service_name

Type Hints: py.typed file is missing

Hello,
I see that most of the code (with very few exceptions) is typed, but mypy is unable to pick up the types because the py.typed marker file is missing (https://www.python.org/dev/peps/pep-0561/#packaging-type-information).

Based on my experience with other projects, something like this should be enough:

echo "# Marker file for PEP 561. The python-on-whales package uses inline types." > python_on_whales/py.typed

and then you can reference the file in setup.py / MANIFEST.in.
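For example (a rough sketch, the exact packaging setup may differ): the MANIFEST.in side would just need an "include python_on_whales/py.typed" line, and setup.py something like:

from setuptools import setup, find_packages

setup(
    name="python-on-whales",
    packages=find_packages(),
    # Ship the PEP 561 marker with the package so type checkers can find it.
    package_data={"python_on_whales": ["py.typed"]},
    zip_safe=False,
)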

Thank you very much (for this very small issue and for the great work that you are doing with this package!)

pydantic models: components.compose.models::ComposeConfigService needs labels

There can be labels attached directly to services (i.e. not in the deploy config or the build stanzas). I added the following:

class ComposeConfigService(BaseModel):
    # ... snipped
    labels: Dict[str, str] = Field(default_factory=dict)

And now I'm getting the labels from the docker config json output.

And again for volume mounts:

class ComposeConfigServiceVolume(BaseModel):
    type: str
    source: str
    target: str

class ComposeConfigService(BaseModel):
    # ... snipped
    volumes: List[ComposeConfigServiceVolume]

I don't have a proper git setup to do a simple PR at the moment. But this should be easy for someone who does!

compose up --force-recreate

Hello,
I would like to ask you to introduce support for the --force-recreate flag in compose up.

The flag itself should be pretty easy to add; most of the work would be implementing a proper test.
What about some asserts on the started_at date?

Something like this compose file:

services:
  mytest:
    container_name: "mytest"
    image: alpine
    command: sleep infinity

and this test:

docker.compose.up(detach=True)

d1 = docker.container.inspect("mytest").state.started_at

docker.compose.up(detach=True)

d2 = docker.container.inspect("mytest").state.started_at

# The container is not restarted because its configuration hasn't changed
assert d1 == d2

docker.compose.up(detach=True, force=True)

d3 = docker.container.inspect("mytest").state.started_at

# The container is restarted due to the force flag, even though the configuration hasn't changed
assert d3 != d1

# or if you prefer:
# assert d3.timestamp() > d1.timestamp()




ParsingError: There was an error parsing the json response from the Docker daemon

I've created a stack with docker.stack.deploy, and in this stack all services are running.
When I run docker.stack.services("my_stack"), everything works as expected.

Then I deployed a new stack with one broken service
and ran docker.stack.services("my_stack") again. This time I got an error:

--------------------------------------------------------------------
ValidationError                    Traceback (most recent call last)
~/.../updater/venv/lib/python3.7/site-packages/python_on_whales/client_config.py in _fetch_and_parse_inspect_result(self, reference)
    211         try:
--> 212             return self._parse_json_object(json_object)
    213         except pydantic.error_wrappers.ValidationError as err:

~/.../updater/venv/lib/python3.7/site-packages/python_on_whales/components/service.py in _parse_json_object(self, json_object)
    120     def _parse_json_object(self, json_object: Dict[str, Any]) -> ServiceInspectResult:
--> 121         return ServiceInspectResult.parse_obj(json_object)
    122 

~/.../updater/venv/lib/python3.7/site-packages/pydantic/main.cpython-37m-darwin.so in pydantic.main.BaseModel.parse_obj()

~/.../updater/venv/lib/python3.7/site-packages/pydantic/main.cpython-37m-darwin.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for ServiceInspectResult
PreviousSpec -> UpdateConfig -> Monitor
  field required (type=value_error.missing)

docker container exec without tty: inconsistent behaviour between docker and python on whales

Hello, sorry for the long issue; as usual I tried to give as much debug information as possible.

I found that when docker.container.execute(...) is executed without a tty, the stderr is lost, while with the docker CLI it is returned combined with the stdout.
In the examples I will use both the tty and interactive flags because using tty alone is not implemented yet in python-on-whales, but the issue should not be related to the interactive flag.

Let's imagine we have the following script inside a container:

#!/usr/bin/python3
import sys
print("This is out", file=sys.stdout)
print("This is err", file=sys.stderr)

docker exec -it my_container /test.py =>

This is out
This is err

I got exactly the same on python on whales:

>>> print(docker.container.execute("my_container", command=["python3", "/test.py"], interactive=True, tty=True))
This is out
This is err

But without a tty the output differs.
With the docker CLI I still get both streams:

docker exec my_container /test.py =>

This is out
This is err

But with python-on-whales the error stream is lost:

>>> print(docker.container.execute("my_container", command=["python3", "/test.py"]))
This is out

I edited here, by adding return_stderr=True:

result = run(full_cmd, tty=tty) => result = run(full_cmd, tty=tty, return_stderr=True)

and this way I can get both the streams as a tuple:

>>> print(docker.container.execute("my_container", command=["python3", "/test.py"]))
('This is out', 'This is err')

Can we consider adding the error stream to the output?

Even this way, though, I cannot combine them; I can only print the whole stdout and then the whole stderr. Maybe a stream would be useful in this case? Maybe the stream_stdout_and_stderr utility would do the trick? (ok, I'm taking a guess!)
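To illustrate the kind of stream I have in mind, here is a generic sketch with plain subprocess (not the library's stream_stdout_and_stderr utility, just my guess at the shape of the output):

import subprocess
from queue import Queue
from threading import Thread

def stream_exec_output(cmd):
    # Yields ("stdout", line) / ("stderr", line) tuples roughly in the order
    # the process emits them, so the caller can combine the two streams.
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    queue = Queue()

    def pump(stream, origin):
        for line in iter(stream.readline, b""):
            queue.put((origin, line))
        queue.put((origin, None))  # sentinel: this stream is done

    for stream, origin in ((process.stdout, "stdout"), (process.stderr, "stderr")):
        Thread(target=pump, args=(stream, origin), daemon=True).start()

    finished = 0
    while finished < 2:
        origin, line = queue.get()
        if line is None:
            finished += 1
        else:
            yield origin, line
    process.wait()

for origin, line in stream_exec_output(["docker", "exec", "my_container", "python3", "/test.py"]):
    print(origin, line.decode().rstrip())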

Have a nice weekend and thank you! 🌞

docker.run with image entrypoint fails

Docker images with entrypoints are not working as intended. The command is appended as "ENTRYPOINT 'COMMAND'" (one quoted string), which always results in an error; normal Docker behavior for this is "ENTRYPOINT COMMAND".

Detach is also not working for this image.

Another error related to this is the empty entrypoint flag:
trying to remove the default entrypoint by passing an empty entrypoint="" is not working either. The result is something like: docker run --entrypoint -... -... image

This can be reproduced with:

test = "m4b-tool merge --output-file=/data/test.m4b --use-filenames-as-chapters --jobs=3 --no-chapter-reindexing --dry-run /data/"
output_generator = docker.run("teardrop/m4b-tool",
                              [test],
                              volumes=[("PATH/data", "/data")],
                              detach=False
                              )
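A possible workaround sketch, assuming the root cause is that the whole command string is forwarded as a single quoted argument: split it into separate arguments before calling docker.run (an untested guess on my side):

import shlex
from python_on_whales import docker

test = "m4b-tool merge --output-file=/data/test.m4b --use-filenames-as-chapters --jobs=3 --no-chapter-reindexing --dry-run /data/"
# Pass each word as its own argument instead of one big quoted string.
output_generator = docker.run("teardrop/m4b-tool",
                              shlex.split(test),
                              volumes=[("PATH/data", "/data")],
                              detach=False
                              )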

docker-compose not called but docker compose which is not available

I am trying to implement WolfgangFahl/pymediawikidocker#4

when calling:

  docker.compose.up(detach=True)

it works on macOS with

docker --version
Docker version 20.10.6, build 370c289

docker-compose --version
docker-compose version 1.29.1, build c34c88b2

but not on Ubuntu 20.04 LTS with

docker --version
Docker version 20.10.7, build f0df350

The error is:

python_on_whales.utils.DockerException: The docker command executed was `/usr/bin/docker compose up --detach`.
It returned with code 125
The content of stdout can be found above the stacktrace (it wasn't captured).
The content of stderr is 'unknown flag: --detach
See 'docker --help'.

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default
                           "/home/wf/.docker")
  -c, --context string     Name of the context to use to connect to the
                           daemon (overrides DOCKER_HOST env var and
                           default context set with "docker context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level
                           ("debug"|"info"|"warn"|"error"|"fatal")
                           (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default
                           "/home/wf/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default
                           "/home/wf/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default
                           "/home/wf/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  app*        Docker App (Docker Inc., v0.9.1-beta3)
  builder     Manage builds
  buildx*     Build with BuildKit (Docker Inc., v0.5.1-docker)
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  image       Manage images
  manifest    Manage Docker image manifests and manifest lists
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  scan*       Docker Scan (Docker Inc., v0.8.0)
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

To get more help with docker, check out our guides at https://docs.docker.com/go/guides/

It looks like docker-compose should be called instead of docker compose.
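For what it's worth, a small sketch (outside python-on-whales) of how one could check whether the compose plugin is available before deciding which binary to call:

import subprocess

def compose_plugin_available(docker_binary: str = "docker") -> bool:
    # "docker compose version" only succeeds when the compose CLI plugin is
    # installed; otherwise the standalone docker-compose binary is needed.
    completed = subprocess.run(
        [docker_binary, "compose", "version"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return completed.returncode == 0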

Documentation: How to use ssh connection

Hi,

thank you very much for starting this nice package.

Could you please provide an example that shows how to execute commands on a remote host? I've tried to set host='ssh://user@ip', unfortunately without success (the host runs docker and is accessible, as validated via docker-py). Connecting to the local machine seems to work.

client = DockerClient(
    config=client_config,
    context=None,
    debug=None,
    host='ssh://user@ip',
    log_level=None,
    tls=None,
    tlscacert=None,
    tlscert=None,
    tlskey=None,
    tlsverify=None,
    client_config=None,
    compose_files=[],
)
client.info()
ValidationError: 53 validation errors for SystemInfo
ID
  field required (type=value_error.missing)
Containers
  field required (type=value_error.missing)
ContainersRunning
  field required (type=value_error.missing)
ContainersPaused
  field required (type=value_error.missing)
ContainersStopped
  field required (type=value_error.missing)
Images
  field required (type=value_error.missing)
Driver
  field required (type=value_error.missing)
DriverStatus
  field required (type=value_error.missing)
DockerRootDir
  field required (type=value_error.missing)
Plugins
  field required (type=value_error.missing)
MemoryLimit
  field required (type=value_error.missing)
SwapLimit
  field required (type=value_error.missing)
KernelMemory
  field required (type=value_error.missing)
CpuCfsPeriod
  field required (type=value_error.missing)
CpuCfsQuota
  field required (type=value_error.missing)
CPUShares
  field required (type=value_error.missing)
CPUSet
  field required (type=value_error.missing)
PidsLimit
  field required (type=value_error.missing)
OomKillDisable
  field required (type=value_error.missing)
IPv4Forwarding
  field required (type=value_error.missing)
BridgeNfIptables
  field required (type=value_error.missing)
BridgeNfIp6tables
  field required (type=value_error.missing)
Debug
  field required (type=value_error.missing)
NFd
  field required (type=value_error.missing)
NGoroutines
  field required (type=value_error.missing)
SystemTime
  field required (type=value_error.missing)
LoggingDriver
  field required (type=value_error.missing)
CgroupDriver
  field required (type=value_error.missing)
NEventsListener
  field required (type=value_error.missing)
KernelVersion
  field required (type=value_error.missing)
OperatingSystem
  field required (type=value_error.missing)
OSType
  field required (type=value_error.missing)
Architecture
  field required (type=value_error.missing)
NCPU
  field required (type=value_error.missing)
MemTotal
  field required (type=value_error.missing)
IndexServerAddress
  field required (type=value_error.missing)
RegistryConfig
  field required (type=value_error.missing)
HttpProxy
  field required (type=value_error.missing)
HttpsProxy
  field required (type=value_error.missing)
NoProxy
  field required (type=value_error.missing)
Name
  field required (type=value_error.missing)
Labels
  field required (type=value_error.missing)
ExperimentalBuild
  field required (type=value_error.missing)
ServerVersion
  field required (type=value_error.missing)
Runtimes
  field required (type=value_error.missing)
DefaultRuntime
  field required (type=value_error.missing)
Swarm
  field required (type=value_error.missing)
LiveRestoreEnabled
  field required (type=value_error.missing)
Isolation
  field required (type=value_error.missing)
InitBinary
  field required (type=value_error.missing)
ContainerdCommit
  field required (type=value_error.missing)
RuncCommit
  field required (type=value_error.missing)
InitCommit
  field required (type=value_error.missing)
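For reference, a more compact form of the same call, which I would expect to be equivalent (only the host is specified, everything else left at its default):

from python_on_whales import DockerClient

# Only the remote host is specified; everything else keeps its default.
docker = DockerClient(host="ssh://user@ip")
print(docker.info())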

docker.buildx.is_installed

Hello,
although buildx is nowadays pre-installed with the majority of docker bundles, it remains an external plugin that is not guaranteed to be available (for example, a colleague of mine was using Ubuntu 18.04 with docker 20.10.x and buildx was not installed).

As with compose, it would be worth introducing an is_installed utility to avoid the pattern:

from python_on_whales import docker
from python_on_whales.utils import DockerException

try:
    docker.buildx.version()
    print("Buildx installed")
# a specific exception should be preferred here
except DockerException:
    print("Buildx not installed")

Here is my proposal (copied from compose):

mdantonio@dffbcc1
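For reference, a standalone sketch of the kind of check I have in mind (plain subprocess, not the actual implementation in the commit above):

import subprocess

def buildx_is_installed(docker_binary: str = "docker") -> bool:
    # "docker buildx version" exits non-zero when the plugin is missing,
    # which makes it a reasonable availability check.
    completed = subprocess.run(
        [docker_binary, "buildx", "version"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return completed.returncode == 0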

Pydantic Validation Failures

ubuntu 18.04

$ docker --version
Docker version 20.10.1, build 831ebea

I am just trying to list the names of the currently running containers:

from python_on_whales import docker

for container in docker.container.list():
    print(container.name)

output = docker.run("hello-world")
print(output)

And getting pydantic validation errors:

  File "/home/gkedge/.local/share/virtualenvs/upgrade-yFYUhZyD/lib/python3.7/site-packages/python_on_whales/components/container.py", line 402, in name
    return removeprefix(self._get_inspect_result().name, "/")
  File "/home/gkedge/.local/share/virtualenvs/upgrade-yFYUhZyD/lib/python3.7/site-packages/python_on_whales/client_config.py", line 177, in _get_inspect_result
    self.reload()
  File "/home/gkedge/.local/share/virtualenvs/upgrade-yFYUhZyD/lib/python3.7/site-packages/python_on_whales/client_config.py", line 169, in reload
    self._fetch_and_parse_inspect_result(self._immutable_id)
  File "/home/gkedge/.local/share/virtualenvs/upgrade-yFYUhZyD/lib/python3.7/site-packages/python_on_whales/client_config.py", line 204, in _fetch_and_parse_inspect_result
    return self._parse_json_object(json_object)
  File "/home/gkedge/.local/share/virtualenvs/upgrade-yFYUhZyD/lib/python3.7/site-packages/python_on_whales/components/container.py", line 352, in _parse_json_object
    return ContainerInspectResult.parse_obj(json_object)
  File "pydantic/main.py", line 520, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 362, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ContainerInspectResult
HostConfig -> VolumesFrom
  str type expected (type=type_error.str)

The JSON returned in json_object looks fine.

Missing docker compose --env-file option

Hello,
My .env file is not in the same location as the compose .yml configurations, so I need to specify a custom .env path, but currently python-on-whales does not support this option.

I did some tests on my local copy and this is working for me:

mdantonio@1bcf796

Of course I can send a PR, but tests are missing there
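For context, this is the CLI behaviour I am trying to reach from Python (a plain subprocess sketch; the paths are placeholders):

import subprocess

# Point compose at a custom .env location instead of the compose file's directory.
subprocess.run(
    [
        "docker", "compose",
        "--env-file", "/path/to/custom/.env",
        "--file", "/path/to/docker-compose.yml",
        "up", "--detach",
    ],
    check=True,
)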

docker buildx command always fails in install check on Windows

The subprocess call overrides all environment variables instead of updating them to add the DOCKER_CLI_EXPERIMENTAL variable. As a result, the docker buildx binary is downloaded every time a build step is run.

def install_buildx_if_needed(docker_binary: str):
    completed_process = subprocess.run(
        [docker_binary, "buildx"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        env={"DOCKER_CLI_EXPERIMENTAL": "enabled"},
    )
    if completed_process.returncode == 0:
        return

It looks like this is handled properly where this function is called, so perhaps that env dictionary should be passed in with the docker binary argument:

subprocess_env = dict(os.environ)
subprocess_env.update(env)
if args[1] == "buildx":
    install_buildx_if_needed(args[0])
    subprocess_env["DOCKER_CLI_EXPERIMENTAL"] = "enabled"
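A sketch of the suggested change, merging the current environment instead of replacing it (names follow the snippets above; untested):

import os
import subprocess

def install_buildx_if_needed(docker_binary: str, env: dict):
    completed_process = subprocess.run(
        [docker_binary, "buildx"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        # Keep the caller's environment and only add the experimental flag.
        env={**os.environ, **env, "DOCKER_CLI_EXPERIMENTAL": "enabled"},
    )
    if completed_process.returncode == 0:
        return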

compose run

Hello,
I know that a number of compose commands are still not implemented, and I can imagine that you have your own priority list for them, but in case of a tie can I ask you to push the run command higher in the ranking? 😄

I think compose run is the last command I need to complete the migration of all my use cases to python-on-whales, so implementing it would be very appealing to me. As usual, I don't want to bother you with too many requests, especially considering that I have a number of other issues still open... so please consider this a low-priority request.

Thank you again!

compose down --volumes

Hello,
I would like to ask you to introduce support for the --volumes flag in compose down.

If you need a testing strategy, I can work on a minimal example to be used as a test case.

Thank you again!
