
testcontainers / testcontainers-python

Testcontainers is a Python library that provides a friendly API to run Docker containers. It is designed to create runtime environments for use during your automated tests.

Home Page: https://testcontainers-python.readthedocs.io/en/latest/

License: Apache License 2.0

Python 98.76% Dockerfile 0.51% Makefile 0.70% Shell 0.02%
python python3 testing testcontainers selenium database

testcontainers-python's Introduction


Testcontainers Python

testcontainers-python facilitates the use of Docker containers for functional and integration testing.

For more information, see the docs.

Getting Started

>>> from testcontainers.postgres import PostgresContainer
>>> import sqlalchemy

>>> with PostgresContainer("postgres:16") as postgres:
...     engine = sqlalchemy.create_engine(postgres.get_connection_url())
...     with engine.begin() as connection:
...         result = connection.execute(sqlalchemy.text("select version()"))
...         version, = result.fetchone()
>>> version
'PostgreSQL 16...'

The snippet above spins up a Postgres database in a container. The get_connection_url() convenience method returns a SQLAlchemy-compatible URL, which we use to connect to the database and retrieve its version.
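
For illustration, the returned URL is a standard SQLAlchemy-style DSN, so it can be pulled apart with the standard library. A minimal sketch, where the host and port are made-up values (the real host port is assigned by Docker at runtime):

```python
# Hypothetical example of the kind of URL get_connection_url() returns;
# port 32768 is an assumption -- Docker assigns a free host port at runtime.
from urllib.parse import urlsplit

url = "postgresql+psycopg2://test:test@localhost:32768/test"
parts = urlsplit(url)
print(parts.scheme)    # postgresql+psycopg2
print(parts.hostname)  # localhost
print(parts.port)      # 32768
```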

Contributing / Development / Release

See CONTRIBUTING.md for more details.

Configuration

| Env Variable | Example | Description |
|---|---|---|
| TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE | /var/run/docker.sock | Path to Docker's socket used by Ryuk |
| TESTCONTAINERS_RYUK_PRIVILEGED | false | Run Ryuk as a privileged container |
| TESTCONTAINERS_RYUK_DISABLED | false | Disable Ryuk |
| RYUK_CONTAINER_IMAGE | testcontainers/ryuk:0.8.1 | Custom image for Ryuk |
| RYUK_RECONNECTION_TIMEOUT | 10s | Reconnection timeout for Ryuk's TCP socket before Ryuk reaps all dangling containers |
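
These are plain environment variables, so they can also be set from Python before any container is started. A minimal sketch (the values shown are just the defaults from the table above):

```python
import os

# Sketch: configure testcontainers' Ryuk behaviour via environment variables.
# These must be set before the first container is started.
os.environ["TESTCONTAINERS_RYUK_DISABLED"] = "true"  # skip the reaper entirely
os.environ["RYUK_RECONNECTION_TIMEOUT"] = "10s"      # default from the table

print(os.environ["TESTCONTAINERS_RYUK_DISABLED"])  # true
```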

testcontainers-python's People

Contributors

alexanderankin, annabaas, balint-backmaker, bearrito, dstape, duchadian, eddumelendez, github-actions[bot], gospodinkot, kieranlea, kiview, lippertto, maltehedderich, max-pfeiffer, oliverlambson, pffijt, robsdedude, romank0, santi, sergeypirogov, skirino, spicy-sauce, t0ch1k, tillahoffmann, timbmg, totallyzen, tranquility2, tuvshuud, vindex10, yakimka


testcontainers-python's Issues

Dockerfile support

Similar to the existing support for running docker-compose files, it would be great to be able to run a Dockerfile directly.

Cannot specify custom docker-compose file

If you pass something like os.path.dirname("{}/docker-compose-test.yml".format(os.getcwd())) to testcontainers.compose.DockerCompose, it will try to use docker-compose.yml instead of docker-compose-test.yml.

I would expect that, having been passed a full path to a file, it would use that specific file.
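
Part of the surprise here comes from os.path.dirname itself, which discards the filename, so DockerCompose only ever sees the directory and falls back to its default docker-compose.yml. A quick demonstration (the compose_file_name keyword in the comment is how newer releases let you name the file, but treat that as an assumption for your installed version):

```python
import os.path

path = "/project/docker-compose-test.yml"
print(os.path.dirname(path))  # /project -- the filename is thrown away

# Hypothetical fix, assuming a DockerCompose that accepts a file name:
# DockerCompose("/project", compose_file_name="docker-compose-test.yml")
```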

ImportError: cannot import name 'APIClient' / running problem on python 3.6

Hi,

I was trying to use testcontainers with Django in order to set up a Postgres DB for tests. Unfortunately, when I try to set up the DB according to the documentation:

    postgres_container = PostgresContainer("postgres:9.5")
    with postgres_container as postgres:
        e = sqlalchemy.create_engine(postgres.get_connection_url())
        result = e.execute("select version()")

I receive the following error:

  File ".virtualenv/lib/python3.6/site-packages/django/conf/__init__.py", line 110, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "settings.py", line 14, in <module>
    from testcontainers import PostgresContainer
  File ".virtualenv/lib/python3.6/site-packages/testcontainers/__init__.py", line 1, in <module>
    from testcontainers.selenium import BrowserWebDriverContainer
  File ".virtualenv/lib/python3.6/site-packages/testcontainers/selenium.py", line 16, in <module>
    from testcontainers.core.container import DockerContainer
  File ".virtualenv/lib/python3.6/site-packages/testcontainers/core/container.py", line 3, in <module>
    from docker.models.containers import Container
  File ".virtualenv/lib/python3.6/site-packages/docker/models/containers.py", line 3, in <module>
    from ..api import APIClient
ImportError: cannot import name 'APIClient'

I'm using Python 3.6.3 and testcontainers 2.1.0. Do you know what could make a problem?

Thx

Cannot import name 'MySqlContainer' from 'testcontainers'

I recently installed this library and also installed a few downstream dependencies that other users mentioned in old issues. I am running python 3.7.

I have the following file:

import pytest
import sqlalchemy
from testcontainers import MySqlContainer


@pytest.fixture(scope="session")
def mysql():
    mysql = MySqlContainer('mysql:5.7.17').start()
    engine = sqlalchemy.create_engine(mysql.get_connection_url())
    yield engine
    mysql.stop()


def test_with_mysql(mysql):
    result = mysql.execute("select version()")
    for row in result:
        assert row[0] == '5.7.17'

When I attempt to run pytest test_mysql.py, I get:

============================= test session starts ==============================
collected 0 items / 1 errors

==================================== ERRORS ====================================
________________________ ERROR collecting test_mysql.py ________________________
ImportError while importing test module '../test_mysql.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_mysql.py:4: in <module>
    from testcontainers import MySqlContainer
E   ImportError: cannot import name 'MySqlContainer' from 'testcontainers' 

Same with python3 test_mysql.py, I get:

ImportError: cannot import name 'MySqlContainer' from 'testcontainers' (/Users/<redacted>/Library/Python/3.7/lib/python/site-packages/testcontainers/__init__.py)

could not translate host name "localnpipe" to address: Unknown host

venv\lib\site-packages\wrapt\wrappers.py:605: in __call__
    return self._self_wrapper(self.__wrapped__, self._self_instance,
venv\lib\site-packages\testcontainers\core\waiting_utils.py:46: in wrapper
    raise TimeoutException(
E   testcontainers.core.exceptions.TimeoutException: Wait time exceeded 120 sec.
E                       Method _connect, args () , kwargs {}.
E                        Exception (psycopg2.OperationalError) could not translate host name "localnpipe" to address: Unknown host
E
E   (Background on this error at: http://sqlalche.me/e/e3q8)

Please help me out with this thing.

Missing [elasticsearch] extra

I see testcontainers/elasticsearch.py but

root@7d069e7f01e3:/# pip install 'testcontainers[elasticsearch]'
Collecting testcontainers[elasticsearch]
  Downloading testcontainers-3.0.3.tar.gz (13 kB)
  WARNING: testcontainers 3.0.3 does not provide the extra 'elasticsearch'
(...)

Can you help me understand if elasticsearch is supported or not?

HTTP Exception when stopping DockerContainer

Hi! Not sure if this is actually a question for this project or for docker. However, I give it a shot here and any help is much appreciated :)

Given

import testcontainers.core.container

def test_fixture(self):
  with testcontainers.core.container.DockerContainer("spotify/kafka") as kafka:
    kafka.start()
    kafka.stop()

I end up with an HTTP exception saying the Docker container does not exist when stopping it.

./python3.7/site-packages/testcontainers/core/container.py:60: in __exit__
    self.stop()
./python3.7/site-packages/testcontainers/core/container.py:54: in stop
    self.get_wrapped_container().remove(force=force, v=delete_volume)
./python3.7/site-packages/docker/models/containers.py:337: in remove
    return self.client.api.remove_container(self.id, **kwargs)
./python3.7/site-packages/docker/utils/decorators.py:19: in wrapped
    return f(self, resource_id, *args, **kwargs)
./python3.7/site-packages/docker/api/container.py:976: in remove_container
    self._raise_for_status(res)
./python3.7/site-packages/docker/api/client.py:248: in _raise_for_status
    raise create_api_error_from_http_exception(e)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
e = HTTPError('404 Client Error: Not Found for url: http+docker://localhost/v1.35/containers/4ac639801ca7c3f7ab923548e9ef801607e328805017e6c6a297f78445d60749?v=True&link=False&force=True')

def create_api_error_from_http_exception(e):
...
>       raise cls(e, response=response, explanation=explanation)
E       docker.errors.NotFound: 404 Client Error: Not Found ("No such container: 4ac639801ca7c3f7ab923548e9ef801607e328805017e6c6a297f78445d60749")

./python3.7/site-packages/docker/errors.py:31: NotFound

Docker compose example throws WebDriverException: Message: Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome, version: }

I tried to follow the Docker compose example in the documentation, but only get this error:

$ python test_selenium_grid.py
โ Creating network "testcontainers_default" with the default driver
Creating testcontainers_hub_1 ... done
Creating testcontainers_firefox_1 ... done
Creating testcontainers_chrome_1  ... done
Stopping testcontainers_chrome_1  ... done
Stopping testcontainers_firefox_1 ... done
Stopping testcontainers_hub_1     ... done
Removing testcontainers_chrome_1  ... done
Removing testcontainers_firefox_1 ... done
Removing testcontainers_hub_1     ... done
Removing network testcontainers_default
Traceback (most recent call last):
  File "test_selenium_grid.py", line 12, in <module>
    desired_capabilities=webdriver.DesiredCapabilities.CHROME)
  File "/home/arne/.virtualenvs/testcontainers/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
    self.start_session(capabilities, browser_profile)
  File "/home/arne/.virtualenvs/testcontainers/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/home/arne/.virtualenvs/testcontainers/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/home/arne/.virtualenvs/testcontainers/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome, version: }
Stacktrace:
    at org.openqa.grid.web.servlet.handler.RequestHandler.process (RequestHandler.java:118)
    at org.openqa.grid.web.servlet.DriverServlet.process (DriverServlet.java:85)
    at org.openqa.grid.web.servlet.DriverServlet.doPost (DriverServlet.java:69)
    at javax.servlet.http.HttpServlet.service (HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service (HttpServlet.java:790)
    at org.seleniumhq.jetty9.servlet.ServletHolder.handle (ServletHolder.java:865)
    at org.seleniumhq.jetty9.servlet.ServletHandler.doHandle (ServletHandler.java:535)
    at org.seleniumhq.jetty9.server.handler.ScopedHandler.handle (ScopedHandler.java:146)
    at org.seleniumhq.jetty9.security.SecurityHandler.handle (SecurityHandler.java:548)
    at org.seleniumhq.jetty9.server.handler.HandlerWrapper.handle (HandlerWrapper.java:132)
    at org.seleniumhq.jetty9.server.handler.ScopedHandler.nextHandle (ScopedHandler.java:257)
    at org.seleniumhq.jetty9.server.session.SessionHandler.doHandle (SessionHandler.java:1595)
    at org.seleniumhq.jetty9.server.handler.ScopedHandler.nextHandle (ScopedHandler.java:255)
    at org.seleniumhq.jetty9.server.handler.ContextHandler.doHandle (ContextHandler.java:1340)
    at org.seleniumhq.jetty9.server.handler.ScopedHandler.nextScope (ScopedHandler.java:203)
    at org.seleniumhq.jetty9.servlet.ServletHandler.doScope (ServletHandler.java:473)
    at org.seleniumhq.jetty9.server.session.SessionHandler.doScope (SessionHandler.java:1564)
    at org.seleniumhq.jetty9.server.handler.ScopedHandler.nextScope (ScopedHandler.java:201)
    at org.seleniumhq.jetty9.server.handler.ContextHandler.doScope (ContextHandler.java:1242)

These are the files that I use:

docker-compose.yml

version: '3.3'
services:
  hub:
    image: selenium/hub
    ports:
    - "4444:4444"

  firefox:
    image: selenium/node-firefox
    links:
    - hub
    expose:
    - "5555"

  chrome:
    image: selenium/node-chrome
    links:
    - hub
    expose:
    - "5555"

test_selenium_grid.py

from selenium import webdriver
from testcontainers.compose import DockerCompose

compose = DockerCompose(".")
with compose:
    host = compose.get_service_host("hub", 4444)
    port = compose.get_service_port("hub", 4444)
    driver = webdriver.Remote(
        command_executor=("http://{}:{}/wd/hub".format(host, port)),
        desired_capabilities=webdriver.DesiredCapabilities.CHROME)
    driver.get("http://automation-remarks.com")

and these are my software versions:

$ docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        2d0083d
 Built:             Fri Aug 16 14:19:38 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       2d0083d
  Built:            Thu Aug 15 15:12:41 2019
  OS/Arch:          linux/amd64
  Experimental:     false

$ docker-compose version
docker-compose version 1.20.1, build 5d8c71b
docker-py version: 3.1.4
CPython version: 3.6.4
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

$ pip freeze
attrs==19.3.0
blindspin==2.0.1
certifi==2019.11.28
chardet==3.0.4
colorama==0.4.3
crayons==0.3.0
docker==4.1.0
idna==2.8
importlib-metadata==1.2.0
more-itertools==8.0.2
packaging==19.2
pluggy==0.13.1
py==1.8.0
pyparsing==2.4.5
pytest==5.3.1
requests==2.22.0
selenium==3.141.0
six==1.13.0
testcontainers==2.5
urllib3==1.25.7
wcwidth==0.1.7
websocket-client==0.56.0
wrapt==1.11.2
zipp==0.6.0

Error running test inside a container

Hi,

I'm running my tests inside a container as part of the development pipeline. When I tried testcontainers, I got the following error:
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
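
For what it's worth, this ConnectionError usually indicates that the Docker client could not find the daemon's Unix socket inside the test container (it typically needs to be mounted in, e.g. with -v /var/run/docker.sock:/var/run/docker.sock). A minimal stdlib check, assuming Docker's conventional default socket path:

```python
import os

# Docker's conventional default Unix socket path; if it is absent inside
# the CI container, docker-py fails with exactly this FileNotFoundError.
sock = "/var/run/docker.sock"
print(os.path.exists(sock))  # False here would explain the error above
```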

Can someone please help?

Thanks

Error getting get_connection_url with DockerContainer

I ran pytest, but I got the error below.
Please help me.

/usr/local/lib/python3.7/site-packages/testcontainers/postgres.py:38: in get_connection_url
    port=self.port_to_expose)
/usr/local/lib/python3.7/site-packages/testcontainers/core/generic.py:42: in _create_connection_url
    host=self.get_container_host_ip(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <testcontainers.postgres.PostgresContainer object at 0x7fb906ef6438>

    def get_container_host_ip(self) -> str:
        # if testcontainers itself runs in docker, get the newly spawned
        # container's IP address from the docker "bridge" network
        if inside_container():
>           return self.get_docker_client().bridge_ip(self._container.id)
E           AttributeError: 'NoneType' object has no attribute 'id'

/usr/local/lib/python3.7/site-packages/testcontainers/core/container.py:76: AttributeError
from testcontainers.postgres import PostgresContainer
postgres_container = PostgresContainer("postgres:9.5")
sql_url = postgres_container.get_connection_url()

get_container_host_ip returning 0.0.0.0 instead of localhost

When run locally, we're having trouble using get_container_host_ip, as it is returning 0.0.0.0.

As the container IP is intended to be used by clients, surely it should be returning either localhost or 127.0.0.1? I'm not sure about other Python clients, but with asyncio ClientSession we're finding that it is unwilling to accept 0.0.0.0 as a valid server address.

I'd be happy to raise a PR, but again don't want to cause breakage if there's a specific reason for it to be as it is.
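
As a stopgap until the library changes, a tiny wrapper can remap the bind-all address to a client-usable loopback address. A sketch (the helper name is made up):

```python
def client_host(host: str) -> str:
    """Hypothetical helper: remap the bind-all address to loopback,
    since clients cannot connect *to* 0.0.0.0."""
    return "localhost" if host in ("0.0.0.0", "::") else host

print(client_host("0.0.0.0"))   # localhost
print(client_host("10.0.0.5"))  # 10.0.0.5
```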

Unable to provide a custom URL for selenium standalone image

Hi,

I was attempting to subclass testcontainers.selenium.BrowserWebDriverContainer and ran into an issue.

def __init__(self, capabilities, image=None):
    self.capabilities = capabilities
    if not image:
        self.image = get_image_name(capabilities)
    self.port_to_expose = 4444
    self.vnc_port_to_expose = 5900
    super(BrowserWebDriverContainer, self).__init__(image=self.image)
    self.with_exposed_ports(self.port_to_expose, self.vnc_port_to_expose)

This __init__ does not account for the case where the user specifies the image argument. As shown above, there needs to be an else branch for image, because self.image is never assigned when the user provides a custom URL for the image, so we hit a NameError.

I am using an image from my company's Docker registry as I cannot download outside images.

I'd like to submit a PR that can solve this.
The change I'd make is

def __init__(self, capabilities, image=None):
    self.capabilities = capabilities
    if image is None:
        self.image = get_image_name(capabilities)
    else:
        self.image = image
    self.port_to_expose = 4444
    self.vnc_port_to_expose = 5900
    super(BrowserWebDriverContainer, self).__init__(image=self.image)
    self.with_exposed_ports(self.port_to_expose, self.vnc_port_to_expose)

For reference see testcontainers.redis.RedisContainer; in particular: super(RedisContainer, self).__init__(image) where a user-supplied image will be passed to the underlying class DockerContainer.

Is it possible to configure `--shm-size=2g`?

I would like to use testcontainers to start a Solace PubSub+ container, along the lines of:

https://docs.solace.com/Solace-SW-Broker-Set-Up/Docker-Containers/Set-Up-Single-Linux-Container.htm

sudo docker run -d -p 8080:8080 -p 55555:55555 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=solace solace/solace-pubsub-standard

I've managed to expose the ports, and configure the environment variables, but is it possible to achieve the equivalent of --shm-size=2g?
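
One likely route: DockerContainer forwards extra keyword arguments to docker-py's containers.run(), which understands shm_size; whether your installed version forwards them is an assumption to verify. A sketch of the docker run flags above translated into run() keyword arguments:

```python
# Sketch: the `docker run` flags expressed as docker-py run() keyword
# arguments (names are docker-py's; forwarding them unchanged through
# DockerContainer is an assumption about the installed version).
run_kwargs = {
    "shm_size": "2g",
    "environment": {
        "username_admin_globalaccesslevel": "admin",
        "username_admin_password": "admin",
    },
    "ports": {"8080/tcp": 8080, "55555/tcp": 55555},
}
# container = DockerContainer("solace/solace-pubsub-standard", **run_kwargs)
print(run_kwargs["shm_size"])  # 2g
```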

test is stuck at waiting and then times out: postgres container

Can someone please help fix this issue? Not sure what to do. Container starts up fine. But my test is timing out.

py.test -s
========================================================================== test session starts ==========================================================================
platform darwin -- Python 3.8.3, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /Users/ypatel/postgres
collecting ... testing
collected 1 item                                                                                                                                                        

test_db_containers.py 
Pulling image localhost/opensource/postgres/postgresql12:12.3
Container started:  d21da7fa95
Waiting to be ready...
F

=============================================================================== FAILURES ================================================================================
_______________________________________________________________________ test_docker_run_postgress _______________________________________________________________________

    def test_docker_run_postgress():
        postgres_container = PostgresContainer("localhost/opensource/postgres/postgresql12:12.3")
    
>       with postgres_container as postgres:

test_db_containers.py:21: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/testcontainers/core/container.py:64: in __enter__
    return self.start()
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/testcontainers/core/generic.py:42: in start
    self._connect()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

wrapped = <bound method DbContainer._connect of <testcontainers.postgres.PostgresContainer object at 0x7fcb692b6700>>
instance = <testcontainers.postgres.PostgresContainer object at 0x7fcb692b6700>, args = (), kwargs = {}

    @wrapt.decorator
    def wrapper(wrapped, instance, args, kwargs):
        exception = None
        print(crayons.yellow("Waiting to be ready..."))
        with blindspin.spinner():
            for _ in range(0, config.MAX_TRIES):
                try:
                    return wrapped(*args, **kwargs)
                except Exception as e:
                    time.sleep(config.SLEEP_TIME)
                    exception = e
>           raise TimeoutException(
                """Wait time exceeded {0} sec.
                    Method {1}, args {2} , kwargs {3}.
                     Exception {4}""".format(config.MAX_TRIES,
                                             wrapped.__name__,
                                             args, kwargs, exception))
E           testcontainers.core.exceptions.TimeoutException: Wait time exceeded 120 sec.
E                               Method _connect, args () , kwargs {}.
E                                Exception (psycopg2.OperationalError) server closed the connection unexpectedly
E               This probably means the server terminated abnormally
E               before or while processing the request.
E           
E           (Background on this error at: http://sqlalche.me/e/e3q8)

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/testcontainers/core/waiting_utils.py:46: TimeoutException
======================================================================== short test summary info ========================================================================
FAILED test_db_containers.py::test_docker_run_postgress - testcontainers.core.exceptions.TimeoutException: Wait time exceeded 120 sec.
===================================================================== 1 failed in 124.21s (0:02:04) =====================================================================

Exposing bridged IP address to facilitate container-to-container communication?

I saw that there is a method called get_container_host_ip that returns the container's IP depending on whether it's running in a container or not. I didn't see a way to actually get the bridge IP address of a specific container. My use case: a Python integration test runs a localstack container that needs to communicate with multiple other containers, one being a database. The localstack container runs a lambda that gets passed an environment variable saying where the database is located. If I use get_container_host_ip to pass the IP to the lambda, it always returns localhost, because the Python tests run locally on my machine.

One possible workaround is running my tests inside a container so that it never uses localhost. I saw that testcontainers for Java can set the network, so you can use a network alias to communicate between containers, which you can't do on the default bridge network. Another workaround is to just expose a function that returns the bridge IP on the DockerContainer class, but that seems like a hack.

Are there any other alternative solutions?
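
Until such an API exists, the bridge IP can be read from the same structure `docker inspect` returns. A sketch with the lookup written as a pure function over sample data; the docker-py call that would supply the real data is left commented out:

```python
def bridge_ip(attrs: dict) -> str:
    """Extract a container's bridge-network IP from `docker inspect`-style data."""
    return attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]

# With docker-py, this data comes from the container's attrs, e.g.:
#   import docker
#   attrs = docker.from_env().containers.get(container_id).attrs
sample = {"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}
print(bridge_ip(sample))  # 172.17.0.2
```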

Supporting Ryuk container

So I'm using testcontainers from Java as well as Python. I noticed that containers sometimes stick around in Python, while in Java they always get cleaned up properly thanks to Ryuk.

Was it a deliberate decision not to use Ryuk in the Python project? Otherwise, supporting it would be really great, in order to keep the Docker environment of the host machine clean.

Feature Request: Localstack container

Localstack is a mock of AWS services. There is a Localstack available in the testcontainers-java library: https://github.com/testcontainers/testcontainers-java/blob/master/modules/localstack/src/main/java/org/testcontainers/containers/localstack/LocalStackContainer.java

It would be very useful if this were directly available in the Python module. My team's current workaround is to use the DockerCompose object with this docker-compose.yaml file:

version: '2.1'

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4597:4567-4597"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES="kinesis,lambda"
      - DEBUG=true
      - USE_SSL=true
      - DOCKER_HOST=unix:///var/run/docker.sock
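
In the meantime, roughly the same environment can be described without compose by collecting the settings a generic container would need. A sketch in plain Python data, ready to feed a generic DockerContainer via its with_env()/with_exposed_ports() builders; whether your installed release already ships a dedicated testcontainers.localstack module is worth checking:

```python
# Sketch: the compose file's settings as plain Python data, to be applied
# to a generic DockerContainer (applying them is left as a comment since
# it requires a running Docker daemon).
localstack_image = "localstack/localstack"
localstack_env = {
    "SERVICES": "kinesis,lambda",
    "DEBUG": "true",
    "USE_SSL": "true",
}
localstack_ports = list(range(4567, 4598))  # the 4567-4597 range, inclusive

# container = DockerContainer(localstack_image)
# for key, value in localstack_env.items():
#     container = container.with_env(key, value)
# container = container.with_exposed_ports(*localstack_ports)
print(len(localstack_ports))  # 31
```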

docker.errors.APIError: 500 Server Error: bind: address already in use

Hi! I'm using testcontainers (version 2.5), and when I load several containers from different modules the following exception occasionally appears:

e = HTTPError('500 Server Error: Internal Server Error for url: http+docker://localhost/v1.35/containers/92bb224c98058c18df93f94514bf353ac4a7548fac0f40dfb74cada80050f936/start',)
def create_api_error_from_http_exception(e):
    """
    Create a suitable APIError from requests.exceptions.HTTPError.
    """
    response = e.response
    try:
        explanation = response.json()['message']
    except ValueError:
        explanation = (response.content or '').strip()
    cls = APIError
    if response.status_code == 404:
        if explanation and ('No such image' in str(explanation) or
                            'not found: does not exist or no pull access'
                            in str(explanation) or
                            'repository does not exist' in str(explanation)):
            cls = ImageNotFound
        else:
            cls = NotFound
    raise cls(e, response=response, explanation=explanation)
E docker.errors.APIError: 500 Server Error: Internal Server Error ("driver failed programming external connectivity on endpoint dazzling_saha (de58082fa7fb5b1e69d05f7c692aa14368047e49a86673e5e7736fabf78e1c00): Error starting userland proxy: listen tcp 0.0.0.0:34927: bind: address already in use")

Each time, several tests try to bind the same four ports (different ports from run to run). I checked, and the containers are closed correctly at the end of the tests.
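
A common way to avoid this kind of collision is to let the OS pick a free port instead of hard-coding one. A minimal stdlib sketch (this sidesteps, rather than explains, the race in the report above):

```python
import socket

def free_port() -> int:
    """Ask the OS for a currently free TCP port by binding to port 0."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = free_port()
print(0 < port < 65536)  # True
```

Note that the port is only reserved while the socket is bound, so a small race window remains between picking it and handing it to the container.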

PostgresContainer start stuck at waiting to be ready in Gitlab CI using docker:dind

I am trying to set up a test environment in GitLab CI/CD, but starting a PostgresContainer gets stuck forever.

Version:

  • Python 3.8
  • SQLAlchemy==1.3.12
  • testcontainers==2.6.0

Samples from my .gitlab-ci.yml

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

test:
    stage: test
    image: python:3.8-slim
    services:
        - docker:19.03.8-dind
    script:
        - pytest tests -s -v --log-cli-level=debug

Logs from gitlab-runner

$ pytest tests -s -v --log-cli-level=debug
============================= test session starts ==============================
platform linux -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /builds/platform/account/venv/bin/python
cachedir: .pytest_cache
rootdir: /builds/platform/account
plugins: flask-1.0.0
collecting ... collected 2 items

tests/functional/test_login.py::test_simple_login 
-------------------------------- live log setup --------------------------------
DEBUG    docker.utils.config:config.py:21 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
DEBUG    docker.utils.config:config.py:28 No config file found
DEBUG    docker.utils.config:config.py:21 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
DEBUG    docker.utils.config:config.py:28 No config file found

-----Start postgres db container-----


Pulling image postgres:11.4-alpine
DEBUG    urllib3.connectionpool:connectionpool.py:221 Starting new HTTP connection (1): docker:2375
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "POST /v1.35/containers/create HTTP/1.1" 404 50
DEBUG    docker.auth:auth.py:41 Looking for auth config
DEBUG    docker.auth:auth.py:43 No auth config in memory - loading from filesystem
DEBUG    docker.utils.config:config.py:21 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
DEBUG    docker.utils.config:config.py:28 No config file found
DEBUG    docker.auth:auth.py:242 Looking for auth entry for 'docker.io'
DEBUG    docker.auth:auth.py:253 No entry found
DEBUG    docker.auth:auth.py:58 No auth config found
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "POST /v1.35/images/create?tag=11.4-alpine&fromImage=postgres HTTP/1.1" 200 None
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "GET /v1.35/images/postgres:11.4-alpine/json HTTP/1.1" 200 None
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "POST /v1.35/containers/create HTTP/1.1" 201 88
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "GET /v1.35/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/json HTTP/1.1" 200 None
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "POST /v1.35/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/start HTTP/1.1" 204 0

Container started:  f5813a1784
Waiting to be ready...
DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "GET /v1.35/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22id%22%3A+%5B%22f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e%22%5D%7D HTTP/1.1" 200 1101
DEBUG    sqlalchemy.pool.impl.QueuePool:base.py:643 Error on connect(): could not connect to server: Connection timed out
	Is the server running on host "172.18.0.2" and accepting
	TCP/IP connections on port 5432?

DEBUG    urllib3.connectionpool:connectionpool.py:428 http://docker:2375 "GET /v1.35/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22id%22%3A+%5B%22f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e%22%5D%7D HTTP/1.1" 200 1092
DEBUG    sqlalchemy.pool.impl.QueuePool:base.py:643 Error on connect(): could not connect to server: Connection timed out
	Is the server running on host "172.18.0.2" and accepting
	TCP/IP connections on port 5432?

Information from docker:dind

[root@gitlab-runner ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                     NAMES
f5813a178418        postgres:11.4-alpine   "docker-entrypoint.s…"   9 minutes ago       Up 9 minutes        0.0.0.0:32768->5432/tcp   musing_khorana

[root@gitlab-runner-build-1 ~]# docker inspect f5813a178418
[
    {
        "Id": "f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e",
        "Created": "2020-04-15T05:22:14.043451232Z",
        "Path": "docker-entrypoint.sh",
        "Args": [
            "postgres"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 271,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-04-15T05:22:14.631050518Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:5239fade3a90b73a10592a252289d6d916d050f39dafca0650ae14e878c23b0a",
        "ResolvConfPath": "/var/lib/docker/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/hostname",
        "HostsPath": "/var/lib/docker/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/hosts",
        "LogPath": "/var/lib/docker/containers/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e/f5813a1784187260cb46ee3f71edde72d189f00cc35308666a4df624a4bc680e-json.log",
        "Name": "/musing_khorana",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "5432/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/8c0f74be6b6273dfa92f746332d4c7b5a02183efa5bde3c40abf25f3329dbc59-init/diff:/var/lib/docker/overlay2/29efd662a010319d509a75483ff5650fa0ae46b41c98e374e4b3cc5e2c3e0167/diff:/var/lib/docker/overlay2/8723a0ffff99f7e5af3d94f9ebaaf155b9883836203690a908c64819718f8f66/diff:/var/lib/docker/overlay2/603fd62f94e002cdf27665e27b2ce6d0016da652eaf13838fcd66c0027c1171f/diff:/var/lib/docker/overlay2/fa06a053297fdcca4423d783ed6a8586bea4dd5aa9b9cb2631ad773f1cb1218d/diff:/var/lib/docker/overlay2/c72b4d9e0c149ec0062d2f8da3b23ff758ab58d0a55410e2c6697efe196ef9d8/diff:/var/lib/docker/overlay2/2a7c676edf9d77134aa92aa35b567935486e1efa37b13019f6ad6ce937824828/diff:/var/lib/docker/overlay2/bf218232ea043ee53d881f802b971e48327fd574505ed13b0be86cebbbd7f7a7/diff:/var/lib/docker/overlay2/fbd9de45ae6e8b7efc153206057e355beaf88899a462723c775db0b520fe882e/diff:/var/lib/docker/overlay2/903dc6646e74d53678e8d395df9bb080cceeb23bb945e628006d0650404dbdce/diff",
                "MergedDir": "/var/lib/docker/overlay2/8c0f74be6b6273dfa92f746332d4c7b5a02183efa5bde3c40abf25f3329dbc59/merged",
                "UpperDir": "/var/lib/docker/overlay2/8c0f74be6b6273dfa92f746332d4c7b5a02183efa5bde3c40abf25f3329dbc59/diff",
                "WorkDir": "/var/lib/docker/overlay2/8c0f74be6b6273dfa92f746332d4c7b5a02183efa5bde3c40abf25f3329dbc59/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "60cdc4a7b03157ab2f14b66e4620480f24b2562912912ac9ed5305b439eb13a5",
                "Source": "/var/lib/docker/volumes/60cdc4a7b03157ab2f14b66e4620480f24b2562912912ac9ed5305b439eb13a5/_data",
                "Destination": "/var/lib/postgresql/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "f5813a178418",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5432/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "POSTGRES_USER=test",
                "POSTGRES_PASSWORD=test",
                "POSTGRES_DB=test",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LANG=en_US.utf8",
                "PG_MAJOR=11",
                "PG_VERSION=11.4",
                "PG_SHA256=02802ddffd1590805beddd1e464dd28a46a41a5f1e1df04bab4f46663195cc8b",
                "PGDATA=/var/lib/postgresql/data"
            ],
            "Cmd": [
                "postgres"
            ],
            "Image": "postgres:11.4-alpine",
            "Volumes": {
                "/var/lib/postgresql/data": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "a1dbb4c5bd2422ce0cffa574e1b44408c2aa7fbdef720d4ebd18bce550d9fe50",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5432/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "32768"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/a1dbb4c5bd24",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "6e8581cc476a339fb31a62df1a6acd6ae98e2bee101c713caffe1bb02a6318b8",
            "Gateway": "172.18.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.18.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:12:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "8fe294d72eef7fd7fc47f6bad666ad0c70b395c3314b3a73545eac718c232b2e",
                    "EndpointID": "6e8581cc476a339fb31a62df1a6acd6ae98e2bee101c713caffe1bb02a6318b8",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:12:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

It seems to me that after starting the container, testcontainers tries to connect to container_ip:5432 rather than to the randomly mapped docker_dind_host:32768. Without docker:dind, it works perfectly.

Should get_container_host_ip() return the docker:dind host IP? And should get_exposed_port() return the mapped host port on docker:dind?
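The suggested behavior could be sketched roughly as follows. This is a hypothetical illustration, not the library's actual code; the function name container_host_ip and the localhost fallback are assumptions:

```python
import os
from urllib.parse import urlparse


def container_host_ip():
    """Sketch: prefer the host from DOCKER_HOST (e.g. the docker:dind
    service name) over the container-internal IP."""
    docker_host = os.environ.get("DOCKER_HOST", "")
    if docker_host.startswith("tcp://"):
        # DOCKER_HOST=tcp://docker:2375 -> "docker"
        return urlparse(docker_host).hostname
    return "localhost"
```

The exposed port would then come from the daemon's port mapping (32768 above) rather than the container-internal port.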

Import Error in examples

I found an import error in mysql_example.py:
from testcontainers import MySqlContainer
should be changed to
from testcontainers.mysql import MySqlContainer
Moreover, mysql:5.7.17 is deprecated on Docker Hub, so I changed it to mysql:5.7.28.

Changes can be found in #49

ImportError without optional dependencies

After #8, from testcontainers import PostgresContainer fails because __init__.py imports the selenium submodule.

If you want to keep the top-level imports, the submodules will need to catch ImportError and set the imported names to None. Then the classes that use them can check whether those names are None and raise an exception explaining the missing optional dependencies.

Option to build the docker containers before starting

When using this package to test a custom container, it would be useful to build the containers before starting the services on every test run, so that any new changes to the Dockerfiles are tested.

I think this could be a parameter to the DockerCompose constructor:
def __init__(self, filepath, compose_file_name="docker-compose.yml", build=False):

If True, it would add the --build option to the docker-compose command inside start()
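A minimal sketch of how the flag could affect the assembled command (the helper name compose_command is made up; the real implementation would build the command inside start()):

```python
def compose_command(compose_file_name="docker-compose.yml", build=False):
    """Assemble the docker-compose invocation; --build is appended when requested."""
    cmd = ["docker-compose", "-f", compose_file_name, "up", "-d"]
    if build:
        cmd.append("--build")
    return cmd
```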

ServerSelectionTimeoutError when running in gitlab-ci on gitlab.com

I am running testcontainers-python 2.6 with mongo:4.2.3. My tests work fine on my local system, but when they run on gitlab.com, all testcontainer-using tests fail with errors of the following type:

___________________ ERROR at teardown of test_create_new_run ___________________
    @pytest.fixture(scope="function")
    def database_connection():
        container = MongoDbContainer('mongo:4.2.3')
        container.start()
        print("MongoDB connection URL: " + container.get_connection_url())
        database = Database(MongoClient(container.get_connection_url()), "WES_Test")
        yield database
>       database._db_runs().drop()
tests/Database_test.py:14: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/collection.py:1103: in drop
    dbo.drop_collection(self.__name, session=session)
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/database.py:914: in drop_collection
    with self.__client._socket_for_writes(session) as sock_info:
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/mongo_client.py:1266: in _socket_for_writes
    server = self._select_server(writable_server_selector, session)
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/mongo_client.py:1253: in _select_server
    server = topology.select_server(server_selector)
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/topology.py:235: in select_server
    address))
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/topology.py:193: in select_servers
    selector, server_timeout, address)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <pymongo.topology.Topology object at 0x7fc6ab1a8c88>
selector = <function writable_server_selector at 0x7fc6ae81af28>, timeout = 30
address = None
    def _select_servers_loop(self, selector, timeout, address):
        """select_servers() guts. Hold the lock when calling this."""
        now = _time()
        end_time = now + timeout
        server_descriptions = self._description.apply_selector(
            selector, address, custom_selector=self._settings.server_selector)

        while not server_descriptions:
            # No suitable servers.
            if timeout == 0 or now > end_time:
                raise ServerSelectionTimeoutError(
>                   self._error_message(selector))
E               pymongo.errors.ServerSelectionTimeoutError: 172.18.0.2:27017: timed out
/opt/conda/envs/wesnake/lib/python3.7/site-packages/pymongo/topology.py:209: ServerSelectionTimeoutError

My gitlab-ci.yml is as follows:

cache:
  paths:
    - /opt/cache/pip
    - /opt/conda/pkgs
    - /root/.conda/pkgs

services:
  - docker:dind

variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2

before_script:
  - python -V              # For debugging
  - export PIP_CACHE_DIR="/opt/cache/pip"
  - conda init bash
  - source /root/.bashrc
  - conda info             # For debugging
  - conda config --show    # For debugging
  - conda env create -n wesnake -f environment.yaml
  - conda activate wesnake
  - pip install git+https://github.com/testcontainers/testcontainers-python.git

test:
  image: continuumio/miniconda3:latest
  script:
    - python -m pytest -v -s

run:
  image: continuumio/miniconda3:latest
  script:
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*.whl

My connection-setup is as follows

@pytest.fixture(scope="function")
def database_connection():
    container = MongoDbContainer('mongo:4.2.3')
    container.start()
    database = Database(MongoClient(container.get_connection_url()), "WES_Test")
    yield database
    database._db_runs().drop()

Containers are being started, and container.get_connection_url() produces URLs with varying container IP addresses of the form:

mongodb://172.18.0.$number:27017

I would appreciate any help with this!

Can't install with Python 3.7: getting UnicodeDecodeError

The blindspin dependency fails to install:

(base) ma13416:bi-pentaho-etl xxxxx$ /Users/xxxxx/anaconda3/bin/pip3.7 install testcontainers==2.5
Collecting testcontainers==2.5
  Using cached https://files.pythonhosted.org/packages/f8/37/2636e51aba7007eaba07f88198805f28cabb26c51d5245aaa0c457c1eae5/testcontainers-2.5.tar.gz
Collecting docker (from testcontainers==2.5)
  Using cached https://files.pythonhosted.org/packages/cc/ca/699d4754a932787ef353a157ada74efd1ceb6d1fc0bfb7989ae1e7b33111/docker-4.1.0-py2.py3-none-any.whl
Requirement already satisfied: wrapt in /Users/xxxxx/anaconda3/lib/python3.6/site-packages (from testcontainers==2.5) (1.10.11)
Collecting crayons (from testcontainers==2.5)
  Using cached https://files.pythonhosted.org/packages/f8/64/ab71c69db049a5f404f1f2c7627578f4b59aca55e6ad9d939721ce6466dd/crayons-0.3.0-py2.py3-none-any.whl
Collecting blindspin (from testcontainers==2.5)
  Using cached https://files.pythonhosted.org/packages/bd/e7/eb0db2558be572efc431a24b8561b0efdac16da73edfc83a2efee8cfeb1c/blindspin-2.0.1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/bv/7xtx5scd0mqf15w89873l5q80000gp/T/pip-install-qnz9dpnp/blindspin/setup.py", line 7, in <module>
        readme = f.read()
      File "/Users/xxxxx/anaconda3/lib/python3.6/encodings/ascii.py", line 26, in decode
        return codecs.ascii_decode(input, self.errors)[0]
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 189: ordinal not in range(128)
    

Dev environment setup

Some documentation on setting up the dev environment would be great.

Also, why not use a dependency management package like pipenv or poetry?

Use extras for optional dependencies

Currently, selenium, pymysql, and psycopg2 are all installed regardless of what I need.

Extras should be used, as sqlalchemy does.

That way, I can have testcontainers[postgresql,mysql] as a dependency if that's all I need.

I can submit a PR if you like.
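As a sketch, the extras could be grouped like this (the package names are illustrative, not the project's actual pins):

```python
# Hypothetical grouping of the optional dependencies for setup.py.
EXTRAS_REQUIRE = {
    "mysql": ["pymysql"],
    "postgresql": ["psycopg2-binary", "sqlalchemy"],
    "selenium": ["selenium"],
}

# In setup.py:
# setup(name="testcontainers", install_requires=["docker", "wrapt"],
#       extras_require=EXTRAS_REQUIRE)
```

pip install testcontainers[postgresql,mysql] would then pull in only those groups.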

Installation with poetry on python > 3.5

Hi,

It is currently impossible to install this library using poetry in a project whose Python requirement is > 3.5.

This is due to python_requires='~=3.5' in your setup.py: https://github.com/testcontainers/testcontainers-python/blob/master/setup.py#L64

Poetry fails with an error message like so:

[SolverProblemError]
The current project's Python requirement (>=3.7) is not compatible with some of the required packages Python requirement:
  - testcontainers requires Python ~=3.5

Because testcontainers (3.0.0) requires Python ~=3.5
 and no versions of testcontainers match >3.0.0,<4.0.0, testcontainers is forbidden.
So, because data-upload depends on testcontainers (^3.0.0), version solving failed.

Given that this library is tested against Python 3.5 through 3.8 on Travis, I think this Python requirement is incorrect.

Allow to pull images before running compose

Currently, DockerCompose.start only executes docker-compose up, which uses locally available images. In some cases it is beneficial to get the latest version of the image, that is, to run:

docker-compose pull && docker-compose up

The API for it may look like:

with DockerCompose('somefolder', pull=True) as compose:
    # use compose
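A rough sketch of what start() could run when pull=True (the helper compose_commands is hypothetical, shown only to illustrate the ordering):

```python
def compose_commands(compose_file_name="docker-compose.yml", pull=False):
    """Return the docker-compose invocations start() would run, in order."""
    base = ["docker-compose", "-f", compose_file_name]
    commands = []
    if pull:
        # Fetch the latest images before bringing the services up.
        commands.append(base + ["pull"])
    commands.append(base + ["up", "-d"])
    return commands
```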

[Documentation] Plugin system

The docs do not state whether it is possible to extend the package with functionality from separate packages (e.g. as hypothesis does). Is this possible?

MongoDbContainer on Windows 10 fails with getaddrinfo error

The following code fails with testcontainers-python 3.0.2:

>>> from testcontainers.mongodb import MongoDbContainer
>>> with MongoDbContainer('mongo:4.2.2') as mongo_:
...     db = mongo_.get_connection_client().test
...     db.test.insert_one({'foo': 'bar'})

with

Pulling image mongo:4.2.2
Container started:  d5ca66c443
Traceback (most recent call last):
  File "<input>", line 3, in <module>
  File "C:\...\venv\lib\site-packages\pymongo\collection.py", line 695, in insert_one
    self._insert(document,
  File "C:\...\venv\lib\site-packages\pymongo\collection.py", line 610, in _insert
    return self._insert_one(
  File "C:\...\venv\lib\site-packages\pymongo\collection.py", line 599, in _insert_one
    self.__database.client._retryable_write(
  File "C:\...\venv\lib\site-packages\pymongo\mongo_client.py", line 1490, in _retryable_write
    with self._tmp_session(session) as s:
  File "C:\Program Files\Python38\lib\contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "C:\...\venv\lib\site-packages\pymongo\mongo_client.py", line 1823, in _tmp_session
    s = self._ensure_session(session)
  File "C:\...\venv\lib\site-packages\pymongo\mongo_client.py", line 1810, in _ensure_session
    return self.__start_session(True, causal_consistency=False)
  File "C:\...\venv\lib\site-packages\pymongo\mongo_client.py", line 1763, in __start_session
    server_session = self._get_server_session()
  File "C:\...\venv\lib\site-packages\pymongo\mongo_client.py", line 1796, in _get_server_session
    return self._topology.get_server_session()
  File "C:\...\venv\lib\site-packages\pymongo\topology.py", line 482, in get_server_session
    self._select_servers_loop(
  File "C:\...\venv\lib\site-packages\pymongo\topology.py", line 208, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localnpipe:32775: [Errno 11001] getaddrinfo failed

Interestingly, the MongoDbContainer.get_connection_url() method returns a URL with the hostname localnpipe, which I suspect the mongo client does not understand. Using pymongo directly, I can connect to the created container using the hostname localhost.
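Based on that observation, a possible user-side workaround is to rewrite the hostname in the returned URL before handing it to pymongo. This is a sketch, not a fix in the library, and rewrite_host is a made-up helper:

```python
from urllib.parse import urlparse


def rewrite_host(url, host="localhost"):
    """Replace the hostname in a connection URL (first occurrence only)."""
    parsed = urlparse(url)
    return url.replace(parsed.hostname, host, 1)
```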

More options for consuming DSN?

For god only knows what reason, some older MySQL libraries (e.g. MySQLdb: http://mysql-python.sourceforge.net/MySQLdb.html) don't accept a full DSN, only individual connection parameters.

e.g.

conn = MySQLdb.connect(
    database.get_connection()
)

is illegal.

What it really wants is:

db=MySQLdb.connect(host="localhost",user="joebob",
                  passwd="moonpie",db="thangs",port=50505)

But the only parameters available right now are:

MYSQL_DATABASE
MYSQL_PASSWORD
MYSQL_ROOT_PASSWORD
MYSQL_USER

To flesh out what's missing, I think the following should also be added:

MYSQL_HOST
MYSQL_PORT

There may be more, but I think that's more than enough to satisfy most needs of MySQLdb.
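In the meantime, the individual parameters can be assembled from the accessors the core container already provides (get_container_host_ip() and get_exposed_port()). The helper below is a sketch; the credential values are whatever the container was configured with:

```python
def mysqldb_kwargs(container, user, passwd, db, container_port=3306):
    """Build the keyword arguments MySQLdb.connect expects from a running container."""
    return {
        "host": container.get_container_host_ip(),
        "port": int(container.get_exposed_port(container_port)),
        "user": user,
        "passwd": passwd,
        "db": db,
    }

# conn = MySQLdb.connect(**mysqldb_kwargs(mysql_container, "test", "test", "test"))
```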

reuse/singleton containers

Hello,
I was looking at testcontainers-java and they implemented a way to "reuse" containers.

Is there a way to do that in testcontainers-python?

v2.5 installed from PyPI does not have mongodb module

I've installed v2.5 via pip. Attempting to use the mongodb module like so:

from testcontainers.mongodb import MongoDbContainer

gives me:

ImportError: No module named mongodb

Looking in my virtual environment:

(env) sabsays-MacBook-Pro:st2-test-utils sabsay$ ls env/lib/python2.7/site-packages/testcontainers/
__init__.py       compose.pyc       elasticsearch.pyc google            nginx.py          oracle.pyc        redis.py
__init__.pyc      core              general.py        mysql.py          nginx.pyc         postgres.py       redis.pyc
compose.py        elasticsearch.py  general.pyc       mysql.pyc         oracle.py         postgres.pyc      selenium.py

it is indeed not there :-)

Blindspin Dependency breaks installation in CI/CD

Attempting to install testcontainers breaks due to blindspin.

All our builds have started failing now:

[2020-12-11T07:08:36.725] + pip3 install -r common/test-containers.txt
[2020-12-11T07:08:37.292] Collecting testcontainers==3.1.0
[2020-12-11T07:08:37.292]   Downloading testcontainers-3.1.0.tar.gz (15 kB)
[2020-12-11T07:08:37.866] Requirement already satisfied: wrapt in /usr/local/lib/python3.6/dist-packages (from testcontainers==3.1.0->-r common/test-containers.txt (line 1)) (1.12.1)
[2020-12-11T07:08:37.866] Collecting blindspin
[2020-12-11T07:08:37.866]   Downloading blindspin-2.0.1.tar.gz (2.2 kB)
[2020-12-11T07:08:38.130]     ERROR: Command errored out with exit status 1:
[2020-12-11T07:08:38.130]      command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4qbykvrm/blindspin_3af84003b4474b10bf0bafb7094e3fa8/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4qbykvrm/blindspin_3af84003b4474b10bf0bafb7094e3fa8/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-4b6cz5at
[2020-12-11T07:08:38.130]          cwd: /tmp/pip-install-4qbykvrm/blindspin_3af84003b4474b10bf0bafb7094e3fa8/
[2020-12-11T07:08:38.130]     Complete output (7 lines):
[2020-12-11T07:08:38.130]     Traceback (most recent call last):
[2020-12-11T07:08:38.130]       File "<string>", line 1, in <module>
[2020-12-11T07:08:38.130]       File "/tmp/pip-install-4qbykvrm/blindspin_3af84003b4474b10bf0bafb7094e3fa8/setup.py", line 7, in <module>
[2020-12-11T07:08:38.130]         readme = f.read()
[2020-12-11T07:08:38.130]       File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
[2020-12-11T07:08:38.130]         return codecs.ascii_decode(input, self.errors)[0]
[2020-12-11T07:08:38.130]     UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 189: ordinal not in range(128)

OracleDbContainer() raises NotImplementedError

Hello!

I think you need to add a _configure(self) method to your OracleDbContainer class.

Also, the documentation for OracleDbContainer does not initialize the oracle variable used in the with clause. It should read:

with OracleDbContainer() as oracle:
    e = sqlalchemy.create_engine(oracle.get_connection_url())
    result = e.execute("select 1 from dual")

Testing-containers and clickhouse-driver error:Unexpected EOF while reading bytes

I have these libraries installed:

testcontainers==2.5
clickhouse-driver==0.1.0

This code:

from testcontainers.core.generic import GenericContainer
from clickhouse_driver import Client


def test_docker_run_clickhouse():
    ch_container = GenericContainer("yandex/clickhouse-server")
    ch_container.with_bind_ports(9000, 9000)
    with ch_container as ch:

        client = Client(host='localhost')
        print(client.execute("SHOW TABLES"))


if __name__ == '__main__':
    test_docker_run_clickhouse()

I am trying to get a generic container running a ClickHouse database.

But it gives me: EOFError: Unexpected EOF while reading bytes.

I am using Python 3.5.2. How can I fix this?

Stackoverflow copy of the question: link
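One common cause of such EOF errors is connecting before the server inside the container is ready to accept connections. A hedged sketch of waiting for the mapped port before creating the client (the helper wait_for_port is made up, and a log-based wait strategy would be more robust):

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP connection to (host, port) succeeds or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # server not up yet; retry
    return False
```

In the example above, calling something like wait_for_port('localhost', 9000) before constructing Client would give the server time to start.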

Port mapping problems when using wormhole pattern

I've encountered a couple of problems when running in a Docker-in-Docker environment using the 'wormhole' pattern. In this context, our test is executed inside a container, and all other containers are created as siblings (alongside it).

The biggest problem is that the port mapping seems to be incorrect - get_exposed_port returns the original port when running inside a container:

def get_exposed_port(self, port) -> str:
    if inside_container():
        return port
    else:
        return self.get_docker_client().port(self._container.id, port)

In testcontainers-java we don't have such a branch and it works well in all contexts I'm aware of. I can't really see the reason for the branch.

I'd be keen to submit a pull request, but want to check whether there's a specific reason for the current code path in case we break a usage I'm not aware of!

testcontainers wait indefinitely without effects when running in Gitlab-CI

I'm running a test based on testcontainers-python in a container running in Gitlab CI.

Basically the CI spawns a docker-in-docker container next to the container running the tests. I then set the environment variable DOCKER_HOST so that it can be used by testcontainers.

But in practice the test just gets stuck, so I ran it with pytest -s to see the testcontainers logs and got:

tests/test.py
Pulling image postgres:10
Container started:  4a1115dc98
Waiting to be ready...

And then it gets stuck indefinitely (I waited 20 minutes).

For the record, testcontainers-java works perfectly with this same situation, so it should be possible to make this work.

I'm not sure how to get more debug information about what is happening, but shouldn't it at least time out?

Allow option to keep the containers alive

It would be nice to keep the Docker containers alive to speed up test runs. Currently, the containers are re-created on every test run and can't be kept alive, because this block:

def __del__(self):
    """
    Try to remove the container in all circumstances
    """
    if self._container is not None:
        try:
            self.stop()
        except:  # noqa: E722
            pass

removes the container when the instance is garbage-collected (when the program terminates), and I cannot override it because it also runs when the container is not used in a with block.

I don't want my MySQL container to be recreated every time. I run tests very frequently and wait a couple of seconds on each run, which is pretty annoying.

I can submit a small PR if that makes sense.
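A sketch of what such an opt-out could look like, using a stand-in class rather than the real DockerContainer (the keep_alive flag is a proposal, not an existing option):

```python
class ContainerSketch:
    """Minimal stand-in illustrating an opt-out flag checked in __del__."""

    def __init__(self, keep_alive=False):
        self.keep_alive = keep_alive
        self.stopped = False

    def stop(self):
        self.stopped = True

    def __del__(self):
        if self.keep_alive:
            return  # proposed: leave the container running for the next test run
        try:
            self.stop()
        except Exception:  # mirror the library's catch-all cleanup
            pass
```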
