avast / gradle-docker-compose-plugin
Simplifies usage of Docker Compose for integration testing in Gradle environment.
License: MIT License
Is there any reason you don't support Java 7?
I compiled the jar with 1.7 and it seemed to work fine.
Hi,
I get the following issue when I try to pull a Docker container from a private registry on hub.docker.com:
Pulling repository docker.io/someprivateRepo/someImage Error: image someprivateRepo/someImage:latest not found
When I try that with the docker pull command I have no issues.
So my question: is there a restriction? Can I do that?
task finalComposeDown(type: com.avast.gradle.dockercompose.tasks.ComposeDown) {
    extension = new ComposeExtension(project, composeUp, composeDown)
}
integrationTest {
    dependsOn composeDown, composeUp
    finalizedBy finalComposeDown
}
I'm new to Gradle. I'm trying to make sure that a MySQL container is freshly created before running the integration tests.
Is there a better way to do this?
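For reference, a minimal sketch (assuming the plugin's documented stopContainers/removeContainers options and the standard composeUp/composeDown task names) of wiring that recreates containers on each run:

```groovy
// Sketch: with removeContainers enabled (the default), composeDown removes
// the containers, so the next composeUp recreates them from scratch.
dockerCompose {
    stopContainers = true    // call `docker-compose down` after the build
    removeContainers = true  // remove containers so they are recreated next time
}
integrationTest.dependsOn composeUp
integrationTest.finalizedBy composeDown
```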
The HOST network container is broken after #25 on Windows using boot2docker.
test.environment.get('WEB_HOST') == 'localhost'
| | | |
| | 172.17.1.100 false
| | 12 differences (0% similarity)
| | (172.17.1.100)
| | (localhost---)
Our CI system doesn't have docker-compose installed in the PATH.
Allowing the docker-compose executable location to be configured (keeping the PATH-relative one as the default) would let the build script download docker-compose and use the downloaded executable.
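A sketch of how the requested configuration could look, assuming an `executable` property on the extension; the download location is illustrative only:

```groovy
// Sketch: point the plugin at a downloaded binary instead of relying on PATH.
dockerCompose {
    executable = "$buildDir/tools/docker-compose"
}
```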
Hi! Thanks for all the great work on this plugin, it's the best one I could find for Gradle.
I'm having one issue with it though. I'm not very familiar with Gradle, so maybe I missed something here. I'm trying to stop Docker containers in case the integrationTest task fails.
task integrationTest(type: Test) {
    dependsOn composeUp, intTest
    finalizedBy composeDown
}
dockerCompose {
    stopContainers = true
}
With this setup, even if some test fails I see that composeDown is not executed, which leaves Docker containers running. Is there a way to stop them regardless of the task result? Thanks!
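For comparison, a minimal sketch using the plugin's own wiring helper (assuming the isRequiredBy method that appears elsewhere in this thread), which makes the task depend on composeUp and be finalized by composeDown so containers are stopped even when tests fail:

```groovy
// Sketch: isRequiredBy wires both dependsOn(composeUp) and
// finalizedBy(composeDown) for the given task, so teardown runs
// regardless of the test result.
dockerCompose.isRequiredBy(project.tasks.integrationTest)
```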
First, thank you for the great plugin.
Is there an easy way of saving the container output to a file? I see there is a captureContainersOutput option, but that prints the container output to stdout. Since our application can log a lot of things, our Gradle build output becomes a fire hose if I turn this on. But frequently we have tests fail in CI, and I would like to see if there were any errors in the application.
I currently have the following code to do this at the end of a build:
composeDown.doFirst {
    mkdir "$buildDir/logs/docker"
    project.exec { ExecSpec e ->
        extension.setExecSpecWorkingDirectory(e)
        e.environment = composeDown.extension.environment
        e.commandLine composeDown.extension.composeCommand('logs', '--no-color', 'app')
        e.standardOutput = new FileOutputStream("$buildDir/logs/docker/app.log")
    }
}
This is somewhat cumbersome though, and relies on the current implementation of the composeDown task. It also doesn't write the log files until the end of the build. Ideally, the captureContainersOutput option could take a file or output stream to make this easier.
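A sketch of what such an option could look like; the `containersOutputToDir` property name is hypothetical and not part of the current plugin API:

```groovy
// Hypothetical configuration sketch (containersOutputToDir is an
// illustrative name, not an existing option):
dockerCompose {
    captureContainersOutput = true
    containersOutputToDir = file("$buildDir/logs/docker")  // hypothetical
}
```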
I'm not able to use Docker 1.12 in production as it is not yet available on AWS Elastic Beanstalk. I'm thinking about adding an alternate way to determine a container's healthy state. Port checking does not work for me because the ports become ready immediately, way before the actual application startup is completed. Here are some options that could make it work:
WDYT?
Hey,
I'm trying to use your plugin on TeamCity CI, but I'm getting a Groovy exception like this:
Caused by: groovy.lang.MissingMethodException: No signature of method: java.util.LinkedHashMap$LinkedValues.head() is applicable for argument types: () values: []
Possible solutions: clear(), clear(), max(), find(), find(), add(java.lang.Object)
	at com.avast.gradle.dockercompose.tasks.ComposeUp.getServiceHost(ComposeUp.groovy:108)
	at com.avast.gradle.dockercompose.tasks.ComposeUp$getServiceHost.callCurrent(Unknown Source)
	at com.avast.gradle.dockercompose.tasks.ComposeUp.createServiceInfo(ComposeUp.groovy:60)
	at com.avast.gradle.dockercompose.tasks.ComposeUp$_loadServicesInfo_closure4.doCall(ComposeUp.groovy:53)
	at com.avast.gradle.dockercompose.tasks.ComposeUp.loadServicesInfo(ComposeUp.groovy:53)
	at com.avast.gradle.dockercompose.tasks.ComposeUp.up(ComposeUp.groovy:41)
	at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:63)
	at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:218)
	at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:211)
	at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:200)
	at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:585)
	at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:568)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)
It's about this line:
https://github.com/avast/docker-compose-gradle-plugin/blob/master/src/main/groovy/com/avast/gradle/dockercompose/tasks/ComposeUp.groovy#L108
and it seems like a Groovy incompatibility problem.
Versions:
docker-compose-gradle-plugin: 0.2.85
groovy: 2.4.4
Update:
working with:
docker-compose-gradle-plugin: 0.1.59
Do you have any samples of using this plugin?
I would like to add support for the --remove-orphans option in this plugin.
That option was added in version 1.7.0 of docker-compose back in April 2016. I have already made the change and am ready to prepare a PR if agreed.
@augi Thanks again for this very cool project.
Hi,
First of all, great plug-in, simple yet powerful. There are plenty of gradle-docker plugins; yours was the fifth or sixth I tried, and I decided not to look any further.
But I have a question: have you considered supporting multiple Compose configurations in one Gradle project?
Right now it's impossible to have two separate compose.ymls, one for, let's say, integration tests and a second for other integration tests.
I don't know if that's clear. The first set of tests are units that test, for example, DAO classes against a database that runs inside Docker and is started by your plug-in. The second set are full-blown integration tests that require a different compose.yml where the application being developed runs the way it will run in production.
Right now it's possible to accomplish this with multiple Gradle projects. But I've discovered that your code is quite flexible, and it is possible to do things like:
import com.avast.gradle.dockercompose.tasks.ComposeUp

task xxx(type: ComposeUp) {
    extension = new com.avast.gradle.dockercompose.ComposeExtension(project, xxx, null)
    extension.useComposeFiles = ['src/test/docker/docker4test-sample/docker-compose.yml']
}
Then I can have a second composeUp task with its own configuration.
So I think it's simple to make this work in a less disgusting way; the only problem is how to overcome the fact that the tasks need references to each other (Up and Down), but IMHO it's doable. Would you consider such a change? I'm asking because I don't know whether to prepare a PR or just do it for myself by wrapping your plug-in.
Wondering if it would be possible to add an option to remove only locally created images using something like
docker-compose down --rmi local
I could create a PR if there are no objections.
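Assuming this maps onto the removeImages option that appears elsewhere in this thread, the configuration might look like:

```groovy
// Sketch: "Local" would translate to `docker-compose down --rmi local`,
// removing only images without a custom tag.
dockerCompose {
    removeImages = "Local"
}
```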
Right now it just checks the presence of the DOCKER_HOST environment variable.
We could try to execute docker-machine and use the first or default machine, i.e. use the docker-machine ls -q and docker-machine ip default commands.
If I specify executable = '/usr/local/bin/docker-compose' then the docker-compose command works but the docker command fails (maybe docker inspect?). This confirms to me that the PATH is not being taken into account; instead it is trying to locate docker-compose in the project folder. Note that running docker-compose --version works from the same terminal in which I launch ./gradlew build. (Gradle version 3.4.)
dockerCompose.isRequiredBy(test)
dockerCompose {
useComposeFiles = ['src/test/docker/docker-compose-test.yml']
//executable = '/usr/local/bin/docker-compose'
// useComposeFiles = ['docker-compose.yml', 'docker-compose.prod.yml'] // like 'docker-compose -f <file>'
// captureContainersOutput = true // prints output of all containers to Gradle output - very useful for debugging
// stopContainers = false // doesn't call `docker-compose down` - useful for debugging
// removeContainers = false
// removeImages = "None" // Other accepted values are: "All" and "Local"
// removeVolumes = false
// projectName = 'my-project' // allow to set custom docker-compose project name (defaults to directory name)
// executable = '/path/to/docker-compose' // allow to set the path of the docker-compose executable if not present in PATH
// environment.put 'BACKEND_ADDRESS', '192.168.1.100' // Pass environment variable to 'docker-compose' for substitution in compose file
}
Caused by: net.rubygrapefruit.platform.NativeException: Could not start 'docker-compose'
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:68)
... 2 more
Caused by: java.io.IOException: Cannot run program "docker-compose" (in directory "/Users/myusername/subdir/subdir/testApp"): error=2, No such file or directory
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
... 4 more
Caused by: java.io.IOException: error=2, No such file or directory
... 5 more
You get the following error:
* What went wrong:
Execution failed for task ':composeUp'.
> A problem occurred starting process 'command 'docker-compose''
It seems that the environment that the Gradle plugin for IntelliJ uses does not have docker-compose on the PATH. Maybe it would be a good idea to add an option that allows manually configuring the path where docker-compose is located.
I usually use this plugin in a multi-module project where docker-compose.yml is located in the root project directory. The ComposeUp and ComposeDown tasks work fine (they rely on docker-compose behavior), but the healthcheck will not take place because serviceInfos will not be resolved (the plugin looks for docker-compose.yml in the current directory, not in rootProject.dir):
String[] composeFiles = extension.useComposeFiles.empty ? ['docker-compose.yml', 'docker-compose.override.yml'] : extension.useComposeFiles
composeFiles
    .findAll { project.file(it).exists() }
<...>
I had to debug the plugin to find out that I was missing:
dockerCompose {
useComposeFiles = ['../docker-compose.yml']
}
It would be nice to have a hint that the docker-compose file cannot be found in the expected location, or even better, to improve docker-compose.yml detection by mimicking docker-compose:
https://docs.docker.com/compose/reference/overview/
The -f flag is optional. If you don’t provide this flag on the command line, Compose traverses the working directory and its parent directories looking for a docker-compose.yml and a docker-compose.override.yml file.
If I add container_name to my docker-compose.yml file, then I am not able to call dockerCompose.servicesInfos.<service_name>.'<container_name>'. Looking at the code, the reason is due to this. When container_name is set, the pattern does not match. Is there a need for the pattern check? Maybe I am missing something; is there anything wrong with just using the value of inspection.Name, excluding the leading /?
Hi,
at the moment the health status check / wait function returns when the status switches from "starting" to something else. This status might also be "unhealthy" for a while before switching to "healthy". This can be observed for services that need a really long time to start (like Jetty). It can be worked around with unreasonably high thresholds in the health check definition, but it would be nice to see this fixed properly.
Cheers,
Christian
Hi,
It appears I am stuck in an endless wait on Mac if I have set the DOCKER_HOST environment variable to unix:///var/run/docker.dock.
My guess is that the getServiceHost function tries to get the hostname from the "URL" and yields null. The console output says:
Waiting for TCP socket on null:1234 of service 'foo' (Ambiguous method overloading for method java.net.Socket#<init>.
Cannot resolve which method to invoke for [null, class java.lang.Integer] due to overlapping prototypes between:
[class java.lang.String, int]
[class java.net.InetAddress, int])
Unsetting DOCKER_HOST seems to resolve the issue.
Best regards,
Emil
Use case: using the same YML file for the integration test environment and for manual QA verification. The YML file contains environment variables, and there is no way to pass them to the composeUp task so that docker-compose up/build can interpolate them.
I'm primarily interested in setting the HOST_IP variable that is discovered by the plugin as ServiceInfo.host, but the ability to pass any environment variable to docker-compose up/build is quite generic.
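A sketch of the configuration this would enable, using the environment map that appears elsewhere in this thread; the variable value is illustrative:

```groovy
// Sketch: values put into `environment` are passed to the docker-compose
// process, so they are available for variable substitution in the YML file.
dockerCompose {
    environment.put 'HOST_IP', '192.168.1.100'  // illustrative value
}
```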
Hey, thanks for the nice plugin. I'd like to see docker-compose pull functionality and would be happy to implement it and do a PR, just wanted your feedback to see if you agreed with the idea. It would be another task you could just add to your build lifecycle. The ComposeExtension class is holding quite a bit of data. Do you think it would make sense to have this as a separate task instead? Doing an analog function of 'build' at the same would probably also make sense.
thanks,
brian
Introduce a timeout when waiting for exposed ports.
If the wait fails, read the output using docker-compose logs $service and print it. This allows finding a bug quickly (typically, you will see an exception during service startup).
I wish to use jacocoagent with Docker as a tcpserver. The problem is that when I expose a port for the agent, the gradle-docker-compose-plugin fails to check its health. Is there any way to skip the health check for a specific port? The agent server itself works fine; I am able to run the jacoco:dump Ant task.
Right now, the tests are run against 1.6.2 only.
servicesInfos only looks for docker-compose.yml; if the file is named docker-compose.yaml (as mine was), servicesInfos is empty.
Docker 1.12 adds support for the HEALTHCHECK command in Dockerfile, so we can add support for it to the plugin. It means that we will read the status of running containers (using docker inspect). Waiting for open TCP ports remains.
This should also fix the problem with Docker for Mac, which opens all exposed ports immediately (even if the application is not running).
$ ./gradlew composeUp
:composeUp
mysql uses an image, skipping
default_mysql_1 is up-to-date
Will use localhost as host of mysql
Probing TCP socket on localhost:32776 of service 'mysql_1'
TCP socket on localhost:32776 of service 'mysql_1' is ready
BUILD SUCCESSFUL
Total time: 6.075 secs
$ docker-compose ps
Name Command State Ports
------------------------------
I would expect to see default_mysql_1 listed in the ps output.
The README example of getting information about a specific container is not entirely correct: def webInfo = dockerCompose.servicesInfos.web.'web_1'
One would first need to access the containerInfos map and get the web_1 key from that: def webInfo = dockerCompose.servicesInfos.web.containerInfos.'web_1'
Another solution, which would keep the shorthand format, is to use Groovy's propertyMissing method like so:

def propertyMissing(String name) {
    containerInfos[name]
}
Hi, what about adding support for docker-compose scale?
The docker-compose people are working on some changes in this area (docker/compose#1661), but until they get it sorted out it would be nice not to have to hardcode exec calls for it in Gradle...
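A hypothetical sketch of what a scale option could look like in the extension; the property name and shape are illustrative only, not an existing option:

```groovy
// Hypothetical sketch: would map to `docker-compose scale web=3 worker=2`.
dockerCompose {
    scale = [web: 3, worker: 2]  // illustrative property, not in the current API
}
```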
If a container has a custom name specified with the container_name option, exposeAsEnvironment() throws an exception with the following stack trace:
java.lang.NullPointerException: Cannot invoke method endsWith() on null object
at java_lang_String$endsWith$1.call(Unknown Source)
at com.avast.gradle.dockercompose.ComposeExtension$_exposeAsEnvironment_closure3$_closure9.doCall(ComposeExtension.groovy:68)
at com.avast.gradle.dockercompose.ComposeExtension$_exposeAsEnvironment_closure3.doCall(ComposeExtension.groovy:67)
at com.avast.gradle.dockercompose.ComposeExtension.exposeAsEnvironment(ComposeExtension.groovy:66)
at com.avast.gradle.dockercompose.ComposeExtension$exposeAsEnvironment$3.call(Unknown Source)
...
The issue is that the regex at line 124 of ComposeUp will not necessarily match a custom container name. This leads to a null value of instanceName, which eventually causes the exception.
I will submit a pull request fixing this issue momentarily.
For a Docker container with HOST networking, this is what docker inspect looks like:
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "52ca975025b511bfc42caf02c4777bdb9ac2570c95c429339364d80d388a2eea",
"EndpointID": "0c56d7ee3d9df21d762b0c99ef79b1252476e0cd2cf62a1af0ea588829f1a87e",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": ""
}
}
The current implementation parses the networks and uses the empty gateway string as the hostname. However, I think that in this case it should use localhost instead.
By far the most reliable docker plugin I've found. Great work!
I'm noticing the lack of a dockerComposePull task.
Is this something you would consider as a PR? I don't see any other way to ensure I'm using the latest docker images.
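Until such a task exists, a workaround sketch using a plain Gradle Exec task to run `docker-compose pull` before composeUp; the task name is illustrative:

```groovy
// Sketch: pull the latest images before the plugin starts the containers.
task composePull(type: Exec) {
    commandLine 'docker-compose', 'pull'
}
composeUp.dependsOn composePull
```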
Is it possible to capture the logs from the containers?
I often want to inspect the logs of a service, and right now to achieve that I need to quickly run
docker logs -f <service name>
in another terminal while the tests are running. Alternatively I can set removeContainers to false, but then I don't get a fresh container each time I run the test.
It happens when using dockerCompose.isRequiredBy(test). The problem is that the test task depends on both the compileTest and dockerComposeUp tasks, but with undefined order.
The plugin could ensure that dockerComposeUp is called as the last task before test. For example, we could iterate over all dependencies of the test task and call mustRunAfter or shouldRunAfter on them. We could do it for all tasks, or just for the compile tasks.
dockerCompose {
    captureContainersOutput = true
    dockerComposeWorkingDirectory = '/path/to/docker-compose.yml'
}
task('og-start') {
    group 'OG Platform'
    description 'Starts nextgen platform for the build.'
    doFirst {
        composeUp.up()
    }
    doLast {
        def webInfo = dockerCompose.servicesInfos.og.'og_1'
        // make changes to nginx proxy configuration based upon the port
    }
}

task('og-stop') {
    group 'OG Platform'
    description 'Stops OG container for the build.'
    doLast {
        composeDown.down()
    }
}
For some reason, calling the og-start task sometimes fails with the error below, even though the container is started:
This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.8/userguide/gradle_daemon.html
Exception in thread "pool-1-thread-1" org.gradle.process.internal.ExecException: Process 'command 'docker-compose'' finished with non-zero exit value 143
at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:367)
at org.gradle.process.internal.DefaultExecAction.execute(DefaultExecAction.java:31)
at org.gradle.api.internal.file.DefaultFileOperations.exec(DefaultFileOperations.java:165)
at org.gradle.api.internal.project.AbstractProject.exec(AbstractProject.java:803)
at org.gradle.api.internal.project.AbstractProject.exec(AbstractProject.java:799)
at org.gradle.api.Project$exec$1.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at com.avast.gradle.dockercompose.tasks.ComposeUp$1.run(ComposeUp.groovy:78)
at java.lang.Thread.run(Thread.java:748)
Any ideas?
Even this commit doesn't help to overcome the issue with immediately exposed ports on Windows and Mac.
Fortunately, I found this answer from Docker staff explaining how the binding exactly works. So we should just change the check: open the TCP connection and verify that it is not closed immediately (say, within a few milliseconds).
It would be nice to add the plugin to the global Gradle plugin registry so it can be included in a project with a single line, e.g.:
id 'com.avast.docker.compose' version '0.3.13'
Hey guys. It would be nice to be able to fine-tune some of the parameters/variables from ComposeExtension within the dockerCompose block of build.gradle.
As stated in the Tips section of README.md, "All properties in dockerCompose have meaningful default values so you don't have to touch it", but I think there are some scenarios where overriding would enhance the user experience.
If you start up a container that needs some more time (e.g. 20 seconds), then waitForOpenTcpPorts will output "Waiting for TCP socket on ${service.host}:${forwardedPort} of service '${service.name}' (${e.message})" about 20 times, because waitAfterTcpProbeFailure is set to just 1 second.
In this case it would be convenient if I could set waitAfterTcpProbeFailure to, for instance, 5 seconds within build.gradle.
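A sketch of the requested override; whether the property would accept a number of seconds or a java.time.Duration is an assumption here:

```groovy
// Hypothetical sketch: expose waitAfterTcpProbeFailure in the extension.
dockerCompose {
    waitAfterTcpProbeFailure = java.time.Duration.ofSeconds(5)  // assumed type
}
```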
Thanks for this great plugin.
First of all, thank you for the wonderful work you are doing.
I have a suggestion, though, for a possible extension of your plugin.
I am using it for running IT tests for an application deployed on WildFly. Unfortunately, WildFly exposes port 8080 way earlier than the application is actually ready to receive calls. What I suggest is to add something like
void waitForOkHttp(Map<String, String> urlsToWait, Iterable<ServiceInfo> servicesInfos) {
    Map<String, Integer> servicePorts = [:]
    servicesInfos.forEach { serviceInfo ->
        servicePorts.put(serviceInfo.name, serviceInfo.getTcpPort())
    }
    urlsToWait.forEach { service, u ->
        def urlStr = "http://localhost:" + servicePorts.get(service) + u
        logger.lifecycle("Waiting for service ${service} to return a successful response status on url: ${urlStr}")
        URL url = new URL(urlStr)
        while (true) {
            logger.info("Waiting...")
            try {
                HttpURLConnection http = (HttpURLConnection) url.openConnection()
                int statusCode = http.getResponseCode()
                if (statusCode < 400) {
                    logger.lifecycle("Service: ${service}, url: ${urlStr} replied with status ${statusCode}.")
                    break
                } else {
                    sleep(5000)
                }
            } catch (Exception ex) {
                sleep(5000)
            }
        }
    }
}
into the Up task and extend the configuration with a parameter:

waitForOkHttp = [
    "application-container": "/rest/ping"
]

This would allow "waiting" until the application running on the application server is actually ready to receive calls.
Hi,
if I understand the error message in my build output right, the plugin gets confused by a compose file starting with
version: '2'
or did I overlook something?
Executing task ':composeUp' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.
Starting process 'command 'docker-compose''. Working directory: /Users/src/server Command: docker-compose -p default build
:composeUp FAILED
$ docker-compose -p default build
mysql uses an image, skipping
With exit code 0
It looks like docker-compose supports these health statuses:
I configured a healthcheck in compose.yml:
healthcheck:
  test: ["CMD", "check-health.sh"]
  interval: 10s
  timeout: 5s
  retries: 3
If the container status changes from "health: starting" to "unhealthy", it would be great to fail the composeUp task (including shutting down the stack via composeDown). Otherwise the composeUp task will hang forever and the compose stack will stay alive.
Currently, running this scenario (with unhealthy containers) on a CI server again and again may finally kill the (Docker host) CI server.
It would be great to have this feature. What do you think?
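A hypothetical sketch of what such an option could look like; the flag name is illustrative only:

```groovy
// Hypothetical sketch: abort composeUp (and tear the stack down) as soon as
// any container reports "unhealthy" instead of waiting forever.
dockerCompose {
    failFastOnUnhealthyContainers = true  // illustrative, not an existing option
}
```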
Since more functionality is added with newer versions, and the old implementation sometimes changes, I think it might be handy to be able to test the plugin against multiple Docker versions...
Thanks for the great plugin.
I just wonder if there is a chance of getting a feature that allows selecting services from a docker-compose.yml (e.g. web, db vs. just web).
Currently, I would have to hack around it by specifying a collection of compose.yml files, which feels weird somehow.
Hi all,
I saw in other issues the discussion about support for docker-compose v2, and that serviceInfos is empty when the docker-compose file is named yaml instead of yml, but I think my case is different.
My docker-compose.yml is this
version: '2'
services:
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/var/lib/redis
    ports:
      - "6379:6379"
volumes:
  redis-data:
    driver: local
I also tried with version 1:
redis:
  image: redis:alpine
  ports:
    - "6379:6379"
My project is a multi-project build with this structure:
.
├── admin-panel
│ ├── build
│ └── src
├── api
│ ├── build
│ └── src
├── build
│ └── classes
├── core
│ ├── build
│ ├── lib
│ └── src
├── domain
│ ├── build
│ └── src
My Gradle build is this:
dockerCompose.isRequiredBy(test)
dockerCompose {
    //useComposeFiles = ['../docker-compose.yml'] // like 'docker-compose -f <file>'
    stopContainers = true // useful for debugging
    removeContainers = true
    // removeImages = "None" // Other accepted values are: "All" and "Local"
    // removeVolumes = false
    //environment.put 'BACKEND_ADDRESS', '192.168.1.100' // Pass environment variable to 'docker-compose' for substitution in compose file
    environment local_environment
}
...
test.doFirst {
    // exposes "${serviceName}_HOST" and "${serviceName}_TCP_${exposedPort}" environment variables
    // for example exposes "WEB_HOST" and "WEB_TCP_80" environment variables for service named `web` with exposed port `80`
    dockerCompose.exposeAsEnvironment(test)
    // exposes "${serviceName}.host" and "${serviceName}.tcp.${exposedPort}" system properties
    // for example exposes "web.host" and "web.tcp.80" system properties for service named `web` with exposed port `80`
    dockerCompose.exposeAsSystemProperties(test)
    dockerCompose.servicesInfos
    // get information about container of service `web` (declared in docker-compose.yml)
    def redis = dockerCompose.servicesInfos
    // pass host and exposed TCP port 80 as custom-named Java System properties
    println "REDIS-------------------"
    println redis
    systemProperty 'docker.redisIP', redis.getHost()
    //systemProperty 'myweb.port', webInfo.ports[80]
}
And when I run the tests like this:
./gradlew test --tests com..... -p api
I get this output:
:domain:compileJava UP-TO-DATE
:domain:processResources UP-TO-DATE
:domain:classes UP-TO-DATE
:domain:jar UP-TO-DATE
....
REDIS-------------------
{}
Also, to point out: for the line redis.getHost() I have also tried redis.host and got the same errors, and likewise with def redis = dockerCompose.servicesInfos.redis.
If you need further details let me know.
Regards
Alex.
First, thanks for your great work on this plugin, it's helped my project a lot.
I have a docker-compose config with a couple of services. I use plugin version 0.4.5.
On Mac, composeUp gives this output:
foo uses an image, skipping
elasticsearch uses an image, skipping
mailhog uses an image, skipping
postgres uses an image, skipping
bar uses an image, skipping
Creating network "e2eunversioned_default" with the default driver
Creating e2eunversioned_foo_1 ...
Creating e2eunversioned_mailhog_1 ...
Creating e2eunversioned_elasticsearch_1 ...
Creating e2eunversioned_postgres_1 ...
Creating e2eunversioned_foo_1
Creating e2eunversioned_elasticsearch_1
Creating e2eunversioned_mailhog_1
Creating e2eunversioned_elasticsearch_1 ... done
Creating e2eunversioned_bar_1 ...
Creating e2eunversioned_bar_1 ... done
Will use localhost as host of foo
Will use localhost as host of elasticsearch
Will use localhost as host of mailhog
Will use localhost as host of postgres
Will use localhost as host of bar
Probing TCP socket on localhost:32950 of service 'foo_1'
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
TCP socket on localhost:32950 of service 'foo_1' is ready
Probing TCP socket on localhost:32955 of service 'elasticsearch_1'
TCP socket on localhost:32955 of service 'elasticsearch_1' is ready
Probing TCP socket on localhost:32954 of service 'elasticsearch_1'
TCP socket on localhost:32954 of service 'elasticsearch_1' is ready
Probing TCP socket on localhost:32953 of service 'mailhog_1'
TCP socket on localhost:32953 of service 'mailhog_1' is ready
Probing TCP socket on localhost:32952 of service 'mailhog_1'
TCP socket on localhost:32952 of service 'mailhog_1' is ready
Probing TCP socket on localhost:32951 of service 'postgres_1'
TCP socket on localhost:32951 of service 'postgres_1' is ready
Probing TCP socket on localhost:32956 of service 'bar_1'
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
TCP socket on localhost:32956 of service 'bar_1' is ready
BUILD SUCCESSFUL
As you can see, my own artifacts, 'foo' and 'bar', take some time to start up. But everything works as expected; great.
On Linux, I see this:
:browser-testing:composeUp
foo uses an image, skipping
elasticsearch uses an image, skipping
mailhog uses an image, skipping
postgres uses an image, skipping
bar uses an image, skipping
Creating network "e2euntagged5695g1834d8a_default" with the default driver
Creating e2euntagged5695g1834d8a_foo_1 ...
Creating e2euntagged5695g1834d8a_postgres_1 ...
Creating e2euntagged5695g1834d8a_elasticsearch_1 ...
Creating e2euntagged5695g1834d8a_mailhog_1 ...
Creating e2euntagged5695g1834d8a_foo_1
Creating e2euntagged5695g1834d8a_elasticsearch_1
Creating e2euntagged5695g1834d8a_postgres_1
Creating e2euntagged5695g1834d8a_mailhog_1
Creating e2euntagged5695g1834d8a_foo_1 ... done
Creating e2euntagged5695g1834d8a_postgres_1 ... done
Creating e2euntagged5695g1834d8a_mailhog_1 ... done
Creating e2euntagged5695g1834d8a_bar_1 ...
Creating e2euntagged5695g1834d8a_bar_1
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of foo
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of elasticsearch
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of mailhog
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of postgres
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of bar
Probing TCP socket on 172.20.0.1:10381 of service 'foo_1'
TCP socket on 172.20.0.1:10381 of service 'foo_1' is ready
Probing TCP socket on 172.20.0.1:10383 of service 'elasticsearch_1'
TCP socket on 172.20.0.1:10383 of service 'elasticsearch_1' is ready
Probing TCP socket on 172.20.0.1:10382 of service 'elasticsearch_1'
TCP socket on 172.20.0.1:10382 of service 'elasticsearch_1' is ready
Probing TCP socket on 172.20.0.1:10386 of service 'mailhog_1'
TCP socket on 172.20.0.1:10386 of service 'mailhog_1' is ready
Probing TCP socket on 172.20.0.1:10385 of service 'mailhog_1'
TCP socket on 172.20.0.1:10385 of service 'mailhog_1' is ready
Probing TCP socket on 172.20.0.1:10384 of service 'postgres_1'
TCP socket on 172.20.0.1:10384 of service 'postgres_1' is ready
Probing TCP socket on 172.20.0.1:10387 of service 'bar_1'
TCP socket on 172.20.0.1:10387 of service 'bar_1' is ready
BUILD SUCCESSFUL
As you can see, here the services foo and bar seem to come up almost instantly, which is suspicious.
Further inspection of those containers shows that the plugin reports a false positive: I can see that foo and bar are not up yet. The ports are open, however, but there is never any reply, and the peer does not close the connection. On Mac, the peer closes the connection without a reply.
Do you have a clue what's going on?
This may very well be due to something I don't quite understand about docker networking, in which case I guess I should ask the question somewhere else.
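To illustrate why a connect-only probe can report ready prematurely, here is a minimal, self-contained Python sketch (the function names are hypothetical, not plugin code). A listener that accepts connections but never replies, much like a port-forwarding proxy in front of a container that isn't listening yet, still makes a handshake-only health check succeed:

```python
import socket
import threading

def start_silent_listener(port=0):
    """Accepts TCP connections but never sends any data, mimicking a
    proxy that forwards to a not-yet-ready container."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def accept_loop():
        try:
            conn, _ = srv.accept()
            threading.Event().wait(2)  # hold the connection open silently
            conn.close()
        except OSError:
            pass

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, actual_port

def connect_only_probe(port, timeout=1.0):
    """Succeeds as soon as the TCP handshake completes, even though
    the 'service' behind the port never responds."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False

srv, port = start_silent_listener()
print(connect_only_probe(port))  # True: a false positive
srv.close()
```

A probe that additionally waits for the service to say something (or at least distinguishes "connected but silent" from "connected and responsive") would avoid this particular false positive.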
Now, if someone uses Docker for Mac, they must manually set useNetworkGateway=false. It would be great to automatically detect the presence of Docker for Mac and, if it is present, automatically use localhost.
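Until such auto-detection exists, the manual workaround would presumably look something like this in the build script (a sketch assuming the useNetworkGateway property mentioned above is settable on the dockerCompose extension):

```gradle
dockerCompose {
    // Assumed property from the discussion above: on Docker for Mac the
    // container network gateway is not reachable from the host, so fall
    // back to localhost-based addressing instead.
    useNetworkGateway = false
}
```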
Issue #72 added support for the --remove-orphans option to both the ComposeUp and ComposeDown tasks. However, the ComposeDown implementation appends the option when building the 'stop' command rather than the 'down' command, which is where the option is actually supported.
I have prepared a PR that fixes this issue:
#78
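For reference, the docker-compose CLI itself accepts the flag on down but not on stop, which is why appending it while building the 'stop' command fails:

```shell
# valid: 'down' removes containers, networks, and (with the flag) orphans
docker-compose down --remove-orphans

# invalid: 'stop' only stops running containers and does not accept this flag
docker-compose stop --remove-orphans
```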
It would be really beneficial for us (ff :)) to have support for the -p parameter. It would solve a problem for us when multiple jobs run concurrently and try to create containers with the same name.
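The underlying docker-compose CLI already supports this via its -p (project name) option; concurrent jobs using the same compose file stay isolated when each picks a distinct project name, roughly like this (job names are illustrative):

```shell
# Each -p value prefixes container and network names, so two CI jobs
# running the same compose file concurrently do not collide.
docker-compose -p ci_job_a up -d
docker-compose -p ci_job_b up -d
```

Exposing that option through the plugin's extension would let each Gradle build pass its own project name.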