faradayio / cage

Develop and deploy complex Docker applications
Home Page: http://cage.faraday.io
License: Apache License 2.0
Our current argument convention looks like this:
cage [options] run [exec options] <pod> [<command> [--] [<args>...]]
cage [options] exec [exec options] <pod> <service> <command> [--] [<args>..]
cage [options] shell [exec options] <pod> <service>
cage [options] test <pod> <service>
Nobody likes typing `<pod> <service>`, especially when service names are often globally unique. Additionally, the `run` command needs some way to specify a pod normally, but also to allow specifying a service. And `logs` (#7) raises some design issues of its own.
After some discussion with @dkastner, we'd like to propose three argument types:

- `<pod>`, for commands which can only accept a pod.
- `<service>`, for commands which can accept a service. This can be specified as either `$POD/$SERVICE`, or, if the service name is unique, just `$SERVICE`.
- `<pod_or_service>`, which looks for `$POD`, `$POD/$SERVICE`, and a unique `$SERVICE`, in that order.

With this design, our subcommands would look like:
cage [options] logs [log options] [<pods_or_services>..]
cage [options] run [exec options] <pod_or_service> [<command> [--] [<args>...]]
cage [options] exec [exec options] <service> <command> [--] [<args>..]
cage [options] shell [exec options] <service>
cage [options] test <service>
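The `<pod_or_service>` lookup order above can be sketched in a few lines. This is an illustrative Python sketch, not cage's actual implementation; the `pods` mapping (pod name → list of service names) and the `resolve` function are hypothetical:

```python
def resolve(name, pods):
    """Resolve a <pod_or_service> argument against a hypothetical
    `pods` mapping of pod names to lists of service names."""
    # 1. An exact pod name wins first.
    if name in pods:
        return ("pod", name)
    # 2. The explicit "$POD/$SERVICE" form.
    if "/" in name:
        pod, service = name.split("/", 1)
        if service in pods.get(pod, []):
            return ("service", pod, service)
        raise KeyError(f"no service named {name!r}")
    # 3. A bare service name, if it is unique across all pods.
    matches = [(p, s) for p, svcs in pods.items() for s in svcs if s == name]
    if len(matches) == 1:
        return ("service", *matches[0])
    raise KeyError(f"{name!r} is ambiguous or unknown")
```

With `pods = {"frontend": ["web", "worker"], "db": ["db"]}`, `resolve("web", pods)` falls through to rule 3 because `web` is unique, while `resolve("db", pods)` stops at rule 1 and returns the pod.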
Comments and feedback are welcome!
@emk my naive opinion is that this is needless bureaucracy. But I'm guessing you have a reason. What is it?
Not everybody will have one git repository per Docker image. For smaller projects, you may just want to keep the `Dockerfile` and supporting source code directly under `src` without using an external repo.
To make this work, we need to:

- `examples` containing a tiny Node.js project.
- `abs_path` plugin to handle relative paths in `build:`.
- `Repo` → `Repos` and `cage repo` → `cage source`.
- `.gitignore` and the `src` directory.
- `libraries.yml` for our release while we're at it. Maybe as `config/sources.yml`?

All the pretty tree views in the docs only show 1 service - `frontend`;
it would be illuminating to
Split out of #24.
Right now, our default `.gitignore` ignores the entire `src` directory. This isn't what we want for projects with in-tree source code. For these projects, we probably want a `src/.gitignore` that contains entries for all remote `Source` objects.
Should we handle this using a generator? Should we update it automatically by appending new entries as needed if they're not already there?
Original discussion here. We have two GoCD pipelines:

- One builds `example/base` and tags it with a build number. This build number is automatically provided to later pipeline stages as `$GO_DEPENDENCY_LABEL_BASE`.
- The other takes the `example/base` version specified in `$GO_DEPENDENCY_LABEL_BASE`, and uses it to build `example/app`.

So when we're making official builds, we want:
FROM example/base:$GO_DEPENDENCY_LABEL_BASE
When we're developing locally, it's safe to use:
FROM example/base:latest
One obvious way to implement this would be:
ARG GO_DEPENDENCY_LABEL_BASE=latest
FROM example/base:${GO_DEPENDENCY_LABEL_BASE}
We could then invoke `docker build` with `--build-arg GO_DEPENDENCY_LABEL_BASE=$GO_DEPENDENCY_LABEL_BASE` when running under GoCD, allowing us to lock to the specified base image.
Alternatively, `cage` could support using Handlebars templates in `Dockerfile.hbs`, as follows:
FROM example/base:{{default env.GO_DEPENDENCY_LABEL_BASE 'latest'}}
We could then automatically pre-process any `*.hbs` file before building.
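The pre-processing step could be as simple as a string substitution. Here is a minimal Python sketch, assuming only the `{{default env.VAR 'fallback'}}` helper form shown above; the helper name and syntax come from the example, not from an implemented feature:

```python
import os
import re

# Matches the proposed "{{default env.VAR 'fallback'}}" helper form.
HELPER = re.compile(r"\{\{default\s+env\.(\w+)\s+'([^']*)'\}\}")

def render(template, env=None):
    """Expand each helper to the environment value or its fallback."""
    if env is None:
        env = os.environ
    return HELPER.sub(lambda m: env.get(m.group(1), m.group(2)), template)
```

Locally, with `GO_DEPENDENCY_LABEL_BASE` unset, the example line renders to `FROM example/base:latest`; under GoCD it picks up the provided label.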
We might want this on all exported containers, I think. It's less useful when running under the app itself.
For folks using ecs-compose to deploy to ECS, this is moot, because ecs-compose ignores `restart` and marks all containers as `Essential`, which isn't entirely equivalent. I'm not sure what other deployment tools do.
We might want to have some sort of general-purpose mechanism for allowing the user to specify per-override service defaults. I'm not sure if this would be useful or confusing.
@emk here's another "I'm sure you've thought of this, but why . . ." question.
If I want to make changes to my `web` service, I feel like the natural incantation would be:
$ cage mount web
$ $EDITOR src/web
I shouldn't have to worry about the fact that the `web` code is provided, at the moment, by the faradayio/rails_hello GitHub repo. Perhaps one of my colleagues replaces `rails_hello` with `node_hello` someday; that's fine! What matters to me is that I'm checking out a service for inspection, local testing, and possibly amendment. `cage source mount rails_hello` makes me go through extra hoops to do that.
So why did you do it that way?
Update by @seamusabshere: s/frontend/web/g, as specified in the comments.
At the moment, `cage` really doesn't have any particularly good support for `extends`.
We could support this as follows:
extends:
file: "templates/webapp.yml"
service: "webapp"
Optionally, we could also have support for overriding templates:
pods/templates/webapp.yml
pods/targets/$TARGET/templates/webapp.yml
The trick would be to merge these two files before we use them as input to `extends`. This would allow us to handle cases where we use one `image` in development, and a different `image` in production.
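The merge itself is essentially a recursive map merge in which the target-specific file wins. A sketch, assuming dict-shaped YAML documents (the function name is hypothetical):

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`: override values win,
    and nested maps are merged key by key rather than replaced."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

So a `pods/targets/production/templates/webapp.yml` that only overrides `image` would leave the base template's ports, environment, etc. intact.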
From the help:
cage up # Restart application
This only restarts containers that are down, or whose configuration has changed. I propose a new command:
cage restart
cage restart my_app
`cage restart` would restart all containers, and `cage restart x` would restart the given container `x`.
@emk I think we discussed finding a new name for this, maybe `target` IIRC?
The various warnings that we print via `warn!` are easy to miss. There's a way to fix this, IIRC.
There are some corner cases in `docker-compose.yml` that we should handle better:

- `nil` instead of structs (especially under `volumes`)
- `environment` and `driver_opts` as `Map` structures

It's easier to write the ugly serde deserializers for all of these than it is to explain the restrictions to people.
Currently, as far as I know, the only way to put a service into image mode is to `rm`/`mv` it from the `src` dir. It would be nice to just `touch src/mything1/CAGE-IGNORE` to make cage pretend it's not in `src/`.
Found by @seamusabshere. It looks like `docker-compose` has especially weak `--entrypoint` support, and it doesn't treat an empty `--entrypoint` as "reset to default" like I somehow thought it did.
DEBUG:cage::command_runner: Running ["docker-compose", "-p", "fdy", "-f", "/home/emk/foo/.cage/pods/diamond.yml", "run", "--entrypoint", "", "diamond", "ls"]
Could not find command "ls".
Error: error running 'docker-compose -p fdy -f /home/emk/foo/.cage/pods/diamond.yml run --entrypoint diamond ls'
I'm not sure what to do here, other than document how to override `--entrypoint`. It seems like `ENTRYPOINT` causes lots of problems for a very minor syntactic gain.
I think we should bake more intelligence into `cage test`.

- Easy way to get a false negative: forgetting to run `cage --target test up`. Proposed solution: detect somehow if the test target isn't "up" (yes, this makes assumptions about target naming).
- Easy way to get a false positive: forgetting to `cage source mount` after you change a test. Proposed solution: warn if testing against an unmounted service.
There's an interesting pattern that shows up in several of our internal applications, especially those using Go or Rust. In particular, we have two Dockerfiles:

- `Dockerfile.build` contains a complete development toolchain, which may be several hundred megabytes in size, and which may rely on a full-fledged distro like Ubuntu. We use this image to build a statically-linked binary.
- `Dockerfile` contains a minimalistic Alpine image. We simply drop our static binary into this image, and the result is often less than 30MB in size.

We may want to make this pattern "official", and provide support for handling it automatically when we invoke `cage build`.
The design could be fairly simple. We might just add something like the following to an individual service:
labels:
  io.fdy.cage.build_outputs: "/app/my_static_binary"
...or perhaps instead to `config/sources.yml`:
myimage:
prebuild:
dockerfile: "Dockerfile.prebuild"
outputs:
my_static_binary: "/app/my_static_binary"
Then we could simply define a `Dockerfile.prebuild` in the source directory.
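One way `cage build` might drive this is a fixed sequence of docker invocations: build the toolchain image, copy each declared output out of a throwaway container, then build the final minimal image. This is purely a sketch; the command sequence, the `prebuild-tmp` container name, and the function itself are illustrative, not cage's actual behavior:

```python
def prebuild_commands(source_dir, image, outputs,
                      dockerfile="Dockerfile.prebuild"):
    """Return the docker commands a hypothetical prebuild step might run.
    `outputs` maps local file names to paths inside the builder image."""
    builder = f"{image}-prebuild"
    cmds = [
        # Build the heavyweight toolchain image.
        ["docker", "build", "-f", dockerfile, "-t", builder, source_dir],
        # Create (but don't start) a container so we can copy files out.
        ["docker", "create", "--name", "prebuild-tmp", builder],
    ]
    for local, container_path in outputs.items():
        cmds.append(["docker", "cp",
                     f"prebuild-tmp:{container_path}",
                     f"{source_dir}/{local}"])
    cmds += [
        ["docker", "rm", "prebuild-tmp"],
        # Build the final minimal image, which now sees the static binary.
        ["docker", "build", "-t", image, source_dir],
    ]
    return cmds
```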
Per appveyor
---- project::new_from_example_uses_example_and_target stdout ----
thread 'project::new_from_example_uses_example_and_target' panicked at 'assertion failed: output_dir.starts_with("target/test_output/hello-")', src\project.rs:511
note: Run with `RUST_BACKTRACE=1` for a backtrace.
Reported on Windows:
$ cage source ls rails_hello https://github.com/faradayio/rails_hello.git
Error: prefix not found
This is caused by abuse of path prefix stripping with no fallback in the `ls` routine.
(not that everybody uses rspec, but explaining how to use it would help people with other frameworks)
I have set `io.fdy.cage.test: "rspec"`.
$ cage test myservice
Attempt (via `run`) - failed:

$ cage run frontend myservice rspec spec/requests/campaigns_controller_spec.rb
Error: Can only `run` pods with 1 service, frontend has 10

Attempt (via `exec`) - failed:

$ cage exec myservice rspec spec/requests/campaigns_controller_spec.rb
ERROR: No container found for myservice_1
Error: error running 'docker-compose -p myapp -f /Users/seamus/code/myapp/.cage/pods/frontend.yml exec myservice rspec spec/requests/campaigns_controller_spec.rb'

Attempt (via `test` with just a spec path) - most logical and concise, but failed. You would expect it to just append the args to whatever is in `io.fdy.cage.test`:

$ cage test myservice spec/requests/campaigns_controller_spec.rb
ERROR: Cannot start service myservice: oci runtime error: exec: "spec/requests/campaigns_controller_spec.rb": permission denied
Error: error running 'docker-compose -p myapptest -f /Users/seamus/code/myapp/.cage/pods/frontend.yml run --rm --no-deps myservice spec/requests/campaigns_controller_spec.rb'

Attempt (via `test` with the full command) - works, but not super logical. Or maybe it is: I guess it is logical if you consider `test` to set up the env, but not necessarily run the command.
$ cage test myservice rspec spec/requests/campaigns_controller_spec.rb
Randomized with seed 28123
..................................................
See https://github.com/faradayio/cage.faraday.io
We have an internal tracking card in Trello for this issue as well.
We need these for our example projects and the `cage new` command. I thought this was already set up, but apparently not.
Suggested by @seamusabshere.
We need some way to remove all containers (and possibly images) associated with a project. `docker-compose rm` has the following semantics:
$ docker-compose rm --help
Removes stopped service containers.
By default, anonymous volumes attached to containers will not be removed. You
can override this with `-v`. To list all volumes, use `docker volume ls`.
Any data which is not in a volume will be lost.
Usage: rm [options] [SERVICE...]
Options:
-f, --force Don't ask to confirm removal
-v Remove any anonymous volumes attached to containers
-a, --all Obsolete. Also remove one-off containers created by
docker-compose run
We could support this with a `[<pods_or_services>...]` argument, as described in #8. The `--all` flag is now included.
The semantics of this command are:
if options.get('--all'):
log.warn(
'--all flag is obsolete. This is now the default behavior '
'of `docker-compose rm`'
)
one_off = OneOffFilter.include
all_containers = self.project.containers(
service_names=options['SERVICE'], stopped=True, one_off=one_off
)
stopped_containers = [c for c in all_containers if not c.is_running]
So this is internally doing something like:
docker ps --all --latest \
--filter label=com.docker.compose.project=$PROJECT \
--format '{{.Names}}'
In other words, I think we can use the thin, obvious wrapper over `docker-compose rm` and get the right results, including reasonable semantics for a bare `cage rm`.
Windows supports Docker! There are two potential use cases here:

- Using Windows as a development platform for Linux containers. Known issues: (1) `git` for Windows will check out text files using Windows line endings by default, which breaks container builds. (2) Windows can't represent the `+x` bit marking files as executable, which means that scripts built into the container will generally not be runnable.
- Running Windows containers. I have no idea how well `docker-compose` works for this scenario. If you have experience with this, please feel free to chime in!

In addition to the limitations mentioned above, `cage` itself has one limitation that affects use case (1) significantly: `cage` uses Rust's `std::path` type to represent paths both outside and inside containers. This works fine as long as the host OS and the container OS use similar file naming conventions. But if we want to support Linux containers on a Windows system, then we'll need to figure out how to handle path names for two different OSes in a single Rust application, and convert between them as needed.

If there's demand for `cage` on Windows, we'd love to see it happen. But this would almost certainly require contributions from somebody who uses Docker under Windows.
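The shape of that host-vs-container path conversion can be illustrated with Python's `pathlib` pure-path types, which model both naming conventions without touching the filesystem. This is a sketch of the idea only, not a proposal for the Rust code; the function name and mount-point arguments are hypothetical:

```python
from pathlib import PurePosixPath, PureWindowsPath

def host_to_container(host_path, mount_from, mount_to):
    """Map a Windows host path under `mount_from` to the corresponding
    POSIX path under `mount_to` inside a Linux container."""
    rel = PureWindowsPath(host_path).relative_to(PureWindowsPath(mount_from))
    # Re-join the relative components using POSIX separators.
    return str(PurePosixPath(mount_to).joinpath(*rel.parts))
```

In Rust this would mean keeping host paths as `std::path::PathBuf` but representing container paths with a separate, always-POSIX type.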
`starts_with("target/test_output/hello-")` test failure

If you have a library defined that gets mapped to a volume, any container that uses that volume will create `./src/library`, which means `src` is now owned by root. This breaks `cage repo clone`.
seamus@pirlo:~/code/conductor_test$ conductor new myapp
Generating: .gitignore
Generating: pods/common.env
Generating: pods/db.yml
Generating: pods/frontend.yml
Generating: pods/migrate.yml
Generating: common.env
Generating: common.env
Generating: common.env
seamus@pirlo:~/code/conductor_test$ cd myapp/
seamus@pirlo:~/code/conductor_test/myapp$ tree .
.
└── pods
├── common.env
├── db.yml
├── frontend.yml
├── migrate.yml
└── overrides
├── development
│ └── common.env
├── production
│ └── common.env
└── test
└── common.env
5 directories, 7 files
seamus@pirlo:~/code/conductor_test/myapp$ conductor repo list
rails_hello https://github.com/faradayio/rails_hello.git
seamus@pirlo:~/code/conductor_test/myapp$ conductor up
Creating network "myapp_default" with the default driver
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
8ad8b3f87b37: Pull complete
c5f4a4b21ab6: Pull complete
ba05db8b0a52: Pull complete
47b491cd21ab: Pull complete
d70407e3e64d: Pull complete
295c246dd69f: Pull complete
89bc4bb8bcfd: Pull complete
106ff44c5f06: Pull complete
867cd91e76bb: Pull complete
a227948d6d8c: Pull complete
fc2ec20bdaf0: Pull complete
Digest: sha256:1115f095242a490cb79561124a79125e25b0595d5ae47d44fab5b4c1cd10735f
Status: Downloaded newer image for postgres:latest
Creating myapp_db_1
WARNING: Found orphan containers (myapp_db_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Building web
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 61, in main
File "compose/cli/main.py", line 113, in perform_command
File "compose/cli/main.py", line 835, in up
File "compose/project.py", line 382, in up
File "compose/service.py", line 305, in ensure_image_exists
File "compose/service.py", line 727, in build
File "site-packages/docker/api/build.py", line 104, in build
File "site-packages/docker/utils/decorators.py", line 46, in inner
AttributeError: 'NoneType' object has no attribute 'update'
docker-compose returned -1
Error: Error running docker-compose
seamus@pirlo:~/code/conductor_test/myapp$ conductor up
myapp_db_1 is up-to-date
WARNING: Found orphan containers (myapp_db_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Building web
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 61, in main
File "compose/cli/main.py", line 113, in perform_command
File "compose/cli/main.py", line 835, in up
File "compose/project.py", line 382, in up
File "compose/service.py", line 305, in ensure_image_exists
File "compose/service.py", line 727, in build
File "site-packages/docker/api/build.py", line 104, in build
File "site-packages/docker/utils/decorators.py", line 46, in inner
AttributeError: 'NoneType' object has no attribute 'update'
docker-compose returned -1
Error: Error running docker-compose
Sometimes I want to ignore the entrypoint in a Dockerfile without editing a file.
Right now, `docker run` only works with task containers. This turns out to be too restrictive in practice.
CC @dkastner
I propose a command that watches a mounted source and restarts the container when files change.

This would, by default, watch all files in the dir, excluding what's in `.gitignore`. It would also be nice to configure it to watch a given file, directory, or glob; in node apps, this configuration would probably look something like `dist/**/*.js`. The restart itself would probably have to be debounced, because lots of file changes can happen at once.

Some containers can auto-reload their own source code, but that can be a lot of work to set up, unreliable, and even dangerous in certain languages and environments.
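The debouncing could use a resettable timer: each change event cancels the pending restart and schedules a new one, so a burst of writes triggers a single restart once things go quiet. A minimal Python sketch under those assumptions (names are illustrative; a real watcher would also filter paths against `.gitignore` or a configured glob):

```python
import threading

class DebouncedRestart:
    """Coalesce bursts of file-change events into one restart call,
    firing only after `delay` seconds without a new event."""

    def __init__(self, restart, delay=0.5):
        self._restart = restart
        self._delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def notify(self, _path=None):
        """Call this for every file-change event."""
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # reset the countdown
            self._timer = threading.Timer(self._delay, self._restart)
            self._timer.start()
```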
We can ask Docker what services are running. Some things we could do with this:

- Warn on `run` if the pods of `pod_type: "placeholder"` or `pod_type: "service"` associated with that environment aren't up?

Right now, to start a project for the first time, we need to run something like:
cage up db
cage run rake db:create
cage run rake db:migrate
cage up
The exact series of steps varies, and internally, we wrap cage in shell scripts so we don't get confused by which steps to take when. But it would be really nice to be able to call:
cage up --init
...and have cage do all that other stuff for us.
This could be accomplished by creating config/hooks/init.d/10_db_init.hook
and filling it with:
#!/bin/bash
set -euo pipefail
case "$POD" in
db)
cage run rake db:create
cage run rake db:migrate
;;
*)
# Do nothing
esac
The sneaky bit is that if we write:
cage --target=test up --init
...then the --target=test
option should be trivially available to cage inside the hook script, perhaps using an environment variable (to minimize the amount of magic):
cage $CAGE_OPTS run rake db:create
cage $CAGE_OPTS run rake db:migrate
Update: We have a new design in the thread below! These implementation tasks assume that design.

- Add `run_on_init` to `*.metadata.yml`.
- `CommandCompose`
- Add an `--init` argument to `up` and implement it.

Is it just that anything that is not docker-compose v2 standard must go in metadata?
unclear how to tell it to stop test placeholders, for example
Right now, the largest real-world `cage` project has a special helper script that wraps many common commands. In many cases, this wrapper could be replaced by a well-defined system of "hooks". Known use-cases include:

- Logging into a Docker registry before `docker-compose pull` (or `docker-compose push`). This login is only good for an hour and needs to be renewed frequently, so it's probably best if we just go ahead and run it before all affected commands.
- Generating `config/secrets.yml` using some organization-specific process, and updating it periodically.

In some cases, these hooks are going to want to call `cage` recursively, so we need to be careful not to stomp our `.cage/pods` directory.
One possibility is to implement hooks as shell scripts in `config/hooks`, possibly with `.d` directories to allow several scripts to trigger for the same hook. We might also consider a `cage script myscript` command, which could run oddball scripts that don't fit into the hook system. One advantage of a `script` command is that it would work in a subdirectory, which is a use case I've witnessed constantly with other developers.
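The `.d` convention could work like classic `run-parts`: execute every executable file in `config/hooks/<hook>.d/` in sorted order. A Python sketch under those assumptions (the directory layout matches the proposal above, but none of this is implemented):

```python
import os
import subprocess

def run_hooks(hook_name, project_dir, env=None):
    """Run each executable script in config/hooks/<hook_name>.d/
    in sorted order; return the names of the scripts that ran."""
    hook_dir = os.path.join(project_dir, "config", "hooks", hook_name + ".d")
    if not os.path.isdir(hook_dir):
        return []  # no hooks defined for this event
    ran = []
    for script in sorted(os.listdir(hook_dir)):
        path = os.path.join(hook_dir, script)
        if os.access(path, os.X_OK):
            subprocess.run([path], check=True, env=env)
            ran.append(script)
    return ran
```

Sorting by file name gives the usual `10_foo.hook`, `20_bar.hook` ordering control, and passing `env` is where something like `$CAGE_OPTS` could be threaded through to recursive `cage` calls.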
Some ideas:

- Update each source tree (`git pull`?), probably as `--ff-only` to keep life simple.

Larger cage projects often have some associated shell scripts, many of which call back into cage repeatedly to perform some task like database seeding with specific (larger than usual) data sets.
@dkastner and I have discussed having a `cage script` command:
cage script load-data full
This would call `$PROJ/scripts/load-data full`, but with a few wrinkles:

- It would pass the `--target` and `-p` options by default.
- It would set up `.cage/pods` and other cage state, perhaps sharing them between separate invocations of `cage`.

Would this offer enough value to be worth the complexity?
This only affects the binary distributions. It looks like our statically linked OpenSSL is still looking for certain cert-related files in musl directories.
strace -Ff -tt cage --override="staging" export export 2>&1 | tee cage.log
18:29:46.176436 stat("/usr/local/musl/ssl/certs/157753a5.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)
18:29:46.176560 stat("/usr/local/musl/ssl/certs/d6325660.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)
18:29:46.176826 stat("/usr/local/musl/ssl/certs/8d28ae65.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)
There's an easy workaround:
mkdir /usr/local/musl
ln -s /etc/ssl /usr/local/musl/ssl
The real fix will require a look at https://github.com/emk/rust-musl-builder to figure out what's going wrong.
There may be similar issues with Mac binaries. Again, this only affects the vault plugin.
@seamusabshere points out that we generate sample `pods/overrides/production/common.env` files containing:
RAILS_ENV=production
RACK_ENV=production
DATABASE_URL=postgres://postgres@db:5432/vault_integration_production
But it's a bad idea to do this, because some database configurations will include a password in the `DATABASE_URL`, and somebody might be tempted to just add it directly to this file instead of moving it to `config/secrets.yml` (or vault) where it belongs.
Fixing this requires a better template engine, which I've been trying to avoid:

- `config/secrets.yml` when running `conductor new`.
- `generate_new`? Hmm.

@emk I feel like a Cage repo could simply be:
_config.yml
common.env
frontend.yml
overrides/
development/
common.env
Why all the fuss with the `config/` dir and the demotion of the `pod.yml` files to a `pods/` dir?
After running the `mount` and `unmount` commands, automatically re-run `up`.
Potential drawbacks: We might start services that weren't running before.
Hello!
Thank you for creating and releasing this project to the world! 😄 😍
Was playing around and found that while the docker-compose v2 file format is supported (`version: 2`), it is not possible to define named volumes instead of locally mapped ones. Take the following change to the example `db.yml`:
version: "2"
services:
db:
image: "mini/postgresql:9.3"
volumes:
- "postgresql:/data"
volumes:
postgresql:
This results in the following error:
$ cage pull
Error: Error parsing /home/luis/code/_experiments/myproject/pods/db.yml: error reading file '/home/luis/code/_experiments/myproject/pods/db.yml'
Removing the `volumes` section results in the following error from docker-compose:
ERROR: Named volume "postgresql:/data:rw" is used in service "db" but no declaration was found in the volumes section.
Error: error running 'docker-compose -p myproject -f /home/luis/code/_experiments/myproject/.cage/pods/db.yml pull'
The following is the entire debug output:
DEBUG:cage: Arguments: ArgMatches { args: {}, subcommand: Some(SubCommand { name: "pull", matches: ArgMatches { args: {}, subcommand: None, usage: Some("USAGE:\n cage pull [<POD_OR_SERVICE>]") } }), usage: Some("USAGE:\n cage [OPTIONS] [SUBCOMMAND]") }
DEBUG:cage::pod: Parsing /home/luis/code/_experiments/myproject/pods/db.yml
Error: Error parsing /home/luis/code/_experiments/myproject/pods/db.yml: error reading file '/home/luis/code/_experiments/myproject/pods/db.yml'
stack backtrace:
0: 0x6b1e1d - backtrace::backtrace::trace::h35ac923e26dc1b92
1: 0x6b1da5 - backtrace::capture::Backtrace::new::h3a3d5e9defd7d407
2: 0x6b1115 - error_chain::make_backtrace::h886cfbd0fcafaf76
3: 0x4c0f48 - cage::pod::FileInfo::unnormalized::h790320cb80fb0307
4: 0x5481c6 - cage::pod::Pod::new::hacf75a1c55be9ec8
5: 0x5365e9 - cage::project::Project::from_dirs::h89ca20e1a0d8447f
6: 0x544690 - cage::project::Project::from_current_dir::h9cd0302df0062b3e
7: 0x403835 - cage::run::h5d2f081c3f60c97d
8: 0x4416f3 - cage::main::hc0adf28a6e0d6b0d
9: 0x917a68 - std::panicking::try::call::hca715a47aa047c49
10: 0x91f89b - __rust_try
11: 0x91f83e - __rust_maybe_catch_panic
12: 0x9176c1 - std::rt::lang_start::h162055cb2e4b9fe7
Platform details: Linux Ubuntu 15.10 (Willy) x64
Cage: 0.1.2
Docker: 1.12.2
Docker-Compose: 1.8.1
Please note that I'm also using docker-machine and not the native docker service. While it is still possible to mount local directories across VM/native, I was aiming to ease usage of existing data in some projects.
Please let me know if other details are required.
Once again, thank you for creating and making this tool available! ❤️ ❤️ ❤️
For things which don't need to do I/O, rayon looks really nice. But we also need some way to display output to the console from multiple `std::process::Command` objects in parallel.
sebk and @mbrubeck on Mozilla IRC #rust suggest that I probably want to use a channel:
<sebk> BufReader(stdout).lines().map(|l| channel.send(l))
<mbrubeck> .map is lazy; you probably want a `for` loop :)
See `ChildStdout`, etc., and wrap it in a `BufReader`, basically.
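The suggested channel pattern translates directly: one worker thread per child process pumps stdout lines into a shared queue, and the main thread drains the queue to the console. A Python sketch of the same idea (cage itself would do this in Rust with `ChildStdout`, `BufReader`, and a channel; the function here is illustrative):

```python
import subprocess
import threading
from queue import Queue

def stream_lines(cmd, tag, queue):
    """Spawn `cmd` and push (tag, line) pairs onto `queue` from a
    worker thread; a (tag, None) sentinel marks completion."""
    def pump():
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:  # equivalent of BufReader(stdout).lines()
            queue.put((tag, line.rstrip("\n")))
        proc.wait()
        queue.put((tag, None))  # this command is done
    t = threading.Thread(target=pump)
    t.start()
    return t
```

The main thread can then `queue.get()` in a loop and interleave output from several tagged commands without the writes stomping on each other.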
Environment:
ProductName: Mac OS X
ProductVersion: 10.10.5
BuildVersion: 14F2009
Docker version 1.12.3, build 6b644ec
docker-compose version 1.9.0, build 2585387
docker-machine version 0.8.1, build 41b3b25
cage 0.1.10
Cage binary was downloaded from the releases page and not compiled locally.
Short Error:
Error: error getting the project's state from Docker
could not connected to Docker at 'tcp://192.168.99.100:2376'
Docker SSL support was disabled at compile time
Full error with debug flags enabled:
$ cage status
DEBUG:cage: Arguments: ArgMatches { args: {}, subcommand: Some(SubCommand { name: "status", matches: ArgMatches { args: {}, subcommand: None, usage: Some("USAGE:\n cage status [<POD_OR_SERVICE>]") } }), usage: Some("USAGE:\n cage [OPTIONS] [SUBCOMMAND]") }
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/db.yml
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/frontend.yml
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/rake.yml
DEBUG:cage::plugins: vault generator was disabled at build time
DEBUG:cage::plugins: vault transform was disabled at build time
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/db.yml
DEBUG:cage::pod: Merging pod db with target development
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/frontend.yml
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/rake.yml
DEBUG:cage::pod: Merging pod frontend with target development
DEBUG:cage::pod: Merging pod rake with target development
Error: error getting the project's state from Docker
could not connected to Docker at 'tcp://192.168.99.100:2376'
Docker SSL support was disabled at compile time
stack backtrace:
0: 0x104e9ca1e - backtrace::backtrace::trace::hbb3527c862dcb156
1: 0x104e9cd2c - backtrace::capture::Backtrace::new::hb88c898ead0c41a6
2: 0x104e9c754 - error_chain::make_backtrace::hf6780bdef7b8a72b
3: 0x104daf805 - boondock::docker::Docker::connect_with_defaults::h6ee39577b20b0ecf
4: 0x104d3b7a5 - cage::runtime_state::RuntimeState::for_project::he7f15f11730daafc
5: 0x104c872dc - cage::run::hc3b00895a655f469
6: 0x104c91943 - cage::main::h1f9b8c28fd949fc8
7: 0x104f7178a - __rust_maybe_catch_panic
8: 0x104f6fbf6 - std::rt::lang_start::h538f8960e7644c80
This makes `ecs-compose` sad.