cage's Issues

Replace `<pod> <service>` with shorter naming conventions & rationalize

Our current argument convention looks like this:

cage [options] run [exec options] <pod> [<command> [--] [<args>...]]
cage [options] exec [exec options] <pod> <service> <command> [--] [<args>...]
cage [options] shell [exec options] <pod> <service>
cage [options] test <pod> <service>

Nobody likes typing <pod> <service>, especially when service names are often globally unique. Additionally, the run command needs some way to specify a pod normally, but to also allow the specification of a service. And logs (#7) raises some design issues of its own.

After some discussion with @dkastner, we'd like to propose three argument types:

  • <pod>, for commands which can only accept a pod.
  • <service>, for commands which can accept a service. This can be specified as either $POD/$SERVICE, or if the service name is unique, just $SERVICE.
  • <pod_or_service> looks for $POD, $POD/$SERVICE and a unique $SERVICE, in that order.

With this design, our subcommands would look like:

cage [options] logs [log options] [<pods_or_services>...]
cage [options] run [exec options] <pod_or_service> [<command> [--] [<args>...]]
cage [options] exec [exec options] <service> <command> [--] [<args>...]
cage [options] shell [exec options] <service>
cage [options] test <service>
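
To make the lookup rules concrete, here are some hypothetical invocations (pod and service names are invented for illustration):

cage run frontend            # `frontend` is a pod
cage shell web               # `web` is globally unique, so $SERVICE alone works
cage shell frontend/web      # ...and the $POD/$SERVICE form is equivalent
cage exec backend/db psql    # `db` is not unique, so it must be qualified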

Comments and feedback are welcome!

Provide a working Node+yarn example with in-tree Dockerfiles and source

Not everybody will have one git repository per Docker image. For smaller projects, you may just want to keep the Dockerfile and supporting source code directly under src without using an external repo.

To make this work, we need to:

  • Add a new subdirectory of examples containing a tiny Node.js project.
  • Fix the abs_path plugin to handle relative paths in build:.
  • Allow mounting local source directories into containers.
  • Rename `Repo` → `Repos` and `cage repo` → `cage source`.
  • Allow not mounting local source directories into containers (aka #18).
  • Moved to #27. Figure out what to do about our default .gitignore and the src directory.
  • Redesign libraries.yml for our release while we're at it. Maybe as config/sources.yml?
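
As a sketch, an in-tree project might be laid out like this (names hypothetical):

src/
└── node_hello/
    ├── Dockerfile
    ├── package.json
    └── index.js

...with the pod's service pointing at it via a relative build: path, something like build: "../src/node_hello" (the exact path convention is one of the things the abs_path fix above needs to settle).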

canonical examples are pathological

all the pretty tree views in the docs only show 1 service - frontend

it would be illuminating to

  1. have a few more
  2. have names that indicate they are specific to your project and not canonical - "alice", "bob" instead of "frontend"
  3. have a couple canonically named placeholders - postgres (or db), redis

What should we do about `.gitignore` and the `src` directory when we have in-tree source?

Split out of #24.

Right now, our default .gitignore ignores the entire src directory. This isn't what we want for projects with in-tree source code. For these projects, we probably want a src/.gitignore that contains entries for all remote Source objects.

Should we handle this using a generator? Should we update it automatically by appending new entries as needed if they're not already there?
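
For example, a generated src/.gitignore might contain one entry per remote source (repo names hypothetical):

# Managed by cage: ignore checked-out remote sources, keep in-tree code.
/rails_hello/
/node_hello/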

`Dockerfile` templating

What we would like upstream, in a perfect world

Original discussion here. We have two GoCD pipelines:

  • base: This builds example/base and tags it with a build number. This build number is automatically provided to later pipeline stages as $GO_DEPENDENCY_LABEL_BASE.
  • app: This takes the example/base version specified in $GO_DEPENDENCY_LABEL_BASE, and uses it to build example/app.

So when we're making official builds, we want:

FROM example/base:$GO_DEPENDENCY_LABEL_BASE

When we're developing locally, it's safe to use:

FROM example/base:latest

One obvious way to implement this would be:

ARG GO_DEPENDENCY_LABEL_BASE=latest
FROM example/base:${GO_DEPENDENCY_LABEL_BASE}

We could then invoke docker build with --build-arg GO_DEPENDENCY_LABEL_BASE=$GO_DEPENDENCY_LABEL_BASE when running under GoCD, allowing us to lock to a specified base image.

What we could do in cage

We could support using Handlebars templates in Dockerfile.hbs, as follows:

FROM example/base:{{default env.GO_DEPENDENCY_LABEL_BASE 'latest'}}

We could then automatically pre-process any *.hbs file before building.
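
To make the fallback behavior concrete, that template would render as (build number hypothetical):

FROM example/base:latest    # GO_DEPENDENCY_LABEL_BASE unset (local development)
FROM example/base:123       # GO_DEPENDENCY_LABEL_BASE=123 (under GoCD)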

Do we want to generate `restart: yes` by default (at least for export)?

We might want this on all exported containers, I think. It's less useful when running the app locally under cage itself.

For folks using ecs-compose to deploy to ECS, this is moot, because ecs-compose ignores restart and marks all containers as Essential, which isn't entirely equivalent. Not sure what other deployment tools do.

We might want to have some sort of general-purpose mechanism for allowing the user to specify per-override service defaults. I'm not sure if this would be useful or confusing.

Why mount the source instead of the service?

@emk here's another "I'm sure you've thought of this, but why . . ." question.

If I want to make changes to my web service, I feel like the natural incantation would be:

$ cage mount web
$ $EDITOR src/web

I shouldn't have to worry about the fact that web code is provided, at the moment, by the faradayio/rails_hello GitHub repo. Perhaps one of my colleagues replaces rails_hello with node_hello someday—that's fine! What matters to me is that I'm checking out a service for inspection, local testing, and possibly amendment. cage source mount rails_hello makes me go through extra hoops to do that.

So why did you do it that way?

update by @seamusabshere: s/frontend/web/g as specified in the comments

Implement `extends`

At the moment, cage really doesn't have any particularly good support for extends.

We could support this as follows:

extends:
  file: "templates/webapp.yml"
  service: "webapp"

Optionally, we could also have support for overriding templates:

  • pods/templates/webapp.yml
  • pods/targets/$TARGET/templates/webapp.yml

The trick would be to merge these two files before we use them as input to the extends. This would allow us to handle cases where we use one image in development, and a different image in production.
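
For example, the two layers might look like this (contents hypothetical):

pods/templates/webapp.yml:

webapp:
  image: "example/webapp"
  ports:
  - "3000:3000"

pods/targets/production/templates/webapp.yml:

webapp:
  image: "example/webapp:release"

Merging the second file over the first before applying extends would give production the release image while development keeps the default, with the shared ports configuration intact.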

cage restart

From the help:

cage up     # Restart application

This only restarts containers that are down, or whose configuration has changed. I propose a new command:

cage restart
cage restart my_app

cage restart would restart all containers, and cage restart my_app would restart a given container.

Deal with more `docker-compose.yml` corner cases

There are some corner cases in docker-compose.yml that we should handle better.

  • nil instead of structs (especially under volumes)
  • integers and booleans instead of string values in environment and driver_opts
  • variable interpolation in various Map structures

It's easier to write the ugly serde deserializers for all of these than it is to explain the restrictions to people.
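
For example, all of the following are legal docker-compose.yml and should round-trip through cage cleanly (values hypothetical):

services:
  db:
    image: "postgres"
    environment:
      DEBUG: 1              # integer instead of string
      ENABLED: true         # boolean instead of string
      PGHOST: "${DB_HOST}"  # interpolation inside a Map value

volumes:
  postgresql:               # nil instead of a struct
  data:
    driver_opts:
      size: 20              # integer instead of string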

docker-compose doesn't allow resetting entrypoint with `--entrypoint=""`

Found by @seamusabshere. It looks like docker-compose has especially weak --entrypoint support, and it doesn't treat an empty --entrypoint as "reset to default" like I somehow thought it did.

DEBUG:cage::command_runner: Running ["docker-compose", "-p", "fdy", "-f", "/home/emk/foo/.cage/pods/diamond.yml", "run", "--entrypoint", "", "diamond", "ls"]
Could not find command "ls".
Error: error running 'docker-compose -p fdy -f /home/emk/foo/.cage/pods/diamond.yml run --entrypoint  diamond ls'

I'm not sure what to do here, other than document how to override --entrypoint. It seems like ENTRYPOINT causes lots of problems for a very minor syntactic gain.

`cage test` caveats and possible affordances

i think we should bake more intelligence into cage test

Easy way to get a false negative: forgetting to run cage --target test up

Proposed solution: detect somehow if the test target isn't "up" (yes, this makes assumptions about target naming)

Easy way to get a false positive: forgetting to cage source mount after you change a test

Proposed solution: warn if testing against an unmounted service

`cage status` emphásis on the wrong sylláble

[screenshot: colorized `cage status` output, 2016-10-27]

i would argue that you should leave everything the default color EXCEPT exceptional things (EXIT, STOPPED, etc.)

for example, green RUNNING is the normal state, it should be unexceptional

and the pod/service names are hard to read

Multi-stage `Dockerfile` builds

There's an interesting pattern that shows up in several of our internal applications, especially those using Go or Rust. In particular, we have two Dockerfiles:

  • Dockerfile.build contains a complete development toolchain, which may be several hundred megabytes in size, and which may rely on a full-fledged distro like Ubuntu. We use this image to build a statically-linked binary.
  • Dockerfile contains a minimalistic Alpine image. We simply drop our static binary into this image, and the result is often less than 30MB in size.

We may want to make this pattern "official", and provide support for handling it automatically when we invoke cage build.

A possible design

The design could be fairly simple. We might just add something like the following to an individual service:

labels:
  io.fdy.cage.build_outputs: "/app/my_static_binary"

...or perhaps instead to config/sources.yml:

myimage:
  prebuild:
    dockerfile: "Dockerfile.prebuild"
    outputs:
      my_static_binary: "/app/my_static_binary"

Then we could simply define a Dockerfile.prebuild in the source directory.
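
For reference, here is roughly the manual sequence that cage build would be automating (image and path names hypothetical):

# 1. Build the toolchain image and compile the static binary.
docker build -t myimage-build -f Dockerfile.prebuild .

# 2. Copy the binary out of a throwaway container.
docker create --name extract myimage-build
docker cp extract:/app/my_static_binary ./my_static_binary
docker rm extract

# 3. Bake the binary into the minimal runtime image.
docker build -t myimage .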

"cage new" unit test broken on windows

Per appveyor

---- project::new_from_example_uses_example_and_target stdout ----
    thread 'project::new_from_example_uses_example_and_target' panicked at 'assertion failed: output_dir.starts_with("target/test_output/hello-")', src\project.rs:511
note: Run with `RUST_BACKTRACE=1` for a backtrace.

`Error: prefix not found` from `source ls`

Reported on Windows:

$ cage source ls
rails_hello               https://github.com/faradayio/rails_hello.git
Error: prefix not found

This is caused by abuse of path prefix stripping with no fallback in the ls routine.

clarify rspec usage (for example)

(not that everybody uses rspec, but explaining how to use it would help people with other frameworks)

i have set io.fdy.cage.test: "rspec"

no arguments - works as expected

$ cage test myservice

first try (run) - failed

$ cage run frontend myservice rspec spec/requests/campaigns_controller_spec.rb
Error: Can only `run` pods with 1 service, frontend has 10

second try (exec) - failed

$ cage exec myservice rspec spec/requests/campaigns_controller_spec.rb
ERROR: No container found for myservice_1
Error: error running 'docker-compose -p myapp -f /Users/seamus/code/myapp/.cage/pods/frontend.yml exec myservice rspec spec/requests/campaigns_controller_spec.rb'

third try (test) - most logical, concise, but failed

you would expect it just appends the args to whatever is in io.fdy.cage.test

$ cage test myservice spec/requests/campaigns_controller_spec.rb
ERROR: Cannot start service myservice: oci runtime error: exec: "spec/requests/campaigns_controller_spec.rb": permission denied
Error: error running 'docker-compose -p myapptest -f /Users/seamus/code/myapp/.cage/pods/frontend.yml run --rm --no-deps myservice spec/requests/campaigns_controller_spec.rb'

fourth try (test) - works, but not super logical, or maybe it is

i guess it is logical if you consider test to set up the env, but not necessarily run the command

$ cage test myservice rspec spec/requests/campaigns_controller_spec.rb

Randomized with seed 28123
..................................................

Subcommand for `rm`

We need some way to remove all containers (and possibly images) associated with a project. docker-compose rm has the following semantics:

$ docker-compose rm --help
Removes stopped service containers.

By default, anonymous volumes attached to containers will not be removed. You
can override this with `-v`. To list all volumes, use `docker volume ls`.

Any data which is not in a volume will be lost.

Usage: rm [options] [SERVICE...]

Options:
    -f, --force   Don't ask to confirm removal
    -v            Remove any anonymous volumes attached to containers
    -a, --all     Obsolete. Also remove one-off containers created by
                  docker-compose run

We could support this with a [<pods_or_services>...] argument, as described in #8. (Note that docker-compose's old --all behavior is now its default.)

The semantics of this command are:

        if options.get('--all'):
            log.warn(
                '--all flag is obsolete. This is now the default behavior '
                'of `docker-compose rm`'
            )
        one_off = OneOffFilter.include

        all_containers = self.project.containers(
            service_names=options['SERVICE'], stopped=True, one_off=one_off
        )
        stopped_containers = [c for c in all_containers if not c.is_running]

So this is internally doing something like:

docker ps --all --latest \
             --filter label=com.docker.compose.project=$PROJECT \
             --format '{{.Names}}'

In other words, I think we can use the thin, obvious wrapper over docker-compose rm and get the right results, including reasonable semantics for a bare cage rm.
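
Under that design, usage might look something like this (a sketch only; flag pass-through is an assumption):

cage rm                   # remove all stopped containers for the project
cage rm frontend          # remove stopped containers for one pod
cage rm -v frontend/web   # also remove anonymous volumes for one service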

What about Windows?

Windows supports Docker! There are two potential use cases here:

  1. Linux containers. You can run Linux containers using Docker for Windows. This has some pretty serious limitations, mostly related to working with container source code. In particular, two problems come up frequently: (1) git for Windows will check out text files using Windows line endings by default, which breaks container builds. (2) Windows can't represent the +x bit marking files as executable, which means that scripts built into the container will generally not be runnable.
  2. Windows containers. Windows now supports Windows guests! I've never used this, but it sounds interesting. To support it, we'd need contributions from somebody who actually uses Windows containers for real projects, and we'd need to verify that docker-compose works for this scenario. If you have experience with this, please feel free to chime in!

In addition to the limitations mentioned above, cage itself has one limitation that affects use case (1) significantly:

  • cage uses Rust's std::path types to represent paths both outside and inside containers. This works fine as long as the host OS and the container OS use similar file naming conventions. But if we want to support Linux containers on a Windows host, then we'll need to figure out how to handle path names for two different OSes in a single Rust application, and convert between them as needed.

If there's demand for cage on Windows, we'd love to see it happen. But this would almost certainly require contributions from somebody who uses Docker under Windows.

Initial Windows port

  • Figure out emk/compose_yml#1
  • Fix the starts_with("target/test_output/hello-") test failure

Don't let docker create the src directory

If you have a library defined that gets mapped to a volume, any container that uses that volume will create ./src/library, which means src ends up owned by root. This breaks cage repo clone.

AttributeError: 'NoneType' object has no attribute 'update'

seamus@pirlo:~/code/conductor_test$ conductor new myapp
Generating: .gitignore
Generating: pods/common.env
Generating: pods/db.yml
Generating: pods/frontend.yml
Generating: pods/migrate.yml
Generating: common.env
Generating: common.env
Generating: common.env

seamus@pirlo:~/code/conductor_test$ cd myapp/

seamus@pirlo:~/code/conductor_test/myapp$ tree .
.
└── pods
    ├── common.env
    ├── db.yml
    ├── frontend.yml
    ├── migrate.yml
    └── overrides
        ├── development
        │   └── common.env
        ├── production
        │   └── common.env
        └── test
            └── common.env

5 directories, 7 files

seamus@pirlo:~/code/conductor_test/myapp$ conductor repo list
rails_hello               https://github.com/faradayio/rails_hello.git

seamus@pirlo:~/code/conductor_test/myapp$ conductor up
Creating network "myapp_default" with the default driver
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
8ad8b3f87b37: Pull complete
c5f4a4b21ab6: Pull complete
ba05db8b0a52: Pull complete
47b491cd21ab: Pull complete
d70407e3e64d: Pull complete
295c246dd69f: Pull complete
89bc4bb8bcfd: Pull complete
106ff44c5f06: Pull complete
867cd91e76bb: Pull complete
a227948d6d8c: Pull complete
fc2ec20bdaf0: Pull complete
Digest: sha256:1115f095242a490cb79561124a79125e25b0595d5ae47d44fab5b4c1cd10735f
Status: Downloaded newer image for postgres:latest
Creating myapp_db_1
WARNING: Found orphan containers (myapp_db_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Building web
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 61, in main
  File "compose/cli/main.py", line 113, in perform_command
  File "compose/cli/main.py", line 835, in up
  File "compose/project.py", line 382, in up
  File "compose/service.py", line 305, in ensure_image_exists
  File "compose/service.py", line 727, in build
  File "site-packages/docker/api/build.py", line 104, in build
  File "site-packages/docker/utils/decorators.py", line 46, in inner
AttributeError: 'NoneType' object has no attribute 'update'
docker-compose returned -1
Error: Error running docker-compose

seamus@pirlo:~/code/conductor_test/myapp$ conductor up
myapp_db_1 is up-to-date
WARNING: Found orphan containers (myapp_db_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Building web
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 61, in main
  File "compose/cli/main.py", line 113, in perform_command
  File "compose/cli/main.py", line 835, in up
  File "compose/project.py", line 382, in up
  File "compose/service.py", line 305, in ensure_image_exists
  File "compose/service.py", line 727, in build
  File "site-packages/docker/api/build.py", line 104, in build
  File "site-packages/docker/utils/decorators.py", line 46, in inner
AttributeError: 'NoneType' object has no attribute 'update'
docker-compose returned -1
Error: Error running docker-compose

cage watch

I propose a command that watches a mounted source and restarts the container when files change.

This would, by default, watch all files in the dir, excluding what's in .gitignore

It would also be nice to configure it to watch a given file, directory, or glob. In node apps, this configuration would probably look something like dist/**/*.js
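
One hypothetical home for that configuration would be a service label, following the io.fdy.cage.* convention used elsewhere (the watch label itself is invented here):

services:
  web:
    labels:
      io.fdy.cage.watch: "dist/**/*.js"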

The restart itself would probably have to be debounced because lots of file changes can happen at once.

Some containers can auto-reload their own source code, but that can be a lot of work to set up, and it can be unreliable or even dangerous in certain languages and environments.

Query status of running services and use appropriately

We can ask Docker what services are running. Some things we could do with this:

  • Print status of containers in pretty colors.
  • Warn on run if the pods with pod_type: "placeholder" or pod_type: "service" associated with that environment aren't up?

`cage up --init` to simplify first-time startup

Right now, to start a project for the first time, we need to run something like:

cage up db
cage run rake db:create
cage run rake db:migrate
cage up

The exact series of steps varies, and internally, we wrap cage in shell scripts so we don't get confused by which steps to take when. But it would be really nice to be able to call:

cage up --init

...and have cage do all that other stuff for us.

How it could work

This could be accomplished by creating config/hooks/init.d/10_db_init.hook and filling it with:

#!/bin/bash

set -euo pipefail
case $POD in
    db)
        cage run rake db:create
        cage run rake db:migrate
        ;;
    *)
        # Do nothing
esac

The sneaky bit is that if we write:

cage --target=test up --init

...then the --target=test option should be trivially available to cage inside the hook script, perhaps using an environment variable (to minimize the amount of magic):

cage $CAGE_OPTS run rake db:create
cage $CAGE_OPTS run rake db:migrate

Implementation tasks

Update: We have a new design in the thread below! These implementation tasks assume that design.

  • Add run_on_init to *.metadata.yml
  • Refactor CommandCompose
  • Add --init argument to up and implement.
  • Get running container information from Docker.
  • Poll for open port on container (see babariviere/port_scanner-rs#1).

Hooks to reduce the need for per-project helper scripts

Right now, the largest real-world cage project has a special helper script that wraps many common commands. In many cases, this wrapper could be replaced by a well-defined system of "hooks". Known use-cases include:

  • Logging into Amazon ECR before running docker-compose pull (or docker-compose push). This login is only good for an hour and it needs to be renewed frequently, so it's probably best if we just go ahead and run it before all affected commands.
  • Generating a local config/secrets.yml using some organization-specific process, and updating it periodically.
  • Updating local git repositories (but see #13).
  • Initializing the database when bringing up new pods for the first time, and the equivalent for the test database. (And forcing re-initialization later.)
  • (Running different variants of database initialization: A minimal working data set versus a more expensive set of typical test data.)

In some cases, these hooks are going to want to call cage recursively, so we need to be careful not to stomp our .cage/pods directory.

One possibility is to implement hooks as shell scripts in config/hooks, possibly with .d directories to allow several scripts to trigger for the same hook. We might also consider a cage script myscript command, which could run oddball scripts that don't fit into the hook system. One advantage of a script command is that it would work in a subdirectory, which is a use case I've witnessed constantly with other developers.
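
As a sketch, the layout and an ECR hook might look like this (hook names hypothetical, and assuming credentials come from the standard aws CLI):

config/hooks/
├── init.d/
│   └── 10_db_init.hook
└── pull.d/
    └── 10_ecr_login.hook

#!/bin/bash
# 10_ecr_login.hook: refresh ECR credentials before `docker-compose pull`.
set -euo pipefail
eval "$(aws ecr get-login)"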

Current plan

  • Implement initial up hook as a demo.
  • Consider getting rid of the up hook.
  • Implement a pull hook.
  • Talk to @dkastner about whether we want cage script foo support. (Maybe, moved to #33.)
  • Talk to @dkastner about optionally performing initialization on up. (Moved to #34.)

`cage script` support?

Larger cage projects often have some associated shell scripts, many of which call back into cage repeatedly to perform some task like database seeding with specific (larger than usual) data sets.

@dkastner and I have discussed having a cage script command:

cage script load-data full

This would call $PROJ/scripts/load-data full, but with a few wrinkles:

  • It would be able to run recursive cage commands, preserving the --target and -p options by default.
  • It would have some sort of sensible policy about .cage/pods and other cage state, perhaps sharing them between separate invocations of cage.

Would this offer enough value to be worth the complexity?

Error: plugin 'vault' failed: could not generate token for '*': hyper error: An error in the OpenSSL library: certificate verify failed

This only affects the binary distributions. It looks like our statically linked OpenSSL is still looking for certain cert-related files in musl directories.

strace -Ff -tt cage --override="staging" export export 2>&1 | tee cage.log
18:29:46.176436 stat("/usr/local/musl/ssl/certs/157753a5.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)
18:29:46.176560 stat("/usr/local/musl/ssl/certs/d6325660.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)
18:29:46.176826 stat("/usr/local/musl/ssl/certs/8d28ae65.0", 0x7fffcc0c1490) = -1 ENOENT (No such file or directory)

There's an easy workaround:

mkdir /usr/local/musl
ln -s /etc/ssl /usr/local/musl/ssl

The real fix will require a look at https://github.com/emk/rust-musl-builder to figure out what's going wrong.

There may be similar issues with Mac binaries. Again, this only affects the vault plugin.

Improve secret-handling in examples & `conductor new` output

@seamusabshere points out that we generate sample pods/overrides/production/common.env files containing:

RAILS_ENV=production
RACK_ENV=production
DATABASE_URL=postgres://postgres@db:5432/vault_integration_production

But it's a bad idea to do this, because some database configurations will include a password in the DATABASE_URL, and somebody might be tempted to just add it directly to this file instead of moving it to config/secrets.yml (or vault) where it belongs.
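
For example, the sensitive value belongs somewhere like this instead (a hypothetical sketch of config/secrets.yml; the real schema may differ):

common:
  DATABASE_URL: "postgres://postgres:sekret@db:5432/vault_integration_production"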

Fixing this requires a better template engine, which I've been trying to avoid:

  • Always generate config/secrets.yml when running conductor new.
  • Provide an API for querying whether a given template is actually available, and use it to determine which overrides we have specific templates for, and which should be defaulted? Or should we special-case this in generate_new? Hmm.

Why is the structure so deep?

@emk I feel like a Cage repo could simply be:

_config.yml
common.env
frontend.yml
overrides/
  development/
    common.env

Why all the fuss with the config/ dir and the demotion of the pod.yml files to a pods/ dir?

Support docker-compose named volumes

Hello!

Thank you for creating and releasing this project to the world! 😄 😍

Was playing around and found that, while the docker-compose v2 file format is supported (version: "2"), it is not possible to define named volumes instead of locally mapped ones.

Take the following change to example db.yml:

version: "2"
services:
  db:
    image: "mini/postgresql:9.3"
    volumes:
    - "postgresql:/data"

volumes:
  postgresql:

This results in the following error:

$ cage pull
Error: Error parsing /home/luis/code/_experiments/myproject/pods/db.yml: error reading file '/home/luis/code/_experiments/myproject/pods/db.yml'

Removing the volumes section results in the following error from docker-compose:

ERROR: Named volume "postgresql:/data:rw" is used in service "db" but no declaration was found in the volumes section.
Error: error running 'docker-compose -p myproject -f /home/luis/code/_experiments/myproject/.cage/pods/db.yml pull'

The following is the entire debug output:

DEBUG:cage: Arguments: ArgMatches { args: {}, subcommand: Some(SubCommand { name: "pull", matches: ArgMatches { args: {}, subcommand: None, usage: Some("USAGE:\n    cage pull [<POD_OR_SERVICE>]") } }), usage: Some("USAGE:\n    cage [OPTIONS] [SUBCOMMAND]") }
DEBUG:cage::pod: Parsing /home/luis/code/_experiments/myproject/pods/db.yml
Error: Error parsing /home/luis/code/_experiments/myproject/pods/db.yml: error reading file '/home/luis/code/_experiments/myproject/pods/db.yml'
stack backtrace:
   0:           0x6b1e1d - backtrace::backtrace::trace::h35ac923e26dc1b92
   1:           0x6b1da5 - backtrace::capture::Backtrace::new::h3a3d5e9defd7d407
   2:           0x6b1115 - error_chain::make_backtrace::h886cfbd0fcafaf76
   3:           0x4c0f48 - cage::pod::FileInfo::unnormalized::h790320cb80fb0307
   4:           0x5481c6 - cage::pod::Pod::new::hacf75a1c55be9ec8
   5:           0x5365e9 - cage::project::Project::from_dirs::h89ca20e1a0d8447f
   6:           0x544690 - cage::project::Project::from_current_dir::h9cd0302df0062b3e
   7:           0x403835 - cage::run::h5d2f081c3f60c97d
   8:           0x4416f3 - cage::main::hc0adf28a6e0d6b0d
   9:           0x917a68 - std::panicking::try::call::hca715a47aa047c49
  10:           0x91f89b - __rust_try
  11:           0x91f83e - __rust_maybe_catch_panic
  12:           0x9176c1 - std::rt::lang_start::h162055cb2e4b9fe7

Platform details: Linux Ubuntu 15.10 (Willy) x64
Cage: 0.1.2
Docker: 1.12.2
Docker-Compose: 1.8.1

Please note that I'm also using docker-machine and not the native Docker service. While it is still possible to mount local directories across the VM/native boundary, I was aiming to ease usage of existing data in some projects.

Please let me know if other details are required.

Once again, thank you for creating and making this tool available! ❤️ ❤️ ❤️

Use lots more parallelism, including when running shell commands

For things which don't need to do I/O, rayon looks really nice. But we also need some way to display output to the console from multiple std::process::Command objects in parallel.

sebk and @mbrubeck on Mozilla IRC #rust suggest that I probably want to use a channel:

<sebk> BufReader(stdout).lines().map(|l| channel.send(l))
<mbrubeck> .map is lazy; you probably want a `for` loop :)

See ChildStdout, etc. and wrap it in a BufReader, basically.

cage status -> Error: error getting the project's state from Docker

Environment:

ProductName:    Mac OS X
ProductVersion: 10.10.5
BuildVersion:   14F2009

Docker version 1.12.3, build 6b644ec
docker-compose version 1.9.0, build 2585387
docker-machine version 0.8.1, build 41b3b25
cage 0.1.10

Cage binary was downloaded from the releases page and not compiled locally.

Short Error:

Error: error getting the project's state from Docker
could not connected to Docker at 'tcp://192.168.99.100:2376'
Docker SSL support was disabled at compile time

Full error with debug flags enabled:

$ cage status
DEBUG:cage: Arguments: ArgMatches { args: {}, subcommand: Some(SubCommand { name: "status", matches: ArgMatches { args: {}, subcommand: None, usage: Some("USAGE:\n    cage status [<POD_OR_SERVICE>]") } }), usage: Some("USAGE:\n    cage [OPTIONS] [SUBCOMMAND]") }
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/db.yml
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/frontend.yml
DEBUG:cage::pod: Parsing /Users/mansfield/Dev/cage/test_project/pods/rake.yml
DEBUG:cage::plugins: vault generator was disabled at build time
DEBUG:cage::plugins: vault transform was disabled at build time
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/db.yml
DEBUG:cage::pod: Merging pod db with target development
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/frontend.yml
DEBUG:cage::project: Outputting /Users/mansfield/Dev/cage/test_project/.cage/pods/rake.yml
DEBUG:cage::pod: Merging pod frontend with target development
DEBUG:cage::pod: Merging pod rake with target development
Error: error getting the project's state from Docker
could not connected to Docker at 'tcp://192.168.99.100:2376'
Docker SSL support was disabled at compile time
stack backtrace:
   0:        0x104e9ca1e - backtrace::backtrace::trace::hbb3527c862dcb156
   1:        0x104e9cd2c - backtrace::capture::Backtrace::new::hb88c898ead0c41a6
   2:        0x104e9c754 - error_chain::make_backtrace::hf6780bdef7b8a72b
   3:        0x104daf805 - boondock::docker::Docker::connect_with_defaults::h6ee39577b20b0ecf
   4:        0x104d3b7a5 - cage::runtime_state::RuntimeState::for_project::he7f15f11730daafc
   5:        0x104c872dc - cage::run::hc3b00895a655f469
   6:        0x104c91943 - cage::main::h1f9b8c28fd949fc8
   7:        0x104f7178a - __rust_maybe_catch_panic
   8:        0x104f6fbf6 - std::rt::lang_start::h538f8960e7644c80
