basecamp / kamal
Deploy web apps anywhere.
Home Page: https://kamal-deploy.org
License: MIT License
Error message is obtuse:
INFO Building image may take a while (run with VERBOSE=1 for progress logging)
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
There will be cases when a user is asked to copy the output of mrsk config for debugging. Currently that's a bad idea, since all secrets are exposed in env_args, which also makes the following statement in the README false:
Marking an ENV as secret currently only redacts its value in the output for MRSK.
I just hacked together bin/mrsk as a bash binstub that calls rake, which actually works fine for our purposes, except that a bare call to bin/mrsk shows all commands in their rake form with the full namespace. We want a list that looks like "mrsk app:start". Maybe we could just regex the output on the bare call?
The longer path would be to consider making a full Rake application, à la Capistrano. But I'm not sure that's worth the complication, unless there are other important benefits I'm not seeing at the moment.
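The regex idea can be sketched in a few lines of Ruby. The `rake -T`-style line format used below is an assumption, not bin/mrsk's actual output:

```ruby
# Sketch: strip the "rake " invocation and the mrsk: namespace so the bare
# listing reads "mrsk app:start" instead of "rake mrsk:app:start".
def prettify(line)
  line.sub(/\Arake mrsk:/, "mrsk ")
end

prettify("rake mrsk:app:start   # Start app")  # => "mrsk app:start   # Start app"
```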
Would be nice if the logs commands for both app and traefik were tailing rather than just showing the last 100 lines. Needs to merge all the log streams together into one continuous output.
First, thanks for building MRSK.
On a NodeJS/JS stack, there are many (😬) config files in the root folder, close to package.json and tsconfig.json. Having a config folder isn't a common practice (I think), so adding a new config folder just for deploy.yml doesn't seem optimal.
It would be great to be able to configure (via the CLI) where deploy.yml is, and to be able to rename deploy.yml.
Hey, thanks for mrsk. Before setting up my environments: do you support IPv6 addresses in the server lists, or do I have to use IPv4?
Some people may want to avoid using third-party registries. It's pretty easy to spin up a registry using Docker itself, so one idea would be for mrsk to start a registry on the server during deploy, push the image to it, then stop it.
It could optionally also delete older images based on some strategy (count or age or whatnot).
Just an idea.
When deploying an application on a single host, we can't have multiple containers right now. Having this in config/deploy.yml:
service: foo
servers:
  web:
    hosts:
      - 1.2.3.4
    cmd: bundle exec puma
  job:
    hosts:
      - 1.2.3.4
    cmd: bundle exec sidekiq
… will not work, as mrsk tries to run two containers with the same name "foo-". Deploying multiple roles to a single server sounds like something the library should support. Do we want the role as part of the docker container name, or a config option for it?
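A sketch of the role-in-the-name option (a hypothetical helper, not mrsk's actual naming code), which makes the two containers on one host distinct:

```ruby
# Include the role between the service name and version so two roles on the
# same host no longer collide. A nil role degrades to the current scheme.
def container_name(service, role, version)
  [service, role, version].compact.join("-")
end

container_name("foo", "web", "v1")  # => "foo-web-v1"
container_name("foo", "job", "v1")  # => "foo-job-v1"
```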
rails db:migrate, or wait/abort if there are pending migrations?

@dhh Thanks for the gem, looks promising. Are there any plans to support multiple environments, like staging and production? Would be nice to run mrsk deploy staging and mrsk deploy production.
If you make a mistake in deploy.yml, especially with the ERB definitions, like referencing missing credentials, the error message is obtuse and the backtrace hard to parse. Let's improve this and make it easier to know exactly what went wrong.
I would like to have something like mrsk deploy, but for the local dev machine. It could be called mrsk run. It should download the image from the container registry and then run it on local Docker with the same variables as when running mrsk deploy against servers. It should spin up Traefik and everything on localhost.
The use case I'm looking for is being able to try the image on the local dev machine before deploying it to servers.
Hey, I was trying to test this out and got an error when running ./bin/rake mrsk:init:
bundle add mrsk
./bin/rake mrsk:init
rake aborted!
Configuration file not found in /home/tony/Code/docker-rails/config/deploy.yml
/home/tony/Code/docker-rails/Rakefile:6:in `<main>'
(See full trace by running task with --trace)
It looks like the Configuration.load_file() call here is what breaks, based on the load_file implementation here (I don't know much about building gems). Is there a way to only require the file when running one of the mrsk commands? Or do we need to restructure the Configuration class so it lazily loads the file?
Btw, I was able to bypass this by creating the stub file manually.
The readme suggests that you can buy the baseline with your own hardware, then deploy to a cloud before a big seasonal spike to get more capacity.
I could very well be missing some functionality or the broader goal here, but my impression is that there aren't any Kubernetes-like autoscaling capabilities in MRSK - it is meant for deploying containers to a fixed set of machines, so that seasonal cloud VM still needs to be manually deployed and scaled.
If this is the case, it would be wonderful if you could use cloud providers' auto scaling mechanisms (e.g. Azure VM Scale Sets) so as to have a static baseline of owned or rented bare metal servers, but then also have the flexibility of the cloud to meet unexpected traffic spikes. After the spike, you could deploy more bare metal according to how much of the new traffic might stick around.
Would be great to have step-by-step guides on how to get setup with Digital Ocean, OVH, Hetzner, AWS, etc. All the major cloud providers. Show how to create a handful of servers, setup a load balancer, configure a domain, etc. Then maybe put things behind a Cloudflare CDN. Everything needed from "I have a Hello World app" to "I'm serving 2K req/sec across X servers, and can easily upgrade the code with MRSK".
Are there any plans to add support for scaling containers (on the same host) up and down? Something you can do with docker-compose, for example: docker-compose up -d --scale service=2. As far as I can see, traefik already supports this. Not sure if this is the direction you want to go.
Thanks, keep up the good work
Start containers with some logging options so that we don't clog up the hosts by default: https://www.freecodecamp.org/news/how-to-setup-log-rotation-for-a-docker-container-a508093912b2/
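For reference, Docker's default json-file logging driver supports rotation options. They can be passed per container (as in the linked article, and as the deploy logs in these issues already show with --log-opt max-size=10m), or set host-wide as a default in /etc/docker/daemon.json; a sketch of the latter:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With a host-wide default, containers started without explicit --log-opt flags still get rotated logs.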
To help guard you against deploying a bad version, it'd be nice if we could find a way to run a health check on the new version containers before we stop the old one. Not trivial, though, given the fact that traefik auto configures on the basis of static labels. But let's see if we can't figure it out!
👋 Hey!
I was exploring mrsk and stepped on a few rakes that resulted in errors that weren't very clear. Maybe checks for them could be added?
This could be a simple which docker check, but that might not work for other OSes. Not sure how this could be alleviated though, since the build happens locally and the multiple-OS issue prevails.
If buildx isn't installed, docker buildx will fail with weird errors that aren't easy to parse. Maybe a command status check for docker buildx could be added? Regarding fixing that - ditto as above.
I didn't have a Dockerfile in my current directory, and the docker buildx build ..... --file Dockerfile . failed with a rather cryptic message. A simple presence check for Dockerfile would work.
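The three checks above could be collected into one fail-fast preflight step. This is only a sketch (none of these names exist in mrsk), and the probes in EXAMPLE_CHECKS are assumptions about how the tools might be detected:

```ruby
# Return the messages of all checks whose probe came back false.
def failing_checks(checks)
  checks.reject { |_message, probe| probe.call }.map(&:first)
end

EXAMPLE_CHECKS = [
  ["Docker is not installed",      -> { system("docker --version",      out: File::NULL, err: File::NULL) }],
  ["docker buildx is unavailable", -> { system("docker buildx version", out: File::NULL, err: File::NULL) }],
  ["No Dockerfile in #{Dir.pwd}",  -> { File.exist?("Dockerfile") }]
]

# One clear message listing everything missing, instead of a cryptic
# buildx failure halfway through the deploy.
def preflight!(checks = EXAMPLE_CHECKS)
  problems = failing_checks(checks)
  abort problems.join("\n") unless problems.empty?
end
```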
When having an accessory (like redis or mysql), I'd like to access that from within my app using its docker name (like "my-app-redis"), similar to how it works with docker-compose.
This doesn't work right now as the service and the accessory are both using the default docker network and this kind of hostname resolution/dns only seem to work for user-defined networks. Creating a docker network and using that for services and accessories works, but then traefik won't be able to route traffic to the service without further configuration or connecting traefik to the same network.
Is the idea of creating a shared default docker network for traefik, the service and accessories something we want to pursue in the gem, or is this something the user should take care of? A simple network: <network> deploy.yml option for service, traefik and accessories would suffice to generate the required --network docker args, I guess.
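The manual workaround described above can be sketched with plain docker commands (the container and network names here are made up):

```shell
# Create a user-defined network so containers resolve each other by name
docker network create my-app-net

# Run the accessory and the app on that network
docker run -d --network my-app-net --name my-app-redis redis
docker run -d --network my-app-net --name my-app my-app-image

# Attach the already-running traefik container so it can still route to the app
docker network connect my-app-net traefik
```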
In some cases it is useful to run a script when the deployment is finished, for example to use Airbrake Deploy Tracking or send a notification to Slack.
Great to see a new approach for deploying Rails apps 👏🏼
I haven’t tested mrsk out, but based on the documentation I expect there is an issue with the assets during deploys. Capistrano has this solved. We run our own deployment tooling and needed to solve this issue ourselves a few years ago.
The issue starts occurring once you have more than a couple of instances you are deploying to. The first instance can run a newer version than other servers. This new version wants to load new versions of the assets. Because they are timestamped/versioned, these assets are only available on the fresh instances. With more instances to deploy, the chance of hitting an old version increases. This means the assets 404 and no styling is present. In the end we reached a scale that we almost always had this issue with our deploys.
Capistrano solves this by sharing the assets between deploys. So older assets are still available. We have solved this by serving our assets from an s3 bucket with a CDN in front of it. By uploading the assets to the bucket before any instances are updated, you assure that all new versions can request the correct assets.
Like I said, I haven't tested mrsk, so maybe there is a hidden solution and this can't occur. But from my understanding of the deployment process, there is a risk of missing assets. Might be interesting to work this out before the issue starts happening at scale.
It is common for projects to have tasks that are executed periodically. What is the plan for doing this with mrsk?
I'd love to be able to deploy an accessory to every server used by the service. This would be similar in spirit to a kubernetes daemonset. My motivation for this is primarily around monitoring applications, e.g. a datadog agent or log forwarding agent.
I can workaround this right now by configuring these "node" level processes with terraform on server creation, but this strikes me as a blocker for many projects.
This would be a powerful tool to have in smoothing the transition from something like Heroku to IaaS, imo. By combining this with traefik.args.accesslog: true, we'd be able to offer a very simple mechanism for users to forward request logs to a log aggregator.
Currently, each accessory is deployed to a single host. I could do a hacky change to enable this particular use case by editing https://github.com/mrsked/mrsk/blob/main/lib/mrsk/cli/accessory.rb#L11 to accept an array of hosts rather than manually pasting the IP addresses into this array, but it feels pretty hacky.
accessories:
  datadog:
    image: gcr.io/datadoghq/agent:7
    host:
      - 1.1.1.1
      - 1.1.1.2
...
Or I guess I could configure some conditional logic and add a new config variable like this:
accessories:
  datadog:
    image: gcr.io/datadoghq/agent:7
    daemon: true
This would trigger some conditional logic to grab the IPs from the top-level servers config in lib/mrsk/commands/accessory.rb.
I'd probably want this to blow up at initialization time if the host and daemon config variables were both set.
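The daemon resolution plus the host/daemon exclusivity check might look something like this sketch (names and config shape are hypothetical, not mrsk's internals):

```ruby
# daemon: true expands to every host from the top-level servers config;
# host and daemon are mutually exclusive and rejected up front.
def accessory_hosts(accessory, servers)
  if accessory["daemon"] && accessory["host"]
    raise ArgumentError, "set either host or daemon, not both"
  elsif accessory["daemon"]
    servers.values.flat_map { |role| role["hosts"] }.uniq
  else
    Array(accessory["host"])
  end
end
```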
While writing this, I've started to wonder whether accessories should remain single-instance containers like they are currently. Maybe what I'm describing is a completely new kind of entity? If someone from 37signals could weigh in on how you're handling monitoring for services deployed with mrsk right now, that would be helpful.
Firstly, I'm thrilled to work on a tool such as this – modernizing the workflow inspired by Capistrano to work with containers is fantastic!
As a developer who would love to use and contribute to this library but lacks a Ruby background, it would be great to see an addition to the README explaining a few basic steps regarding:
Many thanks!
Very cool project. It reminds me a bit of docker machine for host configuration.
I can see that the current mrsk configuration is already growing with complex features like custom networking between hosts, accessories (running persistent services like a database), logging, service restarts / monitoring, secret management, etc.
Many of these challenges are already solved by Docker via Docker Swarm, with secrets management, network isolation, volumes, logging, rollout logic and much more. It has also solved many common cases and there is online help for many problems.
At the same time, I see that mrsk can be used to streamline swarm configuration and node setup, provide tooling in areas that are lacking, and let docker handle all other issues.
My question is: how is mrsk different from Docker Swarm, given that many of these challenges are already solved natively by the Docker ecosystem?
I've followed the instructions to deploy a sample app and everything seemed to be going well: the image built, the healthcheck passed, etc. Then the service was started with:
INFO [edcee286] Running docker run --detach --restart unless-stopped --log-opt max-size=10m --name hello-c73fc14473aa3ca274abf38c540c712b01a948f8 -e [REDACTED] -e PORT="3000" --label service="hello" --label role="web" --label traefik.http.routers.hello.rule="PathPrefix(\`/\`)" --label traefik.http.services.hello.loadbalancer.healthcheck.path="/" --label traefik.http.services.hello.loadbalancer.healthcheck.interval="1s" --label traefik.http.middlewares.hello.retry.attempts="5" --label traefik.http.middlewares.hello.retry.initialinterval="500ms" ghcr.io/moomerman/hello:c73fc14473aa3ca274abf38c540c712b01a948f8 on 161.35.166.215
There were no errors in the deployment, but traefik was returning a 404.
The only error that stood out in the traefik logs was:
2023-02-24T16:25:09.330396846Z time="2023-02-24T16:25:09Z" level=error msg="service \"hello\" error: port is missing" providerName=docker container=hello-c73fc14473aa3ca274abf38c540c712b01a948f8-19bf7b05f0db1f6909ab953596f609d3fb0dcd73af0a1088b128242b54a6dacc
Looking at the launch command, I can see that we're not telling traefik what the backend port is. Going on that hunch, I found there is a config option to specify the port, so I added that to configuration/role.rb (port hardcoded to 3000 temporarily):
if running_traefik?
  {
    ...
    "traefik.http.services.#{config.service}.loadbalancer.server.port" => "3000",
    ...
  }
...
and deployed, and it works now; the traefik logs show the backend is UP:
2023-02-24T16:33:56.154280237Z time="2023-02-24T16:33:56Z" level=warning msg="Health check up: returning to server list. Backend: \"hello@docker\" URL: \"http://172.17.0.4:3000\" Weight: 1"
2023-02-24T16:33:56.154329165Z time="2023-02-24T16:33:56Z" level=debug msg="child http://172.17.0.4:3000 now UP"
2023-02-24T16:33:56.154336240Z time="2023-02-24T16:33:56Z" level=debug msg="Propagating new UP status"
I just wanted to check that I'm on the right track and I'm happy to submit a fix if so.
Caddy would be an excellent, fast and easy way to add a load balancer when multiple servers are used. On top of that, it generates Automatic SSL/HTTPS certificates, making life easier for many...
In the config we could specify the load balancer server and here is a basic example.
service: hey
image: 37s/hey
load_balancer:
  host: example.com
  server: xxx.xxx.xxx.xxx
  encode: gzip br
servers:
  - xxx.xxx.xxx.xxx
  - xxx.xxx.xxx.xxx
....
A Caddy server will be configured on the load balancer server and will reverse proxy to all hosts.
docker run -d --name p-caddy --restart always \
--network host -v /home/caddy/Caddyfile:/etc/caddy/Caddyfile -v /home/caddy-data:/data caddy
example.com  # Your site's domain name

# Compress responses according to Accept-Encoding headers
encode gzip zstd

# Load balance between three backends with custom health checks
reverse_proxy 10.0.0.1:9000 10.0.0.2:9000 10.0.0.3:9000 {
    lb_policy random_choose 2
    health_path /ok
    health_interval 10s
}
I had to run docker context remove <context> first, before running mrsk deploy again. But I feel that when updating config values for the remote builder, the docker context should be updated as well.
app: foo
builder:
  remote:
    arch: amd64
    host: ssh://root@***
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
desktop-linux unix:///Users/rasmus/.docker/run/docker.sock
mrsk-app-native-remote-amd64 mrsk-app-native-remote amd64 native host ssh://root@***
builder:
  remote:
    arch: amd64
    host: ssh://changed-user-to-something-else-than-root@***
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
desktop-linux unix:///Users/rasmus/.docker/run/docker.sock
mrsk-app-native-remote-amd64 mrsk-app-native-remote amd64 native host ssh://root@***
We should be pruning old images and containers on the basis of numbers rather than time. You might well deploy a gazillion times within 30 days. All we need is, say, 5-10 old revisions available for quick rollback.
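Count-based selection is simple to express; a sketch (pure illustration, not mrsk's code):

```ruby
# Keep the newest `keep` image tags for quick rollback; return the rest
# as candidates for docker image rm.
def prunable(tags, keep: 5)
  tags.sort_by { |tag| tag[:created_at] }.reverse.drop(keep)
end

tags = (1..8).map { |n| { name: "app:v#{n}", created_at: n } }
prunable(tags, keep: 5).map { |t| t[:name] }  # => ["app:v3", "app:v2", "app:v1"]
```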
I'm getting this error whenever MRSK tries to establish an SSH connection to a server through Teleport, even though it works when I run ssh root@******.
I'm trying to deploy a non-Rails service job in its own container that doesn't use HTTP. It looks like the web role is always required, and the deploy command still tries to run traefik and health checks, raising ERROR (NoMethodError): undefined method 'hosts' for nil:NilClass when no web role is specified.
What's the right process for deploying non-HTTP services?
Hey,
I'd love to have a way to configure traefik to listen on multiple ports for multiple entrypoints to enable services that might listen for other kinds of traffic - e.g. a port open for HTTP traffic and another for TCP traffic. My current thinking is that this would be an escape hatch from the default configuration so mrsk doesn't need to support this as packaged up feature.
Would something like this:
traefik:
  host_port: 8080
  additional_ports:
    - 9000
  entrypoints.mytcp.address: ':9000'
be of interest?
Alternatively, maybe it could look like this:
traefik:
  host_port: 8080
  additional_entrypoints:
    myentrypointname: ':9000'
producing a traefik arg like --entrypoints.myentrypointname.address=:9000.
This would also enable the traefik admin dashboard for interested users.
It would be nice to be able to download and use the tool as a CLI without depending on Ruby being on the system (it currently requires gem install).
Any plans on releasing this as a standalone CLI tool?
I want to know what the unique service + version name is inside the docker container and I can't find a way of getting that information at present.
I propose that a MRSK_CONTAINER_NAME (name TBD) ENV var be set when running the container, so that it can be read by the service running inside the container.
Happy to implement if it is desired.
I've got a couple of scenarios where I want to provide additional arguments to the docker run command, but at present it doesn't look like there's a way to provide them via configuration.
One example is where I want to provide additional capabilities to the container:
--cap-add=NET_ADMIN --cap-add=SYS_ADMIN --device=/dev/net/tun
Rather than supporting capabilities and devices as special cases, perhaps we could just provide a run_args config where you can specify additional arguments?
Happy to implement it if desired.
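A hypothetical shape for that option in deploy.yml (run_args is not an existing key; this just sketches the proposal):

```yaml
service: my-app
# Passed through verbatim to docker run (hypothetical key):
run_args: "--cap-add=NET_ADMIN --cap-add=SYS_ADMIN --device=/dev/net/tun"
```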
I would like to be able to use MRSK in other projects and environments (i.e. Laravel). While Laravel also stores its project configuration in the /config folder, it uses .php files as opposed to .yml. Within the context of a Laravel app, it would make more sense to me to store the deploy.yml file in the project root, which is why I would like to be able to configure it.
I think this will be a nice issue for me to dabble with Ruby a bit. There are also quite a few directions we could take this in. I'm currently thinking about writing out a .mrskrc on init that would contain this and similar configuration settings in the future. Thoughts?
From DigitalOcean docs:
For CI systems that support configuring registry authentication via username and password, use a DigitalOcean API token as both the username and the password. The API token must have read/write privileges to push to your registry.
The username config option should allow a secret reference, the same way as password does. This doesn't work:
# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  server: registry.digitalocean.com
  username:
    - SECRET_REGISTRY_TOKEN
  password:
    - SECRET_REGISTRY_TOKEN
I know it's more of a general thing than a project-specific question, but is there a "default/preferred" way database migrations (both data and schema) are going to be handled? What about rolling back? I'm not looking for a specific answer, just general thoughts on how (if at all) it could be handled from mrsk's point of view.
Hello!
Are we open to the idea of defining a JSON Schema (https://json-schema.org) for deploy.yml? With this schema, we could write deploy.yml faster.
Related tutorial: https://dev.to/brpaz/how-to-create-your-own-auto-completion-for-json-and-yaml-files-on-vs-code-with-the-help-of-json-schema-k1i
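A minimal sketch of what such a schema could start from (only the service and image keys that appear throughout these issues are shown; a real schema would need to cover every deploy.yml option):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "title": "mrsk deploy.yml (sketch)",
  "type": "object",
  "required": ["service", "image"],
  "properties": {
    "service": { "type": "string", "description": "Name used to prefix containers" },
    "image":   { "type": "string", "description": "Image name on the registry" }
  }
}
```

Pointing an editor at the schema (per the tutorial above) then gives auto-completion and validation while editing deploy.yml.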
if you want to run on someone else's platform, like Render or Fly
You might want to change this to something like AWS or Google Cloud.
Fly definitely doesn't do Kubernetes. And as far as I know, neither does Render, but I'm not an authority there.
Fly's model is fairly close to MRSK's. Instead of mrsk init, the command is fly launch. Instead of mrsk deploy, the command is fly deploy. Instead of a config/deploy.yml, we have a fly.toml file. Instead of IP addresses, developers specify what regions they want to deploy to and how many servers they want there (including how much RAM). We take care of assigning IP addresses and TLS certificates (via certbot).
Ignoring those differences, the model is that the developer works on their Rails app and maintains a Dockerfile, and when they are ready to ship, they run a deploy command.
In fact, there is no reason that a Rails application couldn't have both a config/deploy.yml and a fly.toml file, and deploy the same application using the same Dockerfile wherever they want.
Any thoughts on also supporting Podman as an alternative to Docker?
SSHKit can configure Net::SSH to use a bastion SSH server. We should change our setup so that this can be passed through. We could change the top-level ssh_user configuration option to be:
ssh:
  user: app
  # whatever is needed for bastion
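One hypothetical shape for the bastion settings (the proxy key is made up here; under the hood it would map to whatever Net::SSH jump/proxy option SSHKit is given):

```yaml
ssh:
  user: app
  # Hypothetical key: hop through this host before reaching the servers
  proxy: deploy@bastion.example.com
```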
Adding PostgreSQL, Redis or KeyDB would make this gem more complete.
Many developers are working on side projects and don't need a managed or HA Redis/PostgreSQL database, so adding these features would be enough to have the app shipped.
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
docker run -p 6379:6379 --restart always --name keydb -d eqalpha/keydb keydb-server /etc/keydb/keydb.conf --server-threads 4 --requirepass password --appendonly yes
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
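Using the accessories section that appears elsewhere in these issues, a Redis accessory might be declared like this (the port key and image tag are assumptions about the config shape, not confirmed options):

```yaml
accessories:
  redis:
    image: redis:7
    host: 1.2.3.4
    port: 6379
```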
Hi,
I am trying to test mrsk on a Spring Boot app :) Thanks for sharing it!
Any thoughts on these errors? The application seems to work: it runs migrations (with Flyway) against the database, which is updated on the remote host.
result of mrsk deploy:
➜ accountingApp git:(master) ✗ ./gradlew build && mrsk deploy
BUILD SUCCESSFUL in 400ms
5 actionable tasks: 5 up-to-date
Ensure Docker is installed...
INFO [5f7d2803] Running which docker || (apt-get update -y && apt-get install docker.io -y) on 158.39.48.231
INFO [5f7d2803] Finished in 0.949 seconds with exit status 0 (successful).
Log into image registry...
INFO [84d4db95] Running docker login -u [REDACTED] -p [REDACTED] as gregtaube@localhost
INFO [84d4db95] Finished in 1.328 seconds with exit status 0 (successful).
INFO [f6f0436c] Running docker login -u [REDACTED] -p [REDACTED] on 158.39.48.231
INFO [f6f0436c] Finished in 1.128 seconds with exit status 0 (successful).
Build and push app image...
INFO [8189d79c] Running docker buildx build --push --platform linux/amd64,linux/arm64 --builder mrsk-accounting-app-multiarch -t gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912 -t gregtau/accounting-app:latest --label service="accounting-app" . as gregtaube@localhost
DEBUG [8189d79c] Command: docker buildx build --push --platform linux/amd64,linux/arm64 --builder mrsk-accounting-app-multiarch -t gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912 -t gregtau/accounting-app:latest --label service="accounting-app" .
DEBUG [8189d79c] #1 [internal] load .dockerignore
DEBUG [8189d79c] #1 transferring context: 2B done
DEBUG [8189d79c] #1 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #2 [internal] load build definition from Dockerfile
DEBUG [8189d79c] #2 transferring dockerfile: 191B done
DEBUG [8189d79c] #2 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #3 [linux/amd64 internal] load metadata for docker.io/library/eclipse-temurin:19
DEBUG [8189d79c] #3 ...
DEBUG [8189d79c]
DEBUG [8189d79c] #4 [auth] library/eclipse-temurin:pull token for registry-1.docker.io
DEBUG [8189d79c] #4 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #5 [linux/arm64 internal] load metadata for docker.io/library/eclipse-temurin:19
DEBUG [8189d79c] #5 ...
DEBUG [8189d79c]
DEBUG [8189d79c] #3 [linux/amd64 internal] load metadata for docker.io/library/eclipse-temurin:19
DEBUG [8189d79c] #3 DONE 1.5s
DEBUG [8189d79c]
DEBUG [8189d79c] #5 [linux/arm64 internal] load metadata for docker.io/library/eclipse-temurin:19
DEBUG [8189d79c] #5 DONE 1.8s
DEBUG [8189d79c]
DEBUG [8189d79c] #6 [internal] load build context
DEBUG [8189d79c] #6 transferring context: 224B done
DEBUG [8189d79c] #6 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #7 [linux/arm64 1/2] FROM docker.io/library/eclipse-temurin:19@sha256:9b0ae96f84fc58a2db048671f381d110195fb150c83eebb200ae39eb47fe6e62
DEBUG [8189d79c] #7 resolve docker.io/library/eclipse-temurin:19@sha256:9b0ae96f84fc58a2db048671f381d110195fb150c83eebb200ae39eb47fe6e62 done
DEBUG [8189d79c] #7 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #8 [linux/arm64 2/2] COPY build/libs/accountingapp-1.0.0.jar app.jar
DEBUG [8189d79c] #8 CACHED
DEBUG [8189d79c]
DEBUG [8189d79c] #9 [linux/amd64 1/2] FROM docker.io/library/eclipse-temurin:19@sha256:9b0ae96f84fc58a2db048671f381d110195fb150c83eebb200ae39eb47fe6e62
DEBUG [8189d79c] #9 resolve docker.io/library/eclipse-temurin:19@sha256:9b0ae96f84fc58a2db048671f381d110195fb150c83eebb200ae39eb47fe6e62 done
DEBUG [8189d79c] #9 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #10 [linux/amd64 2/2] COPY build/libs/accountingapp-1.0.0.jar app.jar
DEBUG [8189d79c] #10 CACHED
DEBUG [8189d79c]
DEBUG [8189d79c] #11 exporting to image
DEBUG [8189d79c] #11 exporting layers done
DEBUG [8189d79c] #11 exporting manifest sha256:8bedafa4334b618bd1f2af280cfaa7795100eae8eb3ca52392c36955064e6e1b done
DEBUG [8189d79c] #11 exporting config sha256:7ef10808d7f539e18294b52748a407e5ba7fe4fda03ffb0a334d482716f3bcca done
DEBUG [8189d79c] #11 exporting attestation manifest sha256:a15a0992854652ff132d2e6f3acfa3450b7cf5d2584f252d733e7a0a1edd9c13 done
DEBUG [8189d79c] #11 exporting manifest sha256:8c11a85848d3cf0e6c3f420ca14b228c3186b632ed79d2bff4112b1d2439ccd6 done
DEBUG [8189d79c] #11 exporting config sha256:c2b4ced759426b9127dbf44599c920be8c8812daebf1a5f699373bd929be9c55 done
DEBUG [8189d79c] #11 exporting attestation manifest sha256:84f0c41c07c455f9624dc5fe22310f0ac69052827f86b3d017e21c2bf7897e2b done
DEBUG [8189d79c] #11 exporting manifest list sha256:230f091075176092261d289b4eabd16511cc583ec694bd254d34259c272db405 done
DEBUG [8189d79c] #11 pushing layers
DEBUG [8189d79c] #11 ...
DEBUG [8189d79c]
DEBUG [8189d79c] #12 [auth] gregtau/accounting-app:pull,push token for registry-1.docker.io
DEBUG [8189d79c] #12 DONE 0.0s
DEBUG [8189d79c]
DEBUG [8189d79c] #11 exporting to image
DEBUG [8189d79c] #11 pushing layers 1.9s done
DEBUG [8189d79c] #11 pushing manifest for docker.io/gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912@sha256:230f091075176092261d289b4eabd16511cc583ec694bd254d34259c272db405
DEBUG [8189d79c] #11 pushing manifest for docker.io/gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912@sha256:230f091075176092261d289b4eabd16511cc583ec694bd254d34259c272db405 1.4s done
DEBUG [8189d79c] #11 pushing layers 0.7s done
DEBUG [8189d79c] #11 pushing manifest for docker.io/gregtau/accounting-app:latest@sha256:230f091075176092261d289b4eabd16511cc583ec694bd254d34259c272db405
DEBUG [8189d79c] #11 pushing manifest for docker.io/gregtau/accounting-app:latest@sha256:230f091075176092261d289b4eabd16511cc583ec694bd254d34259c272db405 0.9s done
DEBUG [8189d79c] #11 DONE 4.9s
INFO [8189d79c] Finished in 6.990 seconds with exit status 0 (successful).
INFO [c5b48ef5] Running docker image rm --force gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912 on 158.39.48.231
INFO [c5b48ef5] Finished in 0.142 seconds with exit status 0 (successful).
INFO [e1d41f11] Running docker pull gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912 on 158.39.48.231
INFO [e1d41f11] Finished in 3.280 seconds with exit status 0 (successful).
Ensure Traefik is running...
INFO [60f62209] Running docker run --name traefik --detach --restart unless-stopped --log-opt max-size=10m --publish 80:80 --volume /var/run/docker.sock:/var/run/docker.sock traefik --providers.docker --log.level=DEBUG on 158.39.48.231
INFO [60f62209] Finished in 0.109 seconds with exit status 125 (failed).
Ensure app can pass healthcheck...
INFO [ddd2e7ad] Running docker run --detach --name healthcheck-accounting-app-928c6acb67558b1f147ded5e40cb306cf58d8912 --publish 3999:3000 --label service=healthcheck-accounting-app gregtau/accounting-app:928c6acb67558b1f147ded5e40cb306cf58d8912 on 158.39.48.231
INFO [ddd2e7ad] Finished in 0.616 seconds with exit status 0 (successful).
INFO [525966b7] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 1s...
INFO [79d55280] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 2s...
INFO [eda5bfc3] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 3s...
INFO [5bf72310] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 4s...
INFO [658146f3] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 5s...
INFO [39f02a93] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 6s...
INFO [02f69c15] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO Health check against /up failed to respond, retrying in 7s...
INFO [b6da7dbc] Running /usr/bin/env curl --silent --output /dev/null --write-out '%{http_code}' --max-time 2 http://localhost:3999/up on 158.39.48.231
INFO [1c83ebdb] Running docker container ls --all --filter name=healthcheck-accounting-app --quiet | xargs docker logs --tail 50 2>&1 on 158.39.48.231
INFO [1c83ebdb] Finished in 0.156 seconds with exit status 0 (successful).
ERROR   .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v3.0.1)
2023-03-02T15:06:25.296Z INFO 1 --- [ main] c.respiroc.accountingapp.AccountingApp : Starting AccountingApp using Java 19.0.2 with PID 1 (/app.jar started by root in /)
2023-03-02T15:06:25.300Z INFO 1 --- [ main] c.respiroc.accountingapp.AccountingApp : No active profile set, falling back to 1 default profile: "default"
2023-03-02T15:06:26.299Z INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2023-03-02T15:06:26.389Z INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 79 ms. Found 5 JPA repository interfaces.
2023-03-02T15:06:26.524Z WARN 1 --- [ main] ocalVariableTableParameterNameDiscoverer : Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.springframework.session.config.annotation.web.http.SpringHttpSessionConfiguration
2023-03-02T15:06:27.254Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2023-03-02T15:06:27.269Z INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2023-03-02T15:06:27.269Z INFO 1 --- [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.4]
2023-03-02T15:06:27.363Z INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2023-03-02T15:06:27.366Z INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2008 ms
2023-03-02T15:06:27.535Z INFO 1 --- [ main] o.f.c.internal.license.VersionPrinter : Flyway Community Edition 9.5.1 by Redgate
2023-03-02T15:06:27.535Z INFO 1 --- [ main] o.f.c.internal.license.VersionPrinter : See what's new here: https://flywaydb.org/documentation/learnmore/releaseNotes#9.5.1
2023-03-02T15:06:27.535Z INFO 1 --- [ main] o.f.c.internal.license.VersionPrinter :
2023-03-02T15:06:27.546Z INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-03-02T15:06:27.672Z INFO 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@482d776b
2023-03-02T15:06:27.674Z INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2023-03-02T15:06:27.694Z INFO 1 --- [ main] o.f.c.i.database.base.BaseDatabaseType : Database: jdbc:postgresql://158.39.48.231:5432/accountingapp (PostgreSQL 15.2)
2023-03-02T15:06:27.741Z INFO 1 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 2 migrations (execution time 00:00.026s)
2023-03-02T15:06:27.766Z INFO 1 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "public": 2
2023-03-02T15:06:27.767Z INFO 1 --- [ main] o.f.core.internal.command.DbMigrate : Schema "public" is up to date. No migration necessary.
2023-03-02T15:06:28.015Z INFO 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2023-03-02T15:06:28.090Z INFO 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 6.1.6.Final
2023-03-02T15:06:28.393Z WARN 1 --- [ main] org.hibernate.orm.deprecation : HHH90000021: Encountered deprecated setting [javax.persistence.sharedCache.mode], use [jakarta.persistence.sharedCache.mode] instead
2023-03-02T15:06:28.594Z INFO 1 --- [ main] SQL dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
2023-03-02T15:06:29.593Z INFO 1 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2023-03-02T15:06:29.605Z INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2023-03-02T15:06:30.310Z WARN 1 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2023-03-02T15:06:30.723Z INFO 1 --- [ main] o.s.b.a.w.s.WelcomePageHandlerMapping : Adding welcome page template: index
2023-03-02T15:06:31.238Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2023-03-02T15:06:31.249Z INFO 1 --- [ main] c.respiroc.accountingapp.AccountingApp : Started AccountingApp in 6.445 seconds (process running for 7.017)
INFO [9a223885] Running docker container ls --all --filter name=healthcheck-accounting-app --quiet | xargs docker stop on 158.39.48.231
INFO [9a223885] Finished in 0.574 seconds with exit status 0 (successful).
INFO [7f676ee9] Running docker container ls --all --filter name=healthcheck-accounting-app --quiet | xargs docker container rm on 158.39.48.231
INFO [7f676ee9] Finished in 0.174 seconds with exit status 0 (successful).
Finished all in 43.8 seconds
ERROR (SSHKit::Command::Failed): Health check against /up failed to return 200 OK!
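The retry cadence in the log above (1s, 2s, 3s, …) suggests a linear backoff between health-check attempts. A minimal Ruby sketch of that pattern (a hypothetical helper for illustration, not MRSK's actual implementation):

```ruby
# Retry a health check with linearly increasing waits: 1s, 2s, 3s, ...
# Returns true as soon as the check passes, false after max_attempts.
# `sleeper` is injectable so the waits can be stubbed out in tests.
def check_health(max_attempts: 7, sleeper: ->(s) { sleep s }, &check)
  max_attempts.times do |attempt|
    return true if check.call
    sleeper.call(attempt + 1) unless attempt == max_attempts - 1
  end
  false
end
```

In MRSK's case each `check.call` would be the `curl --max-time 2 http://localhost:3999/up` probe shown in the log, and exhausting the attempts raises the `SSHKit::Command::Failed` error above.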
mrsk details:
➜ accountingApp git:(master) ✗ mrsk details
INFO [3c7ea0f6] Running docker ps --filter name=traefik on 158.39.48.231
INFO [3c7ea0f6] Finished in 0.315 seconds with exit status 0 (successful).
Traefik Host: 158.39.48.231
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a7a486701ec traefik "/entrypoint.sh --pr…" 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, :::80->80/tcp traefik
INFO [9e0e0311] Running docker ps --filter label=service=accounting-app on 158.39.48.231
INFO [9e0e0311] Finished in 0.107 seconds with exit status 0 (successful).
App Host: 158.39.48.231
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
INFO [480cf922] Running docker ps --filter label=service=accounting-app-postgres on 158.39.48.231
INFO [480cf922] Finished in 0.111 seconds with exit status 0 (successful).
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74f43dbfa3ab postgres:15 "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp accounting-app-postgres
deploy.yml:
service: accounting-app
image: gregtau/accounting-app
servers:
  - 158.39.48.231
registry:
  username: gregtau
  password:
    - MRSK_REGISTRY_PASSWORD
accessories:
  postgres:
    image: postgres:15
    host: 158.39.48.231
    port: 5432
    env:
      clear:
        POSTGRES_HOST_AUTH_METHOD: trust
        POSTGRES_DB: accountingapp
        POSTGRES_USER:
        POSTGRES_PASSWORD:
    directories:
      - data:/var/lib/postgresql/data
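As an aside, leaving POSTGRES_USER and POSTGRES_PASSWORD blank under clear: means they resolve to empty values. A sketch of moving them under secret: so they are read from the deploying machine's environment instead (an assumption about how accessory env handling mirrors the app's; verify against the README before relying on it):

```yaml
env:
  clear:
    POSTGRES_HOST_AUTH_METHOD: trust
    POSTGRES_DB: accountingapp
  secret:
    - POSTGRES_USER
    - POSTGRES_PASSWORD
```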
Hey 👋🏻
I'm seeing an issue where, during deploy/redeploy, assets:precompile gets stuck and nothing happens. I'm coding on an M1 and deploying to DigitalOcean. I configured only a remote builder, since I don't really use Docker locally:
builder:
  multiarch: false
  args:
    RAILS_ENV: production
  remote:
    arch: amd64
    host: ssh://[email protected]
and the logs stop here:
DEBUG [3440b1cd] #30 [stage-4 7/7] RUN bin/rails assets:precompile
DEBUG [3440b1cd] #30 sha256:b7949315762e210b0c381eb1953df2b58233dd4ee2de4952b632b1de691344c1
DEBUG [3440b1cd] #30 3.338 yarn install v1.22.19
DEBUG [3440b1cd] #30 3.474 [1/4] Resolving packages...
DEBUG [3440b1cd] #30 3.864 success Already up-to-date.
DEBUG [3440b1cd] #30 3.872 Done in 0.54s.
DEBUG [3440b1cd] #30 4.451 yarn run v1.22.19
DEBUG [3440b1cd] #30 4.509 $ postcss ./app/assets/stylesheets/application.postcss.css -o ./app/assets/builds/style.css
DEBUG [3440b1cd] #30 6.739
DEBUG [3440b1cd] #30 6.739 🌼 daisyUI components 2.51.3 https://daisyui.com
After this, nothing else happens. I left it running for 10+ minutes with no success.
My Dockerfile is a slightly modified version of Fly's Dockerfile:
# syntax = docker/dockerfile:experimental
ARG RUBY_VERSION=3.2.1
ARG VARIANT=jemalloc-bullseye-slim
FROM quay.io/evl.ms/fullstaq-ruby:${RUBY_VERSION}-${VARIANT} as base
ENV RUBYOPT='--yjit'
ARG NODE_VERSION=19.7
ARG YARN_VERSION=1.22.19
ARG BUNDLER_VERSION=2.4.7
ARG RAILS_ENV=production
ENV RAILS_ENV=${RAILS_ENV}
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
ARG BUNDLE_WITHOUT=development:test
ARG BUNDLE_PATH=vendor/bundle
ENV BUNDLE_PATH ${BUNDLE_PATH}
ENV BUNDLE_WITHOUT ${BUNDLE_WITHOUT}
WORKDIR /app
RUN mkdir -p tmp/pids
RUN curl https://get.volta.sh | bash
ENV VOLTA_HOME /root/.volta
ENV PATH $VOLTA_HOME/bin:/usr/local/bin:$PATH
RUN volta install node@${NODE_VERSION} yarn@${YARN_VERSION}
# Add latest PostgreSQL repository
RUN --mount=type=cache,id=dev-apt-cache,sharing=locked,target=/var/cache/apt \
--mount=type=cache,id=dev-apt-lib,sharing=locked,target=/var/lib/apt \
apt-get update -qq && \
apt-get install --no-install-recommends -y wget \
&& rm -rf /var/lib/apt/lists /var/cache/apt/archives
RUN sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt bullseye-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
#######################################################################
# install packages only needed at build time
FROM base as build_deps
ARG BUILD_PACKAGES="git build-essential libpq-dev curl gzip xz-utils"
ENV BUILD_PACKAGES ${BUILD_PACKAGES}
RUN --mount=type=cache,id=dev-apt-cache,sharing=locked,target=/var/cache/apt \
--mount=type=cache,id=dev-apt-lib,sharing=locked,target=/var/lib/apt \
apt-get update -qq && \
apt-get install --no-install-recommends -y ${BUILD_PACKAGES} \
&& rm -rf /var/lib/apt/lists /var/cache/apt/archives
#######################################################################
# install gems
FROM build_deps as gems
RUN gem update --system --no-document
RUN gem install -N bundler -v ${BUNDLER_VERSION}
COPY Gemfile* ./
RUN bundle install && rm -rf vendor/bundle/ruby/*/cache
#######################################################################
# install node modules
FROM build_deps as node_modules
COPY package*json ./
COPY yarn.* ./
RUN yarn install
#######################################################################
# install deployment packages
FROM base
ARG DEPLOY_PACKAGES="postgresql-client file vim curl gzip"
ENV DEPLOY_PACKAGES=${DEPLOY_PACKAGES}
RUN --mount=type=cache,id=prod-apt-cache,sharing=locked,target=/var/cache/apt \
--mount=type=cache,id=prod-apt-lib,sharing=locked,target=/var/lib/apt \
apt-get update -qq && \
apt-get install --no-install-recommends -y \
${DEPLOY_PACKAGES} \
&& rm -rf /var/lib/apt/lists /var/cache/apt/archives
# copy installed gems
COPY --from=gems /app /app
COPY --from=gems /usr/lib/fullstaq-ruby/versions /usr/lib/fullstaq-ruby/versions
COPY --from=gems /usr/local/bundle /usr/local/bundle
# copy installed node modules
COPY --from=node_modules /app/node_modules /app/node_modules
#######################################################################
# Deploy your application
COPY . .
# Adjust binstubs to run on Linux and set current working directory
# RUN chmod +x /app/bin/* && \
# sed -i 's/ruby.exe/ruby/' /app/bin/* && \
# sed -i '/^#!/aDir.chdir File.expand_path("..", __dir__)' /app/bin/*
# The following enable assets to precompile on the build server. Adjust
# as necessary. If no combination works for you, see:
# https://fly.io/docs/rails/getting-started/existing/#access-to-environment-variables-at-build-time
ENV SECRET_KEY_BASE 1
# ENV AWS_ACCESS_KEY_ID=1
# ENV AWS_SECRET_ACCESS_KEY=1
ARG BUILD_COMMAND="bin/rails assets:precompile"
RUN ${BUILD_COMMAND}
# Default server start instructions. Generally Overridden by fly.toml.
ENV PORT 8080
ARG SERVER_COMMAND="bundle exec puma -C config/puma.rb"
ENV SERVER_COMMAND ${SERVER_COMMAND}
CMD ${SERVER_COMMAND}
I run this exact same Dockerfile on Fly, and I also ran it on Render in the past; the assets always compiled without issues. I can also compile the assets locally, so I have no idea how to even start investigating this.
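One way to narrow down whether the hang is in BuildKit on the remote host (a debugging sketch, not an official MRSK workflow; the context and builder names are made up, and the SSH host should be your actual remote builder) is to run the same build directly against a remote context with plain progress output:

```
# Point a context at the remote builder and build with full progress logs
docker context create debug-remote --docker "host=ssh://user@REMOTE_BUILDER_HOST"
docker buildx create --use --name debug-builder debug-remote --platform linux/amd64
docker buildx build --platform linux/amd64 --progress=plain --build-arg RAILS_ENV=production .
```

If the build hangs at the same `bin/rails assets:precompile` step here too, the problem is in the Docker build on that host (memory, network, or an emulation issue) rather than in MRSK itself.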
Congratulations on starting this fantastic and long-awaited way to deploy Rails apps!!! 🥇
I would like to have a way to build the images using pack and buildpacks. Sometimes that's more straightforward than using a Dockerfile, as it just works.
Of course, I'm not sure what the plans are for this beautiful gem, but I would like to start with a few ideas.
Having multiple ways to build the image would give everyone more freedom to build their Docker/Cloud Native images however they want.
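For reference, a buildpacks-based build step could look something like this (a sketch only; the registry path and builder image are assumptions, and MRSK would still need to handle tagging and the rest of the deploy on its own):

```
# Build and push the image with Cloud Native Buildpacks instead of a Dockerfile
pack build registry.example.com/user/app:latest \
  --builder heroku/builder:22 \
  --publish
```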
While playing with rollback, I found the terminology of VERSION quite confusing.
The concept of the rollback is based on the image tag, such as
[snip from README]
App Host: 192.168.0.1
CONTAINER ID   IMAGE                                                                          COMMAND                  CREATED          STATUS                      PORTS      NAMES
1d3c91ed1f51   registry.digitalocean.com/user/app:6ef8a6a84c525b123c5245345a8483f86d05a123   "/rails/bin/docker-e…"   19 minutes ago   Up 19 minutes               3000/tcp   chat-6ef8a6a84c525b123c5245345a8483f86d05a123
539f26b28369   registry.digitalocean.com/user/app:e5d9d7c2b898289dfbc5f7f1334140d984eedae4   "/rails/bin/docker-e…"   31 minutes ago   Exited (1) 27 minutes ago              chat-e5d9d7c2b898289dfbc5f7f1334140d984eedae4
From the example above, we can see that e5d9d7c2b898289dfbc5f7f1334140d984eedae4 was the last version, so it's available as a rollback target.
[/snip]
While the tag can represent the app's version, semantically there is no such thing as a container version in this case. Renaming it to "tag" would align with Docker's (and container tooling's in general) terminology and cause less confusion about what is meant. Note that this is mainly a cosmetic change, but it could help a lot in understanding what's actually being targeted.
What is the best way to add monitoring to MRSK?
Coming from K8s, it was easy to add kube-prometheus-stack and get Prometheus and Grafana out of the box, so I was wondering what the best way would be to add something like that to MRSK.
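Until something is built in, one option is to run the monitoring pieces as MRSK accessories alongside the app. A hedged sketch (the host, volume names, and whether this topology suits you are all assumptions; Prometheus would still need its own scrape config mounted in):

```yaml
accessories:
  prometheus:
    image: prom/prometheus:latest
    host: 192.168.0.1
    port: 9090
    directories:
      - prometheus-data:/prometheus
  grafana:
    image: grafana/grafana:latest
    host: 192.168.0.1
    port: 3000
```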
» mrsk build create
INFO [f6131a7e] Running docker context create mrsk-testing-app-arm64 --description 'mrsk-testing-app arm64 native host' --docker 'host=unix:///Users/nathan/.docker/run/docker.sock' && docker context create mrsk-testing-app-amd64 --description 'mrsk-testing-app amd64 native host' --docker 'host=ssh://app@remote-builder-host' && docker buildx create --use --name mrsk-testing-app mrsk-testing-app-arm64 --platform linux/arm64 && docker buildx create --append --name mrsk-testing-app mrsk-testing-app-amd64 --platform linux/amd64 as nathan@localhost
/Users/nathan/.rbenv/versions/3.2.0/lib/ruby/gems/3.2.0/gems/sshkit-1.21.3/lib/sshkit/command.rb:97:in `exit_status=': docker exit status: 256 (SSHKit::Command::Failed)
docker stdout: mrsk-testing-app-arm64
mrsk-testing-app-amd64
docker stderr: Successfully created context "mrsk-testing-app-arm64"
Successfully created context "mrsk-testing-app-amd64"
ERROR: failed to initialize builder mrsk-testing-app (mrsk-testing-app0): Cannot connect to the Docker daemon at unix:///Users/nathan/.docker/run/docker.sock. Is the docker daemon running?
Here, the remote connection was successful, but the local Docker daemon was not running. Since mrsk is heavily reliant on Docker, that seems like something we should catch and bail out on early.
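A hypothetical preflight check along those lines (not MRSK's actual API) could shell out to `docker info` and fail fast with a readable message before any contexts or builders are created:

```ruby
# Hypothetical preflight: verify the local Docker daemon is reachable
# before creating buildx contexts. `runner` is injectable for testing;
# the default shells out to `docker info` and discards its output.
def ensure_local_docker!(runner: ->(cmd) { system(cmd, out: File::NULL, err: File::NULL) })
  unless runner.call("docker info")
    raise "Local Docker daemon is not running. Start Docker and try again."
  end
  true
end
```

Running this before `docker context create` would turn the opaque `failed to initialize builder` error above into a single actionable message.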