kuby-core's People

Contributors

caffkane, camertron, dependabot[bot], ibrahima, lazyatom, mvz, palkan, ps-ruby, rept, traels

kuby-core's Issues

generate kuby causes error

When I run this command

bundle exec rails generate kuby

the kuby.rb file gets created, but I get the following:

create  config/initializers/kuby.rb
Traceback (most recent call last):
    19: from bin/rails:4:in `<main>'
18: from bin/rails:4:in `require'
17: from /instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<top (required)>'
16: from /instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
15: from /instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
14: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
13: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
12: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
11: from /instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/commands/generate/generate_command.rb:26:in `perform'
10: from /instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/generators.rb:276:in `invoke'
 9: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/base.rb:485:in `start'
 8: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/group.rb:232:in `dispatch'
 7: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `invoke_all'
 6: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `map'
 5: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `each'
 4: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `block in invoke_all'
 3: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
 2: from /instalane/vendor/bundle/ruby/2.5.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
 1: from /instalane/vendor/bundle/ruby/2.5.0/gems/kuby-core-0.11.4/lib/kuby/plugins/rails_app/generators/kuby.rb:24:in `create_config_file'
/instalane/vendor/bundle/ruby/2.5.0/gems/railties-5.2.4.3/lib/rails/railtie.rb:192:in `method_missing': undefined method `module_parent_name' for Instalane::Application:Class (NoMethodError)

I suspect it has something to do with the version of thor (which I cannot update because of another dependency).

I installed kuby-core (0.11.4) and kuby-digitalocean (0.4.2). Thor is at version 1.0.1, as you can see from the error.

Might the solution be to raise the minimum required thor version in the gemspec?
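For what it's worth, module_parent_name only exists in Rails 6+ (Rails 5.2 calls it parent_name), which may point at the Rails version rather than thor. A version-tolerant lookup could be a sketch along these lines (the helper name module_parent_name_for is mine, not kuby's):

```ruby
# Hypothetical compatibility helper: Rails 6 renamed Module#parent_name
# to #module_parent_name, and kuby-core 0.11.4 calls the new name,
# which does not exist on Rails 5.2.
def module_parent_name_for(klass)
  if klass.respond_to?(:module_parent_name)
    klass.module_parent_name        # Rails >= 6
  elsif klass.respond_to?(:parent_name)
    klass.parent_name               # Rails <= 5.2
  else
    # plain-Ruby fallback: strip the last constant from the name
    klass.name.to_s.split('::')[0..-2].join('::')
  end
end
```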

Side question: should kuby be installed in the development group of the Gemfile? Probably not, since you mentioned elsewhere that there is middleware that responds to health checks. Does this impact memory/performance while running in production? Or could the gem be split so that most of it goes into the development group, with only the health-check part loaded in production?

Docker Hub login does not work

I found a related issue (#5), but I still had trouble with the latest version of kuby (0.14.0).

$ bundle exec kuby -e production push
Pushing image kingdonb/kuby-test with tags 20211206144259, latest
Attempting to log in to registry at index.docker.io:443
Error response from daemon: login attempt to https://index.docker.io:443/v2/ failed with status: 503 Service Unavailable
Couldn't log in to the registry at index.docker.io:443
build failed: docker command exited with status code 1
Pushing image kingdonb/kuby-test with tags 20211206144259-assets, latest-assets
Attempting to log in to registry at index.docker.io:443
Error response from daemon: login attempt to https://index.docker.io:443/v2/ failed with status: 503 Service Unavailable
Couldn't log in to the registry at index.docker.io:443
build failed: docker command exited with status code 1

The issue appears to be some obscure detail about the index.docker.io service that prohibits accessing it in this way:

https://index.docker.io:443/v2/

If you hit it without the explicit port, the error does not occur:

https://index.docker.io/v2/

No idea why, but updating this method to look like that solved the issue for me:

sig { returns(String) }
def image_host
  # @image_host ||= "#{image_uri.host}:#{image_uri.port}"
  @image_host ||= "#{image_uri.host}"
end

This is not a reasonable change as-is, so I obviously haven't packaged it into a PR (it won't help anyone other than Docker Hub users), but maybe an appropriate workaround can be incorporated somehow anyway?
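If dropping the port unconditionally is too blunt, one possible shape for the workaround is to special-case Docker Hub only. A self-contained sketch (registry_host is a made-up helper, not kuby's API):

```ruby
require 'uri'

# Keep host:port for arbitrary registries, but drop the explicit port
# for Docker Hub, whose index.docker.io endpoint returns 503 when
# addressed as index.docker.io:443.
def registry_host(image_url)
  uri = URI.parse("https://#{image_url}")
  uri.host == 'index.docker.io' ? uri.host : "#{uri.host}:#{uri.port}"
end
```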

Now inexplicably I am getting an error when I try to push the image to the registry, but I get the same error outside of kuby, so it is unlikely to be kuby's fault ;)

Database user and password for new setup

Hi,

I used to have my database.yml like this:

production:
  adapter:  postgresql
  host:     <%= ENV['DB_HOST'] %>
  encoding: unicode
  database: <%= ENV['DB_NAME'] %>
  pool:     25
  username: <%= ENV['DB_USER'] %>
  password: <%= ENV['DB_PASS'] %>
  template: template0

Now I see in the kuby.rb file:

  add_plugin :rails_app do
    # configure database credentials
    database do
      user app_creds[:KUBY_DB_USER]
      password app_creds[:KUBY_DB_PASSWORD]
    end
  end

How does this work now? Do I make up a username and password for Postgres (since the container needs to be created anyway)? Do I include that password in kuby.rb and in my database.yml, or does kuby take care of database.yml?

Or does kuby expose environment variables that contain the DB user, password, etc.?
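For context, here is what the generated kuby.rb appears to assume about the encrypted credentials file (a sketch: the key names come from the snippet above, the values are made up; you would add them via rails credentials:edit):

```yaml
# decrypted view of config/credentials.yml.enc (illustrative values)
KUBY_DB_USER: myapp
KUBY_DB_PASSWORD: some-long-random-password
```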

Sorbet sig in anonymous module potentially breaks CLI

I think this line might have broken the CLI; when I run bundle exec kuby -e production build using 0.11.3, I get the following error:

NoMethodError: undefined method `sig' for #<Module:0x00007f9bb6427198>
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:22:in `block in <class:Commands>'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `initialize'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `new'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `<class:Commands>'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:7:in `<module:Kuby>'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/lib/kuby/commands.rb:6:in `<top (required)>'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/bin/kuby:4:in `require'
  /Users/james/.gem/ruby/2.6.5/gems/kuby-core-0.11.3/bin/kuby:4:in `<top (required)>'
  /Users/james/.gem/ruby/2.6.5/bin/kuby:23:in `load'
  /Users/james/.gem/ruby/2.6.5/bin/kuby:23:in `<top (required)>'

Is it possible you need to add an explicit extend T::Sig just before this line?
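The underlying Ruby behavior can be reproduced without Sorbet at all: methods extended into the surrounding class are not visible inside an anonymous Module.new block. A sketch (Sig here is a stand-in for T::Sig, and the class names are illustrative):

```ruby
# Stand-in for T::Sig#sig, to show the scoping issue in plain Ruby.
module Sig
  def sig(&blk); end
end

class Commands
  extend Sig
  sig { }            # works: Commands itself extends Sig

  HELPERS = Module.new do
    # sig { }        # would raise NoMethodError here: the anonymous
    #                # module does not inherit Commands' singleton
    extend Sig       # the proposed fix: extend explicitly
    sig { }          # now resolves
  end
end
```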

What kind of apps should we (not) deploy with Kuby?

Hi,

It would be nice to document somewhere (I did not see it anywhere; correct me if I am wrong) what kinds of apps we should and should not deploy with Kuby.

I would imagine it might get expensive to deploy a hobby app to EKS. I've never used it, but their website says: "You pay $0.10 per hour for each Amazon EKS cluster that you create." That would be around 72 USD per month, right? And I'm not sure whether that includes servers, databases, and so on.

In this case something like Dokku or Heroku would be much cheaper.

Anyway, such a discussion should probably cover not only cost, but also overall technical overkill and so on.

Create a basic "bare metal" provider

Right now, the Kuby ecosystem supports a number of hosting providers like DigitalOcean, Linode, etc., but there is currently no way to deploy to a "bare metal" cluster, i.e. one you or your company manages.

It would be great to have a generic provider that stubs out all the necessary methods. I think the only ones you will need to worry about are kubeconfig_path and storage_class_name. The user would configure the provider with the path to their kubeconfig (by default stored in ~/.kube/config), the storage class name (default could be "standard") and... that's basically it.
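Stripped of the DSL wiring (which is omitted/assumed here), the whole provider might be little more than this sketch (class name and constructor are illustrative, only the two methods above are real requirements):

```ruby
# Generic "bare metal" provider sketch: just the two methods this
# issue identifies, with the defaults suggested above.
class BareMetalProvider
  def initialize(kubeconfig: nil, storage_class: nil)
    @kubeconfig = kubeconfig
    @storage_class = storage_class
  end

  def kubeconfig_path
    @kubeconfig || File.join(Dir.home, '.kube', 'config')
  end

  def storage_class_name
    @storage_class || 'standard'
  end
end
```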

Inspired by issue #1.

Krane::FatalDeploymentError: Template validation failed

When deploying, krane blows up. This is a deploy with v0.7.0.

KUBY_DOCKER_TAG=latest bundle exec rake kuby:deploy --trace

I've set the tag to latest to bypass the other issue :)

This is what I get:

** Invoke kuby:deploy (first_time)
** Execute kuby:deploy
Validating global resource, namespace 'my-app-production'
namespace/my-app-production configured (dry run)
Deploying namespace 'my-app-production'
namespace/my-app-production unchanged
[INFO][2020-08-11 09:15:10 +0200]
[INFO][2020-08-11 09:15:10 +0200] ------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2020-08-11 09:15:11 +0200] All required parameters and files are present
[INFO][2020-08-11 09:15:11 +0200] Discovering resources:
[INFO][2020-08-11 09:15:13 +0200] - Deployment/my-app-web
[INFO][2020-08-11 09:15:13 +0200] - Secret/my-app-web-mysql-secret
[INFO][2020-08-11 09:15:13 +0200] - ServiceAccount/my-app-sa
[INFO][2020-08-11 09:15:13 +0200] - ConfigMap/my-app-config
[INFO][2020-08-11 09:15:13 +0200] - Secret/my-app-registry-secret
[INFO][2020-08-11 09:15:13 +0200] - Ingress/my-app-ingress
[INFO][2020-08-11 09:15:13 +0200] - Secret/my-app-secrets
[INFO][2020-08-11 09:15:13 +0200] - ClusterIssuer/letsencrypt-production
[INFO][2020-08-11 09:15:13 +0200] - MySQL/my-app-web-mysql
[INFO][2020-08-11 09:15:13 +0200] - Secret/my-app-web-mysql-secret
[INFO][2020-08-11 09:15:13 +0200] - MySQL/my-app-web-mysql
[INFO][2020-08-11 09:15:13 +0200] - Service/my-app-svc
[INFO][2020-08-11 09:15:15 +0200]
[INFO][2020-08-11 09:15:15 +0200] ------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2020-08-11 09:15:16 +0200] Template validation failed
[FATAL][2020-08-11 09:15:16 +0200]
[FATAL][2020-08-11 09:15:16 +0200] Invalid template: ClusterIssuer-letsencrypt-production20200811-60914-uybj4y.yml
[FATAL][2020-08-11 09:15:16 +0200] > Error message:
[FATAL][2020-08-11 09:15:16 +0200] W0811 09:15:13.103161 60953 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2020-08-11 09:15:16 +0200] error: unable to recognize "/var/folders/9l/3dw7rcl51pq4jfs0kjg99f7c0000gn/T/ClusterIssuer-letsencrypt-production20200811-60914-uybj4y.yml": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha2"
[FATAL][2020-08-11 09:15:16 +0200] > Template content:
[FATAL][2020-08-11 09:15:16 +0200] ---
[FATAL][2020-08-11 09:15:16 +0200] apiVersion: cert-manager.io/v1alpha2
[FATAL][2020-08-11 09:15:16 +0200] kind: ClusterIssuer
[FATAL][2020-08-11 09:15:16 +0200] metadata:
[FATAL][2020-08-11 09:15:16 +0200]   name: letsencrypt-production
[FATAL][2020-08-11 09:15:16 +0200]   namespace: cert-manager
[FATAL][2020-08-11 09:15:16 +0200] spec:
[FATAL][2020-08-11 09:15:16 +0200]   acme:
[FATAL][2020-08-11 09:15:16 +0200]     server: https://acme-v02.api.letsencrypt.org/directory
[FATAL][2020-08-11 09:15:16 +0200]     email: [email protected]
[FATAL][2020-08-11 09:15:16 +0200]     privateKeySecretRef:
[FATAL][2020-08-11 09:15:16 +0200]       name: letsencrypt-production
[FATAL][2020-08-11 09:15:16 +0200]     solvers:
[FATAL][2020-08-11 09:15:16 +0200]     - http01:
[FATAL][2020-08-11 09:15:16 +0200]         ingress:
[FATAL][2020-08-11 09:15:16 +0200]           class: nginx
[FATAL][2020-08-11 09:15:16 +0200]
[FATAL][2020-08-11 09:15:16 +0200]
[FATAL][2020-08-11 09:15:16 +0200] Invalid template: MySQL-my-app-web-mysql20200811-60914-1rs7vqx.yml
[FATAL][2020-08-11 09:15:16 +0200] > Error message:
[FATAL][2020-08-11 09:15:16 +0200] W0811 09:15:13.122295 60956 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2020-08-11 09:15:16 +0200] error: unable to recognize "/var/folders/9l/3dw7rcl51pq4jfs0kjg99f7c0000gn/T/MySQL-my-app-web-mysql20200811-60914-1rs7vqx.yml": no matches for kind "MySQL" in version "kubedb.com/v1alpha1"
[FATAL][2020-08-11 09:15:16 +0200] > Template content:
[FATAL][2020-08-11 09:15:16 +0200] ---
[FATAL][2020-08-11 09:15:16 +0200] kind: MySQL
[FATAL][2020-08-11 09:15:16 +0200] apiVersion: kubedb.com/v1alpha1
[FATAL][2020-08-11 09:15:16 +0200] spec:
[FATAL][2020-08-11 09:15:16 +0200]   terminationPolicy: DoNotTerminate
[FATAL][2020-08-11 09:15:16 +0200]   storageType: Durable
[FATAL][2020-08-11 09:15:16 +0200]   version: 5.7-v2
[FATAL][2020-08-11 09:15:16 +0200]   storage:
[FATAL][2020-08-11 09:15:16 +0200]     accessModes:
[FATAL][2020-08-11 09:15:16 +0200]     - ReadWriteOnce
[FATAL][2020-08-11 09:15:16 +0200]     storageClassName: do-block-storage
[FATAL][2020-08-11 09:15:16 +0200]     resources:
[FATAL][2020-08-11 09:15:16 +0200]       requests:
[FATAL][2020-08-11 09:15:16 +0200]         storage: 10Gi
[FATAL][2020-08-11 09:15:16 +0200]   databaseSecret:
[FATAL][2020-08-11 09:15:16 +0200]     secretName: my-app-web-mysql-secret
[FATAL][2020-08-11 09:15:16 +0200] metadata:
[FATAL][2020-08-11 09:15:16 +0200]   name: my-app-web-mysql
[FATAL][2020-08-11 09:15:16 +0200]   namespace: my-app-production
[FATAL][2020-08-11 09:15:16 +0200]
[FATAL][2020-08-11 09:15:16 +0200]
[FATAL][2020-08-11 09:15:16 +0200] Invalid template: MySQL-my-app-web-mysql20200811-60914-1na8no.yml
[FATAL][2020-08-11 09:15:16 +0200] > Error message:
[FATAL][2020-08-11 09:15:16 +0200] W0811 09:15:14.705164 60971 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2020-08-11 09:15:16 +0200] error: unable to recognize "/var/folders/9l/3dw7rcl51pq4jfs0kjg99f7c0000gn/T/MySQL-my-app-web-mysql20200811-60914-1na8no.yml": no matches for kind "MySQL" in version "kubedb.com/v1alpha1"
[FATAL][2020-08-11 09:15:16 +0200] > Template content:
[FATAL][2020-08-11 09:15:16 +0200] ---
[FATAL][2020-08-11 09:15:16 +0200] kind: MySQL
[FATAL][2020-08-11 09:15:16 +0200] apiVersion: kubedb.com/v1alpha1
[FATAL][2020-08-11 09:15:16 +0200] spec:
[FATAL][2020-08-11 09:15:16 +0200]   terminationPolicy: DoNotTerminate
[FATAL][2020-08-11 09:15:16 +0200]   storageType: Durable
[FATAL][2020-08-11 09:15:16 +0200]   version: 5.7-v2
[FATAL][2020-08-11 09:15:16 +0200]   storage:
[FATAL][2020-08-11 09:15:16 +0200]     accessModes:
[FATAL][2020-08-11 09:15:16 +0200]     - ReadWriteOnce
[FATAL][2020-08-11 09:15:16 +0200]     storageClassName: do-block-storage
[FATAL][2020-08-11 09:15:16 +0200]     resources:
[FATAL][2020-08-11 09:15:16 +0200]       requests:
[FATAL][2020-08-11 09:15:16 +0200]         storage: 10Gi
[FATAL][2020-08-11 09:15:16 +0200]   databaseSecret:
[FATAL][2020-08-11 09:15:16 +0200]     secretName: my-app-web-mysql-secret
[FATAL][2020-08-11 09:15:16 +0200] metadata:
[FATAL][2020-08-11 09:15:16 +0200]   name: my-app-web-mysql
[FATAL][2020-08-11 09:15:16 +0200]   namespace: my-app-production
[FATAL][2020-08-11 09:15:16 +0200]
rake aborted!
Krane::FatalDeploymentError: Template validation failed

Unable to deploy from GitHub Package Registry

I'm getting the following error when trying to deploy:

$ bundle exec kuby -e production deploy
error: bad URI(is not URI?): "GitHub Package Registry"

With debugging, I get:

$ GLI_DEBUG=true bundle exec kuby -e production deploy
error: bad URI(is not URI?): "GitHub Package Registry"
bundler: failed to load command: kuby (/home/matijs/.rbenv/versions/2.7.1/bin/kuby)
URI::InvalidURIError: bad URI(is not URI?): "GitHub Package Registry"
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/2.7.0/uri/rfc3986_parser.rb:67:in `split'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/2.7.0/uri/rfc3986_parser.rb:73:in `parse'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/2.7.0/uri/common.rb:234:in `parse'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/docker-remote-0.1.0/lib/docker/remote/client.rb:45:in `token'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/docker-remote-0.1.0/lib/docker/remote/client.rb:93:in `block in make_get'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/docker-remote-0.1.0/lib/docker/remote/client.rb:92:in `tap'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/docker-remote-0.1.0/lib/docker/remote/client.rb:92:in `make_get'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/docker-remote-0.1.0/lib/docker/remote/client.rb:21:in `tags'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/remote_tags.rb:28:in `tags'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/remote_tags.rb:39:in `timestamp_tags'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/tags.rb:68:in `timestamp_tags'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/tags.rb:73:in `latest_timestamp_tag'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/spec.rb:185:in `block in tag'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/spec.rb:184:in `fetch'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/docker/spec.rb:184:in `tag'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/kubernetes/spec.rb:75:in `before_deploy'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/kubernetes/spec.rb:106:in `deploy'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/tasks.rb:78:in `deploy'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/commands.rb:153:in `block (2 levels) in <class:Commands>'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/gli-2.19.2/lib/gli/command_support.rb:131:in `execute'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/gli-2.19.2/lib/gli/app_support.rb:296:in `block in call_command'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/gli-2.19.2/lib/gli/app_support.rb:309:in `call_command'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/gli-2.19.2/lib/gli/app_support.rb:83:in `run'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/lib/kuby/commands.rb:31:in `run'
  /home/matijs/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/kuby-core-0.11.8/bin/kuby:4:in `<top (required)>'
  /home/matijs/.rbenv/versions/2.7.1/bin/kuby:23:in `load'
  /home/matijs/.rbenv/versions/2.7.1/bin/kuby:23:in `<top (required)>'
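Judging from the trace, docker-remote's token method ends up URI.parse-ing a value that is a display name ("GitHub Package Registry"), not a URL, presumably taken from the registry's WWW-Authenticate header. A more tolerant approach would URI.parse only the realm and URL-encode the service. A sketch (token_url is a made-up helper, and the header shape is my assumption about what GitHub returns):

```ruby
require 'uri'

# Parse a Bearer challenge like
#   Bearer realm="https://example/token",service="GitHub Package Registry"
# treating only the realm as a URI and escaping the (possibly
# space-containing) service name before building the token URL.
def token_url(www_authenticate, scope)
  fields = www_authenticate.scan(/(\w+)="([^"]*)"/).to_h
  realm = URI.parse(fields.fetch('realm'))   # must be a real URL
  service = URI.encode_www_form_component(fields.fetch('service'))
  "#{realm}?service=#{service}&scope=#{scope}"
end
```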

How do multiple nodes in a cluster work?

Hi,

If I create a cluster with two nodes (for high availability), how does Kuby handle this?

If I have the sidekiq plugin, and therefore redis, do these get installed and run simultaneously on both nodes? What about the database? Does it get set up on both nodes, and might that cause issues?

I don't know anything about kubernetes, so maybe these are things that should be obvious to me, but aren't.

Thank you for your time!

master.key can be compromised in Docker image

I don't know if I followed the instructions to the letter, but I have a test repo that I created from rails new, and it has a config/master.key in the working directory (in the usual spot), which was quite unceremoniously built into the Docker image and pushed up to Docker Hub.

This is especially dangerous because Docker Hub credentials are embedded per the instructions, so compromising the master.key probably also means compromising the image host and/or cloud platform keys (!!!)

I'm not sure if kuby generated my .dockerignore, but I feel like config/master.key should definitely be listed in there as well, as a first step. Maybe there should be a more involved process for ensuring that master keys don't get baked into images, but I'm fairly certain that this one enhancement would have stopped me from compromising mine 👍
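For that first step, the entries might simply be the following (a sketch of the .dockerignore additions; the second line covers Rails 6 multi-environment credential keys):

```
# .dockerignore
config/master.key
config/credentials/*.key
```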

Skip assets compilation

Is there a supported way to skip asset compilation and copying (and to skip running the assets resource entirely)?

I have an API-only app, so there isn't even an assets:precompile task available.

So my deployment stalls at this point:

Continuing to wait for: Deployment/xxxxxx-assets
Still waiting for: Deployment/xxxxxx-assets
Still waiting for: Deployment/xxxxxx-assets

Expose more Kubernetes configuration options

Right now, these are the hoops you have to jump through if, for example, you want to change the deployment's readiness probe:

Kuby.definition('myapp') do
  environment(:production) do
    kubernetes do
      add_plugin(:rails_app) do
        deployment.spec.template.spec.container(:web).readiness_probe do
          timeout_seconds 1  # custom value goes here
        end
      end
    end
  end
end

That's neither obvious nor discoverable unless you're really familiar with Kuby's source code. Some config options, like tls_enabled, are already conveniently exposed as properties. It would be nice to expose a bunch of others as properties too. That way, modifying the readiness probe could be as easy as

add_plugin(:rails_app) do
  readiness_timeout 2
end

or even

add_plugin(:rails_app) do
  web_readiness_probe do
    timeout_seconds 2
  end
end

where the call to web_readiness_probe is just a convenience method that returns the web pod's readiness object (which is an instance of KubeDSL::DSL::V1::Probe).
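The convenience method itself would mostly be block forwarding. Here is a self-contained sketch of that delegation pattern with stand-in classes (the real objects would come from KubeDSL; nothing here is kuby's actual API):

```ruby
# Stand-in for KubeDSL::DSL::V1::Probe: records settings from a block.
class Probe
  attr_reader :values

  def initialize
    @values = {}
  end

  def timeout_seconds(n)
    @values[:timeout_seconds] = n
  end
end

# Stand-in plugin showing the proposed convenience method: it exposes
# the deeply nested probe object directly and forwards any block to it.
class RailsAppPlugin
  def initialize
    @web_probe = Probe.new
  end

  def web_readiness_probe(&block)
    @web_probe.instance_eval(&block) if block
    @web_probe
  end
end
```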

Include .npmrc file

I'm using Font Awesome Pro and have set the key it uses in the .npmrc file in the root of the project (see here for more info: https://fontawesome.com/how-to-use/on-the-web/setup/using-package-managers).

Locally, yarn install works fine. However, when building the Docker image it gives an authentication error, probably because that file isn't copied into the container. Is there a way to force kuby to copy it before executing the yarn command?
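In plain Dockerfile terms, the fix would be to copy the file before the install step runs (a sketch only; kuby generates its own Dockerfile, so this would have to go through whatever customization hook it offers):

```dockerfile
# Copy the npm auth config first so `yarn install` can authenticate
# against the Font Awesome Pro registry.
COPY .npmrc ./
COPY package.json yarn.lock ./
RUN yarn install --check-files
```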

undefined method `credentials' and undefined method `sig'

I have this problem when configuring kuby with Linode:

My repository - Branch

# Gemfile
# Kuby Deployment
gem 'kuby-core', '~> 0.11'
gem 'kuby-linode', '< 1.0'

I already verified that the docker credentials and the kubernetes cluster are read correctly.

# log rails server
undefined method `credentials' for #<Kuby::Docker::DevSpec:0x000055b4c3076840>
/app/kuby.rb:15:in `block (3 levels) in <top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/environment.rb:21:in `instance_eval'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/environment.rb:21:in `docker'
/app/kuby.rb:14:in `block (2 levels) in <top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/definition.rb:28:in `instance_eval'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/definition.rb:28:in `environment'
/app/kuby.rb:13:in `block in <top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby.rb:53:in `instance_eval'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby.rb:53:in `define'
/app/kuby.rb:5:in `<top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
/usr/local/bundle/ruby/2.5.0/gems/zeitwerk-2.3.0/lib/zeitwerk/kernel.rb:23:in `require'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:324:in `block in require'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:291:in `load_dependency'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:324:in `require'
/usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby.rb:46:in `load!'
/app/config/initializers/kuby.rb:2:in `<top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:55:in `load'
/usr/local/bundle/ruby/2.5.0/gems/bootsnap-1.4.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:55:in `load'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:318:in `block in load'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:291:in `load_dependency'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:318:in `load'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/engine.rb:666:in `block in load_config_initializer'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/notifications.rb:182:in `instrument'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/engine.rb:665:in `load_config_initializer'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/engine.rb:625:in `block (2 levels) in <class:Engine>'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/engine.rb:624:in `each'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/engine.rb:624:in `block in <class:Engine>'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:32:in `instance_exec'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:32:in `run'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:61:in `block in run_initializers'
/usr/local/lib/ruby/2.5.0/tsort.rb:228:in `block in tsort_each'
/usr/local/lib/ruby/2.5.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
/usr/local/lib/ruby/2.5.0/tsort.rb:422:in `block (2 levels) in each_strongly_connected_component_from'
/usr/local/lib/ruby/2.5.0/tsort.rb:431:in `each_strongly_connected_component_from'
/usr/local/lib/ruby/2.5.0/tsort.rb:421:in `block in each_strongly_connected_component_from'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:50:in `each'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:50:in `tsort_each_child'
/usr/local/lib/ruby/2.5.0/tsort.rb:415:in `call'
/usr/local/lib/ruby/2.5.0/tsort.rb:415:in `each_strongly_connected_component_from'
/usr/local/lib/ruby/2.5.0/tsort.rb:349:in `block in each_strongly_connected_component'
/usr/local/lib/ruby/2.5.0/tsort.rb:347:in `each'
/usr/local/lib/ruby/2.5.0/tsort.rb:347:in `call'
/usr/local/lib/ruby/2.5.0/tsort.rb:347:in `each_strongly_connected_component'
/usr/local/lib/ruby/2.5.0/tsort.rb:226:in `tsort_each'
/usr/local/lib/ruby/2.5.0/tsort.rb:205:in `tsort_each'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/initializable.rb:60:in `run_initializers'
/usr/local/bundle/ruby/2.5.0/gems/railties-6.0.3/lib/rails/application.rb:363:in `initialize!'
/app/config/environment.rb:5:in `<top (required)>'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:324:in `require'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:324:in `block in require'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:291:in `load_dependency'
/usr/local/bundle/ruby/2.5.0/gems/activesupport-6.0.3/lib/active_support/dependencies.rb:324:in `require'
/usr/local/bundle/ruby/2.5.0/gems/sidekiq-6.0.7/lib/sidekiq/cli.rb:252:in `boot_system'
/usr/local/bundle/ruby/2.5.0/gems/sidekiq-6.0.7/lib/sidekiq/cli.rb:37:in `run'
/usr/local/bundle/ruby/2.5.0/gems/sidekiq-6.0.7/bin/sidekiq:31:in `<top (required)>'
/usr/local/bundle/ruby/2.5.0/bin/sidekiq:23:in `load'
/usr/local/bundle/ruby/2.5.0/bin/sidekiq:23:in `<top (required)>'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `load'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `kernel_load'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:28:in `run'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:476:in `exec'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor.rb:399:in `dispatch'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:30:in `dispatch'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/base.rb:476:in `start'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:24:in `start'
/usr/local/bundle/gems/bundler-2.1.4/exe/bundle:46:in `block in <top (required)>'
/usr/local/bundle/gems/bundler-2.1.4/lib/bundler/friendly_errors.rb:123:in `with_friendly_errors'
/usr/local/bundle/gems/bundler-2.1.4/exe/bundle:34:in `<top (required)>'
/usr/local/bundle/bin/bundle:23:in `load'
/usr/local/bundle/bin/bundle:23:in `<main>'

This is my kuby config file:

# kuby.rb
require 'active_support/core_ext'
require 'active_support/encrypted_configuration'
require 'kuby/linode'

Kuby.define('last-ubication-deploy') do
  app_creds = ActiveSupport::EncryptedConfiguration.new(
    config_path: File.join('config', 'credentials.yml.enc'),
    key_path: File.join('config', 'master.key'),
    env_key: 'RAILS_MASTER_KEY',
    raise_if_missing_key: true
  )

  environment(:development) do
    docker do
      credentials do
        username app_creds[:docker][:username]
        password app_creds[:docker][:password]
        email app_creds[:docker][:email]
      end

      image_url 'docker.io/rudolfaraya/last-ubication-vehicle-app'
    end

    kubernetes do
      provider :linode do
        access_token app_creds[:linode][:access_token]
        cluster_id app_creds[:linode][:cluster_id]
      end

      add_plugin :rails_app do
        database do
          user app_creds[:DATABASE_USER]
          password app_creds[:DATABASE_PASSWORD]
        end
      end
    end
  end
end

If I comment out the docker configuration lines and run this command inside the container, it throws:

/app # bundle exec kuby
bundler: failed to load command: kuby (/usr/local/bundle/ruby/2.5.0/bin/kuby)
NoMethodError: undefined method `sig' for #<Module:0x0000559d0e6c5060>
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:22:in `block in <class:Commands>'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `initialize'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `new'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:21:in `<class:Commands>'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:7:in `<module:Kuby>'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/lib/kuby/commands.rb:6:in `<top (required)>'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/bin/kuby:4:in `require'
  /usr/local/bundle/ruby/2.5.0/gems/kuby-core-0.11.3/bin/kuby:4:in `<top (required)>'
  /usr/local/bundle/ruby/2.5.0/bin/kuby:23:in `load'
  /usr/local/bundle/ruby/2.5.0/bin/kuby:23:in `<top (required)>'

I hope you can help me.

Kuby::Docker::MissingTagError

The tags still don't work. Running deploy (v0.7.0) I get this:

** Invoke kuby:deploy (first_time)
** Execute kuby:deploy
rake aborted!
Kuby::Docker::MissingTagError: Could not find tag 'could not find latest timestamped tag'.

I've done some spelunking and had kuby print out the list of tags: ["20200811075435", "latest", "20200810165134"]

Using buildx

I was trying to add support for invoking docker build as docker buildx build, but I'm afraid it might be a little more complicated than that.

My intention is to add these type of params:

https://github.com/fluxcd/image-reflector-controller/blob/62c06ea58cd14072fde2c9ada7c6970dedf580e5/.github/workflows/build.yaml#L57-L59

Then we can cache Docker build layers in a more efficient way. In any case, the --cache-to and --cache-from options are only available in Docker Buildx, like a lot of other options I'd like to use.
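To make the idea concrete, here's a hypothetical sketch (not Kuby's actual implementation) of how the buildx invocation with registry-backed layer caching could be assembled. The image URL and cache ref below are placeholders:

```ruby
# Hypothetical sketch: assemble a `docker buildx build` command with
# --cache-from/--cache-to pointing at a registry cache ref. Kuby's real
# build code does not do this today; this is just the shape of the change.
def buildx_args(image_url, cache_ref)
  [
    'docker', 'buildx', 'build',
    '--cache-from', "type=registry,ref=#{cache_ref}",
    '--cache-to', "type=registry,ref=#{cache_ref},mode=max",
    '-t', image_url,
    '--push', '.'
  ]
end

args = buildx_args('registry.example.com/me/myapp:latest',
                   'registry.example.com/me/myapp:buildcache')
puts args.join(' ')
```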

In the past I've successfully enabled cache volumes for bundler inside of a Dockerfile, so that changes to the Gemfile and Gemfile.lock would not result in a complete recalculation of the image's build-time dependencies, but would instead build from cache so that only new dependencies were recalculated.

While this is obviously a niche optimization, there are other more common use cases that can only be handled well with buildx. The buildkit toolset is becoming pretty standard now IMHO. I'd love it if it were possible to invoke Kuby build in a way that uses buildx!

I might try implementing this again later, it might be easier now that 0.15.0 is solidly released. In the meanwhile, there is a functioning example of GitHub Actions that is WIP but currently looking pretty good at: https://github.com/kingdonb/kuby_test

With caching properly configured in this one last place, and a couple of other changes, I think this is nearly ready to tag and make an example out of it for contribution back to kuby-core, or to live somewhere in the getkuby project 👍

Multiple apps in one cluster?

Hi, thanks for this very promising project!

I am wondering if I can run my staging and production environments in the same cluster. If so, is there anything special I should know about how I need to configure the environments?

Kuby Ingress Error: Could not apply https://raw.githubusercontent.com/kubernetes/ingress-nginx....

I'm getting this error when we are trying to setup the provider on Digital Ocean.

Kuby config

require 'active_support/core_ext'
require 'active_support/encrypted_configuration'

# Define a production Kuby deploy environment
Kuby.define('KubyApp') do
  environment(:production) do
    # Because the Rails environment isn't always loaded when
    # your Kuby config is loaded, provide access to Rails
    # credentials manually.
    app_creds = ActiveSupport::EncryptedConfiguration.new(
      config_path: File.join('config', 'credentials', 'production.yml.enc'),
      key_path: File.join('config', 'credentials', 'production.key'),
      env_key: 'RAILS_MASTER_KEY',
      raise_if_missing_key: true
    )

    docker do
      # Configure your Docker registry credentials here. Add them to your
      # Rails credentials file by running `bundle exec rake credentials:edit`.
      credentials do
        username app_creds[:KUBY_DOCKER_USERNAME]
        password app_creds[:KUBY_DOCKER_PASSWORD]
        email app_creds[:KUBY_DOCKER_EMAIL]
      end

      # distro :alpine

      # Configure the URL to your Docker image here, eg:
      # image_url 'foo.bar.com/me/myproject'
      #
      # If you're using Gitlab's Docker registry, try something like this:
      image_url 'registry.gitlab.com/user/repo'
    end

    kubernetes do

      provider :digitalocean do
        access_token app_creds[:DIGITALOCEAN_ACCESS_TOKEN]
        cluster_id app_creds[:CLUSTER_ID]
      end

      # Add a plugin that facilitates deploying a Rails app.
      add_plugin :rails_app do
        hostname 'app2.domain.com'
        manage_database false

        env do
          data do
            add "DATABASE_URL", app_creds[:DATABASE_URL]
          end
        end
        # configure database credentials
        # database do
        #   user app_creds[:KUBY_DB_USER]
        #   password app_creds[:KUBY_DB_PASSWORD]
        # end
      end

      # Use Docker Desktop as the provider.
      # See: https://www.docker.com/products/docker-desktop
      #
      # Note: you will likely want to use a different provider when deploying
      # your application into a production environment. To configure a different
      # provider, add the corresponding gem to your gemfile and update the
      # following line according to the provider gem's README.

     
    end
  end
end

Kuby terminal error on kuby setup

bundle exec kuby -e production setup
Refreshing kubeconfig...
Successfully refreshed kubeconfig!
Deploying nginx ingress resources
Error from server (NotFound): namespaces "ingress-nginx" not found
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
unable to recognize "https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
Could not apply https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml: kubectl exited with status code 1
error: Could not apply https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml: kubectl exited with status code 1

Kuby terminal error on kuby deploy

bundle exec kuby -e production deploy
Error from server (NotFound): namespaces "app-production" not found
Validating global resource, namespace 'app-production'
namespace/app-production created (dry run)
Deploying namespace 'app-production'
namespace/app-production created
[INFO][2022-03-11 17:41:57 +0200]
[INFO][2022-03-11 17:41:57 +0200]	------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2022-03-11 17:42:00 +0200]	All required parameters and files are present
[INFO][2022-03-11 17:42:00 +0200]	Discovering resources:
[INFO][2022-03-11 17:42:03 +0200]	  - Service/app-assets-svc
[INFO][2022-03-11 17:42:03 +0200]	  - Ingress/app-ingress
[INFO][2022-03-11 17:42:03 +0200]	  - Deployment/app-assets
[INFO][2022-03-11 17:42:03 +0200]	  - ConfigMap/app-config
[INFO][2022-03-11 17:42:03 +0200]	  - ConfigMap/app-assets-nginx-config
[INFO][2022-03-11 17:42:03 +0200]	  - ServiceAccount/app-assets-sa
[INFO][2022-03-11 17:42:03 +0200]	  - Service/app-svc
[INFO][2022-03-11 17:42:03 +0200]	  - ClusterIssuer/letsencrypt-production
[INFO][2022-03-11 17:42:03 +0200]	  - ServiceAccount/app-sa
[INFO][2022-03-11 17:42:03 +0200]	  - Deployment/app-web
[INFO][2022-03-11 17:42:03 +0200]	  - Secret/app-secrets
[INFO][2022-03-11 17:42:03 +0200]	  - Secret/app-registry-secret
[INFO][2022-03-11 17:42:07 +0200]
[INFO][2022-03-11 17:42:07 +0200]	------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2022-03-11 17:42:07 +0200]	Template validation failed
[FATAL][2022-03-11 17:42:07 +0200]
[FATAL][2022-03-11 17:42:07 +0200]	Invalid template: ClusterIssuer-letsencrypt-production20220311-21350-rc7rha.yml
[FATAL][2022-03-11 17:42:07 +0200]	> Error message:
[FATAL][2022-03-11 17:42:07 +0200]	    W0311 17:42:03.089014   21380 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2022-03-11 17:42:07 +0200]	    error: unable to recognize "/var/folders/hh/z_vjqk3j3dl5vw0whd7h7bp80000gn/T/ClusterIssuer-letsencrypt-production20220311-21350-rc7rha.yml": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha2"
[FATAL][2022-03-11 17:42:07 +0200]	> Template content:
[FATAL][2022-03-11 17:42:07 +0200]	    ---
[FATAL][2022-03-11 17:42:07 +0200]	    apiVersion: cert-manager.io/v1alpha2
[FATAL][2022-03-11 17:42:07 +0200]	    kind: ClusterIssuer
[FATAL][2022-03-11 17:42:07 +0200]	    metadata:
[FATAL][2022-03-11 17:42:07 +0200]	      name: letsencrypt-production
[FATAL][2022-03-11 17:42:07 +0200]	      namespace: cert-manager
[FATAL][2022-03-11 17:42:07 +0200]	    spec:
[FATAL][2022-03-11 17:42:07 +0200]	      acme:
[FATAL][2022-03-11 17:42:07 +0200]	        server: https://acme-v02.api.letsencrypt.org/directory
[FATAL][2022-03-11 17:42:07 +0200]	        email: [email protected]
[FATAL][2022-03-11 17:42:07 +0200]	        privateKeySecretRef:
[FATAL][2022-03-11 17:42:07 +0200]	          name: letsencrypt-production
[FATAL][2022-03-11 17:42:07 +0200]	        solvers:
[FATAL][2022-03-11 17:42:07 +0200]	        - http01:
[FATAL][2022-03-11 17:42:07 +0200]	            ingress:
[FATAL][2022-03-11 17:42:07 +0200]	              class: nginx
[FATAL][2022-03-11 17:42:07 +0200]
[FATAL][2022-03-11 17:42:07 +0200]
[FATAL][2022-03-11 17:42:07 +0200]	Invalid template: Ingress-app-ingress20220311-21350-ht7nke.yml
[FATAL][2022-03-11 17:42:07 +0200]	> Error message:
[FATAL][2022-03-11 17:42:07 +0200]	    W0311 17:42:05.241076   21398 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2022-03-11 17:42:07 +0200]	    error: unable to recognize "/var/folders/hh/z_vjqk3j3dl5vw0whd7h7bp80000gn/T/Ingress-app-ingress20220311-21350-ht7nke.yml": no matches for kind "Ingress" in version "extensions/v1beta1"
[FATAL][2022-03-11 17:42:07 +0200]	> Template content:
[FATAL][2022-03-11 17:42:07 +0200]	    ---
[FATAL][2022-03-11 17:42:07 +0200]	    apiVersion: extensions/v1beta1
[FATAL][2022-03-11 17:42:07 +0200]	    kind: Ingress
[FATAL][2022-03-11 17:42:07 +0200]	    metadata:
[FATAL][2022-03-11 17:42:07 +0200]	      annotations:
[FATAL][2022-03-11 17:42:07 +0200]	        kubernetes.io/ingress.class: nginx
[FATAL][2022-03-11 17:42:07 +0200]	        cert-manager.io/cluster-issuer: letsencrypt-production
[FATAL][2022-03-11 17:42:07 +0200]	      name: app-ingress
[FATAL][2022-03-11 17:42:07 +0200]	      namespace: app-production
[FATAL][2022-03-11 17:42:07 +0200]	    spec:
[FATAL][2022-03-11 17:42:07 +0200]	      rules:
[FATAL][2022-03-11 17:42:07 +0200]	      - host: app2.domain.com
[FATAL][2022-03-11 17:42:07 +0200]	        http:
[FATAL][2022-03-11 17:42:07 +0200]	          paths:
[FATAL][2022-03-11 17:42:07 +0200]	          - backend:
[FATAL][2022-03-11 17:42:07 +0200]	              serviceName: app-svc
[FATAL][2022-03-11 17:42:07 +0200]	              servicePort: 8080
[FATAL][2022-03-11 17:42:07 +0200]	            path: "/"
[FATAL][2022-03-11 17:42:07 +0200]	      - host: app2.domain.com
[FATAL][2022-03-11 17:42:07 +0200]	        http:
[FATAL][2022-03-11 17:42:07 +0200]	          paths:
[FATAL][2022-03-11 17:42:07 +0200]	          - backend:
[FATAL][2022-03-11 17:42:07 +0200]	              serviceName: app-assets-svc
[FATAL][2022-03-11 17:42:07 +0200]	              servicePort: 8082
[FATAL][2022-03-11 17:42:07 +0200]	            path: "/assets"
[FATAL][2022-03-11 17:42:07 +0200]	          - backend:
[FATAL][2022-03-11 17:42:07 +0200]	              serviceName: app-assets-svc
[FATAL][2022-03-11 17:42:07 +0200]	              servicePort: 8082
[FATAL][2022-03-11 17:42:07 +0200]	            path: "/packs"
[FATAL][2022-03-11 17:42:07 +0200]	      tls:
[FATAL][2022-03-11 17:42:07 +0200]	      - hosts:
[FATAL][2022-03-11 17:42:07 +0200]	        - app2.domain.com
[FATAL][2022-03-11 17:42:07 +0200]	        secretName: app-tls
[FATAL][2022-03-11 17:42:07 +0200]
error: Template validation failed

Is there anything I can do to fix it?

Support for Rack compatible apps other than Rails

Kuby is a very nice piece of work with support for different cloud providers – including the ability to add your own provider.

The other end, however, appears to be less flexible: According to the docs, «Kuby is designed to work with Rails 5.1 and up». It would be very cool if Kuby supported any Rack-compatible app (such as Hanami, Sinatra etc) as well, maybe using a modular approach similar to the provider end.

Is this in any way desirable and feasible?

Pass RAILS_MASTER_KEY to Docker build

Currently, running RAILS_MASTER_KEY=abc123 bundle exec rake kuby:build fails during the database config rewrite phase because config/master.key doesn't exist. Kuby needs to pass RAILS_MASTER_KEY in as a build argument.
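A sketch of what that could look like, assuming Kuby emitted a matching `ARG RAILS_MASTER_KEY` in the generated Dockerfile (this is the standard docker build-arg mechanism, not Kuby's current behavior):

```shell
# Hypothetical: forward the key from the environment as a build argument.
# Requires `ARG RAILS_MASTER_KEY` in the generated Dockerfile.
docker build --build-arg RAILS_MASTER_KEY="$RAILS_MASTER_KEY" -t myapp .
```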

Add option to write Dockerfiles to disk

Whether you're using git-ops, want to version your Dockerfiles in source control, or just want to store them in a file, Kuby should support emitting them via the CLI. Perhaps something like kuby dockerfiles -o /path/to/output_dir.

Bare metal clusters?

Have you thought what it would take to make this work for bare metal clusters? I have a couple of k3s clusters running in VPSs and kuby sounds like a great tool for managing my apps.

Support recent versions of kubedb

Unfortunately the fine people over at AppsCode (the makers of kubedb) now require each user to obtain a free license. While kuby could do some tricky things to obtain a license automatically, that doesn't feel particularly moral (or maybe even legal?) The only solution I can think of is to point the user to the AppsCode website during cluster setup and then prompt them for the contents of the license file. Obviously that's not ideal, but I think it's the only option we have if we want to keep using kubedb with kuby (which I think we do - it's awesome).

RailsConf interest?

Any interest in presenting Kuby for RailsConf?


I'm thinking about submitting something about Kuby and GitOps for the CFP that's open now!

Pods can't spread across nodes with ReadWriteOnce volumes

I'm seeing this on Linode but I suspect it's general; when I increase the number of nodes in my pool from one to two, and increase the replicas for the rails app in kuby.rb, then pods that are started on the new node fail to start because they cannot mount the block storage ("volume" in Linode parlance). I believe this is because Linode (and other providers, from what I can tell) limit volume connection to a single node.

I believe the volume is only being used for serving assets, once the deployment is actually running, so in theory, after deployment, only the assets pod(s) need to be connected to that volume, and they could be connected read-only. I'm not sure how to handle during deployments though. And generally my understanding might have huge holes or mistakes.

But the gist of it is, while we're mounting the assets volume as ReadWriteOnce, I don't think we can scale assets or web pods across multiple nodes.

Make it easy to stand up redis and memcached instances

Redis and memcached are technically supported via the kuby-kube-db gem, but they require a lot of knowledge to implement (see #28). It would be great to be able to stand up new Redis and memcached instances some standard way, eg:

kubernetes do
  add_plugin(:redis) do
    instance(:my_redis) do
      version '5.0.3-v1'
      storage '1Gi'
      # etc
    end
  end
end

That said, I'm not sure how many Rails devs use Redis and memcached (Redis especially) for anything other than caching. For that reason it might make more sense to introspect what the app is using for its Rails cache and automatically stand up the right instance. I'm not sure we could tell Rails to connect to the instance automatically though, since the Rails environment is often not completely initialized when Kuby.load! is called. We get around that for the database connection by rewriting database.yml, but no such mechanism exists for caching (caching config is defined in code in config/environments/*.rb).

Certain rake tasks attempt to connect to pods that are "Terminating"

The rake tasks in tasks/kuby.rake (and by proxy some of the methods in tasks.rb) query Kubernetes for the first running web pod, regardless of that pod's current state. Pods that are "Terminating" or "Pending" should generally be thought of as unusable and we shouldn't select them for executing remote commands, tailing logs, etc.
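For reference, a "Terminating" pod still reports phase "Running" in the Kubernetes API but carries a metadata.deletionTimestamp, so the fix needs to check both. A hypothetical sketch (not the actual tasks.rb code), operating on pods already parsed from the API response:

```ruby
# Hypothetical pod selection: skip pods that are not Running or that are
# marked for deletion (Terminating pods keep phase "Running" but gain a
# deletionTimestamp).
def first_usable_pod(pods)
  pods.find do |pod|
    pod.dig('status', 'phase') == 'Running' &&
      pod.dig('metadata', 'deletionTimestamp').nil?
  end
end

pods = [
  { 'metadata' => { 'name' => 'web-1', 'deletionTimestamp' => '2020-01-01T00:00:00Z' },
    'status' => { 'phase' => 'Running' } },
  { 'metadata' => { 'name' => 'web-2' },
    'status' => { 'phase' => 'Running' } }
]

puts first_usable_pod(pods)['metadata']['name'] # => web-2
```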

Unable to deploy

I tried to deploy, but got the following error:

[FATAL][2020-10-15 18:44:38 -0400]      Deployment/testapp-assets: TIMED OUT (progress deadline: 600s)
[FATAL][2020-10-15 18:44:38 -0400]      Timeout reason: ProgressDeadlineExceeded
[FATAL][2020-10-15 18:44:38 -0400]      Latest ReplicaSet: testapp-assets-86d95fd9c9
[FATAL][2020-10-15 18:44:38 -0400]
[FATAL][2020-10-15 18:44:38 -0400]      The following containers have not passed their readiness probes on at least one pod:
[FATAL][2020-10-15 18:44:38 -0400]      > testapp-assets must respond with a good status code at '/500.html'
[FATAL][2020-10-15 18:44:38 -0400]
[FATAL][2020-10-15 18:44:38 -0400]        - Final status: 1 replica, 1 updatedReplica, 1 unavailableReplica
[FATAL][2020-10-15 18:44:38 -0400]        - Events (common success events excluded):
[FATAL][2020-10-15 18:44:38 -0400]            [Deployment/testapp-assets]     ScalingReplicaSet: Scaled up replica set testapp-assets-86d95fd9c9 to 1 (1 events)
[FATAL][2020-10-15 18:44:38 -0400]            [Pod/testapp-assets-86d95fd9c9-zjmzz]   SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-fd6f0799-ee5b-4948-bcb3-1a9acf8c8a1d"  (1 events)
[FATAL][2020-10-15 18:44:38 -0400]            [Pod/testapp-assets-86d95fd9c9-zjmzz]   Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404 (99 events)
[FATAL][2020-10-15 18:44:38 -0400]        - Logs from container 'testapp-assets' (last 25 lines shown):
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:43:53 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:43:56 [error] 6#6: *184 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:43:56 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:43:59 [error] 6#6: *185 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:43:59 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:02 [error] 6#6: *186 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:02 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:05 [error] 6#6: *187 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:05 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:08 [error] 6#6: *188 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:08 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:11 [error] 6#6: *189 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:11 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:14 [error] 6#6: *190 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:14 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:17 [error] 6#6: *191 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:17 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:20 [error] 6#6: *192 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:20 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:23 [error] 6#6: *193 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:23 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:26 [error] 6#6: *194 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:26 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]            2020/10/15 22:44:29 [error] 6#6: *195 open() "/usr/share/nginx/assets/current/500.html" failed (2: No such file or directory), client: 10.244.0.68, server: localhost, request: "GET /500.html HTTP/1.1", host: "10.244.0.37:8082"
[FATAL][2020-10-15 18:44:38 -0400]            10.244.0.68 - - [15/Oct/2020:22:44:29 +0000] "GET /500.html HTTP/1.1" 404 143 "-" "kube-probe/1.18" "-"
[FATAL][2020-10-15 18:44:38 -0400]
[FATAL][2020-10-15 18:44:38 -0400]      Deployment/testapp-web: FAILED
[FATAL][2020-10-15 18:44:38 -0400]      Latest ReplicaSet: testapp-web-78567d78fb
[FATAL][2020-10-15 18:44:38 -0400]
[FATAL][2020-10-15 18:44:38 -0400]      The following containers are in a state that is unlikely to be recoverable:
[FATAL][2020-10-15 18:44:38 -0400]      > testapp-create-db: Crashing repeatedly (exit 1). See logs for more information.
[FATAL][2020-10-15 18:44:38 -0400]
[FATAL][2020-10-15 18:44:39 -0400]        - Final status: 1 replica, 1 updatedReplica, 1 unavailableReplica
[FATAL][2020-10-15 18:44:39 -0400]        - Events (common success events excluded):
[FATAL][2020-10-15 18:44:39 -0400]            [Deployment/testapp-web]        ScalingReplicaSet: Scaled up replica set testapp-web-78567d78fb to 1 (1 events)
[FATAL][2020-10-15 18:44:39 -0400]            [Pod/testapp-web-78567d78fb-nqvdd]      SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-fd6f0799-ee5b-4948-bcb3-1a9acf8c8a1d"  (1 events)
[FATAL][2020-10-15 18:44:39 -0400]            [Pod/testapp-web-78567d78fb-nqvdd]      BackOff: Back-off restarting failed container (23 events)
[FATAL][2020-10-15 18:44:39 -0400]        - Logs from container 'testapp-web': None found. Please check your usual logging service (e.g. Splunk).
[FATAL][2020-10-15 18:44:39 -0400]        - Logs from container 'testapp-migrate-db': None found. Please check your usual logging service (e.g. Splunk).
[FATAL][2020-10-15 18:44:39 -0400]        - Logs from container 'testapp-copy-assets': None found. Please check your usual logging service (e.g. Splunk).
[FATAL][2020-10-15 18:44:39 -0400]        - Logs from container 'testapp-create-db' (last 25 lines shown):
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/zeitwerk-2.4.0/lib/zeitwerk/kernel.rb:34:in `require'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/dependencies.rb:324:in `block in require'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/dependencies.rb:291:in `load_dependency'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/dependencies.rb:324:in `require'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/railties-6.0.3.2/lib/rails/application.rb:339:in `require_environment!'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/railties-6.0.3.2/lib/rails/application.rb:523:in `block in run_tasks_blocks'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/kuby-core-0.11.4/lib/kuby/plugins/rails_app/tasks.rake:20:in `block (4 levels) in <main>'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/src/app/bundle/ruby/2.7.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `load'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `kernel_load'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:28:in `run'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:476:in `exec'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor.rb:399:in `dispatch'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:30:in `dispatch'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/base.rb:476:in `start'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/cli.rb:24:in `start'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/exe/bundle:46:in `block in <top (required)>'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/lib/bundler/friendly_errors.rb:123:in `with_friendly_errors'
[FATAL][2020-10-15 18:44:39 -0400]            /usr/local/bundle/gems/bundler-2.1.4/exe/bundle:34:in `<top (required)>'
[FATAL][2020-10-15 18:44:39 -0400]            bin/bundle:113:in `load'
[FATAL][2020-10-15 18:44:39 -0400]            bin/bundle:113:in `<main>'
[FATAL][2020-10-15 18:44:39 -0400]            Tasks: TOP => environment
[FATAL][2020-10-15 18:44:39 -0400]            (See full trace by running task with --trace)

And here is the config:

require 'active_support/core_ext'
require 'active_support/encrypted_configuration'

Kuby.register_package('libvips')
Kuby.register_package('libvips-dev')
Kuby.register_package('libvips-tools')
# Define a production Kuby deploy environment
Kuby.define('TestApp') do
  environment(:production) do
    # Because the Rails environment isn't always loaded when
    # your Kuby config is loaded, provide access to Rails
    # credentials manually.
    app_creds = ActiveSupport::EncryptedConfiguration.new(
      config_path: File.join('config', 'credentials.yml.enc'),
      key_path: File.join('config', 'master.key'),
      env_key: 'RAILS_MASTER_KEY',
      raise_if_missing_key: true
    )

    docker do
      # Configure your Docker registry credentials here. Add them to your
      # Rails credentials file by running `bundle exec rake credentials:edit`.
      package_phase.add('libvips')
      package_phase.add('libvips-dev')
      package_phase.add('libvips-tools')

      credentials do
        username app_creds.dig(:gitlab, :username)
        password app_creds.dig(:gitlab, :access_token)
        email app_creds.dig(:gitlab, :email)
      end

      # Configure the URL to your Docker image here, eg:
      # image_url 'foo.bar.com/me/myproject'
      #
      # If you're using Gitlab's Docker registry, try something like this:
      image_url 'registry.gitlab.com/jwald1/testapp'
    end

    kubernetes do
      # Add a plugin that facilitates deploying a Rails app.
      add_plugin :rails_app do
        # configure database credentials
        hostname "testapp.com"
        database do
          user app_creds.dig(:kuby_db, :user)
          password app_creds.dig(:kuby_db, :password)
        end
      end

      # Use Docker Desktop as the provider.
      # See: https://www.docker.com/products/docker-desktop
      #
      # Note: you will likely want to use a different provider when deploying
      # your application into a production environment. To configure a different
      # provider, add the corresponding gem to your gemfile and update the
      # following line according to the provider gem's README.
      provider :digitalocean do
        access_token app_creds.dig(:digitalocean, :access_token)
        cluster_id app_creds.dig(:digitalocean, :cluster_id)
      end
    end
  end
end

Refresh kubeconfig if kuby config changes

A number of the provider gems will attempt to use old kubeconfig files created for other clusters if the config parameters in kuby.rb change. One possible solution would be to use a hashed version of things like the cluster ID, access token, tenant ID, etc in the file path so any config changes will force a kubeconfig refresh. See: getkuby/kuby-digitalocean#2
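A minimal sketch of the hashed-path idea, assuming a hypothetical `kubeconfig_path` helper (not existing Kuby API): any change to the digested parameters yields a new file path, forcing a refresh.

```ruby
# Hypothetical: derive the kubeconfig filename from a digest of the
# provider settings so a config change can't reuse a stale kubeconfig.
require 'digest'

def kubeconfig_path(cluster_id, access_token, dir: '/tmp')
  digest = Digest::SHA256.hexdigest("#{cluster_id}|#{access_token}")[0, 12]
  File.join(dir, "kubeconfig-#{digest}.yaml")
end
```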

.app domains not working

Upon running

bundle exec kuby -e production deploy

I get this:


[INFO][2021-04-14 23:45:55 +0200]	Successfully deployed in 1.4s: ClusterIssuer/letsencrypt-production, ConfigMap/lennis-assets-nginx-config, ConfigMap/lennis-config, Ingress/lennis-ingress, PersistentVolumeClaim/lennis-assets, Postgres/lennis-web-postgres, Postgres/lennis-web-postgres, Secret/lennis-registry-secret, Secret/lennis-secrets, Secret/lennis-web-postgres-secret, Secret/lennis-web-postgres-secret, Service/lennis-assets-svc, Service/lennis-svc, ServiceAccount/lennis-assets-sa, ServiceAccount/lennis-sa
[INFO][2021-04-14 23:45:55 +0200]	Continuing to wait for: Deployment/lennis-assets, Deployment/lennis-web
[INFO][2021-04-14 23:46:28 +0200]	Still waiting for: Deployment/lennis-assets, Deployment/lennis-web
[INFO][2021-04-14 23:47:01 +0200]	Still waiting for: Deployment/lennis-assets, Deployment/lennis-web
[ERROR][2021-04-14 23:47:22 +0200]	Deployment/lennis-web failed to deploy after 88.3s
[INFO][2021-04-14 23:47:22 +0200]	Continuing to wait for: Deployment/lennis-assets
[INFO][2021-04-14 23:47:52 +0200]	Still waiting for: Deployment/lennis-assets
[INFO][2021-04-14 23:48:22 +0200]	Still waiting for: Deployment/lennis-assets
[INFO][2021-04-14 23:48:55 +0200]	Still waiting for: Deployment/lennis-assets
...

When looking at the assets pod I see this error message:

Readiness probe failed: HTTP probe failed with statuscode: 404

I believe this has something to do with the .app TLD: .app is on the HSTS preload list, so browsers will only connect to .app domains over HTTPS. Are there any solutions for this (in case I'm right and that's what's causing it)?

Static assets not being served

Currently, none of the static assets (images, javascript, css, etc) will be served by a Kuby-deployed application, at least not by default. This is because Rails disables serving static assets in production. You can enable it by setting config.public_file_server.enabled = true in config/environments/production.rb, or by setting the RAILS_SERVE_STATIC_FILES environment variable.

Kuby-deployed apps should serve static assets out of the box. We can do one of:

  1. set the aforementioned environment variable.
  2. build assets into a separate Docker image served by a simple file server (much more complicated).
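For reference, option 1 amounts to the standard Rails setting in config/environments/production.rb (this is the line the default Rails template generates):

```ruby
# config/environments/production.rb
# Serve static files from public/ when the env var is set, or
# hard-code `true` to always serve them.
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
```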

Wait for database to start

Often the very first Kuby deploy will fail because the database hasn't started yet. We should be able to add a mechanism to Kuby to wait for the db to respond before starting web pods.
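A minimal sketch of such a wait mechanism — `wait_for_port` is hypothetical, not Kuby's API; it simply polls the database's TCP port before the web process boots:

```ruby
require 'socket'

# Block until host:port accepts TCP connections, or raise after `timeout`
# seconds. Suitable as an init-container step before starting web pods.
def wait_for_port(host, port, timeout: 60, interval: 2)
  deadline = Time.now + timeout
  loop do
    begin
      Socket.tcp(host, port, connect_timeout: interval) { return true }
    rescue SystemCallError, SocketError
      raise "#{host}:#{port} not reachable after #{timeout}s" if Time.now > deadline
      sleep interval
    end
  end
end

# Example (hypothetical service name):
# wait_for_port('myapp-web-postgres', 5432)
```

Note this only proves the port is open; a stricter check would also issue a trivial query (e.g. `SELECT 1`) to confirm the database is accepting connections.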

Is redis or memcached supported?

Thanks for creating this amazing tool!

Does Kuby support Redis or Memcached? If so could you please provide some instructions how to add them to a project.

image_url

The image_url field is the full URL to your image, including the registry's domain.

Except for Docker Hub, where the domain should not be included ... at least not when pushing!?
It seems the full URL is needed for the deploy phase, though.

package.json is missing. Yarn obligated?

I'm trying this out with a Rails 5.2 app that isn't using Webpack.

When I'm building the image I get this at step 20:

Step 20/27 : COPY package.json .
COPY failed: stat /var/lib/docker/tmp/docker-builder133647128/package.json: no such file or directory
error: build failed: docker command exited with status code 1

Looking in the docs I found:

Yarn phase: Runs yarn install, which installs all the JavaScript dependencies listed in your app's package.json.

Is using Yarn a requirement for using Kuby, or can I indicate somewhere that this step isn't needed?
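Kuby's Docker DSL appears to let you remove the JavaScript build steps entirely. A sketch of the relevant kuby.rb fragment — verify the exact method names against your installed Kuby version:

```ruby
# kuby.rb
docker do
  # Skip the JavaScript steps for apps without a package.json.
  package_phase.remove(:yarn)
  delete :yarn_phase
end
```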

Support using a user other than root to connect to databases

I discovered yesterday that rails new mytestapp -d mysql will create a database.yml file with the username field set to the name of the app, eg:

production:
  adapter: mysql2
  username: mytestapp
  password: <%= ENV['MYTESTAPP_DATABASE_PASSWORD'] %>

All the rails apps I tested while developing Kuby use the root user, which kubedb creates by default. I assumed kubedb would create a user from whatever username and password you give it, but that is definitely not the case. Kubedb has the ability to run an init script when MySQL and Postgres instances start, so I think that's the vector we need to use to get the user created (if it isn't root).

Error Setting up Digital Ocean

bundle exec kuby -e production setup
Setting up kubedb
Fetching Helm chart
Error: looks like "https://charts.appscode.com/stable/" is not a valid chart repository or cannot be reached: Get https://charts.appscode.com/stable/index.yaml: dial tcp: i/o timeout
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "gitlab" chart repository (https://charts.gitlab.io):
        Get https://charts.gitlab.io/index.yaml: dial tcp: i/o timeout
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com/):
        Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: i/o timeout
Update Complete. ⎈ Happy Helming!⎈ 
Deploying kubedb operator
Error: Kubernetes cluster unreachable
Error: failed to download "appscode/kubedb" (hint: running `helm repo update` may help)
error: could not install chart 'kubedb-operator': helm exited with status code 1

When I run helm repo update locally, it all works fine.
Here's my config:

require 'active_support/core_ext'
require 'active_support/encrypted_configuration'
require 'kuby/digitalocean'


# Define a production Kuby deploy environment
Kuby.define('RcruiT') do
  environment(:production) do
    # Because the Rails environment isn't always loaded when
    # your Kuby config is loaded, provide access to Rails
    # credentials manually.
    app_creds = ActiveSupport::EncryptedConfiguration.new(
      config_path: File.join('config', 'credentials.yml.enc'),
      key_path: File.join('config', 'master.key'),
      env_key: 'RAILS_MASTER_KEY',
      raise_if_missing_key: true
    )

    docker do
      # Configure your Docker registry credentials here. Add them to your
      # Rails credentials file by running `bundle exec rake credentials:edit`.
      credentials do
        username app_creds[:KUBY_DOCKER_USERNAME]
        password app_creds[:KUBY_DOCKER_PASSWORD]
        email app_creds[:KUBY_DOCKER_EMAIL]
      end

      # Configure the URL to your Docker image here, eg:
      image_url '***********'
      #
      # If you're using Gitlab's Docker registry, try something like this:
      # image_url 'registry.gitlab.com/<username>/<repo>'
    end

    kubernetes do
      # Add a plugin that facilitates deploying a Rails app.
      add_plugin :rails_app do
        # configure database credentials
        database do
          user app_creds[:KUBY_DB_USER]
          password app_creds[:KUBY_DB_PASSWORD]
        end
      end

      # Use DigitalOcean as the provider.
      # See: https://www.digitalocean.com/products/kubernetes/
      provider :digitalocean do
        access_token app_creds[:DIGITAL_OCEAN_ACCESS_TOKEN]
        cluster_id '***************************'
      end

    end
  end
end

Site not reachable: refused to connect.

I'm using Linode and the deploy was successful. I can log in to the Rails console, which works and has a DB connection. However, I can't reach the site.

I'm getting:

This site can’t be reached 172.105.69.10 refused to connect.

Also, the domain https://bookingprovence.com isn't working. Is there somewhere we can follow what the TLS task is doing, in case it's the source of the problem?

bundle exec kuby -e production remote status
NAME                             READY   STATUS    RESTARTS   AGE
cavadou-assets-c9764d48c-7lfrr   1/1     Running   0          35m
cavadou-web-5b87fbd567-lhj57     1/1     Running   0          35m
cavadou-web-postgres-0           1/1     Running   0          35m
cm-acme-http-solver-2sn99        1/1     Running   0          35m


bundle exec kuby -e production remote logs
[6] * Listening on http://0.0.0.0:8080
[6] Use Ctrl-C to stop
[6] - Worker 0 (pid: 7) booted, phase: 0
[6] - Worker 1 (pid: 8) booted, phase: 0
[6] - Worker 3 (pid: 10) booted, phase: 0
[6] - Worker 2 (pid: 9) booted, phase: 0

This is the kuby.rb

require 'active_support/core_ext'
require 'active_support/encrypted_configuration'

# Define a production Kuby deploy environment
Kuby.define('Cavadou') do
  environment(:production) do
    # Because the Rails environment isn't always loaded when
    # your Kuby config is loaded, provide access to Rails
    # credentials manually.
    app_creds = ActiveSupport::EncryptedConfiguration.new(
      config_path: File.join('config', 'credentials.yml.enc'),
      key_path: File.join('config', 'master.key'),
      env_key: 'RAILS_MASTER_KEY',
      raise_if_missing_key: true
    )

    docker do
      # This app doesn't use Webpack/Yarn, so remove the yarn build steps.
      package_phase.remove(:yarn)
      delete :yarn_phase

      # Configure your Docker registry credentials here. Add them to your
      # Rails credentials file by running `bundle exec rake credentials:edit`.
      credentials do
        username app_creds.gitlab[:username]
        password app_creds.gitlab[:password]
        email app_creds.gitlab[:email]
      end

      # Configure the URL to your Docker image here, eg:
      # image_url 'foo.bar.com/me/myproject'
      #
      # If you're using Gitlab's Docker registry, try something like this:
      image_url 'registry.gitlab.com/rept/cavadou'
    end

    kubernetes do
      provider :linode do
        access_token app_creds.linode[:token]
        cluster_id app_creds.linode[:cluster]
      end

      # Add a plugin that facilitates deploying a Rails app.
      add_plugin :rails_app do
        hostname app_creds.domain_name
        # configure database credentials
        database do
          storage '1Gi'
          user 'rails_user'
          password app_creds.db[:password]
        end
      end

    end
  end
end

Fail on create-db container

Hi! I'm trying to setup Kuby on Digital Ocean using the following configuration:

# We need to require some Rails stuff to read encrypted credentials
require "active_support/core_ext/hash/indifferent_access"
require "active_support/encrypted_configuration"

Kuby.define("construction") do
  environment(:production) do
    app_creds = ActiveSupport::EncryptedConfiguration.new(
      config_path: "./config/credentials/production.yml.enc",
      key_path: "./config/credentials/production.key",
      env_key: "RAILS_PRODUCTION_KEY",
      raise_if_missing_key: true
    )

    docker do
      base_image "ruby:3.0.0"
      gemfile "./Gemfile"

      webserver_phase.webserver = :puma

      credentials do
        username app_creds.dig(:digitalocean, :access_token)
        password app_creds.dig(:digitalocean, :access_token)
      end

      image_url "registry.digitalocean.com/registry/repo"
    end

    kubernetes do
      add_plugin :rails_app do
        manage_database false

        env do
          data do
            add "RAILS_LOG_TO_STDOUT", "enabled"
            add "DATABASE_URL", app_creds.dig(:mysql, :url)
          end
        end
      end

      provider :digitalocean do
        access_token app_creds.dig(:digitalocean, :access_token)
        cluster_id app_creds.dig(:digitalocean, :cluster_id)
      end
    end
  end
end

Everything seems to be working fine until it gets to the point of starting the application, then I get the following error, here are the logs:

[INFO][2021-12-09 03:42:30 -0600]	------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2021-12-09 03:42:30 -0600]	Successfully deployed 11 resources and failed to deploy 1 resource
[FATAL][2021-12-09 03:42:30 -0600]
[FATAL][2021-12-09 03:42:30 -0600]	Successful resources
[FATAL][2021-12-09 03:42:30 -0600]	ClusterIssuer/letsencrypt-production              Exists
[FATAL][2021-12-09 03:42:30 -0600]	ConfigMap/construction-assets-nginx-config Available
[FATAL][2021-12-09 03:42:30 -0600]	ConfigMap/construction-config             Available
[FATAL][2021-12-09 03:42:30 -0600]	Deployment/construction-assets            1 replica, 1 updatedReplica, 1 availableReplica
[FATAL][2021-12-09 03:42:30 -0600]	Ingress/construction-ingress              Created
[FATAL][2021-12-09 03:42:30 -0600]	Secret/construction-registry-secret       Available
[FATAL][2021-12-09 03:42:30 -0600]	Secret/construction-secrets               Available
[FATAL][2021-12-09 03:42:30 -0600]	Service/construction-assets-svc           Selects at least 1 pod
[FATAL][2021-12-09 03:42:30 -0600]	Service/construction-svc                  Selects at least 1 pod
[FATAL][2021-12-09 03:42:30 -0600]	ServiceAccount/construction-assets-sa     Created
[FATAL][2021-12-09 03:42:30 -0600]	ServiceAccount/construction-sa            Created
[FATAL][2021-12-09 03:42:30 -0600]
[FATAL][2021-12-09 03:42:30 -0600]	Deployment/construction-web: FAILED
[FATAL][2021-12-09 03:42:30 -0600]	Latest ReplicaSet: construction-web-658fdfb676
[FATAL][2021-12-09 03:42:30 -0600]
[FATAL][2021-12-09 03:42:30 -0600]	The following containers are in a state that is unlikely to be recoverable:
[FATAL][2021-12-09 03:42:30 -0600]	> construction-create-db: Crashing repeatedly (exit 1). See logs for more information.
[FATAL][2021-12-09 03:42:30 -0600]
[FATAL][2021-12-09 03:42:30 -0600]	  - Final status: 1 replica, 1 updatedReplica, 1 unavailableReplica
[FATAL][2021-12-09 03:42:30 -0600]	  - Events (common success events excluded):
[FATAL][2021-12-09 03:42:30 -0600]	      [Deployment/construction-web]	ScalingReplicaSet: Scaled up replica set welcome-construction-web-658fdfb676 to 1 (1 events)
[FATAL][2021-12-09 03:42:30 -0600]	      [Pod/construction-web-658fdfb676-qvkpf]	BackOff: Back-off restarting failed container (2 events)
[FATAL][2021-12-09 03:42:30 -0600]	  - Logs from container 'construction-web': None found. Please check your usual logging service (e.g. Splunk).
[FATAL][2021-12-09 03:42:30 -0600]	  - Logs from container 'construction-migrate-db': None found. Please check your usual logging service (e.g. Splunk).
[FATAL][2021-12-09 03:42:30 -0600]	  - Logs from container 'construction-create-db':
[FATAL][2021-12-09 03:42:30 -0600]	      standard_init_linux.go:228: exec user process caused: exec format error

Do you have any idea of what might be causing the error?

I don't see any connection attempts or queries on the database side of things.

Any pointers would be greatly appreciated.

Thanks!

Meant for updates too?

Is Kuby meant for updates too? Let me explain: in our current setup we have a cluster with Postgres, Nginx, Redis, Ruby etc...

Tagging a commit automatically kicks off a Jenkins job that builds the new image. The image is pushed to Docker Hub and then deployed to staging. This spins up a second Ruby pod, and as soon as that pod's liveness and readiness probes pass, the old pod is shut down. Works fantastically.

Your tutorial and video clearly explain how you can easily spin up a complete cluster that even takes care of the certificates. But is Kuby aimed at doing updates (to the Ruby pod) later on too?

Something like:

bundle exec kuby -e production update

Since you know the port of each pod, you could include liveness and readiness probes to do basic checks on those ports.

These are the probes we currently use for our Ruby pod.

        livenessProbe:
          httpGet:
            path: /
            port: 80
          # Give the node some time to startup. Replace by startupProbe once it leaves alpha state.
          initialDelaySeconds: 180
          failureThreshold: 2
          periodSeconds: 40
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 10
          timeoutSeconds: 2

Re-deploying with the same image and tag should restart web pods

Right now, deploys that don't change the Docker image URL or Docker tag will have no effect, i.e. will instruct Kubernetes to effectively do nothing. That's semi-surprising behavior since many deploy tools like Capistrano will restart your web server even if no new code is deployed. I think that's what Kuby should do too. The major use-case is picking up config changes. In fact, I first noticed this problem after making a change to a k8s Secret. I deployed and didn't see the new secret in the web pods because k8s didn't think anything had changed.

I know there's a field in k8s deployments called imagePullPolicy. If we set it to Always it might do what we want... but I'm not sure. If not, we'll have to detect when the image isn't changing and manually restart. Another option would be to introduce a restart rake task (i.e. rake kuby:remote:restart)... or maybe do both?
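As a manual workaround in the meantime, a rollout can be forced with plain kubectl even when the image hasn't changed. This is standard kubectl (1.15+); the deployment name and namespace below are examples:

```shell
# Force new pods (and thus fresh Secret/ConfigMap values) without a new image.
kubectl rollout restart deployment/myapp-web -n myapp-production

# Watch the rollout complete.
kubectl rollout status deployment/myapp-web -n myapp-production
```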

Issue: "error: no provider registered with name digitalocean, do you need to add a gem to your Gemfile?"

Hello,

I tried to give Kuby a try on a newly generated Rails application today but came across an issue when trying to run build and also trying to start my server locally.

I generated my Kuby config using the generator command and also added config/initializers/kuby.rb. I then set up everything to deploy to Azure.

Versions being used:
macOS 10.15.7
Ruby 2.5.8
Rails 6.0.3.4
kuby-core 0.11.1
kuby-azure 0.3.0

After setting up my kuby.rb file at the root of my project I attempted to build the image and was greeted with the following.

> kuby -e production build
error: no provider registered with name azure, do you need to add a gem to your Gemfile?

Also, when attempting to run rails s, I received the following error:

❯ rails s
=> Booting Puma
=> Rails 6.0.3.4 application starting in development
=> Run `rails server --help` for more startup options
Exiting
Traceback (most recent call last):
	93: from bin/rails:4:in `<main>'
	92: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `require'
	91: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:291:in `load_dependency'
	90: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `block in require'
	89: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
	88: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
	87: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
	86: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
	85: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
	84: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands.rb:18:in `<top (required)>'
	83: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/command.rb:46:in `invoke'
	82: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/command/base.rb:69:in `perform'
	81: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
	80: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
	79: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
	78: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands/server/server_command.rb:138:in `perform'
	77: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands/server/server_command.rb:138:in `tap'
	76: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands/server/server_command.rb:147:in `block in perform'
	75: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands/server/server_command.rb:37:in `start'
	74: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/commands/server/server_command.rb:77:in `log_to_stdout'
	73: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/server.rb:422:in `wrapped_app'
	72: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/server.rb:249:in `app'
	71: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/server.rb:349:in `build_app_and_options_from_config'
	70: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/builder.rb:66:in `parse_file'
	69: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/builder.rb:105:in `load_file'
	68: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/builder.rb:116:in `new_from_string'
	67: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/rack-2.2.3/lib/rack/builder.rb:116:in `eval'
	66: from config.ru:3:in `block in <main>'
	65: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:53:in `require_relative'
	64: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `require'
	63: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:291:in `load_dependency'
	62: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `block in require'
	61: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/zeitwerk-2.4.0/lib/zeitwerk/kernel.rb:34:in `require'
	60: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
	59: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
	58: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
	57: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
	56: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
	55: from /Users/erikguzman/Documents/code/HelloKuby/config/environment.rb:5:in `<top (required)>'
	54: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/application.rb:363:in `initialize!'
	53: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:60:in `run_initializers'
	52: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:205:in `tsort_each'
	51: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:226:in `tsort_each'
	50: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:347:in `each_strongly_connected_component'
	49: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:347:in `call'
	48: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:347:in `each'
	47: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:349:in `block in each_strongly_connected_component'
	46: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:415:in `each_strongly_connected_component_from'
	45: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:415:in `call'
	44: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:50:in `tsort_each_child'
	43: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:50:in `each'
	42: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:421:in `block in each_strongly_connected_component_from'
	41: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:431:in `each_strongly_connected_component_from'
	40: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:422:in `block (2 levels) in each_strongly_connected_component_from'
	39: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
	38: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/2.5.0/tsort.rb:228:in `block in tsort_each'
	37: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:61:in `block in run_initializers'
	36: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:32:in `run'
	35: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/initializable.rb:32:in `instance_exec'
	34: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/engine.rb:624:in `block in <class:Engine>'
	33: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/engine.rb:624:in `each'
	32: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/engine.rb:625:in `block (2 levels) in <class:Engine>'
	31: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/engine.rb:665:in `load_config_initializer'
	30: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/notifications.rb:182:in `instrument'
	29: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/railties-6.0.3.4/lib/rails/engine.rb:666:in `block in load_config_initializer'
	28: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:318:in `load'
	27: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:291:in `load_dependency'
	26: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:318:in `block in load'
	25: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
	24: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
	23: from /Users/erikguzman/Documents/code/HelloKuby/config/initializers/kuby.rb:2:in `<top (required)>'
	22: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby.rb:46:in `load!'
	21: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `require'
	20: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:291:in `load_dependency'
	19: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies.rb:324:in `block in require'
	18: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/zeitwerk-2.4.0/lib/zeitwerk/kernel.rb:34:in `require'
	17: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
	16: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
	15: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
	14: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
	13: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/bootsnap-1.4.8/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
	12: from /Users/erikguzman/Documents/code/HelloKuby/kuby.rb:5:in `<top (required)>'
	11: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby.rb:53:in `define'
	10: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby.rb:53:in `instance_eval'
	 9: from /Users/erikguzman/Documents/code/HelloKuby/kuby.rb:7:in `block in <top (required)>'
	 8: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby/definition.rb:16:in `environment'
	 7: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby/definition.rb:16:in `instance_eval'
	 6: from /Users/erikguzman/Documents/code/HelloKuby/kuby.rb:34:in `block (2 levels) in <top (required)>'
	 5: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby/environment.rb:27:in `kubernetes'
	 4: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby/environment.rb:27:in `instance_eval'
	 3: from /Users/erikguzman/Documents/code/HelloKuby/kuby.rb:72:in `block (3 levels) in <top (required)>'
	 2: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-core-0.11.1/lib/kuby/kubernetes/spec.rb:23:in `provider'
	 1: from /Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-azure-0.3.0/lib/kuby/azure/provider.rb:13:in `configure'
/Users/erikguzman/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/kuby-azure-0.3.0/lib/kuby/azure/provider.rb:13:in `instance_eval': wrong number of arguments (given 0, expected 1..3) (ArgumentError)

The example project code is here: https://github.com/talk2MeGooseman/HelloKuby

Any help is appreciated because I would love to try this out.

Automatic docker login fails

When not logged in with the Docker registry, attempting to push gives me this result:

$ bundle exec kuby -e production push
Attempting to log in to registry at docker.pkg.github.com
Error: Cannot perform an interactive login from a non TTY device
Couldn't log in to the registry at docker.pkg.github.com
build failed: docker command exited with status code 1

If I log in by hand with the same user and password, it works fine and the push succeeds.
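For what it's worth, the "non TTY device" error usually goes away when the password is piped to docker login via --password-stdin instead of being entered interactively. Presumably Kuby could shell out like this (registry and variable names are examples):

```shell
# Non-interactive login: read the password from stdin instead of a TTY.
echo "$DOCKER_PASSWORD" | docker login docker.pkg.github.com \
  --username "$DOCKER_USERNAME" --password-stdin
```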

Staging / Production?

Have you thought about how to handle sending deployments to staging or production? Maybe namespaces?

Does not login to docker registry

Either there should be a note about logging in to a registry before pushing the image, or Kuby should log in with the credentials given in the config file.

Kuby config path

Looks like Kuby.load! expects to find kuby.rb in the root of the project instead of in an initializer like the docs suggest (e.g. "Put the config into a Rails initializer, eg. config/initializers/kuby.rb."). Maybe it should not try to load if already defined by an initializer?

Add option to write Kubernetes resource YAMLs to disk

Whether you're using git-ops, want to version your k8s resources in source control, or just want to save them to a directory, Kuby should support emitting them via the CLI. Perhaps something like kuby resources -o /path/to/output_dir? There are a few strategies we could support:

  1. Emit a single large YAML file containing all the resources.
  2. Emit one resource per file. Filenames would need to include the namespace and name of the resource, eg foo-production.foo-deployment.yml or some such.
  3. Emit a directory per namespace.
  4. Emit a directory per resource type, eg. all deployments in their own folder, etc.
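Strategy 2 could be sketched like this. The resource objects here are plain hashes for illustration; Kuby's real resource model may differ:

```ruby
require 'yaml'
require 'fileutils'

# Write one YAML file per resource, named "<namespace>.<kind>-<name>.yml"
# so files sort by namespace and don't collide across resource types.
def write_resources(resources, output_dir)
  FileUtils.mkdir_p(output_dir)
  resources.map do |res|
    ns = res.dig('metadata', 'namespace') || 'default'
    name = res.dig('metadata', 'name')
    path = File.join(output_dir, "#{ns}.#{res['kind'].downcase}-#{name}.yml")
    File.write(path, YAML.dump(res))
    path
  end
end
```

Strategy 1 would fall out of the same helper by concatenating the dumped documents with `---` separators instead of writing separate files.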
