
gravity's Introduction

Gravity

Warning Gravity was archived 2023-07-01.

Please see our Gravitational is Teleport blog post for more information.

If you're looking for a similar solution, we recommend using a certified Kubernetes Distribution.

Gravity is a Kubernetes packaging solution.

Introduction

Gravity is an open source toolkit for creating "images" of Kubernetes clusters and the applications running inside the clusters. The resulting images are called cluster images and they are just .tar files.

A cluster image can be used to re-create full replicas of the original cluster in any environment where compliance and consistency matter, e.g. in locked-down AWS/GCE/Azure environments or even in air-gapped server rooms. An image can run without human supervision, as a "Kubernetes appliance".
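
As a rough sketch of the workflow, with placeholder file and host names (tele builds the image, gravity installs it):

$ tele build -o cluster-image.tar app.yaml   # build the cluster image from an application manifest
$ scp cluster-image.tar airgapped-host:      # the .tar can be moved anywhere, including offline machines
$ tar xf cluster-image.tar
$ sudo ./gravity install                     # re-create the cluster from the image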

Cluster Images

A Cluster Image produced by Gravity includes:

  • All Kubernetes binaries and their dependencies.
  • Built-in container registry.
  • De-duplicated layers of all application containers inside a cluster.
  • Built-in cluster orchestrator which guarantees HA operation, in-place upgrades and auto-scaling.
  • Installation wizard for both CLI and web browser GUI.

An image is all one needs to re-create a complete replica of the original Kubernetes cluster, with all deployed applications inside, even in an air-gapped server room.
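
For reference, the smallest possible cluster image manifest (app.yaml) looks roughly like the snippet below; the same minimal manifest appears in one of the issues further down, and the name and version here are placeholders:

apiVersion: bundle.gravitational.io/v2
kind: Bundle
metadata:
  name: example
  resourceVersion: "1.0.0"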

Examples

Take a look at the examples directory in this repository to find examples of how to package and deploy Kubernetes applications using Gravity.

The following examples are currently available:

  • Wordpress. Deploys the Wordpress CMS with OpenEBS-backed persistent storage.

Building from source

Gravity is written in Go. There are two ways to build the Gravity tools from source: by using locally installed build tools or via Docker. In both cases you will need a Linux machine.

Building on macOS, even with Docker, is possible but not currently supported.

$ git clone git@github.com:gravitational/gravity.git
$ cd gravity

# Running 'make' with the default target uses Docker.
# The output will be stored in build/current/
$ make

# If you have Go 1.10+ installed, you can build without Docker which is faster.
# The output will be stored in $GOPATH/bin/
$ make install

# To remove the build artifacts:
$ make clean

gravity's People

Contributors

a-palchikov, aelkugia, alex-kovoy, alexwolfe, alrs, benarent, bernardjkim, bobhenkel, burdzwastaken, dependabot[bot], eldios, helgi, jsilvers-zz, klizhentas, knisbet, kontsevoy, lenko-d, novas0x2a, pierrebeaucamp, r0mant, siddharth1010, stephenlawrence, stevengravy, twakes, ulysseskan, undergreen, wadells, webvictim, whiterm, zeelewis


gravity's Issues

[BUG] Unable to set service NodePort range

Describe the bug
I have the following invocation of gravity install:

gravity install --token=foobar --advertise-addr=10.1.10.68 --cloud-provider=generic --config=cluster-config.yaml --pod-network-cidr="172.23.0.0/16" --service-cidr="172.34.0.0/16"

My cluster-config.yaml has:

kind: ClusterConfiguration
version: v1
spec:
  global:
    # port range to reserve for services with NodePort visibility
    serviceNodePortRange: "1025-65535"

However, this setting does not take effect. If I look in the gravity-install and gravity-system logs, I can see it's reading it, but Planet never seems to get it, and kube-apiserver is running with --service-nodeport-range=30000-32767 (i.e. its normal default range).

To Reproduce
Just set the above values in a cluster config file and see if it applies.

Expected behavior
It should set the node port range to what I passed in.

Logs

Environment (please complete the following information):

  • OS [e.g. Redhat 7.4]: Amazon Linux 2
  • Gravity [e.g. 5.5.4]: 6.0.1
  • Platform [e.g. Vmware, AWS]: AWS, but running a generic cloud provider

Additional context
A few months ago I thought this was due to a typo in the kube-apiserver service file, which I submitted a PR for and fixed. However, while that is correct now, it seems that somewhere between Gravity and Planet this configuration value is lost and Planet ends up using the default range it has configured (since this parameter has a default value). To work around this I have to use my own Planet build, which has the following Dockerfile:

# There is a bug with Gravity that the Planet image it uses does 
# not allow customization of NodePort range. This hard codes it to
# allow anything above 1024.

FROM quay.io/gravitational/planet:6.0.6-11402
RUN sed -i.bak "s/KUBE_COMPONENT_FLAGS/KUBE_COMPONENT_FLAGS \\\n        --service-node-port-range=1025-65535/" ./usr/lib/systemd/system/kube-apiserver.service

This works great, but it would be really awesome to not need to do this. Note that this happens in all of 5.5.x, 5.6.x and 6.0.x.
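
For anyone debugging the same thing, one way to confirm which range the apiserver actually picked up is to check the rendered unit and process inside planet (assuming gravity enter is available to get a shell in the planet environment):

$ sudo gravity enter
$ grep node-port-range /usr/lib/systemd/system/kube-apiserver.service
$ ps aux | grep [k]ube-apiserver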

Issue with cgroup namespaces

In newer Gravity builds (5.6.0+), on older kernels (e.g. RHEL7/CentOS7 with kernel 3.10), all preflight checks will succeed but the installer will die in the middle. If you look at journalctl, you will see: cgroup namespaces aren't enabled in the kernel

This is also mentioned here: #372
And is caused by this change: gravitational/planet#397

The ideal fix is to make it not error out in this condition and instead just not use cgroup namespaces. Alternatively, if that's not possible, it would be good if:

  1. Preflight checks verified that cgroup namespaces are enabled and failed otherwise (a minimal host-side check is sketched at the end of this issue). The check could take the form of https://github.com/opencontainers/runc/blob/master/libcontainer/configs/validate/validator.go#L122

  2. The supported OS list was changed to remove RHEL7/CentOS 7.

The impact on us: for now, we've had to downgrade back to 5.5.7 so that we can support those OS versions.
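
A minimal host-side check for option 1 could be as simple as testing for the cgroup namespace entry in /proc, which only exists on kernels that support cgroup namespaces (sketch):

$ [ -e /proc/self/ns/cgroup ] && echo "cgroup namespaces supported" || echo "cgroup namespaces NOT supported"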

Cannot build app tarballs in a docker container

We have an app that builds fine on a pristine machine with docker installed. Our build system, however, runs the same commands in a docker container (we use gravitational/debian-grande for this build) mounting the docker socket from the host.
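
For context, the build container is started roughly like this (the mounts, working directory and image tag here are illustrative, not our exact invocation; tele is available inside the container in our pipeline):

$ docker run --rm -i \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(pwd):/build -w /build \
    gravitational/debian-grande \
    tele build -o app.tar app.yaml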

The tele build command is able to download images from our internal docker repos just fine but is unable to create the local registry and register images there. This is what happens:

	Still embedding application container images (4 minutes elapsed)
time="2019-02-26T05:44:05Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." source=local-docker-registry
* [3/6] Build aborted after 6 minutes 
[ERROR]: failed to push images to local registry

Any pointers on how we could fix this issue or what the root cause might be? Thanks!

kubectl go-template sample command fails with gravity kubectl wrapper

The sample command here fails when run on a gravity cluster:
https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-using-a-go-template-instead-of-jsonpath

$ kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}"
error: a resource cannot be retrieved by name across all namespaces

It works if you enter the planet and invoke /usr/bin/kubectl directly, but not when using the kubectl wrapper script from within or outside the planet.
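
The workaround for now looks like this (assuming gravity enter is available to get a shell inside the planet environment):

$ sudo gravity enter
$ /usr/bin/kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}"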

Offline install failure - DNS leaking when node is isolated from upstream DNS

Steps to reproduce:

  1. Start with a minimal CentOS install (can be online with Internet access)
  2. Edit /etc/resolv.conf to put in a bogus nameserver
    (We have also repro'd with other ways of taking the system "offline" besides munging the nameserver, including setting a bogus default gateway, and simply disconnecting the NIC at the VM layer.)
  3. Run gravity install from telekube 5.2.5 bundle.
    Expecting: successful install.
    Finding: it never makes it past site-app installation.

Install output:

$ sudo ./gravity install
Fri Feb  8 14:25:51 UTC	Starting installer
Fri Feb  8 14:25:51 UTC	Preparing for installation...
Fri Feb  8 14:26:19 UTC	Installing application telekube:5.2.5
Fri Feb  8 14:26:19 UTC	Starting non-interactive install
Fri Feb  8 14:26:20 UTC	All agents have connected!
Fri Feb  8 14:26:20 UTC	Starting the installation
Fri Feb  8 14:26:21 UTC	Operation has been created
Fri Feb  8 14:26:22 UTC	Execute preflight checks
Fri Feb  8 14:26:24 UTC	Configure packages for all nodes
Fri Feb  8 14:26:28 UTC	Bootstrap master node node-4
Fri Feb  8 14:26:32 UTC	Pull packages on master node node-4
Fri Feb  8 14:27:20 UTC	Install system software on master node node-4
Fri Feb  8 14:27:21 UTC	Install system package teleport:2.4.7 on master node node-4
Fri Feb  8 14:27:23 UTC	Install system package planet:5.2.19-11105 on master node node-4
Fri Feb  8 14:27:47 UTC	Wait for system services to start on all nodes
Fri Feb  8 14:28:28 UTC	Apply labels and taints to Kubernetes nodes
Fri Feb  8 14:28:29 UTC	Label and taint master node node-4
Fri Feb  8 14:28:30 UTC	Bootstrap Kubernetes roles and PSPs
Fri Feb  8 14:28:32 UTC	Populate Docker registry on master node node-4
Fri Feb  8 14:29:08 UTC	Install system applications
Fri Feb  8 14:29:09 UTC	Install system application dns-app:0.1.0
Fri Feb  8 14:29:16 UTC	Install system application bandwagon:5.2.3
Fri Feb  8 14:29:21 UTC	Install system application logging-app:5.0.2
Fri Feb  8 14:29:28 UTC	Install system application monitoring-app:5.2.2
Fri Feb  8 14:29:46 UTC	Install system application tiller-app:5.2.1
Fri Feb  8 14:30:07 UTC	Install system application site:5.2.5
Fri Feb  8 14:37:08 UTC Operation failure: Job has reached the specified backoff limit
Fri Feb  8 14:37:08 UTC	Installation failed in 10m47.907408928s, check /var/log/telekube-install.log for details
---
Installer process will keep running so you can inspect the operation plan using
`gravity plan` command, see what failed and continue plan execution manually
using `gravity install --phase=<phase-id>` command after fixing the problem.
Once no longer needed, this process can be shutdown using Ctrl-C.

Tail of telekube-install.log:

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
2019-02-08T14:34:17Z ERRO             "\nERROR REPORT:\nOriginal Error: *url.Error Get https://gravity-site.kube-system.svc.cluster.local:3009/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nStack Trace:\n\t/gopath/src/github.com/gravitational/gravity/tool/gravity/cli/site.go:53 github.com/gravitational/gravity/tool/gravity/cli.statusSite\n\t/gopath/src/github.com/gravitational/gravity/tool/gravity/cli/run.go:224 github.com/gravitational/gravity/tool/gravity/cli.Execute\n\t/gopath/src/github.com/gravitational/gravity/tool/gravity/cli/run.go:79 github.com/gravitational/gravity/tool/gravity/cli.Run\n\t/gopath/src/github.com/gravitational/gravity/tool/gravity/main.go:45 main.run\n\t/gopath/src/github.com/gravitational/gravity/tool/gravity/main.go:36 main.main\n\t/go/src/runtime/proc.go:207 runtime.main\n\t/go/src/runtime/asm_amd64.s:2362 runtime.goexit\nUser Message: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz\n" gravity/main.go:36
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

WARN             syslog not available. reverting to stderr utils/cli.go:82
WARN             syslog not available. reverting to stderr utils/cli.go:82
Fri Feb  8 14:37:07 UTC [ERROR] [node-4] Phase execution failed: Job has reached the specified backoff limit, gravitational.io/site:5.2.5 postInstall hook failed.

Set External DNS annotation on opscenter-public service object

It would be great to support external-dns (installed via whichever automation) to manage the DNS records of the opscenter LB.

Currently what I do is the following:
CLUSTER_DOMAIN=opscenter.example.com
And then:
kubectl -n kube-system patch svc opscenter-public -p '{"metadata":{"annotations":{"external-dns.alpha.kubernetes.io/hostname":"$CLUSTER_DOMAIN,*.$CLUSTER_DOMAIN"}}}'

And external-dns will create Route53 records (or whichever I configure).

My request is that when I do ./gravity install --cluster $CLUSTER_DOMAIN it automagically creates the correct annotation on the opscenter-public service object in case I want external-dns to handle the DNS, skipping the need for the kubectl patch operation.
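
In other words, after ./gravity install --cluster opscenter.example.com the service would ideally come out looking something like this (sketch; only the annotation matters):

apiVersion: v1
kind: Service
metadata:
  name: opscenter-public
  namespace: kube-system
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "opscenter.example.com,*.opscenter.example.com"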

Build fail

There is another issue about build failures, but what I'm running into seems unrelated.

I have Docker 18.06.1

Running $ git clone git@github.com:gravitational/gravity.git then $ make results in:

/Applications/Xcode.app/Contents/Developer/usr/bin/make -C build.assets build
cd /Users/nickgrownetics/Code/Grownetics/src && \
		LOCAL_GRAVITY_BUILDDIR=/Users/nickgrownetics/Code/gravity//build/5.3.2-3 \
			/Applications/Xcode.app/Contents/Developer/usr/bin/make -C /Users/nickgrownetics/Code/Grownetics/src/github.com/gravitational/gravity/build.assets -j \
			/Users/nickgrownetics/Code/gravity//build/5.3.2-3/gravity /Users/nickgrownetics/Code/gravity//build/5.3.2-3/tele
make: *** /Users/nickgrownetics/Code/Grownetics/src/github.com/gravitational/gravity/build.assets: No such file or directory.  Stop.
make[2]: *** [build-on-host] Error 2
make[1]: *** [build] Error 2
make: *** [build] Error 2

Health Checks: gravity pods are running

Related Issue (http_proxy installation related problems): https://github.com/gravitational/gravity.e/issues/3998

A customer encountered an issue where they tweaked docker settings to add http_proxy, which caused a side effect where requests for .local were being sent to the proxy which couldn't process them. This caused the internal gravity pods / pods distributed via the local registry to fail during pull.

This ticket is to look into adding satellite checks for failed/missing pods and internal cluster state that can bubble up to the user / support engineer to indicate where problems with the cluster might be located.

Tele build errors when parsing Ansible yaml [BUG]

Describe the bug

I am attempting to include Ansible roles within my tarball for use with an installer Hook. I would like to use Ansible instead of plain shell scripts here. I had included a folder called roles/ within the main directory of my app. During the tele build process, I see the following error displayed:

user@xenial-mmellin atom-infra(gravity) $ sudo tele build -o ~/projects/gitlab/personal/atom-infra/config/artifacts/atom-558-hooks-ansible.tar ~/projects/gitlab/personal/atom-infra/gravity/atom-558g/app.yaml
[sudo] password for user:
* [1/6] Selecting base image version
        Will use base image version 5.5.8 set in manifest
* [2/6] Local package cache is up-to-date
* [3/6] Embedding application container images
* [3/6] Build aborted after 1 second
[ERROR]: failed to parse resource file "roles/helm/tasks/main.yaml": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal array into Go value of type runtime.TypeMeta

To Reproduce

Create a yaml file anywhere within the Application directory which simulates any Ansible task such as this main.yaml:

- name: Initialize Helm
  command: helm init

Run tele build -o <path/to/output.tar> <path/to/app.yaml> as normal.

Expected behavior

I would expect these files to be ignored during the build process.

Logs

From the --debug output:

* [3/6] Build aborted after now
ERRO             "\nERROR REPORT:\nOriginal Error: yaml.YAMLSyntaxError error unmarshaling JSON: while decoding JSON: json: cannot unmarshal array into Go value of type runtime.TypeMeta\nStack Trace:\n\t/gopath/src/github.com/gravitational/gravity/lib/app/resources/decode.go:135 github.com/gravitational/gravity/lib/app/resources.(*universalDecoder).Decode\n\t/gopath/src/github.com/gravitational/gravity/lib/app/resources/decode.go:40 github.com/gravitational/gravity/lib/app/resources.Decode\n\t/gopath/src/github.com/gravitational/gravity/lib/app/resources/resourcefiles.go:106 github.com/gravitational/gravity/lib/app/resources.NewResourceFile\n\t/gopath/src/github.com/gravitational/gravity/lib/app/service/vendor.go:762 github.com/gravitational/gravity/lib/app/service.resourcesFromPath.func1\n\t/go/src/path/filepath/path.go:358 path/filepath.walk\n\t/go/src/path/filepath/path.go:382 path/filepath.walk\n\t/go/src/path/filepath/path.go:382 path/filepath.walk\n\t/go/src/path/filepath/path.go:382 path/filepath.walk\n\t/go/src/path/filepath/path.go:382 path/filepath.walk\n\t/go/src/path/filepath/path.go:382 path/filepath.walk\n\t/go/src/path/filepath/path.go:404 path/filepath.Walk\n\t/gopath/src/github.com/gravitational/gravity/lib/app/service/vendor.go:718 github.com/gravitational/gravity/lib/app/service.resourcesFromPath\n\t/gopath/src/github.com/gravitational/gravity/lib/app/service/vendor.go:181 github.com/gravitational/gravity/lib/app/service.(*vendorer).VendorDir\n\t/gopath/src/github.com/gravitational/gravity/lib/builder/builder.go:318 github.com/gravitational/gravity/lib/builder.(*Builder).Vendor\n\t/gopath/src/github.com/gravitational/gravity/lib/builder/build.go:89 github.com/gravitational/gravity/lib/builder.Build\n\t/gopath/src/github.com/gravitational/gravity/tool/tele/cli/build.go:67 github.com/gravitational/gravity/tool/tele/cli.build\n\t/gopath/src/github.com/gravitational/gravity/tool/tele/cli/run.go:53 github.com/gravitational/gravity/tool/tele/cli.Run\n\t/gopath/src/github.com/gravitational/gravity/tool/tele/main.go:45 main.run\n\t/gopath/src/github.com/gravitational/gravity/tool/tele/main.go:36 main.main\n\t/go/src/runtime/proc.go:210 runtime.main\n\t/go/src/runtime/asm_amd64.s:1334 runtime.goexit\nUser Message: failed to parse resource file \"roles/helm/tasks/main.yaml\": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal array into Go value of type runtime.TypeMeta\n" tele/main.go:38
[ERROR]: failed to parse resource file "roles/helm/tasks/main.yaml": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal array into Go value of type runtime.TypeMeta

Environment (please complete the following information):

  • OS [e.g. Redhat 7.4]: Ubuntu 16.04 (Xenial) build client
  • Gravity [e.g. 5.5.4]: 5.5.8
  • Platform [e.g. Vmware, AWS]: n/a

$ go version
go version go1.12.5 linux/amd64

Additional context

Ansible files are only relevant to my Hook container and are expected to be ignored during the build process.

tele build vendoring of sha256 references (invalid tag format)

tele build vendoring of resources doesn't currently support image references pointed at a hash instead of a tag.

quay.io/test@sha256:d237a12aa0cde42b539bcb5efc1118ba5e6ca1351b7493ed52bd574d181c5efd

ERRO             "
ERROR REPORT:
Original Error: *docker.Error API error (500): {\"message\":\"invalid tag format\"}

Stack Trace:
	/gopath/src/github.com/gravitational/gravity/lib/app/service/docker.go:218 github.com/gravitational/gravity/lib/app/service.tagImageWithoutRegistry
	/gopath/src/github.com/gravitational/gravity/lib/app/service/vendor.go:272 github.com/gravitational/gravity/lib/app/service.(*vendorer).VendorDir.func1
	/gopath/src/github.com/gravitational/gravity/lib/run/run.go:85 github.com/gravitational/gravity/lib/run.(*Group).Go.func1
	/go/src/runtime/asm_amd64.s:2362 runtime.goexit
User Message:
" tele/main.go:23
[ERROR]: API error (500): {"message":"invalid tag format"}

Add screenshots for web browser installer sections of docs

Currently, the /installation/ and /quickstart/ sections of the documentation do not have any screenshots of the web browser (GUI) installation process.

Adding screenshots will help readers confirm they are getting through the process successfully.

[BUG] SourceDestinationCheck=false missing on AWS node

Describe the bug

A node was discovered in an AWS integrations cluster that had SourceDestinationCheck enabled. It's not clear exactly how this happened, however, the node that experienced this issue was the first node created in the cluster.

To Reproduce

Unknown... create a cluster on AWS using the reference terraform, where presumably a race exists that can cause the SourceDestinationCheck attribute to be missed. It's possible that the InstanceLaunching event wasn't created or was lost on the first node(s) added to the cluster during terraform creation.

Expected behavior

The SourceDestinationCheck attribute should be set to false on all nodes

Additional Information

func (a *Autoscaler) TurnOffSourceDestinationCheck(ctx context.Context, instanceID string) error {

if err := a.TurnOffSourceDestinationCheck(ctx, event.InstanceID); err != nil {

Taking a look at this, it looks like in the AWS configuration dependency graph the lifecycle hooks depend on the ASG name, so the ASG gets created first and will exist for some amount of time before terraform is able to apply the lifecycle hooks, which presumably results in the message getting lost in a race condition...
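
As a manual workaround, the attribute can be corrected on an affected instance directly with the AWS CLI (the instance ID below is a placeholder):

$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check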

[BUG] Unable to run privileged containers in docker and k8s

Describe the bug
Docker and Kubernetes are unable to run containers with --privileged permissions; it also fails with --cap-add=ALL.

To Reproduce
Inside the planet environment, execute:

docker run -it --cap-add=ALL busybox sh

OR

docker run -it --privileged busybox sh

Expected behavior
Container to be run with privileged capabilities.

Logs

gravity-poc-2:/$ docker run -it --cap-add=ALL busybox sh
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "apply caps: operation not permitted": unknown.
gravity-poc-2:/$ docker run -it --privileged busybox sh
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "apply caps: operation not permitted": unknown.

Environment:

  • OS: Ubuntu 18.04
  • Gravity: 6.0.1
  • Platform: GCP

Additional context
I also tried this: https://community.gravitational.com/t/can-i-run-privileged-containers-in-gravity/63/3
But the problem is still there.

Note: I must have the ability to use privileged in order to change kernel parameters from inside the container.

[BUG] gravity license show nil pointer dereference

Describe the bug

ip-10-1-0-4:/$ gravity license show
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1d480e6]

goroutine 1 [running]:
github.com/gravitational/gravity/e/lib/environment.(*Local).ClusterOperator(0x0, 0x1, 0xc4206d3560, 0xb33691)
    /gopath/src/github.com/gravitational/gravity/e/lib/environment/local.go:17 +0x26
github.com/gravitational/gravity/e/tool/gravity/cli.showLicense(0x0, 0x2915ac3, 0x3, 0xc4206d3601, 0x412709)
    /gopath/src/github.com/gravitational/gravity/e/tool/gravity/cli/license.go:77 +0x40
github.com/gravitational/gravity/e/tool/gravity/cli.execute(0xc42080c780, 0xc420626600, 0xc, 0x43449a0, 0x0, 0x0, 0x0, 0x0)
    /gopath/src/github.com/gravitational/gravity/e/tool/gravity/cli/run.go:169 +0x769
github.com/gravitational/gravity/e/tool/gravity/cli.Run(0xc42080c780, 0xc42080c780, 0xc42065a870)
    /gopath/src/github.com/gravitational/gravity/e/tool/gravity/cli/run.go:39 +0x2d7
main.run(0xc4208181e0, 0x7, 0x297ae12)
    /gopath/src/github.com/gravitational/gravity/e/tool/gravity/main.go:33 +0x39
main.main()
    /gopath/src/github.com/gravitational/gravity/e/tool/gravity/main.go:24 +0x92

To Reproduce

Expected behavior

Logs

Environment (please complete the following information):

  • OS [e.g. Redhat 7.4]:
  • Gravity [e.g. 5.5.4]: 5.2.12, maybe 5.5 also
  • Platform [e.g. Vmware, AWS]:

Additional context

[BUG] Gravity breaks Ubuntu 18.04

Describe the bug

After installation of Gravity, systemd-networkd is broken and the system can't boot properly; networking is down and therefore Gravity/Kubernetes fails to start.

To Reproduce

Install Gravity 6.0.1+ on Ubuntu 18.04 machine.

Expected behavior

Ubuntu to boot properly and a working Gravity instance.

Logs

journalctl -xe | grep -i error produces:

/lib/systemd/systemd-networkd: error while loading shared libraries: libip4tc.so.0: cannot open shared object file: No such file or directory
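
A quick way to confirm the broken dependency on an affected machine (sketch; ldd reports "not found" for the missing library):

$ ldd /lib/systemd/systemd-networkd | grep libip4tc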

Environment (please complete the following information):

  • OS: Ubuntu 18.04
  • Gravity: 6.0.1
  • Platform: Laptop, GCP

Additional information

Tried on a laptop and 5 different cloud instances on GCP with Ubuntu 18.04.2

`tele build -f` doesn't seem to overwrite

Despite being a documented flag, the option to overwrite the existing tar when doing tele build using either tele build -f or tele build --overwrite has no impact. From looking at the code it doesn't seem to be wired up, but I may just have missed it.

Document max number of nodes supported by gravity 5.x.

We are looking at a reasonable number of nodes for Runtime Fabric use cases. I think product engineering tested 16, but I personally think 32 should work fine (5 controllers + 27 workers, well beyond any real-world use case as of now).

A: Theoretically it's limited by etcd/k8s capacity, but it would be good to provide some extra info on best practices for scaling apps.

Note: Also define SSD requirements for running etcd.

CRD on helm

I'm trying to add a CRD, but apiextensions.k8s.io/v1beta1 is not recognized as a valid type when building my bundle:

tele build -o bundle.tar appliance/gravity/resources/app.yaml

Ref: https://github.com/gravitational/gravity/blob/5.5.0-alpha.6/lib/app/resources/resourcefiles.go#L196

Will skip unrecognized object in Helm chart charts/kong: apiVersion=apiextensions.k8s.io/v1beta1, kind=CustomResourceDefinition
Will skip unrecognized object in Helm chart charts/kong: apiVersion=apiextensions.k8s.io/v1beta1, kind=CustomResourceDefinition
Will skip unrecognized object in Helm chart charts/kong: apiVersion=apiextensions.k8s.io/v1beta1, kind=CustomResourceDefinition
Will skip unrecognized object in Helm chart charts/kong: apiVersion=apiextensions.k8s.io/v1beta1, kind=CustomResourceDefinition

Is this intentional, and is there another way of adding a CRD?

customize TLS cert SAN for apiserver

When creating the TLS cert for the apiserver, the SAN is set to the node FQDN and other predefined DNS names, such as leader.telekube.local, apiserver, etc.

Is it possible to add custom DNS names? For example apiserver.<cluster-name>.

The purpose of the feature is to allow the cluster API to be accessible from outside the cluster.
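
For reference, the SANs currently baked into the apiserver certificate can be inspected from a node with openssl (sketch, using the API port seen elsewhere in this tracker):

$ openssl s_client -connect leader.telekube.local:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"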

devicemapper default

Encountered while providing support in the gravity community slack: it looks like a minimal app.yaml sets devicemapper as the default docker graph driver. Since we're no longer supporting devicemapper, we should check to make sure we don't have any defaults still pointing to devicemapper (a sketch of the relevant manifest field follows the snippet below).

apiVersion: bundle.gravitational.io/v2
kind: Bundle
metadata:
 name: test
 resourceVersion: "1.0.0"
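
If I'm reading the image manifest docs right, the graph driver is selected via systemOptions, so any remaining defaults would look roughly like the sketch below and should be switched to overlay2 (field names as I understand them; please correct if wrong):

apiVersion: bundle.gravitational.io/v2
kind: Bundle
metadata:
  name: test
  resourceVersion: "1.0.0"
systemOptions:
  docker:
    storageDriver: overlay2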

Improved Audit log output.

A ticket based on internal conversation.

  • Be more explicit in audit message for when a customer adds a license. e.g. License for max nodes 4 has been generated
  • Fix table sorting arrows
  • Change Social User Login to SSO Login

Replace "tele login" with "tsh login"

Objectives

  • The tele login command is no longer needed to log into the Hub.
  • tsh login is used to log into the Hub and regular clusters.
  • tele commands (such as build, ls, pull, push) work after tsh login.
  • tsh login can log users into Docker/Helm as well.

Implementation notes

  • tele should be taught to read the user's x509 certificate from ~/.tsh and use it for client auth when talking to the Gravity API (rough sketch below).
  • Gravity web servers (pack/app/ops) should be extended to support client-cert auth and extract user information from the client certificate. Maybe use Teleport's lib/auth/middleware.go:AuthMiddleware for that.
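
A very rough sketch of the tele-side change (the layout of ~/.tsh and the key/cert file names below are assumptions for illustration; the real tsh profile format may differ):

package main

import (
    "crypto/tls"
    "log"
    "net/http"
    "os"
    "path/filepath"
)

// clientFromTSH builds an HTTP client that authenticates to the Gravity API
// (pack/app/ops) with the x509 key pair tsh wrote for the given proxy host.
func clientFromTSH(proxyHost string) (*http.Client, error) {
    home, err := os.UserHomeDir()
    if err != nil {
        return nil, err
    }
    keysDir := filepath.Join(home, ".tsh", "keys", proxyHost)
    cert, err := tls.LoadX509KeyPair(
        filepath.Join(keysDir, "cert.pem"), // hypothetical file name
        filepath.Join(keysDir, "key.pem"),  // hypothetical file name
    )
    if err != nil {
        return nil, err
    }
    return &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
        },
    }, nil
}

func main() {
    client, err := clientFromTSH("hub.example.com") // hypothetical Hub address
    if err != nil {
        log.Fatal(err)
    }
    _ = client // would be handed to the pack/app/ops API clients
}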

Teleport with Kubernetes integration not working

I've tested the Teleport Kubernetes integration functionality and it isn't working properly because the kube-controller-manager doesn't have --cluster-signing-cert-file and --cluster-signing-key-file set.

When Teleport tries to create a CSR, nothing happens because the controller manager doesn't sign any certificate. I've modified the kube-controller-manager systemd unit to pass those flags and it works.
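
For reference, the two flags appended to the kube-controller-manager ExecStart line look like this (the CA paths are placeholders; they must point at the cluster's signing CA and key):

--cluster-signing-cert-file=/path/to/cluster-ca.pem \
--cluster-signing-key-file=/path/to/cluster-ca-key.pem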

Teleport Version: Teleport v3.1.4 git:v3.1.4-0-g9b7b5b8b go1.11.4

# /etc/teleport.yaml
auth_service:
  kubeconfig_file: /etc/kubeconfig.test
  cluster_name: main
  authentication:
    second_factor: off
  public_addr: public_addr:3025
proxy_service:
  public_addr: public_addr:3080
  ssh_public_addr: public_addr:3023
  kubernetes:
    enabled: yes
    public_addr: ['public_addr:3026']

Started teleport with: ./teleport start --insecure -d

Then, with tsh:

tsh --proxy=public_addr --insecure login --user san
# it hangs because teleport timed out waiting for a certificate from CSR API
kubectl get pod

--insecure was used for testing purposes

Gravity Version:

Edition:	open-source
Version:	5.4.4
Git Commit:	7307d06d7aa775276cf9097aeade96d4dfac1e2

References:

[BUG] Prevent upgrades that skip versions

Describe the bug
Gravity doesn't seem to currently prevent upgrades that skip too many versions. This should be enforced in the upgrade scripts and/or tele build to prevent the cluster from breaking when using unsupported upgrade paths.

Apiserver should set up --advertise-address to the address specified during install

Description

Right now the apiserver does not specify an advertise address, which means that it picks the default interface to advertise.


--advertise-address ip
    The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.

I suggest setting the address to the advertise IP provided by the user during install at all times.
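
Concretely, for an install run with --advertise-addr=10.1.10.68 (as in other issues in this tracker), planet should render something like:

kube-apiserver --advertise-address=10.1.10.68 <other flags>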

Installation fails on Waiting for kubernetes API to start:cannot resolve non-cluster local address

Hi,

I'm trying to install Gravity on CentOS 7.2. The installation procedure fails after a few minutes of
Tue Jan 15 17:28:54 UTC [INFO] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
And then fails
Phase execution failed: context deadline exceeded.

Not sure how to debug, find docker/Kubernetes logs, or connect to the docker daemon.

pre-checks on Helm look for outdated plugin

The check that helm is properly installed [1] requires that a template plugin is installed, which is now part of helm core. Echoing what's in the README of the plugin repo (https://github.com/technosophos/helm-template/blob/master/README.md), helm template has been built into Helm in recent versions, so explicitly checking for the plugin is not needed; you could instead just check for the presence of the helm template sub-command (a sketch follows the snippet below).

If you are using a recent version of Helm, you do not need this anymore!

helm template is now a built-in part of Helm. Just run helm template --help with your existing Helm.

[1]

buf := &bytes.Buffer{}
err = Exec(exec.Command("helm", "plugin", "list"), buf)
if err != nil {
    return trace.BadParameter("failed to run 'helm plugin list' command: %v", err)
}
if !strings.Contains(buf.String(), "template") {
    return trace.BadParameter("helm template plugin is not found in installed plugins, install using 'helm plugin install https://github.com/technosophos/helm-template'")
}
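
A drop-in alternative could probe for the built-in sub-command instead (sketch only, reusing the Exec and trace helpers from the snippet above; helm exits 0 for helm template --help when the sub-command exists):

buf := &bytes.Buffer{}
if err := Exec(exec.Command("helm", "template", "--help"), buf); err != nil {
    return trace.BadParameter("helm does not support the built-in 'template' command: %v", err)
}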

[BUG] app that contains resources directory cannot install

Describe the bug

If the root directory of a gravity app contains a resources subdirectory, the built application will not install.

/app.yaml
/resources
/resources/something.yaml

To Reproduce

Download quickstart reference
Add resources directory: mkdir mattermost/resources/resources
Add file:

cat mattermost/resources/resources/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test

Build app: tele build -o mattermost.tar mattermost/resources/app.yaml
Install built app.

Expected behavior

The install to complete successfully

Logs

The init container for the install hook fails with the following logs:

kevin-test1:/$ kctl logs install-50bbd4-5pgpx -c init
gravitational.io/mattermost:2.2.0 unpacked at /var/lib/gravity/resources
mv: can't rename '/var/lib/gravity/resources/resources/charts': Directory not empty
mv: can't rename '/var/lib/gravity/resources/resources/resources': Directory not empty

Environment (please complete the following information):

  • OS [e.g. Redhat 7.4]: Ubuntu 16.04
  • Gravity [e.g. 5.5.4]: 6.0.0-rc.1
  • Platform [e.g. Vmware, AWS]: GCE (local provider)

Additional context

The install logs should probably pull the init container logs if the installer sees the init container failing.

Run `gravity status` at the end of a failed install

Description

Running gravity status at the end of a failed install will be helpful to focus the operator on the potential issues instead of the last thing that failed during install, which is not always the root cause.

It will also be helpful to point to a mini-collection of logs from the failed pods, jobs and units so people can send us the mini report right away.

[BUG] Gravity installation fails while populating docker registry

Describe the bug
Periodically, gravity install fails in the populate docker registry step. Here is the sample CLI output:

sudo ./gravity install --token=fIJyJQHiaC --advertise-addr=172.31.100.139 --cloud-provider=generic --config=cluster-config.yaml --pod-network-cidr="172.23.0.0/16" --service-cidr="172.34.0.0/16"
Tue Jun 25 04:46:15 UTC	Starting installer
Tue Jun 25 04:46:15 UTC	Preparing for installation...
Tue Jun 25 04:46:19 UTC	Installing application odas:1.4.0-rc.2-gravity
Tue Jun 25 04:46:19 UTC	Starting non-interactive install
Tue Jun 25 04:46:19 UTC	Auto-loaded kernel module: br_netfilter
Tue Jun 25 04:46:19 UTC	Auto-loaded kernel module: iptable_nat
Tue Jun 25 04:46:19 UTC	Auto-loaded kernel module: iptable_filter
Tue Jun 25 04:46:19 UTC	Auto-loaded kernel module: ebtables
Tue Jun 25 04:46:19 UTC	Auto-loaded kernel module: overlay
Tue Jun 25 04:46:19 UTC	Auto-set kernel parameter: net.ipv4.ip_forward=1
Tue Jun 25 04:46:19 UTC	Auto-set kernel parameter: net.bridge.bridge-nf-call-iptables=1
Tue Jun 25 04:46:20 UTC	Successfully added "master" node on 172.31.100.139
Tue Jun 25 04:46:20 UTC	All agents have connected!
Tue Jun 25 04:46:20 UTC	Starting the installation
Tue Jun 25 04:46:21 UTC	Operation has been created
Tue Jun 25 04:46:22 UTC	Execute preflight checks
Tue Jun 25 04:46:24 UTC	Configure packages for all nodes
Tue Jun 25 04:46:27 UTC	Bootstrap master node ip-172-31-100-139.us-west-2.compute.internal
Tue Jun 25 04:46:31 UTC	Pull packages on master node ip-172-31-100-139.us-west-2.compute.internal
Tue Jun 25 04:47:06 UTC	Install system package teleport:3.0.5 on master node ip-172-31-100-139.us-west-2.compute.internal
Tue Jun 25 04:47:08 UTC	Install system package odas-planet:1.4.0-rc.2-gravity on master node ip-172-31-100-139.us-west-2.compute.internal
Tue Jun 25 04:47:23 UTC	Wait for kubernetes to become available
Tue Jun 25 04:47:40 UTC	Bootstrap Kubernetes roles and PSPs
Tue Jun 25 04:47:42 UTC	Configure CoreDNS
Tue Jun 25 04:47:43 UTC	Create user-supplied Kubernetes resources
Tue Jun 25 04:47:45 UTC	Populate Docker registry on master node ip-172-31-100-139.us-west-2.compute.internal
Tue Jun 25 04:47:46 UTC	Operation failure: failed to connect to registry at "127.0.0.1:5000", failed to execute phase "/export/ip-172-31-100-139.us-west-2.compute.internal"
Tue Jun 25 04:47:46 UTC	Installation failed in 1m26.39374357s, check /var/log/gravity-install.log and /var/log/gravity-system.log for details

This is running on EC2 instance, stock Amazon Linux 2 AMI, with 120GB disk. Here is the cluster-config.yaml file:

kind: ClusterConfiguration
version: v1
spec:
  global:
    # port range to reserve for services with NodePort visibility
    serviceNodePortRange: "1025-65535"
---
kind: user
version: v2
metadata:
  name: "admin"
spec:
  type: "admin"
  password: "fIJyJQHiaC"
  roles: ["@teleadmin"]
---
kind: RuntimeEnvironment
version: v1
spec:
  data:
    KUBE_KUBELET_FLAGS: "--sync-frequency=10s"

I've included the log files for gravity-install, gravity-system and journalctl from within the Planet container, as well as the one from the actual system (journalctl_system.log).
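
One extra data point that may help when it reproduces: checking whether the local registry is reachable at all from inside planet right after the failure (standard registry catalog endpoint; the scheme and client certificates may need adjusting for the cluster's registry setup):

$ sudo gravity enter
$ curl -sk https://127.0.0.1:5000/v2/_catalog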

To Reproduce

Nothing really - I run the exact same steps (this is more or less scripted for us), and it succeeds 19 times out of 20. Typically I can just re-run the set up (after running ./gravity leave --force to clean up), and it just works on the same machine.

Expected behavior

It should work every time.

Logs

gravity-install.log
gravity-system.log
journalctl.log
journalctl_system.log

Environment (please complete the following information):

  • OS [e.g. Redhat 7.4]: stock Amazon Linux 2
  • Gravity [e.g. 5.5.4]: 5.5.7
  • Platform [e.g. Vmware, AWS]: AWS

Additional context

Mattermost demo failing

Summary

Mattermost demo quickstart fails during the web "complete" stage with a kubernetes error

Steps to reproduce

Install the demo here: https://gravitational.com/gravity/docs/quickstart/
After the mattermost server has been created, you will be sent to /web/installer/site/mattermost-demo/complete/

Expected behavior

Should see the final completion section of the mattermost server

Observed behavior (that appears unintentional)

On that page you will see the following error message:
failed to get cluster info: [ERROR]: {"message":"services is forbidden: User "system:serviceaccount:kube-system:gravity-site" cannot list resource "services" in API group "" in the namespace "default""}

Possible fixes

See PR
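
Until the fix lands, a manual patch along these lines unblocks the page; this is a sketch of the kind of RBAC rule needed (granting the gravity-site service account list access to services in the default namespace), not the exact change from the PR:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gravity-site-services
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gravity-site-services
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gravity-site-services
subjects:
- kind: ServiceAccount
  name: gravity-site
  namespace: kube-system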

Setting --dns-port creates an inoperable cluster

In my tests, if you set --dns-ports to another port (say 1054), you get an inoperable cluster.

The installation stops at:

Mon Apr 22 01:20:36 UTC	Wait for kubernetes to become available
Mon Apr 22 01:20:51 UTC	Bootstrap Kubernetes roles and PSPs
Mon Apr 22 01:20:53 UTC	Configure CoreDNS
Mon Apr 22 01:20:54 UTC	Create user-supplied Kubernetes resources
Mon Apr 22 01:20:55 UTC	Operation failure: exit status 1

In gravity-install.log, you see:


Mon Apr 22 01:20:36 UTC [INFO] [ip-10-1-10-124] Executing phase: /wait.
Mon Apr 22 01:20:38 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:39 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:40 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:41 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:42 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:43 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:44 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:45 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:46 UTC [INFO] [ip-10-1-10-124] Waiting for kubernetes API to start: Get https://leader.telekube.local:6443/api/v1/componentstatuses/scheduler: cannot resolve non-cluster local address
Mon Apr 22 01:20:50 UTC [INFO] [ip-10-1-10-124] Kubernetes API is available.
Mon Apr 22 01:20:51 UTC [INFO] [ip-10-1-10-124] Executing phase: /rbac.
Mon Apr 22 01:20:52 UTC [INFO] [ip-10-1-10-124] Created Kubernetes RBAC resources.
Mon Apr 22 01:20:53 UTC [INFO] Executing phase: /coredns.
Mon Apr 22 01:20:53 UTC [INFO] Configuring CoreDNS.
Mon Apr 22 01:20:54 UTC [INFO] [ip-10-1-10-124] Executing phase: /resources.
Mon Apr 22 01:20:54 UTC [ERROR] [ip-10-1-10-124] Phase execution failed: failed to create user resources: unable to recognize "/ext/share/resources.yaml": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host
unable to recognize "/ext/share/resources.yaml": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host
2019-04-22T01:20:52Z INFO             Created Kubernetes RBAC resources. phase:/rbac install/hook.go:56
2019-04-22T01:20:52Z DEBU [FSM:INSTA] Applied StateChange(Phase=/rbac, State=completed). opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:52Z DEBU [FSM:INSTA] Executing phase "/coredns". opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:53Z DEBU [OPS]       Created: ops.ProgressEntry{ID:"", SiteDomain:"nostalgicroentgen8876", OperationID:"58ff5965-4c68-4627-979c-3d56cf7cf797", Created:time.Time{wall:0xd8608c9, ext:63691492853, loc:(*time.Location)(nil)}, Completion:24, Step:4, State:"in_progress", Message:"Configure CoreDNS"}. install/hook.go:56
2019-04-22T01:20:53Z DEBU [FSM:INSTA] Applied StateChange(Phase=/coredns, State=in_progress). opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:53Z INFO             Executing phase: /coredns. phase:/coredns install/hook.go:56
2019-04-22T01:20:53Z INFO             Configuring CoreDNS. phase:/coredns install/hook.go:56
2019-04-22T01:20:53Z DEBU [FSM:INSTA] Applied StateChange(Phase=/coredns, State=completed). opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:53Z DEBU [FSM:INSTA] Executing phase "/resources". opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:54Z DEBU [OPS]       Created: ops.ProgressEntry{ID:"", SiteDomain:"nostalgicroentgen8876", OperationID:"58ff5965-4c68-4627-979c-3d56cf7cf797", Created:time.Time{wall:0x8c17d3f, ext:63691492854, loc:(*time.Location)(nil)}, Completion:24, Step:4, State:"in_progress", Message:"Create user-supplied Kubernetes resources"}. install/hook.go:56
2019-04-22T01:20:54Z DEBU [FSM:INSTA] Applied StateChange(Phase=/resources, State=in_progress). opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:54Z INFO             Executing phase: /resources. phase:/resources install/hook.go:56
2019-04-22T01:20:54Z ERRO             "Phase execution failed: failed to create user resources: unable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\nunable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\n." phase:/resources install/hook.go:56
2019-04-22T01:20:54Z DEBU [FSM:INSTA] "Applied StateChange(Phase=/resources, State=failed, Error=failed to create user resources: unable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\nunable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\n)." opid:58ff5965-4c68-4627-979c-3d56cf7cf797 install/hook.go:56
2019-04-22T01:20:54Z ERRO [INSTALLER] "Failed to execute plan: \nERROR REPORT:\nOriginal Error: *exec.ExitError exit status 1\nStack Trace:\n\t/gopath/src/github.com/gravitational/gravity/lib/utils/exec.go:128 github.com/gravitational/gravity/lib/utils.RunStream\n\t/gopath/src/github.com/gravitational/gravity/lib/utils/exec.go:91 github.com/gravitational/gravity/lib/utils.RunCommand\n\t/gopath/src/github.com/gravitational/gravity/lib/utils/exec.go:85 github.com/gravitational/gravity/lib/utils.RunInPlanetCommand\n\t/gopath/src/github.com/gravitational/gravity/lib/install/phases/resources.go:79 github.com/gravitational/gravity/lib/install/phases.(*resourcesExecutor).Execute\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:421 github.com/gravitational/gravity/lib/fsm.(*FSM).executeOnePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:355 github.com/gravitational/gravity/lib/fsm.(*FSM).executePhaseLocally\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:315 github.com/gravitational/gravity/lib/fsm.(*FSM).executePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:192 github.com/gravitational/gravity/lib/fsm.(*FSM).ExecutePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:150 github.com/gravitational/gravity/lib/fsm.(*FSM).ExecutePlan\n\t/gopath/src/github.com/gravitational/gravity/lib/install/flow.go:335 github.com/gravitational/gravity/lib/install.(*Installer).startFSM\n\t/go/src/runtime/asm_amd64.s:1334 runtime.goexit\nUser Message: failed to create user resources: unable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\nunable to recognize \"/ext/share/resources.yaml\": Get https://leader.telekube.local:6443/api?timeout=32s: dial tcp: lookup leader.telekube.local on 10.1.0.2:53: no such host\n, failed to execute phase \"/resources\"\n." install/hook.go:56

As far as I can tell, with the planet-specific CoreDNS started on 1054, subsequent DNS lookups (e.g. for leader.telekube.local) are failing because they are attempted against the servers in /etc/resolv.conf (which just has the default of 127.0.0.2). Indeed, if I do dig leader.telekube.local -p 1054, it properly resolves (and I can also see that CoreDNS is properly bound on port 1054).

It would be really great if it were possible to start Gravity without needing to bind on port 53. Specifically, on machines which have dnsmasq installed, it causes an IP/port conflict, as dnsmasq binds to all interfaces by default.

Compiling/building fails ?

I got this when issuing make on your code:

$ make
make -C build.assets build
make[1]: Entering directory '/go/src/github.com/gravitational/gravity/build.assets'
make[2]: Entering directory '/go/src/github.com/gravitational/gravity/build.assets'
docker build \
	--build-arg PROTOC_VER=3.4.0 \
	--build-arg PROTOC_PLATFORM=linux-x86_64 \
	--build-arg GOGO_PROTO_TAG=v0.4 \
	--build-arg GRPC_GATEWAY_TAG=v1.1.0 \
	--build-arg VERSION_TAG=0.0.2 \
	--pull --tag gravity-buildbox:latest .
Sending build context to Docker daemon  39.94kB
Step 1/20 : FROM quay.io/gravitational/debian-venti:go1.10.3-stretch
go1.10.3-stretch: Pulling from gravitational/debian-venti
a5aeb52b69f3: Already exists 
f48a32a1dbca: Already exists 
Digest: sha256:763e6950744a18941cbb47cef9cb2387e7c217b841d28571c1d9b5773a461edb
Status: Image is up to date for quay.io/gravitational/debian-venti:go1.10.3-stretch
 ---> fdf846dfb4cb
Step 2/20 : ARG PROTOC_VER
 ---> Using cache
 ---> 0a9f1ab8a19d
Step 3/20 : ARG PROTOC_PLATFORM
 ---> Using cache
 ---> 7a83f17d0ef7
Step 4/20 : ARG GOGO_PROTO_TAG
 ---> Using cache
 ---> eeaae3cdd2e0
Step 5/20 : ARG GRPC_GATEWAY_TAG
 ---> Using cache
 ---> 1cf1e029e2be
Step 6/20 : ARG VERSION_TAG
 ---> Using cache
 ---> 8a43f39098c2
Step 7/20 : ENV TARBALL protoc-${PROTOC_VER}-${PROTOC_PLATFORM}.zip
 ---> Using cache
 ---> 57d2cfc63879
Step 8/20 : ENV GOGOPROTO_ROOT ${GOPATH}/src/github.com/gogo/protobuf
 ---> Using cache
 ---> 0bdccc1403cd
Step 9/20 : ENV PROTOC_URL https://github.com/google/protobuf/releases/download/v${PROTOC_VER}/protoc-${PROTOC_VER}-${PROTOC_PLATFORM}.zip
 ---> Using cache
 ---> 09bdb3b636a6
Step 10/20 : RUN adduser jenkins --uid=995 --disabled-password --system
 ---> Using cache
 ---> bba220bff631
Step 11/20 : RUN (mkdir -p /gopath/src/github.com/gravitational/gravity &&      chown -R jenkins /gopath &&      mkdir -p /.cache &&      chmod 777 /.cache)
 ---> Using cache
 ---> 5785f6cc26d5
Step 12/20 : ENV LANGUAGE "en_US.UTF-8" LANG "en_US.UTF-8" LC_ALL "en_US.UTF-8" LC_CTYPE "en_US.UTF-8" GOPATH "/gopath" PATH "$PATH:/opt/protoc/bin:/opt/go/bin:/gopath/bin"
 ---> Using cache
 ---> 86db9fe4b27c
Step 13/20 : RUN (mkdir -p /gopath/src/github.com/gravitational &&      cd /gopath/src/github.com/gravitational &&      git clone https://github.com/gravitational/version.git &&      cd /gopath/src/github.com/gravitational/version &&      git checkout ${VERSION_TAG} &&      go install github.com/gravitational/version/cmd/linkflags)
 ---> Using cache
 ---> b60af019d20c
Step 14/20 : RUN (mkdir -p /opt/protoc &&      wget --quiet -O /tmp/${TARBALL} ${PROTOC_URL} &&      unzip -d /opt/protoc /tmp/${TARBALL} &&      go get -u github.com/gogo/protobuf/proto github.com/gogo/protobuf/protoc-gen-gogo github.com/gogo/protobuf/gogoproto 	 github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway 	 github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger)
 ---> Using cache
 ---> 64a242ff1ab6
Step 15/20 : RUN cd ${GOPATH}/src/github.com/gogo/protobuf && git reset --hard ${GOGO_PROTO_TAG} && make install
 ---> Using cache
 ---> 69e34479d0a2
Step 16/20 : RUN cd ${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway && git reset --hard ${GRPC_GATEWAY_TAG} && go install ./protoc-gen-grpc-gateway
 ---> Using cache
 ---> 51067d3d0a53
Step 17/20 : ENV PROTO_INCLUDE "/usr/local/include":"${GOPATH}/src":"${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis":"${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis":"${GOGOPROTO_ROOT}/protobuf"
 ---> Using cache
 ---> 3e3ad4175a60
Step 18/20 : RUN wget --quiet -O /usr/bin/dep https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 && chmod +x /usr/bin/dep
 ---> Using cache
 ---> 6d1e1fc5c419
Step 19/20 : RUN chmod -R a+rw /gopath
 ---> Using cache
 ---> 862b2cf1e985
Step 20/20 : VOLUME /gopath/src/github.com/gravitational/gravity
 ---> Using cache
 ---> 14c76a503024
Successfully built 14c76a503024
docker run --rm=true -u $(id -u) \
	   -v /go/src/github.com/gravitational/gravity/:/gopath/src/github.com/gravitational/gravity \
	   -e "LOCAL_BUILDDIR=/gopath/src/github.com/gravitational/gravity/build" \
	   -e "LOCAL_GRAVITY_BUILDDIR=/gopath/src/github.com/gravitational/gravity/build/5.2.0-rc.3.3" \
	   -e "GRAVITY_PKG_PATH=github.com/gravitational/gravity" \
	   -e "GRAVITY_VERSION=5.2.0-rc.3.3" \
	   -e "GRAVITY_TAG=5.2.0-rc.3.3" \
	   -e 'GRAVITY_LINKFLAGS="-X github.com/gravitational/gravity/vendor/github.com/gravitational/version.gitCommit=25ad7b64de6db6a009597c3eaf54f79b4c44dbf9 -X github.com/gravitational/gravity/vendor/github.com/gravitational/version.version=5.2.0-rc.3.3 -w -s"' \
	   gravity-buildbox:latest \
	   dumb-init make -C /gopath/src/github.com/gravitational/gravity/build.assets -j \
	   	/gopath/src/github.com/gravitational/gravity/build/5.2.0-rc.3.3/tele /gopath/src/github.com/gravitational/gravity/build/5.2.0-rc.3.3/gravity
make: *** /gopath/src/github.com/gravitational/gravity/build.assets: No such file or directory.  Stop.
Makefile:197: recipe for target 'build-in-container' failed
make[2]: *** [build-in-container] Error 2
make[2]: Leaving directory '/go/src/github.com/gravitational/gravity/build.assets'
Makefile:155: recipe for target 'build' failed
make[1]: *** [build] Error 2
make[1]: Leaving directory '/go/src/github.com/gravitational/gravity/build.assets'
Makefile:185: recipe for target 'build' failed
make: *** [build] Error 2

Um, missing files in the repo?

Update release schedule for gravity

Description

New releases section

We are adopting new LTS and non LTS releases for Gravity:

  • 24 months LTS releases starting gravity 5.5
  • 4 months for non LTS commercial releases starting Gravity 5.5

This should be reflected in our Releases page that should look like:

Every major Gravity version x.0.0 has its long-term support release, e.g. for the 3.0.0 version LTS starts with 3.51.0, with minor backwards-compatible changes added over time until the end of the support cycle.

Release: 5.5.3 (LTS)
Initial Release: March 26th, 2019
Last Updated: March 26th, 2019
Supported Until: September 7th, 2020 (only with commercial subscription)
Kubernetes Version: 1.13.5
Teleport Version: 3.0.4

Most important

I think we should explore the possibility that releases with security backports are not downloadable from our OSS page, only through enterprise houston. Non-commercial, non-LTS releases should not be supported on an extended schedule.

[BUG] Web Installer isn't working / throws a 400

While working on screenshots for the Web UI installer I found a few issues.

  • If you have 3 servers, the 1st server doesn't / can't be included in the three server cluster
  • Once two machines are added, 'verify' / continue either times out or blurs out. (It shows that it's calling prechecks, but (pending) takes a while.)

[screenshot]

Example showing the precheck running, but taking a while to complete.
[screenshot]

  • when I came back, I was logged out of the web ui.
    [screenshot]

  • After adding three, and waiting 15+ minutes, I got this error.
    [screenshot]

[BUG] Removing a master and adding a master results in the master not connecting back in

Describe the bug
New to Gravity; I want to understand its failure points. Stood up a 4-node cluster: 3 masters (k8s control plane) and 1 worker. I think I ran into an issue, or the process takes a lot longer than I would assume and I'm not waiting long enough.

Overview of problem:
Go into the Google console and terminate 1 master. Now I have 2 masters in a healthy state. Launch a new node to join as the 3rd master to replace the one that is gone. See the log section screenshot below (the left terminal is the new node and the right terminal is a healthy master node): it just shows "Still waiting for planet to start" on the new node. Yes, in this case my screenshot shows I only waited a minute, but on a few earlier runs it ran for a handful of minutes before I killed it, as the status just showed operation_expand stuck at 72% complete. The new master node never gets planet started. If I kill this process and run gravity leave --force, then gravity status shows a cluster expand taking place that is stuck at 72% indefinitely. At this point I'm stuck with two masters and can't get a third to join, or even add workers, as there is a lock in the system because of the operation_expand stuck at 72% complete.

To Reproduce

  • Launch three masters and terminate 1 master.
  • From another master force leave the terminated node
  • Spin up a new node and try to join to cluster
  • Watch log output and also run gravity status on another node and see operation_expand stuck at 72%
  • Get confused
  • Terminate gravity join command on new node
  • Run gravity status on a healthy node and see operation_expand stuck at 72%
  • At this point it appears there's a possible infinite loop keeping you from adding any new nodes to the cluster.

Expected behavior
I expect to be able to take masters out and join them back in, or to have a master node go belly up and add a replacement once I'm alerted that I have a master down.

Logs

gravity-add-master-back-bug

Environment (please complete the following information):

  • OS : Ubuntu 16.04
  • Gravity : 6.1.1
  • Platform: Google GCP

Additional context

Site post-install hook failing to reach the health check

Install repeatedly fails, hanging on the "site-app-post-install" job.

Version: 5.5.3
Environment: 3 RHEL 7.6 EC2 VMs on AWS

Install:

Thu Mar 28 15:06:56 UTC	Connecting to cluster
Thu Mar 28 15:06:57 UTC	Auto-loaded kernel module: br_netfilter
Thu Mar 28 15:06:57 UTC	Auto-loaded kernel module: iptable_nat
Thu Mar 28 15:06:57 UTC	Auto-loaded kernel module: iptable_filter
Thu Mar 28 15:06:57 UTC	Auto-loaded kernel module: ebtables
Thu Mar 28 15:06:57 UTC	Auto-loaded kernel module: overlay
Thu Mar 28 15:06:57 UTC	Auto-set kernel parameter: net.ipv4.ip_forward=1
Thu Mar 28 15:06:57 UTC	Auto-set kernel parameter: net.bridge.bridge-nf-call-iptables=1
Thu Mar 28 15:06:57 UTC	Auto-set kernel parameter: fs.may_detach_mounts=1
Thu Mar 28 15:06:57 UTC	Connected to installer at https://IP:61009
Thu Mar 28 15:06:57 UTC	Operation has been created
Thu Mar 28 15:07:44 UTC	All servers are up
Thu Mar 28 15:07:45 UTC	Configure packages for all nodes
Thu Mar 28 15:07:53 UTC	Bootstrap all nodes
Thu Mar 28 15:07:54 UTC	Bootstrap master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:07:59 UTC	Pull packages on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:09:18 UTC	Install system software on master nodes
Thu Mar 28 15:09:19 UTC	Install system software on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:09:21 UTC	Install system package teleport:3.0.5 on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:09:23 UTC	Install system package planet:5.5.14-11305 on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:09:27 UTC	Install system package planet:5.5.14-11305 on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:10:02 UTC	Wait for kubernetes to become available
Thu Mar 28 15:10:22 UTC	Bootstrap Kubernetes roles and PSPs
Thu Mar 28 15:10:25 UTC	Configure CoreDNS
Thu Mar 28 15:10:27 UTC	Populate Docker registry on master node VMIP.eu-central-1.compute.internal
Thu Mar 28 15:11:37 UTC	Wait for cluster to pass health checks
Thu Mar 28 15:12:19 UTC	Install system application dns-app:0.3.0
Thu Mar 28 15:12:35 UTC	Install system application logging-app:5.0.2
Thu Mar 28 15:12:41 UTC	Install system application monitoring-app:5.5.0
Thu Mar 28 15:12:51 UTC	Install system application tiller-app:5.5.1
Thu Mar 28 15:13:04 UTC	Install system application site:5.5.3
Thu Mar 28 15:20:35 UTC	Operation failure: rpc error: code = Unknown desc = exit status 255
Failed to join the cluster

---
Agent process will keep running so you can re-run certain steps.
Once no longer needed, this process can be shutdown using Ctrl-C.

/var/log/gravity-install.log

clusterrole.rbac.authorization.k8s.io/gravity-site created
clusterrolebinding.rbac.authorization.k8s.io/gravity-site created
daemonset.extensions/gravity-site created
service/gravity-site created
role.rbac.authorization.k8s.io/gravity-site created
rolebinding.rbac.authorization.k8s.io/gravity-site created
Pod "gravity-install-8c711b-v5zv6" in namespace "kube-system", has changed state from "Running" to "Succeeded".
Container "gravity-install" changed status from "running" to "terminated, exit code 0".

Thu Mar 28 15:13:37 UTC [INFO] [IP.eu-central-1.compute.internal] Executing postInstall hook for site:5.5.3.
Created Pod "site-app-post-install-4dbe50-mvwgf" in namespace "kube-system".

Container "post-install-hook" created, current state is "waiting, reason PodInitializing".

Pod "site-app-post-install-4dbe50-mvwgf" in namespace "kube-system", has changed state from "Pending" to "Running".
Container "post-install-hook" changed status from "waiting, reason PodInitializing" to "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" restarted, current state is "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz
Container "post-install-hook" changed status from "running" to "terminated, exit code 255".

Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff".

Container "post-install-hook" restarted, current state is "running".

Thu Mar 28 15:20:34 UTC [ERROR] [IP.eu-central-1.compute.internal] Phase execution failed: Job has reached the specified backoff limit, gravitational.io/site:5.5.3 postInstall hook failed.
--

Occurs with both the UI and the command-line installer. I can access the healthz endpoint from gravity shell on all 3 machines.
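
For reference, this is roughly the check that succeeds from the planet environment on each node (a sketch; assumes curl is available there, and -k is used because the certificate is self-signed):

$ sudo gravity shell
$ curl -k https://gravity-site.kube-system.svc.cluster.local:3009/healthz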

Also, when in the gravity-site container (via exec -it), gravity site shell returns "site is up and running".

Running the install as a bare-metal install.


Installing with 5.3.5 vs 5.5.3 yields two different DNS lookups for the host VM IP in the gravity-site container.

5.5.3
/ $ cat /etc/resolv.conf
nameserver 127.0.0.2
nameserver 172.31.0.2
search eu-central-1.compute.internal
options ndots:2 timeout:1 attempts:2

/ $ nslookup 172.31.XX.XX
Server: 127.0.0.2
Address 1: 127.0.0.2

Name: 172.31.XX.XX
Address 1: 172.31.XX.XX 172-31-XX-XX.gravity-site.kube-system.svc.cluster.local

5.3.5
/ $ cat /etc/resolv.conf
nameserver 127.0.0.2
nameserver 172.31.0.2
search eu-central-1.compute.internal
options ndots:2 timeout:1 attempts:2

/ $ nslookup 172.31.XX.XX
Server: 127.0.0.2
Address 1: 127.0.0.2

Name: 172.31.XX.XX
Address 1: 172.31.XX.XX ip-172-31-XX-XX.eu-central-1.compute.internal

--
I replicated the job container and exec'd into it; it cannot resolve the cluster's DNS names:

/ # nslookup gravity-site.kube-system.svc.cluster.local
Server:    10.100.14.135
Address 1: 10.100.14.135

nslookup: can't resolve 'gravity-site.kube-system.svc.cluster.local'
/ # 
/ # nslookup kubernetes.default
Server:    10.100.14.135
Address 1: 10.100.14.135

nslookup: can't resolve 'kubernetes.default'
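
A minimal triage sketch to narrow this down (service names and the DNS IPs are taken from the output above and will differ per cluster):

# confirm the cluster DNS service and its endpoints exist
$ kubectl -n kube-system get svc,endpoints | grep -i dns
# query the DNS server from the pod's resolv.conf directly, with a fully qualified name
$ nslookup gravity-site.kube-system.svc.cluster.local. 10.100.14.135
# compare with resolution against the node-local resolver inside planet on a master
$ nslookup gravity-site.kube-system.svc.cluster.local. 127.0.0.2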

Failed in first phase of execute preflight checks

Got this error after trying to install Gravity. It failed in the first phase, while executing the preflight checks.

2019-05-27T11:11:24Z ERRO             "Phase execution failed: \ncouldn't create a test file in temp directory  on \"localhost.localdomain\": touch: cannot touch ‘tmpcheck.49013934-5555-4f22-b822-4214cd7f96c3’: No such file or directory\n." phase:/checks install/hook.go:56
2019-05-27T11:11:24Z DEBU [FSM:INSTA] "Applied StateChange(Phase=/checks, State=failed, Error=\ncouldn't create a test file in temp directory  on \"localhost.localdomain\": touch: cannot touch ‘tmpcheck.49013934-5555-4f22-b822-4214cd7f96c3’: No such file or directory\n)." opid:145d470b-6565-4c44-ae18-3d2c4127bcb2 install/hook.go:56
2019-05-27T11:11:24Z ERRO [INSTALLER] "Failed to execute plan: \nERROR REPORT:\nOriginal Error: *trace.BadParameterError \ncouldn't create a test file in temp directory  on \"localhost.localdomain\": touch: cannot touch ‘tmpcheck.49013934-5555-4f22-b822-4214cd7f96c3’: No such file or directory\n\nStack Trace:\n\t/gopath/src/github.com/gravitational/gravity/lib/ops/opsclient/opsclient.go:1201 github.com/gravitational/gravity/lib/ops/opsclient.(*Client).ValidateServers\n\t/gopath/src/github.com/gravitational/gravity/lib/install/phases/checks.go:67 github.com/gravitational/gravity/lib/install/phases.(*checksExecutor).Execute\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:421 github.com/gravitational/gravity/lib/fsm.(*FSM).executeOnePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:355 github.com/gravitational/gravity/lib/fsm.(*FSM).executePhaseLocally\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:315 github.com/gravitational/gravity/lib/fsm.(*FSM).executePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:192 github.com/gravitational/gravity/lib/fsm.(*FSM).ExecutePhase\n\t/gopath/src/github.com/gravitational/gravity/lib/fsm/fsm.go:150 github.com/gravitational/gravity/lib/fsm.(*FSM).ExecutePlan\n\t/gopath/src/github.com/gravitational/gravity/lib/install/flow.go:335 github.com/gravitational/gravity/lib/install.(*Installer).startFSM\n\t/go/src/runtime/asm_amd64.s:1334 runtime.goexit\nUser Message: failed to execute phase \"/checks\"\n." install/hook.go:56
2019-05-27T11:11:25Z INFO [OPS]       ops.SetOperationStateRequest{State:"failed", Progress:(*ops.ProgressEntry)(0xc0007f6c80)} install/hook.go:56
2019-05-27T11:11:25Z DEBU [OPS]       Created: ops.ProgressEntry{ID:"", SiteDomain:"lirany", OperationID:"145d470b-6565-4c44-ae18-3d2c4127bcb2", Created:time.Time{wall:0x3184c919, ext:63694552284, loc:(*time.Location)(nil)}, Completion:100, Step:9, State:"failed", Message:"Operation failure: \ncouldn't create a test file in temp directory  on \"localhost.localdomain\": touch: cannot touch ‘tmpcheck.49013934-5555-4f22-b822-4214cd7f96c3’: No such file or directory"}. install/hook.go:56
2019-05-27T11:11:25Z DEBU [FSM:INSTA] Marked operation complete. opid:145d470b-6565-4c44-ae18-3d2c4127bcb2 install/hook.go:56
2019-05-27T11:11:25Z INFO             Operation failed. install/hook.go:56
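
The failing check just tries to create a test file in the installer's temp directory, so a quick sanity check along these lines may help (a sketch; the exact directory depends on where and how the installer was launched, commonly /tmp or the directory the installer tarball was unpacked into):

$ df -h /tmp && ls -ld /tmp
# the directory must exist, be writable by the installing user, and not be full or read-only
$ touch /tmp/tmpcheck.$$ && rm /tmp/tmpcheck.$$ && echo "temp directory is writable"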

GCE integrations: health checks don't indicate VM has insufficient service account permissions

Version: 5.2.4
Environment: Google Compute

When running an installation on a Google Cloud VM that doesn't have the appropriate service account settings, the cloud-provider auto-detection will enable the cloud API integrations but fail to install the node. This produces misleading errors about required planet services (including Docker) failing to start, when the actual cause is flannel failing to start because it hits errors reaching the cloud API.

Jan 21 15:50:48 meir-test flanneld[459]: E0121 15:50:48.861676     459 main.go:295] Error registering network: error getting network from compute service: googleapi: Error 403: Ins
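
Until the health checks surface this, a quick way to verify the VM's access before installing is to ask the GCE metadata server for the instance's service account scopes (a sketch; which scopes are required depends on the integrations enabled, but compute read/write is the usual suspect for the flannel GCE backend):

$ curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"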

Provide an easy way to query historic and current influx metrics via CLI

Description

In many instances users of Gravity have no way of using the UI, especially during debugging sessions where access to monitoring data is critical.

Come up with an easy way to display monitoring data and explore critical metrics via CLI.

Perhaps installing something like osquery would work? The best way would be to have the gravity status command display key charts for OS, RAM, CPU, disk, and threads over time spans, with the ability to dig into historical data by setting time frames.
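
In the meantime, a rough CLI workaround is to query InfluxDB directly inside the cluster (a sketch; the pod label app=influxdb, the database name k8s, and the measurement names are assumptions that vary between monitoring-app versions):

$ POD=$(kubectl -n kube-system get pods -l app=influxdb -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n kube-system exec -it "$POD" -- influx -database k8s -execute 'SHOW MEASUREMENTS'
# then run ad-hoc queries against the measurements of interest, e.g. a hypothetical:
$ kubectl -n kube-system exec -it "$POD" -- influx -database k8s -execute 'SELECT mean("value") FROM "cpu/usage_rate" WHERE time > now() - 1h GROUP BY time(5m)'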

[BUG] etcd gateway not in sync with master changes

Describe the bug
Ported over from #304
The list of masters given to the etcd gateway does not get updated as the master list changes. Before moving to etcd3, the etcd proxy service would join the cluster in a way that allowed it to observe cluster changes and internally update its list of masters. The etcd gateway may not have this capability.

To Reproduce
Remove a master node from a multi-master cluster with workers, and observe the etcd gateway's behaviour on the workers after the master has been deleted.
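
A sketch of how to observe it (assumes shell access to the planet environment via gravity shell, and that etcdctl inside planet is preconfigured with the right endpoints and certificates, which may vary by version):

# on a worker: inspect which endpoints the gateway process was started with
$ sudo gravity shell
$ ps aux | grep '[e]tcd.*gateway'
# on a master: list the current etcd members and compare against the gateway's endpoint list
$ etcdctl member list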

Expected behavior
The etcd gateway should be updated with master node changes within the cluster.

Logs
See the following comment on another ticket:
#304 (comment)

Additional context
This appears to be related to the upgrade from etcd2 to etcd3. We need to check whether the gateway service has a way to learn of master changes and, if not, ensure we implement it ourselves.

max master nodes

Gravity currently enforces a maximum of 3 master nodes, without an apparent way to override it.

This has implications in a couple of cases:

  • Running 5 etcd nodes is a common operating practice, allowing 2 nodes to fail before quorum is lost (see the quorum arithmetic after this list).
  • When doing node replacements (such as AMI replacement), in some cases it may be preferable to add a node before deleting the node to be replaced. This doesn't really increase redundancy, because at 4 nodes the cluster can still only tolerate one failure, but if the cluster fails to expand it's not left stuck with 2/3 master nodes.
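
For context, the quorum arithmetic behind the 5-node suggestion (standard etcd/Raft behaviour, not Gravity-specific): quorum(N) = floor(N/2) + 1, and the cluster tolerates N - quorum(N) member failures.

N = 3: quorum = 2, tolerates 1 failure
N = 4: quorum = 3, tolerates 1 failure
N = 5: quorum = 3, tolerates 2 failures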

I think gravity should not rewrite master/node membership if explicitly defined in the application manifest, as this may also lead to unexpected behaviour.

Reference: https://github.com/gravitational/gravity/blob/master/lib/ops/opsservice/install.go#L885-L891
