
kubeval's Introduction

Kubeval

NOTE: This project is no longer maintained; a good replacement is kubeconform.

kubeval is a tool for validating Kubernetes YAML or JSON configuration files. It does so using schemas generated from the Kubernetes OpenAPI specification, and can therefore validate configurations against multiple versions of Kubernetes.


$ kubeval my-invalid-rc.yaml
WARN - fixtures/my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: [integer,null], given: string
$ echo $?
1

For full usage and installation instructions see kubeval.com.

kubeval's People

Contributors

a8uhnf, adam-golab, andidog, bbaja42, bmcustodio, bubbaksmith, carlangueitor, carlossg, carlpett, cpnielsen, davidhao3300, dependabot[bot], dmarkwat, ebachle, emirozer, garethr, geneccx, glb, gregswift, hoesler, ian-howell, johanneswuerbach, keegancsmith, lietu, lilianchuang, mig4, mpon, pablocastellano, patouche, skos-ninja

kubeval's Issues

load schema from local error

Hi,

I cloned the repo (https://github.com/garethr/kubernetes-json-schema) to a local location and tried to validate the k8s YAML file offline. I ran the command below:

./kubeval temp_yaml --schema-location /root/output/service-ctrl

And got the error message below:


> 2 errors occurred:
> 
> * Problem loading schema from the network at /root/output/service-ctrl/kubernetes-json-schema/master/master-standalone/deployment.json: Reference {0xc420152780 {[]} false true false false true} must be canonical
> * Problem loading schema from the network at /root/output/service-ctrl/kubernetes-json-schema/master/master-standalone/service.json: Reference {0xc420152880 {[]} false true false false true} must be canonical

These two files do exist. Does anyone know why?

Add kubeval to $PATH in docker images

I was hoping to add kubeval to the $PATH in the docker images, because I was trying to use it in a CI job and found it unintuitive that it isn't on the path. I would keep the binary at /kubeval as well, in order to maintain backwards compatibility. If you are OK with this change I'll submit a PR.
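A minimal sketch of what the proposed image change could look like — the base image and paths here are assumptions for illustration, not the project's actual Dockerfile; the point is keeping /kubeval while also exposing the binary on $PATH:

```dockerfile
# Hypothetical sketch, not the actual kubeval Dockerfile.
FROM alpine:3.7
# Keep the historical location for backwards compatibility...
COPY kubeval /kubeval
# ...and also put it on $PATH via a symlink.
RUN ln -s /kubeval /usr/local/bin/kubeval
ENTRYPOINT ["kubeval"]
```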

Unable to validate due to problem loading schema from Github

With the current latest kubeval 0.7.1 (from Homebrew) I get the following error:

$ kubeval example.yaml
1 error occurred:

* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/master-standalone/issuer.json: Could not read schema from HTTP, response status is 404 Not Found

Checking the mentioned URL in a browser indeed returns a 404 as well.

invalid service/deployment name passes kubeval

I ran a yaml file through kubeval and didn't get any errors. Later, when I tried to apply the config to Minikube, I got the errors below:


Error from server (Invalid): error when creating "kubernetes.yaml": Service "Foo" is invalid: metadata.name: Invalid value: "Foo": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name',  or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')

Error from server (Invalid): error when creating "kubernetes.yaml": Deployment.apps "Bar" is invalid: metadata.name: Invalid value: "Bar": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Changing the service/deployment 'metadata' > 'name' per the errors allowed me to apply the yaml file to Minikube.

Kubeval should check service/deployment configurations against the regex listed in the above error.
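Such a check could use the same regexes kubectl quotes in the errors above. A minimal Python sketch (not kubeval's implementation):

```python
import re

# Regexes copied verbatim from the kubectl error messages above.
DNS_1035_LABEL = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")  # Service names
DNS_1123_SUBDOMAIN = re.compile(
    r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*"
)  # Deployment names

def is_valid_service_name(name):
    # fullmatch anchors the regex to the whole name.
    return DNS_1035_LABEL.fullmatch(name) is not None

def is_valid_deployment_name(name):
    return DNS_1123_SUBDOMAIN.fullmatch(name) is not None

print(is_valid_service_name("Foo"))      # False: upper-case names are rejected
print(is_valid_deployment_name("bar"))   # True
```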

Kubeval fails validation when yaml contains multi-line scalar

Is it expected behavior for kubeval to fail validation on a ConfigMap that contains a multi-line scalar, e.g. (from the docs):

data:
  game.properties: |
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30

If that data is crunched into JSON, it validates fine. From a readability/maintainability standpoint it's much easier to be able to use the multi-line scalar, especially with configmaps like nginx. Is this something we will need to use a custom schema for?

The actual error is:

* Failed to decode YAML from my-configmap.yml
Exited with code 123

Improperly validates DaemonSet

Example

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  replicas: 2
  template:
    spec:
      containers:
      - image: nginx
        name: nginx

kubeval accepts the YAML:

kubeval test.yaml
The document test.yaml contains a valid DaemonSet

replicas is invalid for a DaemonSet

kubectl create -f test.yaml
error: error validating "test.yaml": error validating data: found invalid field replicas for v1beta1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

I didn't look at the code (I'm assuming it's a straightforward fix) but wanted to report it in case I don't get to it.

kubeval doesn't recognize an extra field at the root level

I have a sample file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    batman: true
    io.kompose.service: redis-master
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: redis-master
    spec:
      containers:
      - image: gcr.io/google_containers/redis:e2e
        name: redis-master
        ports:
        - containerPort: 6379
      restartPolicy: Always

in which batman is an extra key, which kubeval correctly catches in strict mode:

$ kubeval deployment.yaml --strict
The document redis-master-deployment.yaml contains an invalid Deployment
---> batman: Additional property batman is not allowed

But if I provide an extra key superman at the root level, as below,

apiVersion: extensions/v1beta1
kind: Deployment
superman: true
metadata:
  labels:
    io.kompose.service: redis-master
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: redis-master
    spec:
      containers:
      - image: gcr.io/google_containers/redis:e2e
        name: redis-master
        ports:
        - containerPort: 6379
      restartPolicy: Always

kubeval fails to catch it:

$ kubeval deployment.yaml --strict
The document redis-master-deployment.yaml contains a valid Deployment

Could not open file fixtures/*

Issuing the command

docker run -it -v `pwd`/Files:/fixtures garethr/kubeval fixtures/*

throws a 'Could not open file fixtures/*' error on my MacBook.
Choosing a specific file works like a charm.
So I think the problem is with the asterisk.
Any hints?

Make kubeval consumable as a library

As well as being a CLI tool, kubeval should be available as a library, so other Go tools could easily integrate the core functionality.

  • Move validate into a separate package
  • Have main public method return a struct, rather than bool
  • Move all the output to the CLI

Provide examples of using kubeval in CI

kubeval can be used to validate config files in a CI system; it would be useful to provide an example of this for different tools:

  • Travis example
  • Ksonnet example
  • Helm example

DaemonSet schema not found when using --kubernetes-version=1.8.5 and --strict

I may be misunderstanding the differences between the various flavors of schemas in https://github.com/garethr/kubernetes-json-schema but I was surprised that when running kubeval with --kubernetes-version=1.8.5 --strict that the schema for DaemonSet could not be found:

1 error occurred:

* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.5-standalone-strict/daemonset.json: Could not read schema from HTTP, response status is 404 Not Found

Kubeval fails when resource contains '----'

I have a configmap which contains a certificate file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeval-test-config
data:
  my.crt: |-
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----

Running kubeval against this file fails with the following error:

$ kubeval a.configmap.yaml
1 error occurred:

* Missing a kind key

I believe this is due to the multi-document YAML support (#9) erroneously splitting the document on the certificate's lines of dashes.
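One way to avoid the false split is to treat only lines consisting solely of --- as document separators. A Python sketch of that idea (not kubeval's actual implementation):

```python
import re

def split_documents(manifest):
    # Split only where an entire line is "---" (plus optional trailing
    # whitespace), so "-----BEGIN CERTIFICATE-----" inside a block
    # scalar is not mistaken for a separator.
    return re.split(r"(?m)^---\s*$", manifest)

manifest = """apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeval-test-config
data:
  my.crt: |-
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----
"""
print(len(split_documents(manifest)))  # 1: the certificate lines are not separators
```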

Resources requests/limits fails to validate when providing the number of CPUs as integers

I have a Job with the following resources spec for its containers:

          resources:
            limits:
              cpu: 2
              memory: "12G"
            requests:
              cpu: 1
              memory: "8G"

I'm getting the following validation error:

./bin/linux/amd64/kubeval ec.10395.yaml 
The document ec.10395.yaml contains an invalid Job
---> spec.template.spec.containers.0.resources.requests: Invalid type. Expected: [string,null], given: integer
---> spec.template.spec.containers.0.resources.limits: Invalid type. Expected: [string,null], given: integer

The error doesn't refer to the cpu key for some reason. Furthermore, kubectl happily creates my job correctly and the documentation itself uses an integer for cpus as well as values like "200m".
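Kubernetes quantities are an int-or-string type: plain integers and suffixed strings like "200m" or "8G" are both accepted. A hedged Python sketch of such a check — the suffix list here is illustrative, not the full Kubernetes quantity grammar:

```python
import re

# Illustrative subset of Kubernetes quantity suffixes, not the full grammar.
QUANTITY = re.compile(r"^[0-9]+(\.[0-9]+)?(m|k|Ki|M|Mi|G|Gi|T|Ti)?$")

def is_valid_quantity(value):
    # Plain integers are valid quantities...
    if isinstance(value, int):
        return True
    # ...as are strings matching the (simplified) quantity pattern.
    return isinstance(value, str) and QUANTITY.match(value) is not None

print(is_valid_quantity(2))       # True
print(is_valid_quantity("200m"))  # True
print(is_valid_quantity("8G"))    # True
```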

Raw: Raw is required

I am executing the command ./kubeval --openshift --kubernetes-version 1.5.0 yaml/* and getting the error message:

The document yaml/deployment-template.yaml contains an invalid Template
---> Raw: Raw is required
---> Raw: Raw is required
---> Raw: Raw is required

Sample file :

apiVersion: v1
kind: Template
metadata:
  name: bar
parameters:
- name: foo
  displayName: The name of the REST application. It will be part of the exposed route.
  value: bar
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    labels:
      app: ${foo}
    name: ${foo}
  spec:
    replicas: 1
    selector:
      app: ${foo}
      deploymentconfig: ${foo}
    template:
      metadata:
        labels:
          app: ${foo}
          deploymentconfig: ${foo}
      spec:
        containers:
        - env:
          - name: LOG_LEVEL
            value: DEBUG
          image: ${foo}
          imagePullPolicy: Always
          name: ${foo}
          livenessProbe:
            httpGet:
              path: /api/healthcheck
              port: 8080
            initialDelaySeconds: 300
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/healthcheck
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 5
          resources:
            requests:
              cpu: 500m
              memory: 500Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          ports:
          - containerPort: 8080
            name: http
            protocol: TCP
          - containerPort: 8778
            name: jolokia
            protocol: TCP
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    test: false
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: ${foo}
    name: ${foo}
  spec:
    tags:
    - from:
        kind: DockerImage
        name: ${foo}:latest

Without TTY docker on CI

I am trying to use kubeval in docker without a TTY to validate YAML in CI, but it gives me the error "The document stdin appears to be empty". How can I use it without a TTY?

clusterRoleBinding files fail validation

I get
---> apiGroup: Additional property apiGroup is not allowed
when running
cat "$file" | kubeval -f="$file" --strict
on a clusterRoleBinding.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
subjects:
- kind: User
  name: user001
  apiGroup: ""
roleRef:
  # this is referring to the default ClusterRole 'cluster-admin'
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Kubeval fails for validation

here is sample service file,

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: httpd
  name: INVALID-e_f
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: httpd
  type: INVALID
status:
  loadBalancer: {}

after running kubeval, it reports the file as valid, but it's not:

$ kubeval service.yml 
The document docker-compose.yml contains a valid Service

Not catching incorrect property names

To reproduce:

  1. Create a deployment.yml with the following:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rfq-explorer
spec:
  type: RollingUpdate

  2. Run kubeval deployment.yml
  3. Observed: The document deployment.yml contains a valid Deployment

Expected:

---> spec.type: Unknown property. Or something along those lines

Ports and pod counts as strings

Hi @garethr, let me thank you again for this!

I've run kubeval against our specs and noticed that it complains about Expected: string, given: integer for several fields that Kubernetes happily accepts:

--> spec.template.spec.containers.0.livenessProbe.httpGet.port: Invalid type. Expected: string, given: integer
--> spec.strategy.rollingUpdate.maxSurge: Invalid type. Expected: string, given: integer
--> spec.strategy.rollingUpdate.maxUnavailable: Invalid type. Expected: string, given: integer

Any thoughts as to what is the problem here? env values which are integers are rightly flagged by kubeval since those are not accepted.

Thanks!

Error Reports can be much more specific

I have a yaml file containing configurations for multiple Kubernetes resources

when i run kubeval

I get a one line report!:

* Missing a kind key

and it does not specify where in the file the error occurred

Unexpected strict mode failures

Hi,

Since about 10 days ago we are getting failures validating manifests which have not changed in weeks, specifically with horizontalPodAutoscaler:

$ kubeval --strict horizontalPodAutoscaler.yaml
The document horizontalPodAutoscaler.yaml contains an invalid HorizontalPodAutoscaler
---> targetCPUUtilizationPercentage: Additional property targetCPUUtilizationPercentage is not allowed
$ kubeval --version
Version:      0.7.0
Git commit:   2fcbe11d06671ae19210067529cb0fecf336f630
Built:        2017-09-16 04:46:25 UTC
Go version:   go1.8.3
OS/Arch:      linux/amd64

This is a property which has indeed been dropped from the master schemas but kubeval is failing even when specifying our actual k8s version (e.g. --kubernetes-version 1.7.8), which should accept it: https://github.com/garethr/kubernetes-json-schema/blob/master/v1.7.8-standalone-strict/horizontalpodautoscalerspec.json#L40

Does not catch malformed container names

kind: Pod
metadata:
  name: demo
  labels:
    role: myrole
spec:
  containers:
    - name: bad_name

kubectl rejects this with:

a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')

panic: interface conversion: interface {} is nil, not string

kubeval v0.7.0 panics when validating the file below:

# test.yml
kind:

The output:

panic: interface conversion: interface {} is nil, not string

goroutine 1 [running]:
github.com/garethr/kubeval/kubeval.validateResource(0xc420088d80, 0x5, 0x205, 0x7ffc6055df6f, 0x11, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/garethr/kubeval/kubeval/kubeval.go:131 +0x9fd
github.com/garethr/kubeval/kubeval.Validate(0xc420088d80, 0x5, 0x205, 0x7ffc6055df6f, 0x11, 0x0, 0x0, 0x0, 0xc4200cb900, 0x0)
        /go/src/github.com/garethr/kubeval/kubeval/kubeval.go:174 +0x1e9
github.com/garethr/kubeval/cmd.glob..func1(0xa94160, 0xc420112e40, 0x1, 0x1)
        /go/src/github.com/garethr/kubeval/cmd/root.go:67 +0x1f6
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).execute(0xa94160, 0xc42000c110, 0x1, 0x1, 0xa94160, 0xc42000c110)
        /go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:654 +0x299
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xa94160, 0xc4200cb680, 0x0, 0x0)
        /go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:729 +0x339
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).Execute(0xa94160, 0x0, 0x6e)
        /go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:688 +0x2b
github.com/garethr/kubeval/cmd.Execute()
        /go/src/github.com/garethr/kubeval/cmd/root.go:99 +0x31
main.main()
        /go/src/github.com/garethr/kubeval/main.go:6 +0x20
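The crash corresponds to reading the kind value when it is null: a bare "kind:" in YAML parses to a null, not a string, so an unchecked type assertion panics. Sketched here in Python rather than the project's Go, as an illustration of the checked lookup:

```python
def determine_kind(document):
    # A bare "kind:" parses to None; check the type before using it
    # instead of assuming a string (the unchecked assumption that panics).
    kind = document.get("kind")
    if not isinstance(kind, str) or not kind:
        raise ValueError("Missing a kind key")
    return kind

print(determine_kind({"kind": "Pod"}))  # Pod
try:
    determine_kind({"kind": None})
except ValueError as e:
    print(e)  # Missing a kind key
```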

Include schema into binary/docker image for offline validation

It would be great if the project had a version of the binary or docker image that included the schema needed for validation and didn't do any network calls as part of the execution.

In our use case, we want to validate our k8s templates offline with the docker image, but currently it needs network access to do anything.

Let me know if this would be feasible, maybe some references to where you download external dependencies at runtime?

Thanks!

Doesn't catch invalid `env` indentation

The following spec passes kubeval but fails kubectl apply with the error "ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field "env" in io.k8s.api.core.v1.PodSpec" as the env spec doesn't have the right indentation.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: my-image:latest
            command: ["...]
          env:
          - name: VAR
            value: test

Published SHA256 are wrong due to all-caps

The sha256 checksums published with the releases are in all-caps. For example, release 0.7.1 shows the binary kubeval-linux-amd64.tar.gz with a sha256 of 8259D462BD19E5FC2DB2EA304E51ED4DB928BE4343F6C9530F909DBA66E15713 but when attempting to check the tarball:

openssl sha -sha256 kubeval-linux-amd64.tar.gz | awk '{print $2}'
8259d462bd19e5fc2db2ea304e51ed4db928be4343f6c9530f909dba66e15713

which uses lowercase a-f.
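Since hex digests differ only in letter case, a checksum comparison should normalize case before comparing. A minimal Python sketch, using the 0.7.1 digest quoted above:

```python
def digests_match(published, computed):
    # Hex digests are case-insensitive; normalize both sides.
    return published.lower() == computed.lower()

# Published (upper-case) and locally computed (lower-case) digests
# for kubeval-linux-amd64.tar.gz, as quoted above.
published = "8259D462BD19E5FC2DB2EA304E51ED4DB928BE4343F6C9530F909DBA66E15713"
computed = "8259d462bd19e5fc2db2ea304e51ed4db928be4343f6c9530f909dba66e15713"
print(digests_match(published, computed))  # True
```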

cannot validate CustomResourceDefinitions

Apologies if this should be created against https://github.com/garethr/kubernetes-json-schema instead.

Attempting to validate a apiextensions.k8s.io/v1beta1 CustomResourceDefinition resource fails as the schema file in $VERSION-standalone is empty:

1 error occurred:

* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.5-standalone/customresourcedefinition.json: EOF
[mattbrown@mattmbp kubernetes-json-schema]$ wc -c v1.*-standalone/customresourcedefinition.json
       0 v1.8.0-standalone/customresourcedefinition.json
       0 v1.8.1-standalone/customresourcedefinition.json
       0 v1.8.2-standalone/customresourcedefinition.json
       0 v1.8.3-standalone/customresourcedefinition.json
       0 v1.8.4-standalone/customresourcedefinition.json
       0 v1.8.5-standalone/customresourcedefinition.json
       0 v1.8.6-standalone/customresourcedefinition.json
       0 v1.9.0-standalone/customresourcedefinition.json
       0 total

Is this intentional? It seems impossible in the current form to lint any CustomResourceDefinitions. The kubernetes-json-schema repo does have non-0 byte versions of the schema in the non-standalone directories (i.e. in /v1.8.0/) but kubeval is hardcoded to load the -standalone flavor of each schema.

Docker image

Would it make sense to make travis publish a Docker image for every new release? That way it would be even easier to install and use kubeval.

No schema for kubernetes 1.8.7

There is 404 error under

https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.7-standalone/deployment.json

Error from kubeval binary:

* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.7-standalone/deployment.json: Could not read schema from HTTP, response status is 404 Not Found

piping from helm template fails with empty objects

Suppose you have a chart feature that's conditionally included, like say cronjobs:

{{- range $v := $.Values.cronJobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
...
{{- end }}

If you have a service not using this feature (not supplying any Values.cronJobs), then helm template will output (among other things):

---
# Source: base/templates/cronjobs.yaml

as the result for that template file, and kubeval will complain about * Missing a kind key for this part of the resource.

Is there a sensible way to ignore these failures? Is this maybe a helm template bug?
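One workaround is to skip YAML documents that contain only blank lines and comments before validating. A sketch of that filter (an assumption about how one might pre-process helm template output, not kubeval's behavior):

```python
def is_empty_document(doc):
    # A document is "empty" if every line is blank or a comment.
    return all(
        not line.strip() or line.lstrip().startswith("#")
        for line in doc.splitlines()
    )

doc = """
# Source: base/templates/cronjobs.yaml
"""
print(is_empty_document(doc))  # True: safe to skip before validation
```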

Issue with commented out array values

For a downloaded Kubernetes Dashboard spec, the k8s API happily accepts this:

    spec:
      containers:
        <...>
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port

Kubeval, however, doesn't like it:

The document ../../contentful/cf-infra-stacks/kubeconfigs/staging/us-east-1/delivery-k8s-002/kubernetes-dashboard/dashboard.yaml is not a valid Deployment
--> spec.template.spec.containers.0.args: Invalid type. Expected: array, given: null

missing kind key

I am trying to use the kubeval library in my project,
calling the validate function:
kubeval.Validate([]byte("v1.7.2"), "D:/Playground/nginx-deployment.yaml")

This throws the following error:

Missing a kind key

What could be the reason for the failure? Am I calling the validate function in the right way?

the deployment file is valid:
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

XXXX.yaml contains an invalid HorizontalPodAutoscaler

Hi, I have a (correct) horizontal pod autoscaler definition, running on Kubernetes 1.7.11

When I launch kubeval with -strict, even with -v 1.7.11, I get:
core-api.yaml contains an invalid HorizontalPodAutoscaler
---> targetCPUUtilizationPercentage: Additional property targetCPUUtilizationPercentage is not allowed

But the property is correct. Removing -strict makes the file pass.
I need -strict because I must be able to detect YAML containing values kubectl would reject (e.g. a property that really is not supported).

Any ideas on why it tells me it is wrong? This is preventing us from adding this very useful script to our CI/CD release pipeline.

how to handle custom resources

I recently implemented https://github.com/jetstack/cert-manager, which comes with a few custom resources.

Now my CI fails using kubeval with https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.6-standalone/clusterissuer.json obviously, because why would you have a definition for that?

What is the best way to handle this? My CI/CD checks every YAML file and then tries to validate it, hence why I am getting this. I could exclude it there with some trickery, but I thought there might be a better way, so I'm asking here.

Thanks

Support for document separators

Some of our specs use a valid multi-document YAML format:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-access
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  - nonResourceURLs: ["*"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: kubelet-role-binding
subjects:
- kind: User
  name: kubelet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin-access

It seems like the YAML library used in kubeval doesn't support this yet: https://github.com/go-yaml/yaml#compatibility, so at the very least kubeval should warn the user that this is not supported (in testing, only the first YAML document in the file is parsed and the rest silently discarded).

creationTimestamp=null should be valid. Regression

My local minikube by default has a deployment kube-dns in kube-system namespace. If I get it as json and try to validate with kubeval without changing anything, I get:

kubeval kube-dns.json
The document kube-dns.json contains an invalid Deployment
---> spec.template.metadata.creationTimestamp: Invalid type. Expected: string, given: null

I guess kubeval thinks it's invalid because of creationTimestamp definition. But having creationTimestamp as null must be valid.

This looks like regression #16

kubeval version 0.7.0

cannot catch an invalid field name for a main deployment key

Hi, I made a fake YAML like below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: lq
    load_balancer: AAA
    name: AAA
    namespace: algorithm
  name: AAA
  namespace: algorithm
**specASDUWIUE**:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      load_balancer: AAA

The highlighted key specASDUWIUE should be spec, but this tool still passes the validation.

docker image fails when -it not specified

I'm trying to use the kubeval docker image with TeamCity.
When -it is not specified, I get the following error:
The document stdin appears to be empty

The command I used:
docker run --rm -v $(pwd)/namespaces:/namespaces garethr/kubeval:offline namespaces/*
