kubenix's Issues

Helm `lookup`s are not executed

While doing some work on the Piped Helm Chart, I realised that it relies on Helm's lookup function, which by default is not executed when helm template runs; it requires the --dry-run=server flag to be passed to Helm.

Ideally, we would be able to enable this flag only when necessary, perhaps via a dryRunServer or enableServer(?) parameter in https://github.com/hall/kubenix/blob/main/lib/helm/chart2json.nix.

I don't super love the idea of using this since it could break the reproducibility Nix offers, but some charts do rely on it and are therefore incompatible with Kubenix.
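
For illustration, a sketch of how such an opt-in flag might be plumbed through; the dryRunServer name and the surrounding interface are assumptions, not the actual chart2json.nix code, and the exact helm invocation is left aside:

# hypothetical plumbing only: append the flag to whatever helm command
# chart2json.nix already builds, but only when explicitly requested
{ lib, helmCommand, dryRunServer ? false }:
  helmCommand + lib.optionalString dryRunServer " --dry-run=server"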

Deprecated: kubectl apply will no longer prune non-namespaced resources by default

Running the built-in deployer yields the following warning:

# result/bin/kubenix
W1128 21:21:52.260311  310470 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.

Relevant documentation: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune

Running this on NixOS 23.05 with the k3s service.

generate module options docs

Would be nice to create a static site that lists module options.

I've got some WIP stuff on the linked branch (see sidebar) but there's currently a possible infinite recursion error that causes a segfault when evaluating the generated Kubernetes modules (e.g., ./modules/generated/v1.24.nix).

Preview of the current state, only generating options for the docker module:

[screenshot: generated options for the docker module]
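
For reference, a minimal sketch of one way a static options listing could be produced with nixpkgs' nixosOptionsDoc, assuming the generated modules can be evaluated with lib.evalModules on their own (which is exactly what the infinite recursion currently prevents); the path and arguments are illustrative:

{ pkgs, lib }:
let
  # evaluate the generated kubernetes module to obtain its option tree
  eval = lib.evalModules {
    modules = [ ./modules/generated/v1.24.nix ];
  };
in
  # render the options as CommonMark for a static docs site
  (pkgs.nixosOptionsDoc { options = eval.options; }).optionsCommonMark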

`nix run .#kubenix`, `error: cannot coerce null to a string`

I am getting the following error when trying to follow the samples in the docs. I was attempting to do everything in a single file as I learned how things work. I've included the flake I created as well.

System:

  • NixOS - unstable - ref: ad57eef4ef0659193044870c731987a6df5cf56b
  • Nix 2.18.2
error:
       … while calling the 'derivationStrict' builtin

         at /builtin/derivation.nix:9:12: (source not available)

       … while evaluating derivation 'kubenix'
         whose name attribute is located at /nix/store/rbiwpy57k5wag6mvac1s0bh482q79wm1-source/pkgs/stdenv/generic/make-derivation.nix:303:7

       … while evaluating attribute 'buildCommand' of derivation 'kubenix'

         at /nix/store/rbiwpy57k5wag6mvac1s0bh482q79wm1-source/pkgs/build-support/trivial-builders/default.nix:87:14:

           86|       enableParallelBuilding = true;
           87|       inherit buildCommand name;
             |              ^
           88|       passAsFile = [ "buildCommand" ]

       error: cannot coerce null to a string

flake.nix

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/9401a0c780b49faf6c28adf55764f230301d0dce";
    kubenix = {
      url = "github:hall/kubenix";
      inputs = {
        nixpkgs.follows = "nixpkgs";
      };
    };
    systems.url = "github:nix-systems/default-linux";
    flake-utils = {
      url = "github:numtide/flake-utils";
      inputs.systems.follows = "systems";
    };
  };

  outputs = { self, nixpkgs, flake-utils, kubenix, ... } @ inputs:
  flake-utils.lib.eachDefaultSystem (system:
    let pkgs = nixpkgs.legacyPackages.${system}; in
    {
      devShells.default = pkgs.mkShell {
        buildInputs = [
          pkgs.nixd
          pkgs.jq
        ];
      };

      modules.default = { kubenix, ... }: {
        imports = [ kubenix.modules.k8s ];
        kubernetes.resources.pods.example.spec.containers.nginx.image = "nginx";
      };

      packages = {
        default = (kubenix.evalModules.${system} {
          module = self.modules.${system}.default;
        }).config.kubernetes.result;

        kubenix = (kubenix.packages.${system}.default.override {
          module = self.modules.${system}.default;
        });
      };
    }
  );
}

Kubenix-managed namespaces

I would like to have Kubenix create namespaces, which I then intend to have Kubenix deploy its resources into via kubernetes.namespace. However, if I do this, I get: Error from server (NotFound): namespaces "kubenix" not found. A workaround I can think of is deploying the namespaces manually with kubectl and then using Kubenix for the rest. Is there perhaps a better way?
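
For reference, the configuration in question looks roughly like this (a sketch built from options used elsewhere in these issues; names are illustrative):

{
  # the namespace Kubenix should create...
  kubernetes.resources.namespaces.kubenix = { };
  # ...and also deploy the remaining resources into
  kubernetes.namespace = "kubenix";
  kubernetes.resources.pods.example.spec.containers.nginx.image = "nginx";
}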

Generated kubernetes module requires `protocol` in `Container.ports` which is not required by spec

This line

type = (types.nullOr (coerceAttrsOfSubmodulesToListByKey "io.k8s.api.core.v1.ContainerPort" "name" [ "containerPort" "protocol" ]));

requires the protocol key to be present in all Pod specifications, including helm-generated ones. For example, the following kubenix resource definition

resources.pods.hello.spec = {
  containers = [
    {
      name = "hello";
      image = "hello";
      ports = [
        {containerPort = 1234;}
      ];
    }
  ];
};

fails with the following error

       error: attribute 'protocol' missing

       at /nix/store/jznzhivlpsahv83fsvkp866bda50nd23-source/modules/generated/v1.27.nix:82:28:

           81|               (key:
           82|                 if isAttrs value.${key}
             |                            ^
           83|                 then toString value.${key}.content

It should not fail when protocol is missing from the ports.

There are certainly other similar issues with e.g. EphemeralContainer which need to be addressed as well.

Workaround

Currently I have just deleted the "protocol" from

type = (types.nullOr (coerceAttrsOfSubmodulesToListByKey "io.k8s.api.core.v1.ContainerPort" "name" [ "containerPort" "protocol" ]));
and everything is working smoothly.

Backstory

I ran into this issue while trying to deploy prometheus as a Helm chart. After a bit of debugging I noticed that the prometheus chart doesn't specify the port protocol: https://github.com/prometheus-community/helm-charts/blob/d628ebad62f119ef2985319a5f7a1dd5bee1863b/charts/prometheus/templates/deploy.yaml#L157
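
As an aside (my own note, not part of the chart): for hand-written resources the error can also be sidestepped by setting the protocol explicitly, though that obviously doesn't help with Helm-rendered manifests like the one above:

resources.pods.hello.spec.containers = [
  {
    name = "hello";
    image = "hello";
    # explicitly setting protocol satisfies the generated module's merge key
    ports = [ { containerPort = 1234; protocol = "TCP"; } ];
  }
];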

Validate custom resources without having to manually define modules for them

Currently, if custom resources are being used, they have to be specified manually using kubernetes.customTypes.

Furthermore, custom resources won't be validated at build time without manually defining a nix module in kubernetes.customTypes.*.module. This is redundant since CRDs already define the schema.

It would be nice to not have to do any of the above. Some potential avenues:

  • Generate nix module types automatically during build, or provide a tool that does so (the former would probably require IFD?)
  • Use a tool such as kubeconform to validate resources

secrets management

Seems the general nix approach to secrets management is to read a file at (app) runtime. The easiest approach here is probably kustomize's secretGenerator (there are also tools like helm-secrets or vals, which I'm open to but will leave for later consideration, as I'd rather not add them just for the sake of having them).

The outcome of this issue should be both the ability to read secrets from an arbitrary file, without writing anything to the store, and a section in the docs on how to do so. Given my current leaning toward kustomize (as it's built into kubectl anyway), the former half will likely occur as part of a more general task to support kustomize.
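
For context, the vals-style pattern that appears in other issues here keeps only a reference string in the built manifest; the secret value itself is resolved from a file at apply time, e.g.:

{
  # "ref+sops://..." is expanded by vals at apply time, not at build time,
  # so only this reference string (never the secret value) reaches the store
  kubernetes.resources.secrets.example.stringData.password =
    "ref+sops://secrets.yaml#/example/password";
}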

Remove assumption that custom resources have a spec field

kubernetes.api.resources options generated as a result of specifying kubernetes.customTypes config values currently assume that the custom types have a spec field.

Though it may be convention, AFAIK it is not a requirement that CRDs have such a field. Here's an example in the wild. And currently there is no clean way to specify different toplevel options, such as the configuration field from the provided example.

FWIW I was able to add additional toplevel options using kubernetes.api.defaults, though this is verbose and probably not its intended use, and as such I consider it an ugly hack :-)

More importantly, AFAICT there is no way to prevent a spec value of {} from showing up in the output, which is invalid if the CRD doesn't have spec in its schema.

duplicate `containerPort` with distinct `protocol` are not rendered correctly

Exposing the same containerPort on both TCP and UDP does not work, as only one of the two ports ends up in the manifest; see the example below.

After some digging, I found out that the x-kubernetes-patch-merge-key for ports in io.k8s.api.core.v1.Container in the swagger spec consists of only containerPort, not protocol. Since the generator uses this key to create the module, such ports get dropped during module evaluation.

Since this is a known issue in kubernetes, kubenix probably shouldn't use x-kubernetes-patch-strategy for ports to generate the k8s manifests.

Example

nix build . on the flake

{
  inputs.kubenix.url = "github:hall/kubenix";

  outputs = {self, kubenix, ... }@inputs: let
    system = "aarch64-linux";
  in {
    packages.${system}.default = (kubenix.evalModules.${system} {
      module = { kubenix, ... }: {
        imports = with kubenix.modules; [k8s];
        kubernetes.resources.deployments.test.spec = {
          selector = {};
          template.spec.containers.test = {
            ports = [
              { containerPort = 53;
                protocol = "TCP";
              }
              { containerPort = 53;
                protocol = "UDP";
              }
            ];
          };
        };
      };
    }).config.kubernetes.resultYAML;
  };
}

only produces

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kubenix/k8s-version: '1.26'
    kubenix/project-name: kubenix
  name: test
spec:
  selector: {}
  template:
    spec:
      containers:
      - name: test
        ports:
        - containerPort: 53
          protocol: TCP

and is missing

        - containerPort: 53
          protocol: UDP

ConfigMap data keys with leading underscore get removed

I need to populate a ConfigMap with a key that starts with an underscore, but the resulting ConfigMap doesn't contain the expected key.

Example:

  kubernetes.resources.configMaps.foo.data._FOO = "_bar";

It seems this line is the culprit:

then mapAttrs (_n: moduleToAttrs) (filterAttrs (n: v: v != null && !(hasPrefix "_" n)) value)

Do you know what the purpose of this filter is?

I tried going back through the git history, but I reached cbf84e2 (the first commit).

the order of `initContainers` is not preserved

The initContainers of a Deployment are sorted by name instead of being kept in the order they are defined. This affects e.g. the Gitea Helm Chart, where the order of the init containers is relevant.

Example

The kubenix configuration

{
  inputs.kubenix.url = "github:hall/kubenix";

  outputs = {self, kubenix, ... }@inputs: let
    system = "aarch64-linux";
  in {
    packages.${system}.default = (kubenix.evalModules.${system} {
      module = { kubenix, ... }: {
        imports = with kubenix.modules; [k8s];
        kubernetes.resources.deployments.test.spec = {
          selector = {};
          template.spec = {
            initContainers = [
              { name = "c"; }
              { name = "b"; }
              { name = "a"; }
            ];

            containers = [{ name = "test"; }];
          };
        };
      };
    }).config.kubernetes.resultYAML;
  };
}

produces

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kubenix/k8s-version: '1.26'
    kubenix/project-name: kubenix
  name: test
spec:
  selector: {}
  template:
    spec:
      containers:
      - name: test
      initContainers:
      - name: a
      - name: b
      - name: c

Note that the order of the initContainers in the generated yaml is a, b, c, whereas it is c, b, a in the nix definition.

vals cannot read secrets from sops when using gpg

I've been getting these errors when running kubenix to apply my changes:

Error getting data key: 0 successful groups required, got 0

I've spent some time debugging today and I found that c00c78b caused us to override PATH with just the dependencies for the kubenix script (vals and kubectl). However, this breaks decrypting secrets from sops with gpg, as vals cannot use gpg from the PATH anymore.

I believe the best solution here is to suffix the PATH variable instead of overriding it, so I'll put up a PR for that.
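
A rough sketch of that change, assuming the script is wrapped with makeWrapper (the real kubenix derivation and script path will differ):

pkgs.runCommand "kubenix" { nativeBuildInputs = [ pkgs.makeWrapper ]; } ''
  # append the script's dependencies instead of replacing PATH outright,
  # so tools like gpg stay reachable for vals
  makeWrapper ${./kubenix.sh} $out/bin/kubenix \
    --suffix PATH : ${pkgs.lib.makeBinPath [ pkgs.kubectl pkgs.vals ]}
''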

fully qualify objects

As it stands, objects do not need to specify a namespace, which makes creating resources with the same name but in different namespaces a bit awkward. Since config keys cannot clash, metadata.name must be set explicitly and the duplicate config name (i.e., example2 below) becomes something arbitrary and ignored, e.g.,

{
  kubernetes.resources.pods = {
    example.metadata.namespace = "default";
    example2.metadata = {
      name = "example";
      namespace = "other";
    };
  };
}

Whereas, if we treated an object's namespace the same way we treat its name, it would become something like:

{
  kubernetes.namespaces = {
    default.resources.pods.example = {};
    other.resources.pods.example = {};
  };
}

The new namespaces key is useful (as opposed to just kubernetes.${namespace}) to prevent clashing with a namespace named resources or helm, for example. And non-namespaced resources can remain defined in the old kubernetes.resources location.

{
  kubernetes = {
    namespaces.default.resources.pods.example = {};
    resources.persistentvolumes.example = {};
  };
}

Along these lines, it might also be useful to create a context key in order to move toward multi-cluster deployments.

{
  kubernetes.${context}.namespaces.${namespace}.resources = {};
}

So the full config, with unambiguous namespace and context, becomes:

{
  kubernetes.my-cluster = {
    kubeconfig = "/path/to/kubeconfig";
    namespaces.default.resources = {}; # namespaced
    resources = {};                    # non-namespaced
  };
}

This is a bit more verbose but I think it might be a decent step towards deterministic executions, as it no longer relies on the user's environment to determine the current context/namespace (obviously we're still pulling arbitrary kubeconfig files as string values, but those contain sensitive data so should be handled with a proper nix secrets manager).

add examples to docs site

The docs site should include a handful of examples:

  • simple pod creation
    Enough to get someone started without overwhelming information.
  • image build
    Build (and deploy) an image instead of using a pre-existing one.
  • full deployment
    Create a full deployment with some optional resources such as a ConfigMap.
      • secrets management
        Provide options to create a Secret without leaking values into the nix store.
      • testing
        Options for testing and some example setups.
  • helm chart
    Pull and deploy a helm chart with custom values.

In a future iteration, each example should include tests registered as flake checks so that they can be verified in CI.

Talk for NixCon Berlin

Not sure if there is a better channel for this, but I am thinking of giving a talk at NixCon Berlin about Kubenix and how I use it in my homelab. Mainly to spread some awareness because I like this project :)

Some things I am thinking of presenting:

  • How the k8s resource definitions are generated from the swagger API
  • How Helm charts are handled
  • How I am using it personally in my homelab
  • Maybe a small demo? :)

I am not intimately familiar with the inner workings of the code base, so it will be a research exercise for me as well. Any ideas, help, or suggestions are appreciated!

error: undefined variable 'system' when trying to initialise example with nix v2.21.0

steps to reproduce:

mkdir test
cd test
echo '{
  inputs.kubenix.url = "github:hall/kubenix";
  outputs = {self, kubenix, ... }@inputs: let
    system = "x86_64-linux";
  in {
    packages.${system}.default = (kubenix.evalModules.${system} {
      module = { kubenix, ... }: {
        imports = [ kubenix.modules.k8s ];
        kubernetes.resources.pods.example.spec.containers.nginx.image = "nginx";
      };
    }).config.kubernetes.result;
  };
}' > flake.nix
git init
git add flake.nix
nix flake lock

This results in the following error:

warning: Git tree '/code/kubenix-hall-test' is dirty
error:
       … while updating the lock file of flake 'git+file:///code/kubenix-hall-test'

       … while updating the flake input 'kubenix'

       error: undefined variable 'system'
       at /nix/store/q9f3b4mmzf33q1gmc5a3pqvg3g5hvpa8-source/flake.nix:62:36:
           61|         default = pkgs.callPackage ./pkgs/kubenix.nix {
           62|           inherit (self.packages.${system});
             |                                    ^
           63|           evalModules = self.evalModules.${pkgs.system};

which points at this line:

inherit (self.packages.${system});

This line seems to not actually do anything and was previously ignored; however, in the latest version of Nix it is an error.

document some deployment options

One of kubenix's main goals is to create resource manifests. While there's a basic CLI included for simple use-cases (e.g., my own), there are several more robust and featureful approaches using the plethora of tools from the broader community. I'd like to document some of these options for users; things like Adrian described here:

  • decrypting secrets from within a cluster
  • using a gitops-based approach

It doesn't have to be anything terribly detailed. Mostly a general guide of what's possible and why someone might pick one approach over another.

Can't evaluate `.#generate-k8s`: `invalid regular expression`

Hi,

I just tried a nix build .#generate-k8s which fails with a Nix evaluation error:

error: invalid regular expression '(  |)+'

       at /nix/store/255f8yvapjp479bi5h3ngyqsrc25qh94-source/jobs/generators/k8s/default.nix:24:64:

           23|
           24|     removeEmptyLines = str: concatStringsSep "\n" (filter (l: (builtins.match "(  |)+" l) == null) (splitString "\n" str));

I am using Nix 2.11.1.

The regex "(  |)+" in principle seems correct to me, but shouldn't builtins.match "\s*" l != null be a simpler way to test whether a line is "empty" (which I assume means blank: zero characters or only space characters)? Or is there some edge case I am missing?
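
For what it's worth, here is a sketch of an alternative (my suggestion, untested) that avoids the problematic pattern by using a POSIX character class, which Nix's builtins.match supports; it assumes nixpkgs' lib is in scope like in the original snippet:

removeEmptyLines = str:
  lib.concatStringsSep "\n"
    # keep only lines that contain something other than whitespace
    (builtins.filter (l: builtins.match "[[:space:]]*" l == null)
      (lib.splitString "\n" str));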

pull in buildImage package from nixpkgs for docker module evaluation

In order to evaluate the docker module (for generating documentation), I had to expand the default value (in e3127e8) of image.tag and image.name to include an empty string if config.image is null. It's likely that pkgs.dockerTools.buildImage (where the imageName and imageTag attrs are built) needs to be referenced in some way to avoid config.image being null.

I'm not confident at this time whether that's actually true but I'm creating this issue to not lose track of the (potential) problem.

Unable to remove the last defined resource with built-in deploy tool

Let's say I have two simple resources:

{
  kubernetes.resources.pods.web1.spec.containers.nginx.image = "nginx";
  kubernetes.resources.pods.web2.spec.containers.nginx.image = "nginx";
}

Running this with nix run .#kubenix creates these resources. If I remove one line, say the web2 pod, it gets removed on the next run. However, if I then remove the line of the web1 pod, nothing happens and the pod is not removed.

kubeconfig: unbound variable

Hi, I wanted to try Kubenix, but I couldn't get the example to work. I have the following flake taken from the examples:

{
  inputs.kubenix.url = "github:hall/kubenix";
  outputs = { self, ... }@inputs:
    let system = "x86_64-linux";
    in {
      kubenix = inputs.kubenix.packages.${system}.default.override {
        module = { kubenix, ... }: {
          imports = [ kubenix.modules.k8s ];
          kubernetes.resources.pods.example.spec.containers.nginx.image =
            "nginx";
        };
      };
    };
}

Then I use nix run .#kubenix, which produces:

error: builder for '/nix/store/7v7s2h891xicsa9bcijvsb6016aj82fk-kubenix.drv' failed with exit code 1;
       last 1 log lines:
       > /build/.attr-0l2nkwhif96f51f4amnlf414lhl4rv9vh8iffyp431v6s28gsr90: line 8: kubeconfig: unbound variable
       For full logs, run 'nix log /nix/store/7v7s2h891xicsa9bcijvsb6016aj82fk-kubenix.drv'.

Default namespace is incorrectly applied to clusterwide resources

Hello,

Defining kubernetes.namespace is a practical way to set the namespace on all resources, but it's also applied to ones which shouldn't have it.

let
  kubenix = import (builtins.fetchGit {
    url = "https://github.com/hall/kubenix.git";
    ref = "main";
  });
in
  (kubenix.evalModules.${builtins.currentSystem} {
    module = {kubenix, ...}: {
      imports = [
        kubenix.modules.k8s
        { kubernetes.resources.namespaces.a-namespace = {}; }
       ];
      kubernetes.namespace = "an-other-namespace";
    };
  })
  .config
  .kubernetes
.objects

$ nix eval -f poc.nix --json | jq .
[
  {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
      "annotations": {
        "kubenix/k8s-version": "1.30",
        "kubenix/project-name": "kubenix"
      },
      "name": "a-namespace",
      "namespace": "an-other-namespace"
    }
  }
]

Kubenix should be aware of which resources are namespaced vs. cluster-wide to decide whether the namespace is needed.

I'm not familiar enough with the codebase to open a PR, but I can probably do it with some pointers :).

reevaluate CLI design

I'm starting to come around to the idea of templating charts out of Helm and then piping them to kubectl (as opposed to installing with helm). In part, this is because the current CLI doesn't handle releases defined within submodules. So I've an immediate need to make improvements there. However, the current implementation is already clunky and I'm not convinced the existing complexity is worthwhile.

Overall, this is what I'd like from a CLI:

  • access to rendered manifests before they're applied; this is necessary to, e.g., inject secrets
    • keeping secrets out of manifests makes persisting to the nix store reasonable, thereby bringing config more in line with the overall ecosystem (rollbacks being a potentially major advantage should anyone wish to develop post-deploy testing).
    • currently, I'm using the pattern of writing all cluster secrets to a file with agenix and then piping to vals; I'm pretty satisfied with this setup, and supporting it takes hardly a line of shell script on its own.
  • generate a diff prior to apply
    • it's important to know what will change; I'd be happy if we got this functionality from more generic tooling though (e.g., something that can diff arbitrary module config)
  • prune deleted resources
    • maybe the most "complicated" bit but everything should already be in place for this

All of these on their own are fairly simple with kubectl alone. Once I threw helm in the mix, things started getting a little hairy (with questionable benefit), especially since the current modules all appear to be designed under the assumption that manifests will be piped to kubectl.

All this to say, I'm going to (on a branch for now) drop helm from the CLI to focus solely on the above tasks. I'd be happy to hear how others feel and what they'd like to see accomplished here.

verify workaround for self-referential module types

In 6dfa8e8, I've set any option whose type is its own module's type to unspecified. I'm opening this issue to go back, at some point, and verify that this is the correct approach (as doing so removes validation for any option which meets the definition).

This scenario occurs because the CRD type includes the JSONSchemaProps field, which has fields (anyOf, allOf, oneOf, and not) of that same type; that is, JSONSchemaProps "appears in" JSONSchemaProps:

[screenshot: the self-referential JSONSchemaProps definition in the swagger spec]
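
To illustrate the shape of the problem, here is an illustrative sketch only (not the generated code):

{ lib }:
let
  # a recursive submodule type, analogous to JSONSchemaProps.anyOf et al.;
  # laziness keeps this fine until something forces the full option tree
  # (e.g. docs generation), which can then recurse without bound
  jsonSchemaProps = lib.types.submodule {
    options.anyOf = lib.mkOption {
      type = lib.types.nullOr (lib.types.listOf jsonSchemaProps);
      default = null;
    };
  };
in
  jsonSchemaProps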

Vals errors should abort kubectl apply

If Vals reports an error when replacing secrets refs, the kubectl apply should not be executed.

Here is an example. I have a Kubernetes secret with a Vals ref that is broken (the file does not exist).

{
	kubernetes.resources.secrets.freshrss.stringData.adminPassword = "ref+sops://secrets.yaml#/freshrss/password";
}

Rendering this shows that Vals tries to expand the secret ref, but fails opening the file:

$ nix run .#kubenix.x86_64-linux render
expand sops://secrets.yaml#/freshrss/password: Failed to read "secrets.yaml": open secrets.yaml: no such file or directory

However, when I then try to apply the Kubenix configuration, I expect it to fail as well, which it does not:

$ nix run .#kubenix.x86_64-linux
expand sops://secrets.yaml#/freshrss/password: Failed to read "secrets.yaml": open secrets.yaml: no such file or directory
W0414 14:30:40.686142 2206795 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
diff -N -u -I ' kubenix/hash: ' -I ' generation: ' /run/user/1000/LIVE-1858589435/v1.PersistentVolume..bazarr-config /run/user/1000/MERGED-859534972/v1.PersistentVolume..bazarr-config
--- /run/user/1000/LIVE-1858589435/v1.PersistentVolume..bazarr-config   2024-04-14 14:30:40.710017293 +0200
+++ /run/user/1000/MERGED-859534972/v1.PersistentVolume..bazarr-config  1970-01-01 01:00:00.000000000 +0100
@@ -1,90 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
...

It reports the error, but continues anyway. The "result" of Vals is an empty manifest, which then causes kubectl apply to prune all of my existing resources.

kubenix script has no shebang, leading to "Exec format error"

see for yourself by running:

> nix run 'github:adrian-gierakowski/kubenix-hall-test?rev=1c28b59527c9645a83973bf9df4c34252353444d#kubenix'
error: unable to execute '/nix/store/grfgha6r1r02j4cgqj3fmqx5x0cklvbm-kubenix/bin/kubenix': Exec format error

or by building and cat-ing the file:

> cat $(nix build 'github:adrian-gierakowski/kubenix-hall-test?rev=1c28b59527c9645a83973bf9df4c34252353444d#kubenix' --no-link --print-out-paths)/bin/kubenix
  set -uo pipefail

  export KUBECONFIG=$HOME/.kube/config
  export KUBECTL_EXTERNAL_DIFF=/nix/store/dzbp20ma86l2xycay3h7prpgwpf173kp-kubenix-diff
...

Trouble with custom resources within helm charts

I'm using the following helm chart in my config:

  kubernetes.helm.releases.pgoperator = {
    chart = kubenix.lib.helm.fetch {
      repo = "https://opensource.zalando.com/postgres-operator/charts/postgres-operator";
      chart = "postgres-operator";
      version = "1.10.1";
      sha256 = "sha256-cwJuDpXyVjchTqKgy+q4UA4L/U0p37Cn3fJIkHqOlxM=";
    };
    includeCRDs = true;
  };

The chart contains CRDs and also happens to contain a CR whose type is defined by one of those CRDs. When the config is evaluated, I get the following error:

       error: The option `kubernetes.api.resources."acid.zalan.do"' does not exist. Definition values:
       - In `/nix/store/vl3x28s0pa5i8nk07pl8jb751p2dasng-source/modules/helm.nix':
           {
             v1 = {
               OperatorConfiguration = {
                 pgoperator-postgres-operator = {
                   _type = "merge";
           ...

Any way around this? Do I first need to generate Nix module options from the CRDs?

Thanks & BTW kubenix is super awesome :)
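
If it helps future readers: my understanding (an assumption based on the customTypes option mentioned in other issues, not verified against the module source) is that the CRD-defined kind has to be registered before the chart's CR can be evaluated, roughly along these lines:

{
  # field names are an assumption from memory; check the k8s module for
  # the actual customTypes schema
  kubernetes.customTypes = [
    {
      group = "acid.zalan.do";
      version = "v1";
      kind = "OperatorConfiguration";
      attrName = "operatorconfigurations";
      module = { }; # optionally a nix module for build-time validation
    }
  ];
}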

support kustomize

Kustomize is built into the kubectl CLI and has some decent features (such as patching and secret generation [#9]) that I think probably make it worth adding. I don't have a ton of experience with kustomize and am currently considering it only for the specific examples given, so support will likely exist in the CLI first and then be "back-ported", so to speak, to the module-rendering capabilities (as I'm not sure at this time whether that requires a mess of rendering files to a directory, running them through kustomize, and then importing them back again).

unfurl list of objects

The generic List kind is a client-side implementation and therefore cannot be validated by the API. So we need to unfurl its items key into individual resources.

For example, the Traefik helm chart uses this kind for its service, which results in the following error:

error: The option `kubernetes.api.resources.core.v1.List' does not exist. Definition values:
       - In `/nix/store/jmndr7q0gkw34hkjgycg85s0n36khx4q-source/modules/helm.nix':
           {
             traefik = {
               _type = "merge";
               contents = [
                 {
           ...
(use '--show-trace' to show detailed location information)
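
A minimal sketch of the idea (not the actual implementation, and assuming nixpkgs' lib is in scope) over a flat list of rendered objects:

# replace every client-side `kind: List` object with its own items
unfurlLists = objects:
  lib.concatMap
    (o: if (o.kind or null) == "List" then (o.items or [ ]) else [ o ])
    objects;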
