
puppetlabs-kubernetes's Introduction


Kubernetes

Table of Contents

  1. Description
  2. Setup - The basics of getting started with kubernetes
  3. Reference - An under-the-hood peek at what the module is doing and how
  4. Limitations - OS compatibility, etc.
  5. License
  6. Development - Guide for contributing to the module
  7. Examples - Puppet Bolt task examples

Description

This module installs and configures Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications. The containers that make up an application are grouped into logical units for easier management and discovery.

To bootstrap a Kubernetes cluster in a secure and extensible way, this module uses the kubeadm toolkit.

Setup

Install this module, generate the configuration, add the OS and hostname yaml files to Hiera, and configure your node.

Included in this module is Kubetool, a configuration tool that auto-generates the Hiera security parameters, the discovery token hash, and other configurations for your Kubernetes cluster. To simplify installation and use, the tool is available as a Docker image.

Generating the module configuration

If Docker is not installed on your workstation, install it from here.

The Kubetool Docker image takes each parameter as an environment variable.

Note: The version of Kubetool you use must match the version of the module on the Puppet Forge. For example, if you are using version 1.0.0 of the module, use puppet/kubetool:1.0.0.

To generate, in your working directory, a yaml file corresponding to the operating system Kubernetes will run on, plus one for each controller node, run either of these docker run commands:

docker run --rm -v $(pwd):/mnt --env-file env puppet/kubetool:{$module_version}

The docker run command above reads its parameters from an env file, which is included in the root folder of this repo (a sample env file is shown after the parameter list below).

docker run --rm -v $(pwd):/mnt -e OS=ubuntu -e VERSION=1.10.2 -e CONTAINER_RUNTIME=docker -e CNI_PROVIDER=cilium -e CNI_PROVIDER_VERSION=1.4.3 -e ETCD_INITIAL_CLUSTER=kube-control-plane:172.17.10.101,kube-replica-control-plane-01:172.17.10.210,kube-replica-control-plane-02:172.17.10.220 -e ETCD_IP="%{networking.ip}" -e KUBE_API_ADVERTISE_ADDRESS="%{networking.ip}" -e INSTALL_DASHBOARD=true puppet/kubetool:{$module_version}

The above parameters are:

  • OS: The operating system Kubernetes runs on.
  • VERSION: The version of Kubernetes to deploy. Must follow X.Y.Z format. (Check kubeadm regex rule for more information)
  • CONTAINER_RUNTIME: The container runtime Kubernetes uses. Set this value to docker (officially supported) or cri_containerd. Advanced Kubernetes users can use cri_containerd, however this requires an increased understanding of Kubernetes, specifically when running applications in a HA cluster. To run a HA cluster and access your applications, an external load balancer is required in front of your cluster. Setting this up is beyond the scope of this module. For more information, see the Kubernetes documentation.
  • CNI_PROVIDER: The CNI network to install. Set this value to weave, flannel, calico or cilium.
  • CNI_PROVIDER_VERSION: The CNI version to use. The calico, calico-tigera, and cilium providers use this variable to reference the correct deployment file. Current versions: cilium 1.4.3, calico 3.18, calico-tigera 3.26.0.
  • ETCD_INITIAL_CLUSTER: The server hostnames and IPs in the form of hostname:ip. When in production, include three, five, or seven nodes for etcd.
  • ETCD_IP: The IP each etcd member listens on. We recommend passing the fact for the interface to be used by the cluster.
  • KUBE_API_ADVERTISE_ADDRESS: The IP each etcd/apiserver instance uses on each controller. We recommend passing the fact for the interface to be used by the cluster.
  • INSTALL_DASHBOARD: A boolean which specifies whether to install the dashboard.
  • KEY_SIZE: Number of bits in certificates (default: 2048).
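
For reference, a minimal env file might look like the following. The values simply mirror the -e flags in the example docker run command above and are illustrative, not a tested configuration:

OS=ubuntu
VERSION=1.10.2
CONTAINER_RUNTIME=docker
CNI_PROVIDER=cilium
CNI_PROVIDER_VERSION=1.4.3
ETCD_INITIAL_CLUSTER=kube-control-plane:172.17.10.101,kube-replica-control-plane-01:172.17.10.210,kube-replica-control-plane-02:172.17.10.220
ETCD_IP=%{networking.ip}
KUBE_API_ADVERTISE_ADDRESS=%{networking.ip}
INSTALL_DASHBOARD=true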

Kubetool creates:

  • A yaml file that corresponds to the operating system specified by the OS parameter. To view the file contents, run cat Debian.yaml for a Debian system, or run cat RedHat.yaml for RedHat. The yaml files produced for each member of the etcd cluster contain certificate information to bootstrap an initial etcd cluster. Ensure these are also placed in your hieradata directory at the node level.

  • A discovery token hash and encoded values required by Kubernetes. To regenerate the values, including certificates and tokens, run the kubetool command again.

Adding the {$OS}.yaml and {$hostname}.yaml files to Hiera

Add the {$OS}.yaml file to the same control repo where your Hiera data is, usually the data directory. By leveraging location facts, such as the pp_datacenter trusted fact, each cluster can be allocated its own configuration.
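
For example, a hiera.yaml hierarchy that selects per-datacenter data via the pp_datacenter trusted fact might look like this sketch (the layer names and paths are illustrative):

hierarchy:
  - name: "Datacenter"
    path: "datacenter/%{trusted.extensions.pp_datacenter}.yaml"
  - name: "Family"
    path: "%{facts.os.family}.yaml"
  - name: "Host"
    path: "%{trusted.certname}.yaml"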

Possible Error fetching hiera data

If the error below is encountered:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Kubernetes]:
  parameter 'api_server_count' expects an Integer value, got Undef
  parameter 'token' expects a String value, got Undef
  parameter 'discovery_token_hash' expects a String value, got Undef (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 138, column: 3) on node xxx.example.local

This means that Hiera is not finding the values in the associated yaml files stored in the data folder, so some of the required parameters are left undefined.

Check your hiera.yaml file and ensure that it contains entries for {$OS}.yaml and {$hostname}.yaml:

hierarchy:
  - name: "Family"
    path: Debian.yaml
  - name: "Host"
    path: xxx.example.local.yaml  

Configuring your node

After the {$OS}.yaml and {$hostname}.yaml files have been added to the Hiera directory on your Puppet server, configure your node as the controller or worker.

A controller node contains the control plane and etcd. In a production cluster, you should have three, five, or seven controllers. A worker node runs your applications. You can add as many worker nodes as Kubernetes can handle. For information about nodes in Kubernetes, see the Kubernetes documentation.

Note: A node cannot be a controller and a worker. It must be one or the other.

To make a node a controller, add the following code to the manifest:

class {'kubernetes':
  controller => true,
}

To make a node a worker, add the following code to the manifest:

class {'kubernetes':
  worker => true,
}
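
If you assign the roles in site.pp, a minimal sketch might look like this (the node names are illustrative):

node 'kube-control-plane.example.com' {
  class { 'kubernetes':
    controller => true,
  }
}

node /^kube-node-\d+\.example\.com$/ {
  class { 'kubernetes':
    worker => true,
  }
}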

Network Plugins

Kubernetes supports multiple networking plugins that implement the networking model.

This module supports the following Container Network Interface (CNI) plugins:

  • flannel
    kubernetes::cni_network_provider: https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    kubernetes::cni_pod_cidr: 10.244.0.0/16
    kubernetes::cni_provider: flannel
  • weave
  • calico-node (see the sketch after this list)
  • cilium
    kubernetes::cni_network_provider: https://raw.githubusercontent.com/cilium/cilium/1.4.3/examples/kubernetes/1.26/cilium.yaml
    kubernetes::cni_pod_cidr: 10.244.0.0/16
    kubernetes::cni_provider: cilium
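
The weave and calico providers are configured with the same three keys. A sketch for calico, assuming Calico 3.18: the URL and pod CIDR shown are illustrative and must match your CNI_PROVIDER_VERSION, and depending on the Calico variant cni_rbac_binding may also be required (see the parameter reference below):

kubernetes::cni_network_provider: https://docs.projectcalico.org/v3.18/manifests/calico.yaml
kubernetes::cni_pod_cidr: 192.168.0.0/16
kubernetes::cni_provider: calico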

Installing Kubernetes on different OS

Currently, puppetlabs-kubernetes is compatible with Ubuntu Xenial out of the box. For a different OS, the parameters below can be set.

For instance, to install Kubernetes version 1.20.0 on Debian buster:

class { 'kubernetes':
  # Docker repo and key as documented in
  # https://docs.docker.com/install/linux/docker-ce/debian/
  docker_apt_location => 'https://download.docker.com/linux/debian',
  docker_apt_repos    => 'stable',
  docker_apt_release  => 'buster',
  docker_key_id       => '9DC858229FC7DD38854AE2D88D81803C0EBFCD88',
  docker_key_source   => 'https://download.docker.com/linux/debian/gpg',
  # Other available versions can be listed with: apt-cache madison docker-ce
  docker_version      => '5:20.10.5~3-0~debian-buster',
  docker_package_name => 'docker-ce',
  # Kubernetes version
  kubernetes_version  => '1.20.0',
  # ... plus the usual controller/worker and cluster parameters for your node
}
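
The same values can instead be supplied as Hiera data, for example in the generated Debian.yaml. This is a sketch; adjust the versions to your environment:

kubernetes::docker_apt_location: 'https://download.docker.com/linux/debian'
kubernetes::docker_apt_repos: 'stable'
kubernetes::docker_apt_release: 'buster'
kubernetes::docker_key_id: '9DC858229FC7DD38854AE2D88D81803C0EBFCD88'
kubernetes::docker_key_source: 'https://download.docker.com/linux/debian/gpg'
kubernetes::docker_version: '5:20.10.5~3-0~debian-buster'
kubernetes::docker_package_name: 'docker-ce'
kubernetes::kubernetes_version: '1.20.0'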

Validating and unit testing the module

This module is compliant with the Puppet Development Kit (PDK), which provides tools to help run unit tests on the module and validate the module's metadata, syntax, and style.

Note: To run static validations and unit tests against this module using the pdk validate and pdk test unit commands, you must have Puppet 5 or higher installed. In the following examples we have specified Puppet 5.3.6.

To validate the metadata.json file, run the following command:

pdk validate metadata --puppet-version='5.3.6'

To validate the Puppet code and syntax, run the following command:

pdk validate puppet --puppet-version='5.3.6'

Note: The pdk validate ruby command ignores the excluded directories specified in the .rubocop.yml file. Therefore, to validate the Ruby code style and syntax you must specify the directory the code exists in.

In the following example we validate the Ruby code contained in the lib directory:

pdk validate ruby lib --puppet-version='5.3.6'

To unit test the module, run the following command:

pdk test unit --puppet-version='5.3.6'

Reference

Classes

Public classes

  • kubernetes

Private classes

  • kubernetes::cluster_roles
  • kubernetes::config
  • kubernetes::kube_addons
  • kubernetes::packages
  • kubernetes::repos
  • kubernetes::service

Defined types

  • kubernetes::kubeadm_init
  • kubernetes::kubeadm_join

Parameters

The following parameters are available in the kubernetes class.

apiserver_cert_extra_sans

A string array of Subject Alternative Names for the API server certificates.

Defaults to [].

apiserver_extra_arguments

A string array of extra arguments passed to the API server.

Defaults to [].

apiserver_extra_volumes

A hash of extra volume mounts mounted on the API server.

For example,

apiserver_extra_volumes => {
  'volume-name' => {
    hostPath  => '/data',
    mountPath => '/data',
    readOnly  => 'false',
    pathType  => 'DirectoryOrCreate'
  },
}

Defaults to {}.

cloud_provider

The name of the cloud provider configured in /etc/kubernetes/cloud-config.

Note: This file is not managed within this module and must be present before bootstrapping the Kubernetes controller.

Defaults to undef.

cloud_config

The location of the cloud config file used by cloud_provider. For use with v1.12 and above.

Note: This file is not managed within this module and must be present before bootstrapping the Kubernetes controller.

Defaults to undef.

cni_network_provider

The URL of the CNI provider's yaml deployment file. kube_tool sets this value.

Defaults to undef.

cni_rbac_binding

The download URL for the CNI provider's RBAC rules. Only for use with Calico.

Defaults to undef.

cni_pod_cidr

Specifies the overlay (internal) network range to use. This value is set by kube_tool per CNI_PROVIDER.

Defaults to undef.

container_runtime

Specifies the runtime that the Kubernetes cluster uses.

Valid values are cri_containerd or docker.

Defaults to docker.

container_runtime_use_proxy

When set to true, the proxy variables are applied to the container runtime. Currently only implemented for Docker.

Valid values are true, false.

Defaults to false.

controller

Specifies whether to set the node as a Kubernetes controller.

Valid values are true, false.

Defaults to false.

containerd_version

Specifies the version of the containerd runtime the module installs.

Defaults to 1.4.3.

containerd_install_method

The method used to install containerd. Either archive or package.

Defaults to archive.

containerd_package_name

The package name for containerd when containerd_install_method is package.

Defaults to containerd.io.

containerd_archive

The name of the containerd archive.

Defaults to containerd-${containerd_version}.linux-amd64.tar.gz.

containerd_source

The download URL for the containerd archive.

Defaults to https://github.com/containerd/containerd/releases/download/v${containerd_version}/${containerd_archive}.

containerd_plugins_registry

The configuration for the image registries used by containerd.

See https://github.com/containerd/containerd/blob/master/docs/cri/registry.md

Defaults to {'docker.io' => {'mirrors' => {'endpoint' => 'https://registry-1.docker.io'}}}.

For example,

'containerd_plugins_registry' => {
    'docker.io' => {
        'mirrors' => {
            'endpoint' => 'https://registry-1.docker.io'
        },
    },
    'docker.private.example.com' => {
        'mirrors' => {
            'endpoint' => 'docker.private.example.com'
        },
        'tls' => {
            'ca_file' => 'ca.pem',
            'cert_file' => 'cert.pem',
            'key_file' => 'key.pem',
            'insecure_skip_verify' => true,
        },
        'auth' => {
            'auth' => '1azhzLXVuaXQtdGVzdDpCQ0NwNWZUUXlyd3c1aUxoMXpEQXJnUT==',
        },
    },
    'docker.private.example2.com' => {
        'mirrors' => {
            'endpoint' => 'docker.private.example2.com'
        },
        'tls' => {
            'insecure_skip_verify' => true,
        },
        'auth' => {
            'username' => 'user2',
            'password' => 'secret2',
        },
    },
}

containerd_sandbox_image

The configuration for the pause (sandbox) container image.

Defaults to registry.k8s.io/pause:3.2.

containerd_socket

The path to the containerd gRPC socket.

Defaults to /run/containerd/containerd.sock.

controller_address

The IP address and port for the controller the worker node joins. For example 172.17.10.101:6443.

Defaults to undef.

controllermanager_extra_arguments

A string array of extra arguments passed to the controller manager.

Defaults to [].

controllermanager_extra_volumes

A hash of extra volume mounts mounted on the controller manager container.

For example,

controllermanager_extra_volumes => {
  'volume-name' => {
    hostPath  => '/data',
    mountPath => '/data',
    readOnly  => 'false',
    pathType  => 'DirectoryOrCreate'
  },
}

Defaults to {}.

scheduler_extra_arguments

A string array of extra arguments passed to the scheduler.

Defaults to [].

create_repos

Specifies whether to install the upstream Kubernetes and Docker repos.

Valid values are true, false.

Defaults to true.

disable_swap

Specifies whether to turn off swap. kubeadm requires swap to be disabled.

Valid values are true, false.

Defaults to true.

manage_kernel_modules

Specifies whether to manage the kernel modules needed for Kubernetes.

Valid values are true, false.

Defaults to true.

manage_sysctl_settings

Specifies whether to manage the sysctl settings needed for Kubernetes.

Valid values are true, false.

Defaults to true.

discovery_token_hash

The string used to validate the root CA public key when joining a cluster. This value is created by kubetool.

Defaults to undef.

docker_apt_location

The APT repo URL for the Docker packages.

Defaults to https://apt.dockerproject.org/repo.

docker_apt_release

The release name for the APT repo for the Docker packages.

Defaults to 'ubuntu-${::lsbdistcodename}'.

docker_apt_repos

The repos to install from the Docker APT url.

Defaults to main.

docker_version

Specifies the version of the Docker runtime to install.

Defaults to:

  • 17.03.0.ce-1.el7.centos on RedHat.
  • 17.03.0~ce-0~ubuntu-xenial on Ubuntu.

docker_package_name

The docker package name to download from an upstream repo.

Defaults to docker-engine.

docker_key_id

The gpg key for the Docker APT repo.

Defaults to '58118E89F3A912897C070ADBF76221572C52609D'.

docker_key_source

The URL for the Docker APT repo gpg key.

Defaults to https://apt.dockerproject.org/gpg.

docker_yum_baseurl

The YUM repo URL for the Docker packages.

Defaults to https://download.docker.com/linux/centos/7/x86_64/stable.

docker_yum_gpgkey

The URL for the Docker yum repo gpg key.

Defaults to https://download.docker.com/linux/centos/gpg.

docker_storage_driver

The storage driver for Docker (added to '/etc/docker/daemon.json')

Defaults to overlay2.

docker_storage_opts

The storage options for Docker (Array added to '/etc/docker/daemon.json')

Defaults to undef.

docker_extra_daemon_config

Extra daemon options.

Defaults to undef.

etcd_version

Specifies the version of etcd.

Defaults to 3.1.12.

etcd_archive

Specifies the name of the etcd archive.

Defaults to etcd-v${etcd_version}-linux-amd64.tar.gz.

etcd_source

The download URL for the etcd archive.

Defaults to https://github.com/coreos/etcd/releases/download/v${etcd_version}/${etcd_archive}.

etcd_install_method

The method used to install etcd. Either wget (using etcd_source) or package (using etcd_package_name).

Defaults to wget.

etcd_package_name

The system package name for installing etcd.

Defaults to etcd-server.

etcd_hostname

Specifies the name of the etcd instance.

A Hiera example is kubernetes::etcd_hostname: "%{::fqdn}".

Defaults to $hostname.

etcd_ip

Specifies the IP address etcd uses for communications.

A Hiera example is kubernetes::etcd_ip: "%{networking.ip}".

Defaults to undef.

etcd_initial_cluster

Tells etcd which nodes make up the initial cluster.

A Hiera example is kubernetes::etcd_initial_cluster: kube-control-plane:172.17.10.101,kube-replica-control-plane-01:172.17.10.210,kube-replica-control-plane-02:172.17.10.220.

Defaults to undef.

etcd_initial_cluster_state

Informs etcd of the state of the cluster when starting. Useful for adding single nodes to an existing cluster. Allowed values are new or existing.

Defaults to new.

etcd_peers

Specifies the list of peers etcd connects to in the cluster.

A Hiera example is kubernetes::etcd_peers:

  • 172.17.10.101
  • 172.17.10.102
  • 172.17.10.103

Defaults to undef.

etcd_ca_key

The CA certificate key data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcd_ca_crt

The CA certificate data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.
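
Because the certificate data is passed as a string, a Hiera entry typically uses a YAML block scalar. For example (the certificate body is elided):

kubernetes::etcd_ca_crt: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----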

etcdclient_key

The client certificate key data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcdclient_crt

The client certificate data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcdserver_key

The server certificate key data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcdserver_crt

The server certificate data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcdpeer_crt

The peer certificate data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

etcdpeer_key

The peer certificate key data for the etcd cluster. This value must be passed as a string, not as a file.

Defaults to undef.

http_proxy

The string value to set for the HTTP_PROXY environment variable.

Defaults to undef.

https_proxy

The string value to set for the HTTPS_PROXY environment variable.

Defaults to undef.

image_repository

The container registry to pull control plane images from.

Defaults to registry.k8s.io.

install_dashboard

Specifies whether the Kubernetes dashboard is installed.

Valid values are true, false.

Defaults to false.

kubernetes_ca_crt

The cluster's CA certificate. Must be passed as a string and not a file.

Defaults to undef.

kubernetes_ca_key

The cluster's CA key. Must be passed as a string and not a file.

Defaults to undef.

kubernetes_front_proxy_ca_crt

The cluster's front-proxy CA certificate. Must be passed as a string and not a file.

Defaults to undef.

kubernetes_front_proxy_ca_key

The cluster's front-proxy CA key. Must be passed as a string and not a file.

Defaults to undef.

kube_api_advertise_address

The IP address you want exposed by the API server.

A Hiera example is kubernetes::kube_api_advertise_address: "%{networking.ip}".

Defaults to undef.

kubernetes_version

The version of the Kubernetes containers to install. Must follow X.Y.Z format.

Defaults to 1.10.2.

kubernetes_package_version

The version of the Kubernetes OS packages to install, such as kubectl and kubelet.

Defaults to 1.10.2.

kubeadm_extra_config

A hash containing extra configuration data to be serialised with to_yaml and appended to the config.yaml file used by kubeadm.

Defaults to {}.

kubelet_extra_config

A hash containing extra configuration data to be serialised with to_yaml and appended to Kubelet configuration file for the cluster. Requires DynamicKubeletConfig.

Defaults to {}.
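
For example, a hypothetical override of the kubelet's maxPods setting (maxPods is a standard KubeletConfiguration field; verify the rendered kubelet configuration after applying):

kubelet_extra_config => {
  'maxPods' => 110,
}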

kubelet_extra_arguments

A string array to be appended to kubeletExtraArgs in the Kubelet's nodeRegistration configuration. It is applied to both control-planes and nodes. Use this for critical Kubelet settings such as pod-infra-container-image which may be problematic to configure via kubelet_extra_config and DynamicKubeletConfig.

Defaults to [].

kubernetes_apt_location

The APT repo URL for the Kubernetes packages.

Defaults to https://apt.kubernetes.io.

kubernetes_apt_release

The release name for the APT repo for the Kubernetes packages.

Defaults to 'kubernetes-${::lsbdistcodename}'.

kubernetes_apt_repos

The repos to install using the Kubernetes APT URL.

Defaults to main.

kubernetes_key_id

The gpg key for the Kubernetes APT repo.

Defaults to '54A647F9048D5688D7DA2ABE6A030B21BA07F4FB'.

kubernetes_key_source

The URL for the APT repo gpg key.

Defaults to https://packages.cloud.google.com/apt/doc/apt-key.gpg.

kubelet_use_proxy

When set to true, the proxy variables are applied to the kubelet.

Valid values are true, false.

Defaults to false.

kubernetes_yum_baseurl

The YUM repo URL for the Kubernetes packages.

Defaults to https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64.

kubernetes_yum_gpgkey

The URL for the Kubernetes yum repo gpg key.

Defaults to https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg.

manage_docker

Specifies whether to install Docker repositories and packages via this module.

Valid values are true, false.

Defaults to true.

manage_etcd

Specifies whether to install an external Etcd via this module.

Valid values are true, false.

Defaults to true.

no_proxy

The string value to set for the NO_PROXY environment variable.

Defaults to undef.

node_label

An override to the label of a node.

Defaults to hostname.

node_extra_taints

Additional taints for the node.

Defaults to undef.

For example,

  [{'key' => 'dedicated', 'value' => 'NewNode', 'effect' => 'NoSchedule', 'operator' => 'Equal'}]

For more about Kubernetes taints, see https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

runc_source

The download URL for runc.

Defaults to https://github.com/opencontainers/runc/releases/download/v${runc_version}/runc.amd64.

runc_version

Specifies the version of runc to install.

Defaults to 1.0.0.

sa_key

The key for the service account. This value must be a certificate value and not a file.

Defaults to undef.

sa_pub

The public key for the service account. This value must be a certificate value and not a file.

Defaults to undef.

schedule_on_controller

Specifies whether to remove the control plane role and allow pod scheduling on controllers.

Valid values are true, false.

Defaults to false.

service_cidr

The IP address range for service VIPs.

Defaults to 10.96.0.0/12.

token

The string used to join nodes to the cluster. This value must be in the form of [a-z0-9]{6}.[a-z0-9]{16}.

Defaults to undef.
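
For example, a Hiera value matching that format (the token shown is purely illustrative, not a real secret):

kubernetes::token: 'abcdef.0123456789abcdef'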

ttl_duration

The duration before the bootstrap token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires.

Defaults to 24h.

worker

Specifies whether to set the node as a Kubernetes worker.

Valid values are true, false.

Defaults to false.

Limitations

This module supports:

  • Puppet 4 or higher.
  • Kubernetes 1.10.x or higher.
  • Ruby 2.3.0 or higher.

This module has been tested on the following operating systems:

  • RedHat 7.x.
  • CentOS 7.x.
  • Ubuntu 16.04

Docker is the supported container runtime for this module.

License

This codebase is licensed under the Apache 2.0 license; however, due to the nature of the codebase, its open source dependencies may use a combination of AGPL, BSD-2, BSD-3, GPL 2.0, LGPL, MIT, and MPL licensing.

Development

If you would like to contribute to this module, please follow the rules in the CONTRIBUTING.md. For more information, see our module contribution guide.

To run the acceptance tests you can use Puppet Litmus with the Vagrant provider by using the following commands:

# install rvm and ruby > 2.5
rvm install "ruby-2.5.1"
gem install bundler
bundler install
bundle exec rake 'litmus:provision_list[all_supported]'
bundle exec rake 'litmus:install_agent[puppet5]'
bundle exec rake 'litmus:install_module'
bundle exec rake 'litmus:acceptance:parallel'

For more information about Litmus please see the wiki.

As Litmus does not currently allow memory and CPU size parameters for the Vagrant provisioner task, we recommend manually updating the Vagrantfile used by the provisioner and adding at least the following specifications for the puppetlabs-kubernetes module acceptance tests.

Update the Vagrantfile template in spec/fixtures/modules/provision/tasks/vagrant.rb:

    vf = <<-VF 
    Vagrant.configure(\"2\") do |config|
    config.vm.box = '#{platform}'
    config.vm.boot_timeout = 600
    config.ssh.insert_key = false
    config.vm.hostname = "testkube"
    config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = "2"
    end
    #{network}
    #{synced_folder}
    end
    VF

Examples

In the examples folder you will find a bash script containing a few sample Puppet Bolt commands for using the tasks. The example script is intended to be used with a Kubernetes API that requires the token authentication header, but the token parameter is optional by default.

puppetlabs-kubernetes's People

Contributors

admont, adrianiurca, baronmsk, chelnak, daianamezdrea, danifr, davejrt, david22swan, davids, deric, disha-maker, eamonntp, eimlav, gregohardy, gspatton, jordanbreen28, jorhett, lionce, lukasaud, malikparvez, nickperry, pmcmaw, ralimi, ramesh7, sanfrancrisko, scotty-c, sheenaajay, simonhoenscheid, treydock, yoshz


puppetlabs-kubernetes's Issues

Multimaster?

I'm trying to use this module to build a 3-node multi-master kubernetes environment. However, I'm confused about some of the values to put into the initial .env file. It's not clear whether it expects a single IP or multiple IPs, or how to format them. I'm getting errors like this when trying to do a puppet run:
invalid value "https://1.2.3.4,1.2.3.5,1.2.3.6:2379" for flag -listen-client-urls: URL address does not have the form "host:port": https://1.2.3.4
It seems like it should only have one IP. But how can I configure multiple master nodes?

Unavailable default version for kubernetes-cli/wrong default version for kubernetes-cni

I'm trying to set up a kubernetes cluster using this puppet module, but I've run into an error: Version '0.5.1-01' for 'kubernetes-cni' was not found. When checking apt (using apt search kubernetes-cni), it indeed shows that only version 0.6.0 is known.

The readme indicates that 0.5.1 is the default, and the actual values in params.pp say 0.5.1-01 (I'm running on Ubuntu 16.04). However, the documentation in init.pp says the default value should be 0.6.0. Why is there this mismatch?

Assuming 0.6.0 is the correct version to use, I can fix my issue, but still the documentation should match the actual manifests.

EDIT: my statement about the documentation wasn't entirely accurate; fixed

Cannot use API server certificate for both external kubectl and internal kubelet

I am using kubetool to generate my hieradata for me, including of course all certificates:

docker run --rm -v "$(pwd)":/mnt \
  -e OS=debian \
  -e VERSION=1.9.2 \
  -e CONTAINER_RUNTIME=docker \
  -e CNI_PROVIDER=weave \
  -e FQDN=<LOAD BALANCER HOSTNAME>.westeurope.cloudapp.azure.com \
  -e IP=<PUBLIC IP> \
  -e BOOTSTRAP_CONTROLLER_IP=10.2.3.7 \
  -e ETCD_INITIAL_CLUSTER="etcd-kubernetes-01=http://10.2.3.7:2380" \
  -e ETCD_IP="%{::ipaddress}" \
  -e KUBE_API_ADVERTISE_ADDRESS="%{::ipaddress}" \
  -e INSTALL_DASHBOARD=false \
  puppet/kubetool

Using kubectl, I can reach my Kubernetes API server without issue. However, kubelet is not able to reach it. Occasionally, it will complain that the certificate was not signed for the IP 10.2.3.7, only for 10.96.0.1.

Where is this 10.96.0.1 coming from? Shouldn't this be equal to my master's internal IP address, e.g. 10.2.3.7?

Also, are all kubelets now directed to a single master node? What happens if that node fails?

Kubetool version and ip options behavior

Hello,

I encountered a few issues with the kubetool options.

Firstly, the "ip" option does not seem to do anything (I gave a quick look at the code but I'm not a Ruby expert so...). I supposed it was meant to specify the ip range to use for the kubernetes pods subnet (eg. 172.16.0.0/16). Could you confirm?

Secondly, when you specify the kubernetes version, it only sets the kubernetes_version value but not kubernetes_package_version, which is kept at the default version (i.e. 1.7.3). That causes the installation process to fail. I feel like this is not the anticipated behaviour, but maybe I'm wrong.

Thanks for your help.

Upgrade module dependencies version

Hi guys, I would like to adopt this module, but I have newer versions of some other modules that are currently used by yours:

{"name":"puppetlabs-apt","version_requirement":">= 4.1.0 < 4.3.0"}
{"name":"puppet-archive","version_requirement":">= 2.0.0 < 2.1.0"}

Is there any reason why both are strictly pinned to those versions? I am currently using newer versions in my project (mod 'puppetlabs-apt', '4.5.1', mod 'puppet-archive', '2.2.0'), so when I try to add them, librarian-puppet is not able to satisfy the dependencies.
Can you allow a larger version interval so that those of us with newer versions of these modules can still use yours? Thanks!

Add support for adding additional pki certificate files

** FEATURE REQUEST **

We need to be able to manage additional PKI certs under /etc/kubernetes/pki, beyond those currently defined in config.pp's $kube_pki_files.

Our immediate use-case is to add a CA cert for our OIDC auth server. The module gives us the flexibility to tell kubernetes where the OIDC CA file should be (by adding --oidc-ca-file=/etc/kubernetes/pki/dex-ca.crt to kubernetes::apiserver_extra_arguments), but I cannot create the certificate on disk.

Currently, if I add a file resource for '/etc/kubernetes/pki/dex-ca.crt' outside of the module, I will get a dependency cycle error.

file{'/etc/kubernetes/pki/dex-ca.crt':
ensure => file,
content => $dex_ca_crt
} ->

class {'kubernetes':
bootstrap_controller_ip => $bootstrap_controller_ip,
controller => true,
bootstrap_controller => true,
}

Error: Failed to apply catalog: Found 1 dependency cycle:
(File[/etc/kubernetes/] => File[/etc/kubernetes/pki] => File[/etc/kubernetes/pki/dex-ca.crt] => Class[Kubernetes] => Class[Kubernetes::Config] => File[/etc/kubernetes/])

Because we have configured kubernetes (via kubernetes::apiserver_extra_arguments) to use OIDC auth, it must able to retrieve and validate the OIDC provider's cert. It needs the CA file on disk to be able to do this. Without it, the API server cannot start.

kubernetes Module that support puppet 5 ?

Does this module (kubernetes) support Puppet 5? I want to confirm, since I see it supports Puppet >= 4.2.1 < 5.0.0. If not, do you have any kubernetes module that supports Puppet 5?

Thank you

Erian

RBAC and version adjustments for calico, at least

According to https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/calico#installing-with-the-etcd-datastore

the current version of etcd is 3.1 (and the path is a little different), so kubetool needs an update. I also don't see this module setting up RBAC on etcd, as described in the above guide from Calico?

Shouldn't there be an "RBAC" switch, or at least an "if cni == calico and kubernetes version >= 1.10, also install calico RBAC for etcd"?

(I'll gladly make PR for this)

Support Ubuntu 18.04 Bionic

I think this module needs to be updated to support Ubuntu 18. The apt repositories for both Docker and Kubernetes don't seem to work for this version. Both repositories have changed their structure and location since version 16.04.

etcd_version ignored

What you expected to happen?

Setting etcd_version from hiera or the kubernetes class should reflect the installed etcd version.

What happened?

Etcd version 3.1.12 is installed.

How to reproduce it?

Pass an etcd version to kubernetes class.

Versions:

$ puppet --version
5.5.6

$ kubectl version
1.11.0

$ docker version OR crictl version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:23:03 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:25:29 2018
  OS/Arch:          linux/amd64
  Experimental:     false

$ facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "CentOS",
  release => {
    full => "7.5.1804",
    major => "7",
    minor => "5"
  },
  selinux => {
    enabled => false
  }
}

$ puppet module list
/etc/puppet/environments/production/modules
├── ecmlib (???)
├── network (???)
├── puppet-archive (v3.0.0)
├── puppet-wget (v2.0.0)
├── puppetlabs-apt (v5.0.1)
├── puppetlabs-docker (v3.0.0)
├── puppetlabs-kubernetes (v3.0.1)
├── puppetlabs-reboot (v2.0.0)
├── puppetlabs-stdlib (v4.25.1)
├── puppetlabs-translate (v1.0.0)
└── saz-resolv_conf (v4.0.0)

Logs:

Paste any relevant logs from a puppet run, kubernetes and syslog/messages

Notice: /Stage[main]/Kubernetes::Packages/Archive[etcd-v3.1.12-linux-amd64.tar.gz]/ensure: download archive from https://github.com/coreos/etcd/releases/download/v3.1.12/etcd-v3.1.12-linux-amd64.tar.gz to /etcd-v3.1.12-linux-amd64.tar.gz and extracted in /usr/local/bin with cleanup

Kubernetes 1.10:unknown flag: --require-kubeconfig

Because I have not been able to get the network overlay to work in my configuration, I decided to try 1.10 of kubernetes as the module says it supports 1.6 and newer.

Unfortunately, on deploy, the following is produced:

Apr 24 13:17:57 kubeadmin-vpn kubelet[29590]: F0424 13:17:57.777301   29590 server.go:145] unknown flag: --require-kubeconfig

Kube-proxy configured to be using proxy-mode = userspace and a really old version

This is less of a feature request or bug report and more of a question of intent...
I noticed the kube-proxy-daemonset that gets created by this project is using a really old version (1.6.6) and is hardcoded to the userspace mode. Neither of these is the default and I couldn't find any reasoning behind configuring it this way.

Why isn't the kube-proxy version independent? (like kube-dns-daemonset)

Why is the proxy-mode set to userspace?

Thanks for the nice module!

Fails on Xenial

I tested out the kubernetes module - just vanilla (latest master) - and it fails in kubeadm init with:

[init] Using Kubernetes version: v1.10.5
.. (above - even though settings say to use 1.10.2)
Kubeadm_init.. [markmaster] Will mark node kube-cam02 as master by adding a label and a taint
error marking master: timed out waiting for the condition

I am assuming it's because kube-cam02 points (in the hosts file) to 127.0.0.1 and the service is only listening on the public ip?

I ran tooling to get config with these options:
docker run --rm -v $(pwd):/mnt -e OS=debian -e VERSION=1.10.2 -e CONTAINER_RUNTIME=docker -e CNI_PROVIDER=calico -e ETCD_INITIAL_CLUSTER=kube-cam01.example.org:192.168.63.198,kube-cam02.example.org:192.168.63.199,kube-cam03.example.org:192.168.63.200 -e ETCD_IP="%{::ipaddress_ens192}" -e KUBE_API_ADVERTISE_ADDRESS="192.168.63.29" -e INSTALL_DASHBOARD=true puppet/kubetool:2.0.2

I also had to fix two things to even get this far.

  1. change https://github.com/puppetlabs/puppetlabs-kubernetes/blob/master/templates/etcd/etcd.service.erb#L14 from $hostname to $fqdn - otherwise etcd would not start.
  2. after that I had to manually install crictl - by doing: apt-get install cri-tools

and then I landed at the above error..

Setting kubetool options for HA setup

Hi,
I have some problems to find the correct syntax when trying to generate the hiera values and certificates for HA setup.
As far as I understand, I need to set all controller IPs in the ip option for kubetool, or else the installation will fail because the certificate is not valid for the IPs of the controller nodes.

   opts.on('-i', '--ip ip', 'ip') do |ip|
     options[:ip] = ip;
   end

So, this will be stored into "ip" and taken by

  def CreateCerts.api_servers(fqdn, ip)
    puts "Creating api server certs"
    dns = fqdn
    ip = ip

    csr = { "CN": "kube-apiserver", "hosts": [  "kube-master", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "cluster.local", dns,  ip, "10.96.0.1"  ], "key": { "algo": "rsa", "size": 2048 }}
    File.open("kube-api-csr.json", "w+") { |file| file.write(csr.to_json) }  
    system("cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-conf.json -profile server kube-api-csr.json | cfssljson -bare apiserver")
    FileUtils.rm_f('kube-api-csr.csr')
    data = Hash.new
    cer = File.read("apiserver.pem")
    key = File.read("apiserver-key.pem")
    data['kubernetes::apiserver_crt'] = cer
    data['kubernetes::apiserver_key'] = key
    File.open("kubernetes.yaml", "a") { |file| file.write(data.to_yaml) }
  end

That makes me think that I could set all my controller IPs, but then I get a message from the kubelet that the certificate is valid for 10.96.0.1, not 10.12.70.110

This is how I go:

docker run --rm -v $(pwd):/mnt \
-e OS=ubuntu \
-e VERSION=1.9.5 \
-e CONTAINER_RUNTIME=docker \
-e CNI_PROVIDER=weave \
-e FQDN=<FQDN of haproxy> \
-e IP="10.12.70.110, 10.12.70.111, 10.12.70.112" \
-e BOOTSTRAP_CONTROLLER_IP=10.12.70.110 \
-e ETCD_INITIAL_CLUSTER="etcd-kp00=http://10.12.70.110:2380,etcd-kp01=http://10.12.70.111:2380,etcd-kp02=http://10.12.70.112:2380" \
-e ETCD_IP="%{::ipaddress_eth0}" \
-e KUBE_API_ADVERTISE_ADDRESS="%{::ipaddress_eth0}" \
-e INSTALL_DASHBOARD=true \
puppet/kubetool

Can you point me to the correct syntax?

btw, non-HA setup is working fine.

Use without kubetool / dynamically generate certs / keys

The use of the module seems to be based on the assumption that kubetool or some other method will be used to generate certs and keys in advance of the master nodes being built AND that the IP addresses of the masters are known at that point so they can be baked in.

This approach does not work in our Puppet Enterprise environment because I have no control over the IP addresses my nodes will be assigned by IPAM. It would also not be viable to run kubetool to update hiera.yaml directly on the nodes.

Containerd service: don't exec modprobe overlay

What you expected to happen?

we don't want the containerd service running /sbin/modprobe overlay on systems where $::is_virtual == true

What happened?

the service tries to load the overlay kernel module.
this doesn't work, because e.g. on LXC the kernel modules are managed on the hostserver.
and the service start fails:

Aug 09 14:10:12 kube-controller2 modprobe[28618]: modprobe: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open builtin file '/lib/modules/4.16.0-0.bpo.2-amd64/modules.builtin.bin'
Aug 09 14:10:12 kube-controller2 modprobe[28618]: modprobe: FATAL: Module overlay not found in directory /lib/modules/4.16.0-0.bpo.2-amd64
Aug 09 14:10:12 kube-controller2 systemd[1]: containerd.service: Control process exited, code=exited status=1
Aug 09 14:10:12 kube-controller2 systemd[1]: Failed to start containerd container runtime.

How to reproduce it?

Setup a lxc container, install the kubernetes controller with containerd.

Anything else we need to know?

https://github.com/puppetlabs/puppetlabs-kubernetes/blob/master/templates/containerd.service.erb#L7
make the template configurable and e.g. check for the is_virtual fact

Versions:

$ puppet --version
4.10.12
$ kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ docker version

Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:09:33 2018
 OS/Arch:           linux/amd64
 Experimental:      false

$ facter os

{
  architecture => "amd64",
  distro => {
    codename => "stretch",
    description => "Debian GNU/Linux 9.5 (stretch)",
    id => "Debian",
    release => {
      full => "9.5",
      major => "9",
      minor => "5"
    }
  },
  family => "Debian",
  hardware => "x86_64",
  name => "Debian",
  release => {
    full => "9.5",
    major => "9",
    minor => "5"
  },
  selinux => {
    enabled => false
  }
}

$ containerd version

containerd github.com/containerd/containerd v1.1.0 209a7fc3e4a32ef71a8c7b50c68fc8398415badf

Default structure doesn't inherit correctly

What you expected to happen?

A change to etcd_version should update the etcd_archive and etcd_source to correct derived values.

What happened?

The derived values are constructed too early in the process. Unless they are overridden as well, the old version of etcd is installed from the old etcd_source.

How to reproduce it?

Set etcd_archive in hiera, and compare to the version installed.

Anything else we need to know?

I'm testing code to fix this, and can build a PR for it (unless you're already fixing it).

Versions:

$ puppet --version
5.5.6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

$ docker version OR crictl version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ facter os
{
  architecture => "x86_64",
  distro => {
    codename => "Core",
    description => "CentOS Linux release 7.5.1804 (Core)",
    id => "CentOS",
    release => {
      full => "7.5.1804",
      major => "7",
      minor => "5"
    },
    specification => ":core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch"
  },
  family => "RedHat",
  hardware => "x86_64",
  name => "CentOS",
  release => {
    full => "7.5.1804",
    major => "7",
    minor => "5"
  },
  selinux => {
    enabled => false
  }
}

$ puppet module list
/opt/puppetlabs/puppet/modules (no modules installed)

Configuration of kubelet daemon arguments

I might have overlooked this, but is there a way in this module to configure and pass extra arguments to the kubelet service?

For example, I have edited /etc/systemd/system/kubelet.service.d/override.conf to start the kubelet server with an additional authentication method:

[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook"

I'm not sure all such parameters can be managed solely in kubelet.conf.

I would try and raise a PR but I'm not sure of the conventional way to override systemd units via Puppet.

There is not a default container runtime

The readme says that docker is the default container runtime. However, configuring a worker without configuring container_runtime fails:

Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Please specify a valid container runtime at /etc/puppetlabs/code/environments/.../modules/kubernetes/manifests/service.pp:72:7 on node ...

Support Windows

Windows 2019 will support Kubernetes. Please consider updating this module to support managing a Windows kubernetes node.

Test cluster is failing

I'm not sure what to put in for fqdn, so I put in the bootstrap controller's fqdn
Bringing the system up, it fails with:
Notice: /Stage[main]/Kubernetes::Service/Exec[Checking for the Kubernets cluster to be ready]/returns: The connection to the server bootstrap-controller.local:6443 was refused - did you specify the right host or port?
Is there a prerequisite for the Kubernetes controllers and workers that I am missing?

Support repo parameters

It would be nice if the apt source repos could be parameterized, instead of being hard-coded.

On a Debian stretch machine, it doesn't make so much sense to use the release kubernetes-xenial when it should instead be kubernetes-stretch, nor does it make sense to use the ubuntu-xenial release for the docker apt source, when there is a debian-stretch one.

Support for using existing etcd

I was looking at using this module to bootstrap my kubernetes cluster, but it does not set up RBAC in etcd, and it does not support setting etcd up in SSL-only mode.
I am using cristifalcos/puppet-etcd - if this module could have an option for using an existing etcd, that would be most useful.

network plugin is not ready: cni config uninitialized

What you expected to happen?

Kubernetes to get installed and configured, with weave, with one master, and two nodes.

What happened?

It seems as if the puppet recipes all ran and the various pieces got installed, but weave doesn't appear to be working properly.

root@kubeadmin:/var/log/containers# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-kubeadmin                          1/1       Running            1          5h
kube-system   kube-apiserver-kubeadmin                1/1       Running            1          5h
kube-system   kube-controller-manager-kubeadmin       1/1       Running            1          5h
kube-system   kube-scheduler-kubeadmin                1/1       Running            1          5h
kube-system   kubernetes-dashboard-5bd6f767c7-wg8nt   0/1       Pending            0          5h
kube-system   weave-net-qbmgz                         1/2       CrashLoopBackOff   5          8m
kube-system   weave-net-rc6jw                         1/2       CrashLoopBackOff   5          7m
kube-system   weave-net-vtnl4                         1/2       CrashLoopBackOff   68         5h
root@kubeadmin:/var/log/containers# 

How to reproduce it?

Workers have this configured in puppet:

  class { 'kubernetes':
    worker                              => true,
    docker_package_name    => 'docker-ce',
    docker_package_version => '18.03.0~ce-0~debian'
  }

Master has this:

  class { 'kubernetes':
    controller                           => true,
    bootstrap_controller          => true,
    docker_package_name    => 'docker-ce',
    docker_package_version => '18.03.0~ce-0~debian'
  }
}

Anything else we need to know?

Versions:

$ puppet --version
4.8.2

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

$ docker version OR crictl version
Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built: Wed Mar 21 23:10:06 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:08:35 2018
  OS/Arch:      linux/amd64
  Experimental: false

$ facter os
{"name"=>"Debian", "family"=>"Debian", "release"=>{"major"=>"9", "minor"=>"4", "full"=>"9.4"}, "lsb"=>{"distcodename"=>"stretch", "distid"=>"Debian", "distdescription"=>"Debian GNU/Linux 9.4 (stretch)", "distrelease"=>"9.4", "majdistrelease"=>"9", "minordistrelease"=>"4"}}

$ puppet module list
├── maestrodev-wget (v1.7.2)
├── puppet-archive (v2.0.0)
├── puppetlabs-apt (v4.2.0)
├── puppetlabs-kubernetes (v1.1.0)
├── puppetlabs-stdlib (v4.25.1)
├── puppetlabs-translate (v1.0.0)
├── shorewall (???)
└── sshd (???)

Logs:

The following errors appear in the journal on each node (including the master):

Apr 17 19:49:40 kube01 kubelet[3690]: E0417 19:49:40.880814    3690 pod_workers.go:186] Error syncing pod 067d4d45-4278-11e8-a2cf-aa0000201861 ("weave-net-qbmgz_kube-system(067d4d45-4278-11e8-a2cf-aa0000201861)"), skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 40s restarting failed container=weave pod=weave-net-qbmgz_kube-system(067d4d45-4278-11e8-a2cf-aa0000201861)"
Apr 17 19:49:42 kube01 kubelet[3690]: W0417 19:49:42.847748    3690 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 17 19:49:42 kube01 kubelet[3690]: E0417 19:49:42.849220    3690 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 17 19:49:45 kube01 kubelet[3690]: I0417 19:49:45.396441    3690 kuberuntime_manager.go:514] Container {Name:weave Image:weaveworks/weave-kube:2.3.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:<nil>} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:<nil>} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:<nil>} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:<nil>} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:weave-net-token-m98zc ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Apr 17 19:49:45 kube01 kubelet[3690]: I0417 19:49:45.397485    3690 kuberuntime_manager.go:758] checking backoff for container "weave" in pod "weave-net-qbmgz_kube-system(067d4d45-4278-11e8-a2cf-aa0000201861)"
Apr 17 19:49:45 kube01 kubelet[3690]: I0417 19:49:45.398180    3690 kuberuntime_manager.go:768] Back-off 40s restarting failed container=weave pod=weave-net-qbmgz_kube-system(067d4d45-4278-11e8-a2cf-aa0000201861)

There is definitely nothing in /etc/cni/net.d.

I'm wondering if weave 2.3.0 was installed? Is that compatible?

Warning: Could not find resource 'File[/etc/kubernetes]' in parameter 'require'
   (at /root/puppet/code/modules/kubernetes/manifests/config.pp:90)

Make yum repo creation optional?

The repos used by this Kubernetes module are hardcoded in the file repos.pp. Unfortunately, I am installing Kubernetes in an environment where we can't easily get out to the internet. For other applications, we've simply mirrored the repos internally, and provided our own yumrepo commands.

Would it be possible to make the yumrepo creations optional, or overrideable? The way the code is now, installing this module completely messes up all yum interactions, because the two yum repos entries (one for docker, one for kubernetes) provide baseurl entries that are unreachable by the host.

Missing puppet/kubetool:2.0.0 docker image

According to the project documentation the version of kubetool you use must match the version of the module on the Puppet Forge. Current forge version is 2.0.2 👍


What you expected to happen?

Docker image pulled from the repository

What happened?

Image cannot be found:

Unable to find image 'puppet/kubetool:2.0.2' locally
Pulling repository docker.io/puppet/kubetool
docker: Tag 2.0.2 not found in repository docker.io/puppet/kubetool.

How to reproduce it?

Run

docker run --rm -v $(pwd):/mnt --env-file .env puppet/kubetool:2.0.2

$ docker --version
Docker version 1.11.2, build b9f10c9

How to deploy k8s with this module

I'm trying to use this module with Foreman and Hiera, but I'm not sure how to use it.
The documentation doesn't really specify how I can choose which node is the master and which are the workers, or how to add a new node.

etcd service start timeout

What you expected to happen?

if the service hangs, don't wait for the etcd service during the puppet run.

What happened?

if my etcd service can't start, the puppet run waits forever because of TimeoutStartSec=0
https://github.com/puppetlabs/puppetlabs-kubernetes/blob/master/templates/etcd/etcd.service.erb#L12

official service file:
https://github.com/coreos/etcd/blob/master/contrib/systemd/etcd.service

How to reproduce it?

configure some wrong hostnames for kubernetes::etcd_initial_cluster, which make etcd fail.

Versions:

$ puppet --version
4.10.12
$ kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ docker version

Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:09:33 2018
 OS/Arch:           linux/amd64
 Experimental:      false

$ facter os

{
  architecture => "amd64",
  distro => {
    codename => "stretch",
    description => "Debian GNU/Linux 9.5 (stretch)",
    id => "Debian",
    release => {
      full => "9.5",
      major => "9",
      minor => "5"
    }
  },
  family => "Debian",
  hardware => "x86_64",
  name => "Debian",
  release => {
    full => "9.5",
    major => "9",
    minor => "5"
  },
  selinux => {
    enabled => false
  }
}

Systemd version: 237-3~bpo9+1

kubernetes controller never starts apiserver

Using puppetlabs-kubernetes 1.0.1 with an expression like:

class { 'ntp':
    package_ensure     => 'latest',
    servers            => [ '0.ubuntu.pool.ntp.org', '1.ubuntu.pool.ntp.org' ],
    iburst_enable      => true,
    restrict           => [ '127.0.0.1' ],
}

class { 'kubernetes':
    controller           => true,
    bootstrap_controller => true,
    container_runtime    => 'docker',
}

on an Ubuntu 16.04.3 system (on AWS EC2, image id ami-41e0b93b), the Kubernetes api server never comes up. The kubelet logs complain that CNI has not been configured:

Jan 31 16:33:19 ip-172-31-65-171 kubelet[9873]: W0131 16:33:19.982907    9873 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 31 16:33:19 ip-172-31-65-171 kubelet[9873]: E0131 16:33:19.983012    9873 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Though I don't know if this is the cause of the problem or a symptom of something else.

The kubetool invocation looks like:

MASTER=ip-172-31-65-171.ec2.internal
MASTER_IP=172.31.65.171

docker run -v $(pwd):/mnt \
    -e OS=ubuntu \
    -e VERSION=1.7.12 \
    -e FQDN=${MASTER} \
    -e IP=${MASTER_IP} \
    -e BOOTSTRAP_CONTROLLER_IP=${MASTER_IP} \
    -e ETCD_INITIAL_CLUSTER="etcd-kube-master=http://${MASTER_IP}:2380" \
    -e ETCD_IP="%{::ipaddress_eth0}" \
    -e KUBE_API_ADVERTISE_ADDRESS="%{::ipaddress_eth0}" \
    -e INSTALL_DASHBOARD=true \
    puppet/kubetool

I haven't tried to supply a CNI configuration myself. From the README, it's not clear that I need to do this.

overlay2 support

Though the module allows one to manage the Docker installation independently, is there a plan to support the overlay2 storage driver? Out of the box, using the latest module version on CentOS 7, the overlay driver is set. Searching the official Docker documentation, it looks like overlay2 is the recommended driver.
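
A minimal sketch, assuming Docker is managed separately with the puppetlabs-docker module, which exposes a storage_driver parameter:

# Sketch: manage Docker outside this module and pick the overlay2 driver.
class { 'docker':
  storage_driver => 'overlay2',
}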

Cannot run beaker tests

I'm trying to run the beaker tests. When I run bundle exec rake beaker, it returns nothing.
Steps to reproduce:

  • clone the repo
  • bundle install --path vendor
  • bundle exec rake beaker (or BEAKER_set=centos-72-x64 bundle exec rake beaker)

Etcd version 3.1.12 always gets installed, even when overridden

What you expected to happen?

I expected the module to install version 3.2.18 of Etcd instead of the default version (3.1.12).

What happened?

Version 3.1.12 is installed instead of the specified version.

How to reproduce it?

I tried overriding it via Hiera and by directly declaring the class.

Hiera:

kubernetes::etcd_version: 3.2.18

Puppet:

class { 'kubernetes':
  controller             => true,
  manage_docker          => true,
  etcd_version           => '3.2.18',
  docker_version         => '17.03.0~ce-0~debian-stretch',
  docker_apt_release     => 'stretch',
  docker_apt_repos       => 'stable',
  docker_apt_location    => 'https://download.docker.com/linux/debian',
  docker_package_name    => 'docker-ce',
  kubernetes_apt_release => 'kubernetes-xenial',
}

Anything else we need to know?

I'm trying to set up a Kubernetes cluster with one master node on a local Vagrant environment. I bumped into this problem when trying to set up Kubernetes 1.11.0; this version of Kubernetes needs Etcd >= 3.2.17.
Installing Etcd 3.2.18 has always worked in the past.

Versions:

$ puppet --version
4.10.12
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
$ docker version OR crictl version
Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:02:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:02:23 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ facter os
{
  architecture => "amd64",
  distro => {
    codename => "stretch",
    description => "Debian GNU/Linux 9.4 (stretch)",
    id => "Debian",
    release => {
      full => "9.4",
      major => "9",
      minor => "4"
    }
  },
  family => "Debian",
  hardware => "x86_64",
  name => "Debian",
  release => {
    full => "9.4",
    major => "9",
    minor => "4"
  },
  selinux => {
    enabled => false
  }
}
$ puppet module list
<empty>

Bootstrap controller - check if cni provider has been installed?

Versions:

$ puppet --version = 4.7.0

$ kubectl version = 1.9.7

$ docker version OR crictl version = 17.03.0-ce

$ facter os = RHEL 7.4

It looks like the main reason the bootstrap_controller flag has to be manually turned off after bootstrap is that there's no gate on the following exec:

exec { 'Install cni network provider':
  command => "kubectl apply -f ${cni_network_provider}",
  onlyif  => 'kubectl get nodes',
}

I was looking into possible ways to be able to tell if the network provider is applied, but I am only working with weave currently, and I'm sure that coverage for flannel and calico would be needed as well.

Is it sufficient to check if there's a file under /etc/cni/net.d (e.g. 10-weave.conf)? If so, I'm happy to put in a PR, but I'd need to know the file names for flannel / calico.

If not, is there a suggestion as to what to investigate, or is it unlikely that an attempted solution will get accepted (or is someone else already looking at it)?
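
For illustration, a hedged sketch of the gate being discussed, using exec's creates attribute keyed on the weave config file; the 10-weave.conf name is weave-specific, and flannel and calico would need their own file names. $cni_network_provider is assumed to come from the module as in the excerpt above.

exec { 'Install cni network provider':
  command => "kubectl apply -f ${cni_network_provider}",
  onlyif  => 'kubectl get nodes',
  creates => '/etc/cni/net.d/10-weave.conf',
  path    => ['/usr/bin', '/bin', '/usr/local/bin'],
}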

pre-flight checks failing when using module

What you expected to happen?

I expected the Kubernetes 2.0.2 module to install without errors.

What happened?

I'm getting some errors, one that's critical and prevents things from working:

# puppet apply --hiera_config=hiera.yaml --modulepath modules:site manifests/nodes.pp
Warning: ModuleLoader: module 'kubernetes' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules\n   (file & line not available)
Notice: Compiled catalog for foobar.llnl.gov in environment production in 0.81 seconds
Notice: /Stage[main]/Profiles::Container_orchestration::Kubernetes_master/Exec[swapoff]/returns: executed successfully
Notice: /Stage[main]/Kubernetes::Repos/Yumrepo[docker]/ensure: created
Notice: /Stage[main]/Kubernetes::Repos/Yumrepo[kubernetes]/ensure: created
Notice: /Stage[main]/Kubernetes::Packages/Exec[set up bridge-nf-call-iptables]/returns: executed successfully
Notice: /Stage[main]/Kubernetes::Packages/Package[docker-engine]/ensure: created
Notice: /Stage[main]/Kubernetes::Packages/File_line[set systemd cgroup docker]/ensure: created
Notice: /Stage[main]/Kubernetes::Packages/Archive[etcd-v3.1.12-linux-amd64.tar.gz]/ensure: download archive from https://github.com/coreos/etcd/releases/download/v3.1.12/etcd-v3.1.12-linux-amd64.tar.gz to /etcd-v3.1.12-linux-amd64.tar.gz and extracted in /usr/local/bin with cleanup
Notice: /Stage[main]/Kubernetes::Packages/Package[kubelet]/ensure: created
Notice: /Stage[main]/Kubernetes::Packages/Package[kubectl]/ensure: created
Notice: /Stage[main]/Kubernetes::Packages/Package[kubeadm]/ensure: created
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki]/ensure: created
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd]/ensure: created
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/ca.crt]/ensure: defined content as '{md5}3b5f2b95da2e673f0cd8320e8d2d6bc0'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/ca.key]/ensure: defined content as '{md5}fbf9a5cefc0a618db703858d2b1fe23e'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/client.crt]/ensure: defined content as '{md5}decbe90d92876366143a24d97521dcf7'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/client.key]/ensure: defined content as '{md5}874345a1c97f1c6dd5a52571773a2a71'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/peer.crt]/ensure: defined content as '{md5}42761016e75a81fb37283f88adebfb31'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/peer.key]/ensure: defined content as '{md5}44cda0692a965e16b7e1b548e261d70a'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/server.crt]/ensure: defined content as '{md5}5c86245114732e0bacf4188e8f598ac7'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/etcd/server.key]/ensure: defined content as '{md5}2c41ad0d91b4dba6b423a72b3211859c'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/ca.crt]/ensure: defined content as '{md5}335356d0b00534d634f1eae328e4c72c'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/ca.key]/ensure: defined content as '{md5}ef7cdce1cdc4a223fff42b92de052f99'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/sa.pub]/ensure: defined content as '{md5}059fc1ddf74b92de8fa6a674a41542ed'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/pki/sa.key]/ensure: defined content as '{md5}43d9712a1109da1f65b6eb71c419398a'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/systemd/system/etcd.service]/ensure: defined content as '{md5}d94bdb20fb9238e7f76b8e3ecdc6bfa9'
Notice: /Stage[main]/Kubernetes::Config/File[/etc/kubernetes/config.yaml]/ensure: defined content as '{md5}29d9105d8fbaf5731a6929b7a2f3a3b4'
Notice: /Stage[main]/Kubernetes::Service/Service[docker]/ensure: ensure changed 'stopped' to 'running'
Notice: /Stage[main]/Kubernetes::Service/Service[etcd]/ensure: ensure changed 'stopped' to 'running'
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: [init] Using Kubernetes version: v1.10.4
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: [init] Using Authorization modes: [Node RBAC]
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: [preflight] Running pre-flight checks.
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns:     [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns:     [WARNING FileExisting-crictl]: crictl not found in system path
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: [preflight] Some fatal errors occurred:
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns:     [ERROR ExternalEtcdVersion]: couldn't parse external etcd version "": Version string empty
Notice: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Error: 'kubeadm init --config '/etc/kubernetes/config.yaml'' returned 2 instead of one of [0]
Error: /Stage[main]/Kubernetes::Cluster_roles/Kubernetes::Kubeadm_init[foobar]/Exec[kubeadm init]/returns: change from 'notrun' to ['0'] failed: 'kubeadm init --config '/etc/kubernetes/config.yaml'' returned 2 instead of one of [0]
Notice: /Stage[main]/Kubernetes::Kube_addons/Exec[Install cni network provider]: Dependency Exec[kubeadm init] has failures: true
Warning: /Stage[main]/Kubernetes::Kube_addons/Exec[Install cni network provider]: Skipping because of failed dependencies
Notice: /Stage[main]/Kubernetes::Kube_addons/Exec[Install Kubernetes dashboard]: Dependency Exec[kubeadm init] has failures: true
Warning: /Stage[main]/Kubernetes::Kube_addons/Exec[Install Kubernetes dashboard]: Skipping because of failed dependencies
Notice: Applied catalog in 88.03 seconds

How to reproduce it?

See the command I ran above.

Versions:

$ puppet --version
5.3.5

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

$ docker version OR crictl version
Docker version 17.03.1-ce, build c6d412e

$ facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "RedHat",
  release => {
    full => "7.5",
    major => "7",
    minor => "5"
  },
  selinux => {
    enabled => false
  }
}

$ puppet module list
/etc/puppetlabs/code/environments/production/modules (no modules installed)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)

Other Logs

The file /etc/kubernetes/config.yaml is probably relevant. I replaced my hostname with foobar, and my IP address with 4.3.2.1:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 4.3.2.1
etcd:
  endpoints:
  - https://4.3.2.1:2379
  caFile: /etc/kubernetes/pki/etcd/ca.crt
  certFile: /etc/kubernetes/pki/etcd/client.crt
  keyFile: /etc/kubernetes/pki/etcd/client.key
CertificatesDir: /etc/kubernetes/pki
nodeName: foobar
networking:
  podSubnet: 10.32.0.0/12
  serviceSubnet: 10.96.0.0/12
apiServerExtraArgs:
  endpoint-reconciler-type: lease

token: 74cdce.4a06bc09e319b53b
apiServerCertSANs:

Kubernetes Puppet module and external etcd service

What you expected to happen?

I would like to use this module to install Kubernetes with support for an external etcd cluster. For now there is no way to do it.

What happened?

Puppet cannot run without a full etcd configuration, but I do not want to run etcd in Docker containers.

How to reproduce it?

N/A

Anything else we need to know?

Versions:

$ puppet --version
4.10.6

$ kubectl version
N/A

$ docker version OR crictl version


$ facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "CentOS",
  release => {
    full => "7.3.1611",
    major => "7",
    minor => "3"
  },
  selinux => {
    enabled => false
  }
}
$ puppet module list

Logs:

==> srv01: Error: Evaluation Error: Error while evaluating a Function Call, Class[Kubernetes::Config]:
==> srv01: parameter 'etcdserver_crt' expects a String value, got Undef
==> srv01: parameter 'etcdserver_key' expects a String value, got Undef
==> srv01: parameter 'etcdpeer_crt' expects a String value, got Undef
==> srv01: parameter 'etcdpeer_key' expects a String value, got Undef
==> srv01: parameter 'etcd_peers' expects an Array value, got Undef
==> srv01: parameter 'etcd_ip' expects a String value, got Undef
==> srv01: parameter 'cni_pod_cidr' expects a String value, got Undef

PS:

  • I think etcd configuration should be done in a separate module
  • Please also remove the Docker installation from this module and add it as a dependency
  • Please remove or relax this dependency: puppetlabs-apt (>= 4.1.0 < 4.3.0) - because of it I cannot use the Docker Puppet module

Thanks for the good work.

FQDN / shortname mismatch

I am running 1.0.2 of the module. My cluster looks like this:

kubectl get nodes
NAME                                  STATUS    ROLES     AGE       VERSION
foo001.mydomain.com   Ready     <none>    4h        v1.9.2
foo002.mydomain.com   Ready     <none>    4h        v1.9.2
foo003.mydomain.com   Ready     <none>    4h        v1.9.2

These two blocks of code in kube_addons.pp are causing my Puppet runs to fail because they expect my nodes to be registered by their short name, rather than their FQDN.

# kube_addons.pp (around lines 77-81):
if $controller {
  exec { 'Assign master role to controller':
    command => "kubectl label node ${::hostname} node-role.kubernetes.io/master=",
    unless  => "kubectl describe nodes ${::hostname} | tr -s ' ' | grep 'Roles: master'",
  }

# kube_addons.pp (around lines 94-98):
  exec { 'Taint master node':
    command => "kubectl taint nodes ${::hostname} key=value:NoSchedule",
    onlyif  => 'kubectl get nodes',
    unless  => "kubectl describe nodes ${::hostname} | tr -s ' ' | grep 'Taints: key=value:NoSchedule'"
  }
Error: kubectl label node foo001 node-role.kubernetes.io/master= returned 1 instead of one of [0]
Error: /Stage[main]/Kubernetes::Kube_addons/Exec[Assign master role to controller]/returns: change from notrun to 0 failed: kubectl label node foo001 node-role.kubernetes.io/master= returned 1 instead of one of [0]
Notice: /Stage[main]/Kubernetes::Kube_addons/Exec[Taint master node]/returns: Error from server (NotFound): nodes "foo001" not found
Error: kubectl taint nodes foo001 key=value:NoSchedule returned 1 instead of one of [0]
Error: /Stage[main]/Kubernetes::Kube_addons/Exec[Taint master node]/returns: change from notrun to 0 failed: kubectl taint nodes foo001 key=value:NoSchedule returned 1 instead of one of [0]
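
For illustration, a hedged sketch of the kind of change being asked for: use a node-name value that matches however kubelet registered the node, rather than hard-coding the short $::hostname fact. This mirrors the module's own exec pattern quoted above; $node_name is an illustrative variable, not an existing module parameter.

$node_name = $::fqdn  # or $::hostname on clusters registered by short name

exec { 'Assign master role to controller':
  command => "kubectl label node ${node_name} node-role.kubernetes.io/master=",
  unless  => "kubectl describe nodes ${node_name} | tr -s ' ' | grep 'Roles: master'",
  path    => ['/usr/bin', '/bin', '/usr/local/bin'],
}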

Basic Kubernetes installation failing on yum steps

What you expected to happen?

Kubernetes to install on my node

What happened?

I get this error:

Error: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install docker-engine-1.12.6' returned 1: Error: Nothing to do
Loaded plugins: langpacks, product-id, subscription-manager, versionlock
Error: /Stage[main]/Kubernetes::Packages/Package[docker-engine]/ensure: change from 'purged' to '1.12.6' failed: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install docker-engine-1.12.6' returned 1: Error: Nothing to do
Loaded plugins: langpacks, product-id, subscription-manager, versionlock
Error: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install kubernetes-cni-0.5.1' returned 1: One of the configured repositories failed (docker),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:
. . . 

How to reproduce it?

Puppetfile contents:

# These are dependencies for Puppet
mod 'puppetlabs-stdlib', '4.24.0'
mod 'puppetlabs-apt', '4.5.1'
mod 'puppetlabs-translate', '1.0.0'

# Some nodes run docker but not kubernetes
mod 'puppetlabs-docker', '1.1.0'

# These are dependencies for kubernetes
mod 'puppet-archive', '2.0.0'
mod 'maestrodev-wget', '1.7.0'

mod 'puppetlabs-kubernetes', '1.0.3'

Here's the simple code I used:

  class { 'kubernetes':
    controller => true,
    bootstrap_controller => true,
  }

Versions:

$ puppet --version
5.3.5

$ kubectl version
(not installed yet)

$ docker version OR crictl version
(not installed yet)

$ facter os
{
  architecture => "x86_64",
  distro => {
    codename => "Maipo",
    description => "Red Hat Enterprise Linux Workstation release 7.4 (Maipo)",
    id => "RedHatEnterpriseWorkstation",
    release => {
      full => "7.4",
      major => "7",
      minor => "4"
    },
    specification => ":core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch"
  },
  family => "RedHat",
  hardware => "x86_64",
  name => "RedHat",
  release => {
    full => "7.4",
    major => "7",
    minor => "4"
  },
  selinux => {
    enabled => false
  }
}

$ puppet module list
/etc/puppetlabs/code/environments/production/modules (no modules installed)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)

Debian.yaml or RedHat.yaml aren't generated

What you expected to happen?

Running the example kubetool command to generate the yaml files with either CentOS or Debian params:
docker run --rm -v $(pwd):/mnt -e OS=debian ....

I'd expect to find a CentOS.yaml or a Debian.yaml.

What happened?

It does generate kubernetes.yaml, but not the platform-specific yaml files.

docker run --rm -v $(pwd):/mnt -e OS=RedHat -e VERSION=1.10 -e CONTAINER_RUNTIME=docker -e CNI_PROVIDER=flannel -e ETCD_INITIAL_CLUSTER=kubemaster.mgmtplay.inuits.eu:10.0.228.151,kubenode01.mgmtplay.inuits.eu:10.0.228.157,kubenode02.mgmtplay.inuits.eu:10.0.228.158,kubenode03.mgmtplay.inuits.eu:10.0.228.159 -e ETCD_IP="%{::ipaddress_eth0}" -e KUBE_API_ADVERTISE_ADDRESS="%{::ipaddress_eth0}" -e INSTALL_DASHBOARD=true puppet/kubetool

/usr/bin/cfssl
Creating root ca
2018/06/21 09:26:07 [INFO] generating a new CA key and certificate from CSR
2018/06/21 09:26:07 [INFO] generate received request
2018/06/21 09:26:07 [INFO] received CSR
2018/06/21 09:26:07 [INFO] generating key: rsa-2048
2018/06/21 09:26:07 [INFO] encoded CSR
2018/06/21 09:26:07 [INFO] signed certificate with serial number 435220836992971546436699709563516313141666939307
Creating api server certs
2018/06/21 09:26:07 [INFO] generate received request
2018/06/21 09:26:07 [INFO] received CSR
2018/06/21 09:26:07 [INFO] generating key: rsa-2048
2018/06/21 09:26:07 [INFO] encoded CSR
2018/06/21 09:26:07 [INFO] signed certificate with serial number 188998156215883815178391128054336481798532299115
/usr/bin/cfssl
Creating service account certs
Creating admin cert
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:08 [INFO] encoded CSR
2018/06/21 09:26:08 [INFO] signed certificate with serial number 6275064873546433540740821375813416734731166397
2018/06/21 09:26:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating api server kubelet client certs
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:08 [INFO] encoded CSR
2018/06/21 09:26:08 [INFO] signed certificate with serial number 504760718114927167433514522748056436792494663195
2018/06/21 09:26:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating front proxy ca certs
2018/06/21 09:26:08 [INFO] generating a new CA key and certificate from CSR
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:08 [INFO] encoded CSR
2018/06/21 09:26:08 [INFO] signed certificate with serial number 449732461948672722537521933954715510627513505976
Creating front proxy client certs
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:08 [INFO] encoded CSR
2018/06/21 09:26:08 [INFO] signed certificate with serial number 295645399263465752228513794931463449710046392292
2018/06/21 09:26:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating system node certs
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:08 [INFO] encoded CSR
2018/06/21 09:26:08 [INFO] signed certificate with serial number 491097179514277803195326443617362730927050766351
2018/06/21 09:26:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating kube controller manager certs
2018/06/21 09:26:08 [INFO] generate received request
2018/06/21 09:26:08 [INFO] received CSR
2018/06/21 09:26:08 [INFO] generating key: rsa-2048
2018/06/21 09:26:09 [INFO] encoded CSR
2018/06/21 09:26:09 [INFO] signed certificate with serial number 417707854871368318066373692790560303017704936656
2018/06/21 09:26:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating kube scheduler certs
2018/06/21 09:26:09 [INFO] generate received request
2018/06/21 09:26:09 [INFO] received CSR
2018/06/21 09:26:09 [INFO] generating key: rsa-2048
2018/06/21 09:26:09 [INFO] encoded CSR
2018/06/21 09:26:09 [INFO] signed certificate with serial number 330529218418326597606887274879362200665829089625
2018/06/21 09:26:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating kube worker certs
2018/06/21 09:26:09 [INFO] generate received request
2018/06/21 09:26:09 [INFO] received CSR
2018/06/21 09:26:09 [INFO] generating key: rsa-2048
2018/06/21 09:26:09 [INFO] encoded CSR
2018/06/21 09:26:09 [INFO] signed certificate with serial number 435270718859948344635543708231556017484227168204
2018/06/21 09:26:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Creating bootstrap token
Cleaning up files
Cleaning up yaml
[root@mine27 kubernetes]# ls -alt
total 152
drwxrwxr-x. 10 sdog sdog 4096 Jun 21 11:26 .
-rw-r--r--. 1 root root 39975 Jun 21 11:26 kubernetes.yaml

Improve Readme documentation with more assumptions and detailed instructions

While visiting a customer site that is trying out this new module, they commented that the current documentation has some room for improvement to make it easier to get up and running with this module. Specifically, spelling out all assumptions, from which OS(es) are being used to which directory commands are expected to be run in, would be a very helpful addition. I've provided a couple of specific examples below:

  1. The section "Setup Requirements" references required software that needs to be installed. I would recommend breaking this up into separate sections labelled "Software Pre-requisites" and "Setup Requirements". The "Setup Requirements" section would then list out the OS version(s) that these steps have been tested on. And the "Software Pre-requisites" would cover off any software that needs to be installed prior to starting the core kubernetes configuration work. For example, there were some questions around particular versions of Go and Ruby that need to be installed on RHEL 7.2. If there are specific version(s) of this software required it would be helpful to call this out in a special note, etc. and not just refer to the external documentation.

  2. In general, provide more detail on steps that are listed. For example, there is reference to running a "bundle install" command under setup but as the instructions are embedded inline in the sentence, it's not immediately clear that this is a specific script to run. I would recommend breaking out the commands to be run into a separate indented section like the following:

a. Install cfssl by following the documentation here.
b. 'cd ' (would be helpful to include an actual example of a default module path)
c. run './bundle install' (?)
d. 'cd '/tools' (again, a specific example would be helpful here)
e. run './kube_tool' and confirm the following output:

The above is just an example but the comment applies generally to the entire README file.

Kubetool 3.0.0 crashes, unable to read hostname-server.pem

What you expected to happen?

When running docker run --rm -v $(pwd):/mnt --env-file .env puppet/kubetool:3.0.0 I expect it to finish without error.

What happened?

The certificate creation step crashed; it was unable to read the server certificates from file.

docker run --rm -v $(pwd):/mnt --env-file .env puppet/kubetool:3.0.0
/usr/bin/cfssl
Creating etcd ca
2018/10/04 19:17:42 [INFO] generating a new CA key and certificate from CSR
2018/10/04 19:17:42 [INFO] generate received request
2018/10/04 19:17:42 [INFO] received CSR
2018/10/04 19:17:42 [INFO] generating key: rsa-2048
2018/10/04 19:17:42 [INFO] encoded CSR
2018/10/04 19:17:42 [INFO] signed certificate with serial number 263467903853981777106031379089335872914386696860
Creating etcd client certs
2018/10/04 19:17:42 [INFO] generate received request
2018/10/04 19:17:42 [INFO] received CSR
2018/10/04 19:17:42 [INFO] generating key: rsa-2048
2018/10/04 19:17:42 [INFO] encoded CSR
2018/10/04 19:17:42 [INFO] signed certificate with serial number 203213226858113760732288002600610907065875654557
Creating etcd peer and server certificates
not enough arguments are supplied --- please refer to the usage
not enough arguments are supplied --- please refer to the usage
/etc/k8s/kube_tool/create_certs.rb:62:in `read': No such file or directory @ rb_sysopen - "kube01.local-server.pem (Errno::ENOENT)
        from /etc/k8s/kube_tool/create_certs.rb:62:in `block in etcd_server'
        from /etc/k8s/kube_tool/create_certs.rb:47:in `each'
        from /etc/k8s/kube_tool/create_certs.rb:47:in `etcd_server'
        from /etc/k8s/kube_tool.rb:67:in `build_hiera'
        from /etc/k8s/kube_tool.rb:77:in `<main>'

How to reproduce it?

Module version: 3.0.1

.env contents:

OS=Debian
VERSION="1.12"
CONTAINER_RUNTIME=docker
CNI_PROVIDER=weave
ETCD_INITIAL_CLUSTER="kube01.local:192.168.122.2,kube02.local:192.168.122.10,kube03.local:192.168.122.11"
ETCD_IP="%{::ipaddress_eth1}"
KUBE_API_ADVERTISE_ADDRESS="%{::ipaddress_eth1}"
INSTALL_DASHBOARD=true

Note: I tried with 'VERSION' 1.11 as well. There also isn't a published docker image for puppet/kubetool:3.0.1 so I used the closest possible version.

Anything else we need to know?

I tried older puppet/kubetool images, and 1.0.2 "worked" in that it didn't completely crash. I imagine it won't produce output that is compatible with the puppetlabs-kubernetes module v3.0.1.

Versions:

$ docker version OR crictl version

docker-ce: 18.06.1~ce~3-0~debian

$ facter os

{
  architecture => "amd64",
  distro => {
    codename => "sid",
    description => "Debian GNU/Linux unstable (sid)",
    id => "Debian",
    release => {
      full => "unstable",
      major => "unstable"
    }
  },
  family => "Debian",
  hardware => "x86_64",
  name => "Debian",
  release => {
    full => "buster/sid",
    major => "buster/sid"
  },
  selinux => {
    enabled => false
  }
}

Allow using the Puppet docker module to manage Docker configuration, for private registry?

We are already heavy Puppet users and are getting started with this module to run our own Kubernetes cluster on EC2.

We have a private Docker registry and have previously used the Puppet Docker module to configure Docker on nodes with the docker::registry config.

I got the following error when I tried to use our standard Docker configuration:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Apt::Source[docker] is already declared in file /etc/puppetlabs/code/environments/INFRA_866_kubernetes/modules/docker/manifests/repos.pp:19; cannot redeclare at /etc/puppetlabs/code/environments/INFRA_866_kubernetes/modules/kubernetes/manifests/repos.pp:23 at /etc/puppetlabs/code/environments/INFRA_866_kubernetes/modules/kubernetes/manifests/repos.pp:23:11 on node k8-bootstrap-controller

It appears that the Kubernetes module adds the docker apt source here:

apt::source { 'docker':
leading to the conflict.

I'm wondering what my best options are for managing the Docker config through Puppet: private registries, etc. I can probably define and put in place a Docker config.json. I could also use some guidance on the docker and kubernetes service user configs as well. Would I put that in a kubernetes user's home, ~/.docker/config.json? What Linux users does the kubernetes module create/expect, and run the processes under?
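
For illustration, a hedged sketch of one way around the duplicate Apt::Source[docker] declaration: let puppetlabs-docker own the Docker repo and registry login, and tell this module not to manage Docker (the manage_docker parameter appears elsewhere in these reports). The registry host and credentials below are placeholders.

class { 'docker': }

docker::registry { 'registry.example.internal:5000':
  username => 'deploy',
  password => 'changeme',
}

class { 'kubernetes':
  controller        => true,
  container_runtime => 'docker',
  manage_docker     => false,  # avoid the module's own apt::source { 'docker': }
}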


Calico nodes can't connect to etcd

Hey there, I'm trying to deploy a Kubernetes cluster with Puppet, using six CentOS 7 VMs (3 controllers and 3 workers). I'm using puppetlabs-kubernetes v1.0.3.

I've dropped weave and tried using calico instead.

I use the following Hiera data in Puppet to do so:

kubernetes::kubernetes_version: 1.9.3
kubernetes::kubernetes_package_version: 1.9.3
kubernetes::container_runtime: docker
kubernetes::cni_version: 0.6.0
kubernetes::cni_network_provider: "https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml"
kubernetes::cni_cluster_cidr: "- --cluster-cidr=172.17.20.0/16"
kubernetes::cni_node_cidr: "- --allocate-node-cidrs=true"
kubernetes::kube_dns_version: 1.14.8
kubernetes::etcd_version: 3.1.11
kubernetes::kubernetes_fqdn: {MY_K8S_CLUSTER_FQDN}
kubernetes::bootstrap_controller_ip: {CONTROLLER1_IP}
kubernetes::etcd_initial_cluster: etcd-ctl01=http://{CONTROLLER1_IP}:2380,etcd-ctl02=http://{CONTROLLER2_IP}:2380,etcd-ctl03=http://{CONTROLLER3_IP}:2380
kubernetes::etcd_ip: "%{::ipaddress_eth0}"
kubernetes::kube_api_advertise_address: "%{::ipaddress_eth0}"
kubernetes::install_dashboard: true

Deployment goes well, until calico tries to boot.

admin@ctl01:$ kubectl -n kube-system get pods  
NAME                                      READY     STATUS              RESTARTS   AGE                    
calico-kube-controllers-d554689d5-6jsnq   1/1       Running             0          30m                    
calico-node-cmsks                         1/2       CrashLoopBackOff    10         30m                    
calico-node-qbl6h                         1/2       CrashLoopBackOff    10         30m                    
calico-node-t99gj                         1/2       CrashLoopBackOff    10         30m                    
etcd-ctl01                            1/1       Running             0          23m                    
etcd-ctl02                            1/1       Running             0          23m                    
etcd-ctl03                            1/1       Running             0          24m                    
kube-apiserver-ctl01                  1/1       Running             0          23m                    
kube-apiserver-ctl02                  1/1       Running             0          23m                    
kube-apiserver-ctl03                  1/1       Running             0          24m                    
kube-controller-manager-ctl01         1/1       Running             1          23m                    
kube-controller-manager-ctl02         1/1       Running             0          23m                    
kube-controller-manager-ctl03         1/1       Running             0          24m                    
kube-dns-6b5c779967-zcvtp                 0/3       ContainerCreating   0          30m                    
kube-proxy-55wbx                          1/1       Running             0          30m                    
kube-proxy-csnjq                          1/1       Running             0          30m                    
kube-proxy-n5fkl                          1/1       Running             0          30m                    
kube-scheduler-ctl01                  1/1       Running             0          23m                    
kube-scheduler-ctl02                  1/1       Running             0          23m                    
kube-scheduler-ctl03                  1/1       Running             0          23m                    
kubernetes-dashboard-5bd6f767c7-9lw4d     0/1       ContainerCreating   0          30m 

When I look at the calico node logs, I get:

admin@ctl01:$ kubectl -n kube-system logs calico-node-t99gj -c calico-node
2018-02-14 14:41:24.140 [INFO][9] startup.go 173: Early log level set to info
2018-02-14 14:41:24.142 [INFO][9] startup.go 184: NODENAME environment not specified - check HOSTNAME
2018-02-14 14:41:24.142 [INFO][9] client.go 202: Loading config from environment
2018-02-14 14:41:24.143 [INFO][9] startup.go 83: Skipping datastore connection test
2018-02-14 14:41:24.149 [INFO][9] etcd.go 373: Unhandled error: client: etcd cluster is unavailable or misconfigured; error #0: EOF

2018-02-14 14:41:24.149 [INFO][9] startup.go 254: Unable to query node configuration Name="nod02" error=client: etcd cluster is unavailable or misconfigured; error #0: EOF

2018-02-14 14:41:24.149 [WARNING][9] startup.go 255: Unable to access datastore to query node configuration
2018-02-14 14:41:24.149 [WARNING][9] startup.go 915: Terminating

I tried to look a little deeper into it, and I find that the calico configmap may be badly configured.

admin@ctl01$ kubectl -n kube-system describe configmap calico-config                                                                                                                                         
Name:         calico-config                          
Namespace:    kube-system                            
Labels:       <none>                                 
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"calico_backend":"bird","cni_network_config":"{\n    \"name\": \"k8s-pod-network\",\n    \"cniVersion\": \"0.1.0\",\n    \"t...

Data                                                 
====                                                 
calico_backend:                                      
----                                                 
bird                                                 
cni_network_config:                                  
----                                                 
{                                                    
    "name": "k8s-pod-network",                       
    "cniVersion": "0.1.0",                           
    "type": "calico",                                
    "etcd_endpoints": "__ETCD_ENDPOINTS__",          
    "log_level": "info",                             
    "mtu": 1500,                                     
    "ipam": {                                        
        "type": "calico-ipam"                        
    },                                               
    "policy": {                                      
        "type": "k8s",                               
         "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",               
         "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"                                                     
    },                                               
    "kubernetes": {                                  
        "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"                                            
    }                                                
}                                                    
etcd_endpoints:                                      
----                                                 
http://10.96.232.136:6666                            
Events:  <none>                          

It seems to me like all the values in capital letters with underscores should have been replaced at some point. I imagine it's not something I should do manually? All calico deployments seem to get some of their parameters from this configmap.

Moreover, in /etc/cni/net.d there is a calico config file that seems to possess all the values that configmap should hold:

admin@nod01:$ cat /etc/cni/net.d/10-calico.conf
{                                                    
    "name": "k8s-pod-network",                       
    "cniVersion": "0.1.0",                           
    "type": "calico",                                
    "etcd_endpoints": "http://10.96.232.136:6666",   
    "log_level": "info",                             
    "mtu": 1500,                                     
    "ipam": {                                        
        "type": "calico-ipam"                        
    },                                               
    "policy": {                                      
        "type": "k8s",                               
         "k8s_api_root": "https://10.96.0.1:443",    
         "k8s_auth_token": "eyJhbGciOiJSUzI1N[...]rQQSHRZb0HUKlqaf5P6gKqDnMQ9fAraYMs_Oaw"                             
    },                                               
    "kubernetes": {                                  
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"                                                  
    }                                                
}                           

Does anyone have any idea what I might be doing wrong?

Thanks!

the file `.env` is not included in the puppetlabs-kubernetes module on the Forge

What you expected to happen?

The documentation in the README.md file indicates that there is a .env file included with the package that you can use to build Hiera data for deploying Kubernetes via Puppet. This file is not actually included in the module published to the Forge (e.g. https://forge.puppet.com/v3/files/puppetlabs-kubernetes-2.0.2.tar.gz). It is in the GitHub repository.

How to reproduce it?

wget https://forge.puppet.com/v3/files/puppetlabs-kubernetes-2.0.2.tar.gz
tar -xzf puppetlabs-kubernetes-2.0.2.tar.gz
ls -la puppetlabs-kubernetes-2.0.2

Basic install of Kubernetes failing for me

What you expected to happen?

I expected the module to do a basic installation, given the following:

What happened?

I got this error:

# puppet apply --modulepath modules:site manifests/dev_nodes.pp
Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Kubernetes]:
  parameter 'kube_dns_ip' expects a String value, got Undef
  parameter 'kube_api_service_ip' expects a String value, got Undef 

How to reproduce it?

I didn't change anything from the basic example given by the documentation:

  class { 'kubernetes':
    controller => true,
    bootstrap_controller => true,
  }

Anything else we need to know?

This was a fresh system setup, with no previous installation via puppet.

Versions:

$ puppet --version
5.3.5

$ kubectl version
didn't get installed

$ docker version OR crictl version
didn't get installed

$ facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "RedHat",
  release => {
    full => "7.4",
    major => "7",
    minor => "4"
  },
  selinux => {
    enabled => false
  }
}

$ puppet module list

mod 'puppet-yum', '2.2.1'
mod 'puppetlabs-mysql', '5.3.0'
mod 'puppet-jira', '3.4.1'
mod 'thewired-bitbucket', '2.2.5'
mod 'puppet-confluence', '3.1.1'
mod 'puppetlabs-java', '2.4.0'
mod 'puppetlabs-docker', '1.1.0'
mod 'puppetlabs-kubernetes', '1.1.0'
mod 'puppetlabs-apt', '4.3.0'
mod 'maestrodev-wget', '1.7.0'
mod 'puppetlabs-apache', '3.1.0'
mod 'puppetlabs-concat', '4.2.1'
mod 'puppetlabs-translate', '1.1.0'
mod 'puppetlabs-stdlib', '4.25.0'
mod 'puppet-staging', '2.2.0'
mod 'puppet-archive', '2.1.0'
mod 'camptocamp-systemd', '1.1.1'
mod 'puppetlabs-inifile', '2.2.0'
mod 'mkrakowitzer-deploy', '0.0.3'
mod 'puppet-mysql_java_connector', '3.0.1'
