LinuxKit

LinuxKit is a toolkit for building custom minimal, immutable Linux distributions.

  • Secure defaults without compromising usability
  • Everything is replaceable and customisable
  • Immutable infrastructure applied to building Linux distributions
  • Completely stateless, but persistent storage can be attached
  • Easy tooling, with easy iteration
  • Built with containers, for running containers
  • Designed to create reproducible builds [WIP]
  • Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
  • Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
  • Designed to be managed by external tooling, such as Infrakit (renamed to deploykit, which was archived in 2019) or similar tools
  • Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security

LinuxKit currently supports the x86_64, arm64, and s390x architectures on a variety of platforms, both as virtual machines and on bare metal (see below for details).

Subprojects

  • LinuxKit kubernetes aims to build minimal and immutable Kubernetes images (previously projects/kubernetes in this repository).
  • LinuxKit LCOW LinuxKit images and utilities for Microsoft's Linux Containers on Windows.
  • linux A copy of the Linux stable tree with branches for the LinuxKit kernels.
  • virtsock A Go library and test utilities for virtio and Hyper-V sockets.
  • rtf A regression test framework used for the LinuxKit CI tests (and other projects).
  • homebrew Homebrew packages for the linuxkit tool.

Getting Started

Build the linuxkit tool

LinuxKit uses the linuxkit tool for building, pushing and running VM images.

Simple build instructions: use make to build. This will build the tool in bin/. Add this directory to your PATH, or copy the binary to somewhere in your PATH, e.g. sudo cp bin/* /usr/local/bin/. Alternatively, you can use sudo make install.

If you already have Go installed, you can use go install github.com/linuxkit/linuxkit/src/cmd/linuxkit@latest to install the linuxkit tool.
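
A minimal sketch of the two installation routes described above:

# from a checkout of this repository
make
sudo make install              # or: sudo cp bin/* /usr/local/bin/

# or, using an existing Go toolchain
go install github.com/linuxkit/linuxkit/src/cmd/linuxkit@latest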

On macOS there is a Homebrew tap available. Detailed instructions are at linuxkit/homebrew-linuxkit; the short summary is:

brew tap linuxkit/linuxkit
brew install --HEAD linuxkit

Build requirements for building from source using a container:

  • GNU make
  • Docker
  • optionally qemu

For a local build using make local:

  • go
  • make
  • go get -u golang.org/x/lint/golint
  • go get -u github.com/gordonklaus/ineffassign

Building images

Once you have built the tool, use

linuxkit build linuxkit.yml

to build the example configuration. You can also specify different output formats, e.g. linuxkit build --format raw-bios linuxkit.yml to output a raw BIOS bootable disk image, or linuxkit build --format iso-efi linuxkit.yml to output an EFI bootable ISO image. See linuxkit build --help for more information.

Booting and Testing

You can use linuxkit run <name> or linuxkit run <name>.<format> to execute the image you created with linuxkit build <name>.yml. This will pick a suitable backend for your platform, or you can choose one explicitly, for example VMware. See linuxkit run --help.
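
For example (a sketch; the qemu backend named here is an assumption and must be installed — see linuxkit run --help for the backends available on your platform):

linuxkit build linuxkit.yml      # produces artefacts named linuxkit-*
linuxkit run linuxkit            # boot using a default backend for this host
linuxkit run qemu linuxkit       # explicitly select the qemu backend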

Currently supported platforms include local hypervisors such as HyperKit (macOS), Hyper-V (Windows), qemu and VMware, as well as a number of cloud providers; see linuxkit run --help for the current list.

Running the Tests

The test suite uses rtf. To install it, use make bin/rtf && make install. You will also need expect installed on your system, as some tests use it.

To run the test suite:

cd test
rtf -v run -x

This will run the tests and put the results in the _results directory.

Run control is handled using labels and pattern matching. To run the tests with a given label, e.g. slow, use:

rtf -v -l slow run -x

To run tests that match the pattern linuxkit.examples, use the following command:

rtf -v run -x linuxkit.examples

Building your own customised image

To customise, copy linuxkit.yml to your own file.yml (or start from one of the examples) and modify it, then run linuxkit build file.yml to generate its specified output. You can run the result with linuxkit run file.
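
That workflow, as a minimal sketch:

cp linuxkit.yml file.yml
# edit file.yml to taste
linuxkit build file.yml
linuxkit run file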

The yaml file specifies a kernel, a base init system, and a set of containers that are built into the generated image and started at boot time. You can specify the type of artifact to build, e.g. linuxkit build --format vhd linuxkit.yml.

If you want to build your own packages, see this document.

Yaml Specification

The yaml format specifies the image to be built:

  • kernel specifies a kernel Docker image, containing a kernel and a filesystem tarball, e.g. containing modules. The example kernels are built from kernel/
  • init is the base init process Docker image, which is unpacked as the base system, containing init, containerd, runc and a few tools. Built from pkg/init/
  • onboot are the system containers, executed sequentially in order. They should terminate quickly when done.
  • services are the system services, which normally run for the whole time the system is up
  • files are additional files to add to the image

For a more detailed overview of the options, see the yaml documentation.
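
A minimal sketch of such a file (the image tags are placeholders, not pinned versions from this repository):

kernel:
  image: linuxkit/kernel:<tag>            # kernel plus a filesystem tarball of modules
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>                   # unpacked as the base system
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd                          # run sequentially; should exit quickly
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty                           # runs for the lifetime of the system
    image: linuxkit/getty:<tag>
files:
  - path: etc/issue                       # additional files added to the image
    contents: "welcome to LinuxKit"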

Architecture and security

There is an overview of the architecture covering how the system works.

There is an overview of the security considerations and direction covering the security design of the system.

Roadmap

This project was extensively reworked from the code we are shipping in Docker Editions, and the result is not yet production quality. The plan is to return to production quality during Q3 2017, and rebase the Docker Editions on this open source project during this quarter. We plan to start making stable releases on this timescale.

This is an open project without fixed judgements, open to the community to set the direction. The guiding principles are:

  • Security informs design
  • Infrastructure as code: immutable, manageable with code
  • Sensible, secure, and well-tested defaults
  • An open, pluggable platform for diverse use cases
  • Easy to use and participate in the project
  • Built with containers, for portability and reproducibility
  • Run with system containers, for isolation and extensibility
  • A base for robust products

Development reports

There are monthly development reports summarising the work carried out each month.

Adopters

We maintain an incomplete list of adopters. Please open a PR if you are using LinuxKit in production or in your project, or both.

FAQ

See FAQ.

Released under the Apache 2.0 license.

kubernetes's Issues

Linuxkit/Kubernetes & Raspberry Pi

I've tried to get this to build for the past 3 days, and I don't know what else to do but ask here.
I got this to build in a few minutes using KUBE_FORMATS=iso-bios and KUBE_RUNTIME=cri-containerd. After I tried to install this onto my Pis, which obviously did not work, I found docs/platform-rpi3, which then did not build on my Windows machine (amd64, I think; I still haven't found out how to check this on Windows). This is where my problems started.
I tried building on a Pi, but couldn't get linuxkit to build (from the tags/v0.7 branch, as well as the master branch).
I set up an Ubuntu machine using Hyper-V, installed Docker and qemu there, started a Docker container, linked in the Docker daemon (as Docker-in-Docker told me to do), and used qemu-aarch64-static (I also tried qemu-arm-static, which gave me the same environment a Raspberry Pi has, armv7 aka armhf). This gave me an environment which was arm, but I could not get past building the master .tar, because at that point things would go really wrong with some qemu fatal error about which I could find nothing.
And here I am now. I'd be very grateful if anyone has some advice, or even something specific I can look up / try.

Switch to cri as containerd plugin

In #70 we switched to the final standalone version of the cri daemon which has now been integrated into containerd as a plugin. At some point we will need a newer CRI (e.g. to work with a newer Kubernetes) so we should arrange to switch to the plugin version once containerd v1.1 is released containing it (currently v1.1 is at the rc stage).

The plugin version works with Kube v1.10 so once #70 is merged and containerd v1.1 is released (and integrated with LinuxKit) we can update without waiting for a newer Kube to force the issue.

There are two options for integration I think:

  • Enable the CRI plugin on the system containerd (can be done at runtime via the config file) and ditch the current cri-containerd container.
  • Run an appropriately configured second containerd in a container, superseding the existing cri-containerd container.

Host mounts are currently resolved by the system containerd (having passed through the cri daemon) and we share most of the interesting paths such that cri and kubelet have a reasonably complete (for their needs) shared world with the host. It's unclear what the plugin might require to be running in the system context (CNI plugins? findmnt?) which might make it more desirable to continue running in a container.

Kernel panic with "BUG: unable to handle kernel NULL pointer dereference at (null)"

Description

A kernel panic occurs when two nodes connect with vxlan.

Steps to reproduce the issue:

Just follow this section: booting-and-initialising-os-images.

$ make all

# boot master
$ ./boot.sh

# login to master
$ ./ssh_into_kubelet.sh 192.168.65.11 
# launch master
linuxkit-025000000009:/# kubeadm-init.sh

# join to master
$ ./boot.sh 1 192.168.65.11:6443 --token 43gcdz.40q62te3f1xprg7r --discovery-token-ca-cert-hash sha256:d79e5239a534ae0296410e0fdfa532664b92c65eda5dad7c2924d3ad05cb7313

# and kernel panic occurs

Describe the results you received:

When the vxlan connection is established, a kernel panic occurs.

[  197.502285] BUG: unable to handle kernel NULL pointer dereference at           (null)
[  197.503112] IP:           (null)
[  197.503630] PGD 8000000035950067 P4D 8000000035950067 PUD 3598e067 PMD 0
[  197.504438] Oops: 0010 [#1] SMP PTI
[  197.504815] Modules linked in: dummy vport_vxlan openvswitch xfrm_user xfrm_algo
[  197.505716] CPU: 0 PID: 2246 Comm: weaver Not tainted 4.14.32-linuxkit #1
[  197.506486] Hardware name:   BHYVE, BIOS 1.00 03/14/2014
[  197.507043] task: ffff9a3bb7715040 task.stack: ffffaeb081c64000
[  197.507862] RIP: 0010:          (null)
[  197.508239] RSP: 0018:ffffaeb081c67788 EFLAGS: 00010286
[  197.508784] RAX: ffffffff9d83a080 RBX: ffff9a3bb8214000 RCX: 00000000000005aa
[  197.509473] RDX: ffff9a3bb54b4200 RSI: 0000000000000000 RDI: ffff9a3bb5480400
[  197.510234] RBP: ffffaeb081c67880 R08: 0000000000000006 R09: 0000000000000002
[  197.510980] R10: 0000000000000000 R11: ffff9a3bb5469300 R12: ffff9a3bb54b4200
[  197.511763] R13: ffff9a3bb54804a8 R14: ffff9a3bb5469300 R15: ffff9a3bb8214040
[  197.512323] FS:  0000000003795880(0000) GS:ffff9a3bbe600000(0000) knlGS:0000000000000000
[  197.513038] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  197.513666] CR2: 0000000000000000 CR3: 00000000380b2004 CR4: 00000000000606b0
[  197.514417] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  197.515142] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  197.515865] Call Trace:
[  197.516147]  ? vxlan_xmit_one+0x4af/0x837
[  197.516568]  ? vxlan_xmit+0xb2f/0xb5a
[  197.516969]  ? vxlan_xmit+0xb2f/0xb5a
[  197.517427]  ? skb_network_protocol+0x55/0xb3
[  197.517884]  ? dev_hard_start_xmit+0xd0/0x194
[  197.518441]  ? dev_hard_start_xmit+0xd0/0x194
[  197.518938]  ? __dev_queue_xmit+0x47c/0x5c4
[  197.519437]  ? do_execute_actions+0x99/0x1069 [openvswitch]
[  197.519952]  ? do_execute_actions+0x99/0x1069 [openvswitch]
[  197.520454]  ? slab_post_alloc_hook.isra.52+0xa/0x1a
[  197.520999]  ? __kmalloc+0xc1/0xd3
[  197.521371]  ? ovs_execute_actions+0x77/0xfd [openvswitch]
[  197.521897]  ? ovs_execute_actions+0x77/0xfd [openvswitch]
[  197.522469]  ? ovs_packet_cmd_execute+0x1bb/0x230 [openvswitch]
[  197.523063]  ? genl_family_rcv_msg+0x2db/0x349
[  197.523433]  ? genl_rcv_msg+0x4e/0x69
[  197.523787]  ? genlmsg_multicast_allns+0xf1/0xf1
[  197.524209]  ? netlink_rcv_skb+0x97/0xe8
[  197.524700]  ? genl_rcv+0x24/0x31
[  197.525137]  ? netlink_unicast+0x11a/0x1b5
[  197.525676]  ? netlink_sendmsg+0x2e2/0x308
[  197.526192]  ? sock_sendmsg+0x2d/0x3c
[  197.526754]  ? SYSC_sendto+0xfc/0x138
[  197.527233]  ? do_syscall_64+0x69/0x79
[  197.527681]  ? entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[  197.528326] Code:  Bad RIP value.
[  197.528660] RIP:           (null) RSP: ffffaeb081c67788
[  197.529281] CR2: 0000000000000000
[  197.529634] ---[ end trace 391521052893e451 ]---
[  197.533425] Kernel panic - not syncing: Fatal exception in interrupt
[  197.534602] Kernel Offset: 0x1b000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[  197.538715] Rebooting in 10 seconds..
[  207.586797] ACPI MEMORY or I/O RESET_REG.
FATA[0211] Cannot run hyperkit: exit status 2

Describe the results you expected:

No kernel panic.

Additional information you deem important (e.g. issue happens only occasionally):

I tested that the linuxkit/linux@6a3c946 commit, which is in the 4.14.40 kernel, fixes this problem.

WARN[0003] certificate with CN ABC DEF is near expiry

Got the following when running make all on a cloned repo:

WARN[0003] certificate with CN Justin Cormack is near expiry
WARN[0003] certificate with CN  is near expiry
WARN[0003] certificate with CN  is near expiry
WARN[0003] certificate with CN [email protected] is near expiry
WARN[0003] certificate with CN Ian Campbell is near expiry
WARN[0004] certificate with CN  is near expiry
WARN[0004] certificate with CN  is near expiry
WARN[0004] certificate with CN [email protected] is near expiry
WARN[0004] certificate with CN Ian Campbell is near expiry
WARN[0004] certificate with CN Justin Cormack is near expiry
WARN[0004] certificate with CN Justin Cormack is near expiry

make update-hashes is not portable

It fails on macOS:

> make update-hashes
set -e ; for tag in $(linuxkit pkg show-tag pkg/kubelet) \
	           $(linuxkit pkg show-tag pkg/cri-containerd) \
	           $(linuxkit pkg show-tag pkg/kubernetes-docker-image-cache-common) \
	           $(linuxkit pkg show-tag pkg/kubernetes-docker-image-cache-control-plane) ; do \
	    image=${tag%:*} ; \
	    git grep -E -l "\b$image:" | xargs --no-run-if-empty sed -i.bak -e "s,$image:[[:xdigit:]]\{40\}\(-dirty\)\?,$tag,g" ; \
	done
xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
             [-L number] [-n number [-x]] [-P maxprocs] [-s size]
             [utility [argument ...]]
make: *** [update-hashes] Error 1
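
The culprit is the GNU-only --no-run-if-empty flag, which BSD xargs lacks. A possible portable rework (an untested sketch, not from the issue) drops xargs for a read loop, which naturally does nothing on empty input; -E is used because BSD sed's basic regexes also lack the GNU \? operator:

git grep -E -l "\b$image:" | while read -r f ; do \
    sed -i.bak -E -e "s,$image:[[:xdigit:]]{40}(-dirty)?,$tag,g" "$f" ; \
done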

e2e tests make apiserver crash on d4m

I noticed this while working on #35. Not sure if this is because I ran the 1.9 e2e tests on 1.8; TBC.

E0105 16:31:58.397048       1 runtime.go:66] Observed a panic: "duplicate node port" (duplicate node port)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:514
/usr/local/go/src/runtime/panic.go:489
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/registry/core/service/rest.go:598
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/registry/core/service/rest.go:341
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:910
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:1187
/usr/local/go/src/runtime/asm_amd64.s:2197
panic: duplicate node port [recovered]
	panic: duplicate node port

goroutine 3779 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc42abd5f78, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x126
panic(0x2fd7c80, 0xc423a444e0)
	/usr/local/go/src/runtime/panic.go:489 +0x2cf
k8s.io/kubernetes/pkg/registry/core/service.(*REST).updateNodePorts(0xc420d34c30, 0xc42a2950e0, 0xc42a295860, 0xc42abd5e50, 0xc420d34c01, 0x810b380)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/registry/core/service/rest.go:598 +0x262
k8s.io/kubernetes/pkg/registry/core/service.(*REST).Update(0xc420d34c30, 0x7f7690822d78, 0xc4256b1e00, 0xc42a315fc5, 0x17, 0x8113c80, 0xc429cb9480, 0x0, 0x0, 0xf02300, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/registry/core/service/rest.go:341 +0xb12
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.UpdateResource.func1.2(0xc400000018, 0x39db590, 0xc420d57778, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:910 +0x114
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.finishRequest.func1(0xc4239d5860, 0xc42ab0f5e0, 0xc4239d5800, 0xc4239d57a0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:1187 +0x99
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.finishRequest
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:1192 +0xd9

kubelet reports version v1.9.0-dirty

Noticed by @errordeveloper in #43 (comment), but it seems to be an existing issue with the build; using the current head's image on hub gives the same:

$ docker run -ti --entrypoint /usr/bin/kubelet linuxkit/kubelet:03205e3daddfeedeb64d4e023b42c225c8e00945 --version
Unable to find image 'linuxkit/kubelet:03205e3daddfeedeb64d4e023b42c225c8e00945' locally
03205e3daddfeedeb64d4e023b42c225c8e00945: Pulling from linuxkit/kubelet
1ed3bb82ce05: Pull complete 
Digest: sha256:0f442097dcd90c1e46e5a90c68a2a7ea7173d451e2d4e8b81d18742903106d77
Status: Downloaded newer image for linuxkit/kubelet:03205e3daddfeedeb64d4e023b42c225c8e00945
Kubernetes v1.9.0-dirty

Cannot boot with head version of linuxkit on macOS

Description
I reinstalled the latest version of linuxkit and then I could not boot with ./boot.sh on macOS.

Steps to reproduce the issue:

  1. Install or reinstall the latest version of linuxkit on macOS
  2. Run ./boot.sh

$ brew install --HEAD linuxkit # or brew reinstall --HEAD linuxkit
$ ./boot.sh

Describe the results you received:

$ linuxkit version
linuxkit version v0.3+
commit: 9d2c57564bd568b384def0f2a00091c13d296f91
$ ls -al *.iso
-rw-r--r-- 1 al staff 1803530240  4 23 16:22 kube-master-efi.iso
-rw-r--r-- 1 al staff 1166034944  4 23 16:24 kube-node-efi.iso
$ ./boot.sh
+ '[' -n '' ']'
+ mkdir -p kube-master-state
+ touch kube-master-state/metadata.json
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ exec linuxkit run -networking default -cpus 2 -mem 1024 -state kube-master-state -disk size=4G -data-file kube-master-state/metadata.json --uefi kube-master-efi.iso
FATA[0000] Cannot find kernel file: kube-master-efi.iso-kernel

Describe the results you expected:

Successful boot.

all cadvisor metrics have id="/"

Description

cAdvisor metrics do not have the correct cgroup path.

Describe the results you received:

The metrics can be obtained with the following command:

curl -s -k https://localhost:6443/api/v1/nodes/linuxkit-025000000002/proxy/metrics/cadvisor --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key

What you will see is e.g.

container_memory_max_usage_bytes{container_name="",id="/",image="",name="",namespace="",pod_name=""} 3.03910912e+08

Note id="/".

Describe the results you expected:

On an Ubuntu install, you will instead see more metrics with different ids, e.g.:

container_memory_max_usage_bytes{container_name="weave",id="/kubepods/burstable/pod7db61ed7-e655-11e7-a92e-065f2a149e22/1c7e2c87fbdbf35542a2e060147b245455c45ca3cada8c68a9d730a12551d46e",image="weaveworks/weave-kube@sha256:07a3d56b8592ea3e00ace6f2c3eb7e65f3cc4945188a9e2a884b8172e6a0007e",name="k8s_weave_weave-net-vlw97_kube-system_7db61ed7-e655-11e7-a92e-065f2a149e22_1",namespace="kube-system",pod_name="weave-net-vlw97"} 8.4353024e+07

Additional information you deem important (e.g. issue happens only occasionally):

Kubernetes version 1.9.0 was used in both cases. The Ubuntu-based cluster was installed using weaveworks/kubernetes-ami#15.

See details here https://gist.github.com/errordeveloper/2847ea94df2b2b0cccb60f0a6aa2b20f.

Cannot boot kube-master using boot.sh when metadata.json is empty

Description
I experienced an issue when I tried to run a k8s master on my Mac.

I use the following versions:

  • macOS High Sierra 10.13
  • linuxkit version 0.0 / commit: 41a4c2df108bc739897e6a6f9234e6c794ab380f
  • moby version 0.0 / commit: 6ba3288963c52b0831e72f99851b565baf6498e4

Steps to reproduce the issue:

  1. Clone the repo
  2. cd to the repo root
  3. run: make all
  4. run: ./boot.sh

Describe the results you received:
I get the following error:

linuxkit-kubernetes git/master 12s
❯ ./boot.sh
+ '[' -n '' ']'
+ mkdir -p kube-master-state
+ touch kube-master-state/metadata.json
+ '[' -n '' ']'
+ linuxkit run -networking default -cpus 2 -mem 1024 -state kube-master-state -disk size=4G -data kube-master-state/metadata.json --uefi kube-master-efi.iso
FATA[0000] Cannot write user data ISO: input buffer must be at least 1 byte in size

Describe the results you expected:
The k8s master should start.

Additional information you deem important (e.g. issue happens only occasionally):
I was able to resolve the issue by just adding an empty json object to the metadata.json file. I've committed the 'fix' to my fork: https://github.com/synax/linuxkit-kubernetes/commit/56ad664bb721cb918ee6047b5b09f879f1b44ab1
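
In other words, a one-line sketch of that workaround:

echo '{}' > kube-master-state/metadata.json
./boot.sh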

Failed to get cgroup stats for docker and kubelet in kubelet container

Description

This log gets thrown every 10 seconds.
Steps to reproduce the issue:
Run a docker cluster and check /var/log/kubelet.err.log

Describe the results you received:
Many messages like:

E1127      617 summary.go:92] Failed to get system container stats for "/kubelet": failed to get cgroup stats for "
/kubelet": failed to get container info for "/kubelet": unknown container "/kubelet"
E1127      617 summary.go:92] Failed to get system container stats for "/docker": failed to get cgroup stats for "/
docker": failed to get container info for "/docker": unknown container "/docker"

Describe the results you expected:
no errors

Additional information you deem important (e.g. issue happens only occasionally):
I also noticed unstable metrics in Grafana 1-2 weeks ago, but this needs further research.

How to set up a development environment?

How should I set up my development environment (which tools, CI, etc.) to be able to build functional "local" versions of the pkgs, like kubelet, used in this project? If I simply build the kubelet pkg using 'docker build .', put its id into yml/kube.yml and build a new image, it seems that the files and folders like '/var/lib/kubeadm' which are created by other service containers are not visible.

I'd be happy to help on fixing issues like #71 and #72 but first I need to get a working development environment. :)

cri-containerd kubectl exec not working

Description

Steps to reproduce the issue:
Spin up a cluster and run kubectl exec [-it] some_container some_parameter remotely.

Describe the results you received:
A timeout on the client side when exec is run with -it, and Error from server: when run without -it.
cri-containerd log with -it:

I1213 23:19:08.292468     539 instrumented_service.go:199] Exec for "7b072c636b211989024c2ab6b43bf4ed591be11eb6fbb7298d1a5e93de0dc
bc3" with command [sh], tty true and stdin true
I1213 23:19:08.292520     539 instrumented_service.go:205] Exec for "7b072c636b211989024c2ab6b43bf4ed591be11eb6fbb7298d1a5e93de0dc
bc3" returns URL "http://10.10.10.191:10010/exec/tTwdKOCY"
I1213 23:19:38.297681     539 exec_io.go:79] Container exec "af2d6858af41f13010ccab82bd172271cfbc4c20759c90d5cce10c4fa5e43786" std
in closed
E1213 23:19:38.297823     539 exec_io.go:99] Failed to pipe "stdout" for container exec "af2d6858af41f13010ccab82bd172271cfbc4c207
59c90d5cce10c4fa5e43786": read /proc/self/fd/68: file already closed
I1213 23:19:38.297880     539 exec_io.go:108] Finish piping "stdout" of container exec "af2d6858af41f13010ccab82bd172271cfbc4c2075
9c90d5cce10c4fa5e43786"

cri-containerd log without -it:

I1213 23:21:34.755099     539 instrumented_service.go:199] Exec for "7b072c636b211989024c2ab6b43bf4ed591be11eb6fbb7298d1a5e93de0dc
bc3" with command [ls /], tty false and stdin false
E1213 23:21:34.755132     539 instrumented_service.go:203] Exec for "7b072c636b211989024c2ab6b43bf4ed591be11eb6fbb7298d1a5e93de0dc
bc3" failed, error: rpc error: code = InvalidArgument desc = one of stdin, stdout, or stderr must be set

Describe the results you expected:
Attached I/O on the shell.

Additional information you deem important (e.g. issue happens only occasionally):

Flakey rtf tests

I ran 100 iterations of rtf test on commit cc58ae9. Results were:

Total Iterations: 100

Failures:
      3 FAIL await kube-dns ready (timeout)
      7 FAIL intra-pod networking (timeout)


kubernetes.smoke.cri-bridge.log
	[STDOUT  ] 2018-02-14T10:57:40.601415418Z: FAIL await kube-dns ready (timeout)$

kubernetes.smoke.cri-weave.log
	[STDOUT  ] 2018-02-13T19:21:07.417410968Z: FAIL await kube-dns ready (timeout)$
	[STDOUT  ] 2018-02-14T15:30:28.524790977Z: linuxkit-3e7dec682e01:/# ^[[6nFAIL await kube-dns ready (timeout)$

kubernetes.smoke.docker-bridge.log
	[STDOUT  ] 2018-02-13T16:39:28.684607810Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T00:37:21.189979194Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T02:53:00.333052128Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T04:13:15.442319709Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T06:18:38.001665601Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T06:33:31.060937576Z: FAIL intra-pod networking (timeout)$
	[STDOUT  ] 2018-02-14T06:48:24.444592701Z: FAIL intra-pod networking (timeout)$

The await kube-dns ready failures (which appear cri specific, although that might just be a timing thing) all follow the pattern:

[STDOUT  ] 2018-02-13T19:16:07.972630880Z: kubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:07.972653156Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:08.124612578Z: Pending
[STDOUT  ] 2018-02-13T19:16:09.131521270Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:09.131669920Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:09.287245054Z: Pending
[STDOUT  ] 2018-02-13T19:16:10.288787244Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:10.289749277Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:10.450560763Z: Pending
[STDOUT  ] 2018-02-13T19:16:11.451770429Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:11.451812803Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:11.595824096Z: Pending
[STDOUT  ] 2018-02-13T19:16:12.599325210Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:12.599444260Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:12.767129133Z: Pending
[STDOUT  ] 2018-02-13T19:16:13.773401297Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:13.773562433Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:13.944594902Z: Pending
[STDOUT  ] 2018-02-13T19:16:14.950696402Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:14.955823955Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:15.120279903Z: Pending
[STDOUT  ] 2018-02-13T19:16:16.125945435Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:16.126079752Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:16.329709531Z: Pending
[STDOUT  ] 2018-02-13T19:16:17.331563723Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:17.331616545Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:17.520332757Z: Pending
[STDOUT  ] 2018-02-13T19:16:18.521722092Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:18.521760301Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:18.678583094Z: Pending
[STDOUT  ] 2018-02-13T19:16:19.680123494Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:19.680166132Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:19.835904177Z: Pending
[STDOUT  ] 2018-02-13T19:16:20.839888363Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:20.839992693Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:20.995085155Z: Pending
[STDOUT  ] 2018-02-13T19:16:22.000421241Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:22.000592879Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:22.195642693Z: Pending
[STDOUT  ] 2018-02-13T19:16:23.216196491Z: linuxkit-925ccd792946:/# ESC[6nkubectl --namespace=kube-system get --selector='k8s-app
[STDOUT  ] 2018-02-13T19:16:23.222987654Z: '='kube-dns' -o jsonpath='{.items[*].status.phase}' pods ; echo
[STDOUT  ] 2018-02-13T19:16:23.416875245Z: Pending
[STDOUT  ] 2018-02-13T19:21:07.417410968Z: FAIL await kube-dns ready (timeout)

That is around a dozen iterations over 40-80s and then silence until the overall timeout after 300s. I suspect this is a test case issue.

The intra-pod networking failure, which appears docker-bridge specific, is the same in every case too; it is actually failing to install curl:

[STDOUT  ] 2018-02-14T06:17:27.383724129Z: SUCCESS nginx responded well
[STDOUT  ] 2018-02-14T06:17:27.387994374Z: kubectl exec $(kubectl get pods -l name==alpine -o=json
[STDOUT  ] 2018-02-14T06:17:27.390734315Z: path='{.items[*].metadata.name}') -- apk add --update curl
[STDOUT  ] 2018-02-14T06:17:27.959741604Z: fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
[STDOUT  ] 2018-02-14T06:17:32.967752636Z: fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
[STDOUT  ] 2018-02-14T06:17:32.967932428Z: ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.7/main: temporary error (try again later)
[STDOUT  ] 2018-02-14T06:17:32.969122634Z: WARNING: Ignoring APKINDEX.70c88391.tar.gz: No such file or directory
[STDOUT  ] 2018-02-14T06:17:37.975192705Z: ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.7/community: temporary error (try again later)
[STDOUT  ] 2018-02-14T06:17:37.975362186Z: WARNING: Ignoring APKINDEX.5022a8a2.tar.gz: No such file or directory
[STDOUT  ] 2018-02-14T06:17:37.978083230Z: ERROR: unsatisfiable constraints:
[STDOUT  ] 2018-02-14T06:17:37.984197706Z:   curl (missing):
[STDOUT  ] 2018-02-14T06:17:37.984328853Z:     required by: world[curl]
[STDOUT  ] 2018-02-14T06:17:38.020854558Z: command terminated with exit code 1

This isn't caught, so it then emits lots of

[STDOUT  ] 2018-02-14T06:17:38.412079312Z: OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"curl\": executable file not found in $PATH": unknown
[STDOUT  ] 2018-02-14T06:17:38.413279545Z: command terminated with exit code 126

before timing out.

results-cc58ae93ccbfe8f4acdbb209394fe8af3d06bede.zip

Investigate and remove rootfsPropagation workaround

#70 introduced a workaround (mount --make-shared / on entry) to the cri and docker containers because we were seeing issues like:

time="2018-04-05T14:21:11.075653345Z" level=error msg="Handler for POST /v1.31/containers/2a2de13fe4203cfc33457b5e8d265a7bc6df303d4d4e4190c9ba9fcdb4c5e97a/start returned error: linux mounts: path /etc/ssl/certs is mounted on / but it is not a shared or slave mount"

There were other similar instances relating to binds (e.g. path /etc/kubernetes/pki/etcd is mounted on /etc/kubernetes but it is not a shared or slave mount, where /etc/kubernetes is a bind mount) but they were resolved by a newer linuxkit which included moby/tool#210 switching all binds to shared by default (there are likely some explicit tags which can now be dropped).

The remaining issue with / needs investigation. Could be opencontainers/runc#1755 ?

./ssh_into_kubelet.sh <masterIP> gives /root/.ssh/config: terminating, 1 bad configuration options

Description

Steps to reproduce the issue:
When you run this step:

Login to the kubelet container:
./ssh_into_kubelet.sh 192.168.65.4 # Which is my Master IP
Describe the results you received:
then you get this error:
/root/.ssh/config: terminating, 1 bad configuration options

Describe the results you expected:
I expected to log in to the master node directly:
linuxkit-025000000002:/#
Additional information you deem important (e.g. issue happens only occasionally):
I fixed it by:

$ nano ~/.ssh/config

You should see output such as:

Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa

Then I removed ONLY this line:

UseKeychain yes

and pressed Ctrl+X to save the file.

  • I use macOS Mojave 10.14 and I don't know what the side effects of this change are on my macOS. It's working for me; I know this may seem silly to some people, but there is no clear error message. The fix was easy, but finding out exactly what the problem was took a huge amount of time.

I hope that someone can update the docs, or that anyone who has this problem will find the solution here.

Request: update k8s

We are in the process of investigating using immutable linuxkit k8s images for production clusters, and for this we would like to use a newer version of kubernetes, e.g. 1.14 or newer. We also need to be able to use the Ceph RBD provisioner for PVCs.

I have a working setup in my repo, which depends on linuxkit/linuxkit#3383
I'll create a PR over here as well, because it contains (I think) some valuable information on how to get k8s 1.14 + Ceph RBD to work.

To start the already installed kubernetes cluster

Description

Steps to reproduce the issue:

  1. Forcefully power down the host machine

Describe the results you received:
The Kubernetes cluster does not start.
To start it, I have to clear all the state directories, which in essence creates a new cluster, and all changes are lost.

Describe the results you expected:
The stopped cluster should start again. If you could tell me how, I can set it up as an init script.

Additional information you deem important (e.g. issue happens only occasionally):

Docker daemon dies on boot when cgroupsPath used

After #14 Docker does not start. Shortly after boot without taking any other action the state becomes:

(ns: getty) linuxkit-d6247f201b73:~# ctr t ls
TASK                                           PID    STATUS    
rngd                                           808    STOPPED
sshd                                           851    RUNNING
docker                                         547    STOPPED
getty                                          586    RUNNING
kubelet                                        644    RUNNING
kubernetes-docker-image-cache-common           678    STOPPED
kubernetes-docker-image-cache-control-plane    730    STOPPED
ntpd                                           771    RUNNING

The logs are uninteresting:

(ns: getty) linuxkit-d6247f201b73:~# cat /var/log/docker.err.log 
[WARN  tini (547)] Tini is not running as PID 1 and isn't registered as a child subreaper.
Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
time="2017-12-08T14:55:34.404476201Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2017-12-08T14:55:34.405395092Z" level=info msg="libcontainerd: new containerd process, pid: 591"
time="2017-12-08T14:55:36.515729941Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
time="2017-12-08T14:55:36.557961200Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
time="2017-12-08T14:55:36.558552846Z" level=info msg="Loading containers: start."
time="2017-12-08T14:55:36.717274716Z" level=warning msg="Running modprobe nf_nat failed with message: `ip: can't find device 'nf_nat'\nmodprobe: module nf_nat not found in modules.dep`, error: exit status 1"
time="2017-12-08T14:55:36.731672396Z" level=warning msg="Running modprobe xt_conntrack failed with message: `ip: can't find device 'xt_conntrack'\nmodprobe: module xt_conntrack not found in modules.dep`, error: exit status 1"
time="2017-12-08T14:55:37.065878091Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2017-12-08T14:55:37.171766223Z" level=info msg="Loading containers: done."
time="2017-12-08T14:55:37.274945197Z" level=info msg="Docker daemon" commit=f4ffd25 graphdriver(s)=overlay2 version=17.10.0-ce
time="2017-12-08T14:55:37.275544195Z" level=info msg="Daemon has completed initialization"
time="2017-12-08T14:55:37.288354509Z" level=info msg="API listen on /var/run/docker.sock"

It seems that reverting #14 fixes things. I'll double check and raise a PR to revert while we sort this out.

/cc @justincormack. This was also mentioned in #11 (comment).

cri-containerd: hostpath mounted read only

Description
Tried the cri-containerd runtime but the master node never becomes ready. In the logs of weave I can see that it can't write its configuration to disk because the file system is mounted read only.

Steps to reproduce the issue:
Used the master branch. Built the kube master like this: KUBE_RUNTIME=cri-containerd make master. Used ./boot.sh to boot it and then ran kubeadm-init.sh.
I'm on macOS 10.13.

Describe the results you received:
Weave tries to write its configuration to a host path but fails:

linuxkit-0800279ca819:/# kubectl -n kube-system logs -f weave-net-h7gv9  weave
...
cp: can't create '/host/opt/cni/bin/weave-plugin-2.1.3': Read-only file system
/home/weave/weave: line 1576: can't create /host/etc/cni/net.d/10-weave.conf: Read-only file system
INFO: 2018/02/11 12:28:03.586451 Discovered local MAC 4a:7a:5a:07:dc:d1
INFO: 2018/02/11 12:28:04.427532 Weave version 2.2.0 is available; please update at https://github.com/weaveworks/weave/releases/download/v2.2.0/weave

The master node never becomes ready:

linuxkit-0800279ca819:/# kubectl describe nodes
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  Ready            False   Sun, 11 Feb 2018 13:12:00 +0000   Sun, 11 Feb 2018 12:23:48 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni config uninitialized

In the weave container I can see that the hostPath is mounted ro:

linuxkit-0800279ca819:/# kubectl -n kube-system exec -it  weave-net-h7gv9  -c weave sh
/home/weave # mount | grep host
rootfs on /host/opt type tmpfs (ro,relatime)
rootfs on /host/home type tmpfs (ro,relatime)
rootfs on /host/etc type tmpfs (ro,relatime)
/dev/sda1 on /host/var/lib/dbus type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)

cri-containerd doesn't successfully reboot kubelet

Description

Steps to reproduce the issue:
Start the master, wait until all pods are started, then power off and restart. It reboots successfully in fewer than roughly 10% of attempts.

Describe the results you received:
kubelet log:

kubelet.sh: kubelet already configured
kubelet.sh: waiting for /etc/kubernetes/kubelet.conf
kubelet.sh: /etc/kubernetes/kubelet.conf has arrived
I1213 19:31:02.058981     572 feature_gate.go:156] feature gates: map[]
I1213 19:31:02.059026     572 controller.go:114] kubelet config controller: starting controller
I1213 19:31:02.059031     572 controller.go:118] kubelet config controller: validating combination of defaults and flags
I1213 19:31:02.075793     572 feature_gate.go:156] feature gates: map[]
W1213 19:31:02.075912     572 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set e
xplicitly
I1213 19:31:02.079398     572 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu"
W1213 19:31:02.080964     572 manager.go:153] Unable to connect to Docker: Cannot connect to the Docker daemon. Is the docker daemo
n running on this host?
W1213 19:31:02.082506     572 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 
[::1]:15441: getsockopt: connection refused
W1213 19:31:02.082564     572 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dia
l unix /var/run/crio.sock: connect: no such file or directory
I1213 19:31:02.082950     572 fs.go:139] Filesystem UUIDs: map[d358f4da-8808-4f6c-8b8a-fcc647914e7d:/dev/sda1]
I1213 19:31:02.082962     572 fs.go:140] Filesystem partitions: map[shm:{mountpoint:/dev/shm major:0 minor:18 fsType:tmpfs blockSiz
e:0} tmpfs:{mountpoint:/run major:0 minor:15 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/var/lib major:8 minor:1 fsType:ext4 b
lockSize:0}]
I1213 19:31:02.091136     572 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"

I1213 19:31:02.093083     572 manager.go:216] Machine: {NumCores:1 CpuFrequency:3400022 MemoryCapacity:4574240768 HugePages:[{PageS
ize:2048 NumPages:0}] MachineID: SystemUUID:D740F169-85BB-4FD2-9F1E-A81CED65D3FD BootID:799bdd7f-4d34-4d88-9189-c376c589bf85 Filesy
stems:[{Device:shm DeviceMajor:0 DeviceMinor:18 Capacity:2287120384 Type:vfs Inodes:558379 HasInodes:true} {Device:tmpfs DeviceMajo
r:0 DeviceMinor:15 Capacity:457424896 Type:vfs Inodes:558379 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity
:8386961408 Type:vfs Inodes:524288 HasInodes:true} {Device:overlay DeviceMajor:0 DeviceMinor:49 Capacity:2287120384 Type:vfs Inodes
:558379 HasInodes:true}] DiskMap:map[43:0:{Name:nbd0 Major:43 Minor:0 Size:0 Scheduler:none} 43:3:{Name:nbd3 Major:43 Minor:3 Size:
0 Scheduler:none} 43:5:{Name:nbd5 Major:43 Minor:5 Size:0 Scheduler:none} 43:9:{Name:nbd9 Major:43 Minor:9 Size:0 Scheduler:none} 8
:0:{Name:sda Major:8 Minor:0 Size:8589934592 Scheduler:deadline} 43:15:{Name:nbd15 Major:43 Minor:15 Size:0 Scheduler:none} 43:2:{N
ame:nbd2 Major:43 Minor:2 Size:0 Scheduler:none} 43:7:{Name:nbd7 Major:43 Minor:7 Size:0 Scheduler:none} 43:8:{Name:nbd8 Major:43 M
inor:8 Size:0 Scheduler:none} 43:10:{Name:nbd10 Major:43 Minor:10 Size:0 Scheduler:none} 43:4:{Name:nbd4 Major:43 Minor:4 Size:0 Sc
heduler:none} 43:12:{Name:nbd12 Major:43 Minor:12 Size:0 Scheduler:none} 43:13:{Name:nbd13 Major:43 Minor:13 Size:0 Scheduler:none}
 43:14:{Name:nbd14 Major:43 Minor:14 Size:0 Scheduler:none} 43:6:{Name:nbd6 Major:43 Minor:6 Size:0 Scheduler:none} 43:1:{Name:nbd1
 Major:43 Minor:1 Size:0 Scheduler:none} 43:11:{Name:nbd11 Major:43 Minor:11 Size:0 Scheduler:none}] NetworkDevices:[{Name:eth0 Mac
Address:82:3f:cf:43:6d:fc Speed:-1 Mtu:1500} {Name:ip6tnl0 MacAddress:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 Speed:0 Mtu:1
452} {Name:tunl0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Ty
pe:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified L
evel:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1213 19:31:02.099803     572 manager.go:222] Version: {KernelVersion:4.9.62-linuxkit ContainerOsVersion:LinuxKit Kubernetes Projec
t DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
I1213 19:31:02.122241     572 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgrou
psName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:cgroupfs Protec
tKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[] KubeRes
erved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentag
e:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:
0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<
nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I1213 19:31:02.124036     572 container_manager_linux.go:288] Creating device plugin handler: false
I1213 19:31:02.124167     572 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
I1213 19:31:02.124211     572 kubelet.go:283] Watching apiserver
E1213 19:31:02.127909     572 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://
10.10.10.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlinuxkit-823fcf436dfc&resourceVersion=0: dial tcp 10.10.10.127:6443: g
etsockopt: connection refused
E1213 19:31:02.167876     572 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https
://10.10.10.127:6443/api/v1/services?resourceVersion=0: dial tcp 10.10.10.127:6443: getsockopt: connection refused
E1213 19:31:02.167939     572 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get h
ttps://10.10.10.127:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlinuxkit-823fcf436dfc&resourceVersion=0: dial tcp 10.10.10.127:6
443: getsockopt: connection refused
W1213 19:31:02.260304     572 kubelet_network.go:62] Hairpin mode set to "promiscuous-bridge" but container runtime is "remote", ig
noring
I1213 19:31:02.263340     572 kubelet.go:517] Hairpin mode set to "none"
I1213 19:31:02.263547     572 remote_runtime.go:43] Connecting to runtime service unix:///var/run/cri-containerd.sock
2017/12/13 19:31:02 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix
 /var/run/cri-containerd.sock: connect: no such file or directory"; Reconnecting to {/var/run/cri-containerd.sock <nil>}
E1213 19:31:02.282627     572 remote_runtime.go:69] Version from runtime service failed: rpc error: code = Unavailable desc = grpc:
 the connection is unavailable
E1213 19:31:02.282857     572 kuberuntime_manager.go:160] Get runtime version failed: rpc error: code = Unavailable desc = grpc: th
e connection is unavailable
error: failed to run Kubelet: failed to create kubelet: rpc error: code = Unavailable desc = grpc: the connection is unavailable

cri-containerd log:

I1213 19:31:00.959698     535 cri_containerd.go:100] Run cri-containerd &{Config:{ContainerdConfig:{RootDir:/var/lib/containerd Sna
pshotter:overlayfs Endpoint:/run/containerd/containerd.sock Runtime:io.containerd.runtime.v1.linux RuntimeEngine: RuntimeRoot:} Cni
Config:{NetworkPluginBinDir:/var/lib/cni/opt/bin NetworkPluginConfDir:/var/lib/cni/etc/net.d} SocketPath:/var/run/cri-containerd.so
ck RootDir:/var/lib/cri-containerd StreamServerAddress: StreamServerPort:10010 CgroupPath: EnableSelinux:false SandboxImage:gcr.io/
google_containers/pause:3.0 StatsCollectPeriod:10 SystemdCgroup:false OOMScore:-999 EnableProfiling:true ProfilingPort:10011 Profil
ingAddress:127.0.0.1} ConfigFilePath:/etc/cri-containerd/config.toml}
I1213 19:31:00.961008     535 cri_containerd.go:104] Start profiling server
I1213 19:31:00.961020     535 cri_containerd.go:108] Run cri-containerd grpc server on socket "/var/run/cri-containerd.sock"
I1213 19:31:00.966705     535 service.go:155] Get device uuid "d358f4da-8808-4f6c-8b8a-fcc647914e7d" for image filesystem "/var/lib
/containerd/io.containerd.snapshotter.v1.overlayfs"
time="2017-12-13T19:31:00Z" level=info msg="CNI network weave (type=weave-net) is used from /var/lib/cni/etc/net.d/10-weave.conf" 
time="2017-12-13T19:31:00Z" level=info msg="CNI network weave (type=weave-net) is used from /var/lib/cni/etc/net.d/10-weave.conf" 
I1213 19:31:00.977500     535 service.go:182] Start cri-containerd service
I1213 19:31:00.977526     535 service.go:184] Start recovering state
I1213 19:31:02.346005     535 service.go:190] Start event monitor
I1213 19:31:02.346016     535 service.go:194] Start snapshots syncer
I1213 19:31:02.346028     535 service.go:203] Start streaming server
I1213 19:31:02.346032     535 service.go:214] Start grpc server
I1213 19:31:02.346127     535 events.go:94] TaskExit event &TaskExit{ContainerID:rngd,ID:rngd,Pid:636,ExitStatus:1,ExitedAt:2017-12
-13 19:31:01.834075925 +0000 UTC,}
E1213 19:31:02.346164     535 events.go:100] Failed to get container "rngd": does not exist
I1213 19:31:02.347662     535 events.go:94] TaskExit event &TaskExit{ContainerID:kubelet,ID:kubelet,Pid:572,ExitStatus:1,ExitedAt:2
017-12-13 19:31:02.300502105 +0000 UTC,}
E1213 19:31:02.347679     535 events.go:100] Failed to get container "kubelet": does not exist

Describe the results you expected:
running kubelet

Additional information you deem important (e.g. issue happens only occasionally):
I could reproduce this back to at least f9a2a31

Current version of linuxkit has no build option

Description
Invoking make produces the following error:

$ make all KUBE_RUNTIME=cri-containerd
curl -L -o kube-weave.yaml https://cloud.weave.works/k8s/v1.8/net?v=v2.0.5
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 274 100 274 0 0 254 0 0:00:01 0:00:01 --:--:-- 254
100 6956 100 6956 0 0 5296 0 0:00:01 0:00:01 --:--:-- 5296
linuxkit build -name kube-master -format iso-efi yml/kube.yml yml/cri-containerd.yml yml/cri-containerd-master.yml yml/weave.yml
"build" is not valid command.

USAGE: linuxkit [options] COMMAND

Commands:
metadata Metadata utilities
push Push a VM image to a cloud or image store
run Run a VM image on a local hypervisor or remote cloud
version Print version information
help Print this message

Run 'linuxkit COMMAND --help' for more information on the command

Options:
-q Quiet execution
-v Verbose execution
make: *** [kube-master.iso] Error 1
$

Steps to reproduce the issue:

Just run make again

Describe the results you received:

See above

Describe the results you expected:

Successful build

Additional information you deem important (e.g. issue happens only occasionally):
$ linuxkit version
linuxkit version 0.0
commit: 92947c9c417d703c491711f23d00ceb9f53df5b0
$

I assume you now need buildkit to build images.

Switch from cli configuration to kubelet.conf

With the bump to v1.10 in #70 Kubelet is now complaining:

# Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --allow-privileged has been deprecated, will be removed in a future version
# Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
# Flag --kube-reserved-cgroup has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --system-reserved-cgroup has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
# Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.

It appears (on a quick look) that the way to provide kubelet.conf when using kubeadm is to use the KubeletConfiguration field in kubeadm.conf, which has the same KubeletConfiguration type as kubelet.conf.

Using this will mean reworking a bunch of the setup stuff in kubelet.sh so it's not entirely trivial, especially when considering there is the option to provide the config via metadata.
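
A rough sketch of what that could look like (illustrative only; the exact apiVersion and field names changed between kubeadm releases of this era, so treat every name below as an assumption to verify):

apiVersion: kubeadm.k8s.io/v1alpha1       # assumed API version for the Kube v1.10 era
kind: MasterConfiguration
kubeletConfiguration:                     # assumed field; carries a KubeletConfiguration
  baseConfig:
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    clusterDomain: cluster.local
    cgroupsPerQOS: false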
