
talos's Introduction

Talos Linux

A modern OS for Kubernetes.



Friendly fork

This is a friendly fork of siderolabs/talos. It exists only to support the Turing RK1 SBC. It will likely be integrated upstream in some form via community-managed SBC support in Talos 1.8; I am still waiting for custom kernel support.

Using this fork

asciicast

Download the latest release on the right

This section describes the CLI commands; you may also use the Turing Pi web GUI. Always unpack the image yourself first. Version 2.06 supports flashing xz images directly, but it is slower.

xz -d metal-turing_rk1-arm64.raw.xz
tpi flash -n <NODENUMBER> -i metal-turing_rk1-arm64.raw
tpi power on -n <NODENUMBER> 

To check boot messages:

tpi uart -n <NODENUMBER> get

When you use talosctl apply-config, make sure the config contains:

machine:
  kernel:
    modules:
      - name: rockchip-cpufreq
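
For example, a config containing this snippet can be applied to a node in maintenance mode like this (a sketch; substitute your own node IP and generated config file):

talosctl apply-config --insecure --nodes <node-ip> --file controlplane.yaml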

For an extended installation guide with Cilium, see issue #1.

Updating

Instead of reflashing, updating can also be done faster using the talosctl upgrade command:

talosctl upgrade -i ghcr.io/nberlee/installer:v1.7.x-rk3588

When the -rk3588 suffix is added to the tag, the installer image already contains the rk3588 Talos extension, so the extension only needs to be listed in the machine config when other extensions are defined there as well. For example, the ghcr.io/nberlee/installer:v1.7.1-rk3588 installer image includes the rk3588 Talos extension, but that built-in copy is dropped as soon as another extension is defined in the machine config.
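
For instance, if you define any other extension in the machine config, list the rk3588 extension there explicitly as well (a sketch; the gasket-driver reference is only illustrative):

machine:
  install:
    extensions:
      - image: ghcr.io/nberlee/rk3588:v1.7.1              # must be listed again once other extensions appear
      - image: ghcr.io/siderolabs/gasket-driver:<version> # illustrative second extension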

Talos

Talos is a modern OS for running Kubernetes: secure, immutable, and minimal. Talos is fully open source, production-ready, and supported by the people at Sidero Labs. All system management is done via an API - there is no shell or interactive console. Benefits include:

  • Security: Talos reduces your attack surface: It's minimal, hardened, and immutable. All API access is secured with mutual TLS (mTLS) authentication.
  • Predictability: Talos eliminates configuration drift, reduces unknown factors by employing immutable infrastructure ideology, and delivers atomic updates.
  • Evolvability: Talos simplifies your architecture, increases your agility, and always delivers current stable Kubernetes and Linux versions.

Documentation

For instructions on deploying and managing Talos, see the Documentation.

Community

If you're interested in this project and would like to help in engineering efforts or have general usage questions, we are happy to have you! We hold a weekly meeting that all audiences are welcome to attend.

We would appreciate your feedback so that we can make Talos even better! To do so, you can take our survey.

Office Hours

You can subscribe to this meeting by joining the community forum above.

Note: You can convert the meeting hours to your local time.

Contributing

Contributions are welcomed and appreciated! See Contributing for our guidelines.

License


Some software we distribute is under the General Public License family of licenses or other licenses that require we provide you with the source code. If you would like a copy of the source code for this software, please contact us via email: info at SideroLabs.com.

talos's People

Contributors

aleksi, andrewrynhard, bradbeam, budimanjojo, dependabot[bot], dmitriymv, eirikaskheim, flokli, frezbo, jonkerj, nanfei-chen, nberlee, oscr, patatman, rgl, ro11net, rsmitty, salkin, sauterp, sergelogvinov, smira, steverfrancis, tgerla, timjones, twelho, uhthomas, ulexus, unix4ever, utkuozdemir, yoctozepto


talos's Issues

Boot from NVMe

Thanks for all the work on this.

As far as I can tell from my experiments, it's not currently possible to install Talos onto an NVMe drive: flashing is only possible to the eMMC anyway, and changing the install disk to the NVMe doesn't seem to have any effect either. Is that a correct assumption? And if so, do you plan to change that? Or is this project mostly done, now that everybody seems to be waiting for Talos 1.7?

Maybe related to that, a talosctl reset seems to leave the node in an unbootable state for me and I have to flash the eMMC again. Is that the current behavior or am I doing something wrong here?

Cannot find NVMe drive attached to RK1 units

Bug Report

I installed Talos as you demonstrated in the asciinema video, on the /dev/mmcblk0 block device. Now I want to use the attached NVMe drive for PVC resources, but I can't find the device in the list:

Description

$ talosctl -n 192.168.1.111 disks
NODE            DEV                 MODEL   SERIAL       TYPE   UUID   WWID   MODALIAS   NAME     SIZE     BUS_PATH                                                                    SUBSYSTEM          READ_ONLY   SYSTEM_DISK
192.168.1.111   /dev/mmcblk0        -       0xbbbbbbbb   SD     -      -      -          BJTD4R   31 GB    /platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/                             /sys/class/block               *
192.168.1.111   /dev/mmcblk0boot0   -       -            SD     -      -      -          -        4.2 MB   /platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/block/mmcblk0/mmcblk0boot0   /sys/class/block   *
192.168.1.111   /dev/mmcblk0boot1   -       -            SD     -      -      -          -        4.2 MB   /platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/block/mmcblk0/mmcblk0boot1   /sys/class/block   *
$ talosctl -n 192.168.1.111 ls -l  /dev/
NODE            MODE          UID   GID   SIZE(B)   LASTMOD             NAME
192.168.1.111   drwxr-xr-x    0     0     3380      Jan 30 22:02:52     .
192.168.1.111   Dcrw-r--r--   0     0     0         Jan 30 22:02:51     autofs
192.168.1.111   drwxr-xr-x    0     0     700       Jan 30 22:02:51     block
192.168.1.111   drwxr-xr-x    0     0     60        Jan  1 1970 01:00   bus
192.168.1.111   drwxr-xr-x    0     0     2540      Jan 30 22:02:52     char
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     console
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     cpu_dma_latency
192.168.1.111   drwxr-xr-x    0     0     180       Jan 30 22:02:51     disk
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     efi_capsule_loader
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     efi_test
192.168.1.111   Lrwxrwxrwx    0     0     13        Jan 30 22:02:51     fd -> /proc/self/fd
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     full
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     fuse
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     gpiochip0
...
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     gpiochip4
192.168.1.111   drwxr-xr-x    0     0     0         Jan 30 22:02:46     hugepages
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     hwrng
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     i2c-0
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     i2c-1
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     i2c-2
192.168.1.111   drwxr-xr-x    0     0     60        Jan  1 1970 01:00   input
192.168.1.111   Dcrw-r--r--   0     0     0         Jan 30 22:02:51     kmsg
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     kvm
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     loop-control
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     loop0
...
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     loop7
192.168.1.111   drwxr-xr-x    0     0     60        Jan 30 22:02:40     mapper
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:55     mmcblk0
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     mmcblk0boot0
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     mmcblk0boot1
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:55     mmcblk0p1
...
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:55     mmcblk0p6
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     mmcblk0rpmb
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     mpt2ctl
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     mpt3ctl
192.168.1.111   drwxr-xr-x    0     0     60        Jan  1 1970 01:00   net
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     null
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     nvme-fabrics
192.168.1.111   Dcrw-r-----   0     0     0         Jan 30 22:02:51     port
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     ptmx
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:52     ptp0
192.168.1.111   drwxr-xr-x    0     0     0         Jan 30 22:02:46     pts
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     ram0
...
192.168.1.111   Drw-------    0     0     0         Jan 30 22:02:51     ram15
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     random
192.168.1.111   Lrwxrwxrwx    0     0     4         Jan 30 22:02:51     rtc -> rtc0
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     rtc0
192.168.1.111   dtrwxrwxrwx   0     0     40        Jan 30 22:02:46     shm
192.168.1.111   Lrwxrwxrwx    0     0     15        Jan 30 22:02:51     stderr -> /proc/self/fd/2
192.168.1.111   Lrwxrwxrwx    0     0     15        Jan 30 22:02:51     stdin -> /proc/self/fd/0
192.168.1.111   Lrwxrwxrwx    0     0     15        Jan 30 22:02:51     stdout -> /proc/self/fd/1
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     tty
192.168.1.111   Dcrw--w----   0     0     0         Jan 30 22:02:51     tty0
...
192.168.1.111   Dcrw--w----   0     0     0         Jan 30 22:02:51     tty63
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     ttyS0
...
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     ttyS9
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     urandom
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcs
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcs1
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcsa
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcsa1
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcsu
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vcsu1
192.168.1.111   drwxr-xr-x    0     0     60        Jan  1 1970 01:00   vfio
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     vga_arbiter
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     vhost-net
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     vhost-vsock
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     vsock
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     watchdog
192.168.1.111   Dcrw-------   0     0     0         Jan 30 22:02:51     watchdog0
192.168.1.111   Dcrw-rw-rw-   0     0     0         Jan 30 22:02:51     zero
$ talosctl -n 192.168.1.111 ls -l  /sys/block/
NODE            MODE         UID   GID   SIZE(B)   LASTMOD           NAME
192.168.1.111   drwxr-xr-x   0     0     0         Jan 30 22:02:45   .
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   loop0 -> ../devices/virtual/block/loop0
...
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   loop7 -> ../devices/virtual/block/loop7
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:02:51   mmcblk0 -> ../devices/platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/block/mmcblk0
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   mmcblk0boot0 -> ../devices/platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/block/mmcblk0/mmcblk0boot0
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   mmcblk0boot1 -> ../devices/platform/fe2e0000.mmc/mmc_host/mmc0/mmc0:0001/block/mmcblk0/mmcblk0boot1
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   ram0 -> ../devices/virtual/block/ram0
...
192.168.1.111   Lrwxrwxrwx   0     0     0         Jan 30 22:08:29   ram15 -> ../devices/virtual/block/ram15

Instead of the /dev/mmcblk0 device (in the controlplane.yaml file, path /machine/install/disk) I tried several NVMe names, but they were not recognized:

[43208.552039] [talos] server certificate issued {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController", "fingerprint": "sdfghjkjhgfdsdfghj+HLV0YFLj08ilTeNZ/Bc="}
[66999.895977] InvalidArgument [/machine.MachineService/ApplyConfiguration] 4.846056ms unary rpc error: code = InvalidArgument desc = configuration validation failed: 1 error occurred:
[66999.914107]  * specified install disk does not exist: "/dev/nvme0"
[66999.921038]
[66999.922706]  (:authority=192.168.1.107:50000;content-type=application/grpc;grpc-accept-encoding=gzip;runtime=Talos;user-agent=grpc-go/1.59.0)
[67045.858473] InvalidArgument [/machine.MachineService/ApplyConfiguration] 4.057048ms unary rpc error: code = InvalidArgument desc = configuration validation failed: 1 error occurred:
[67045.876598]  * specified install disk does not exist: "/dev/nvme0n1"
[67700.706974]
[67700.708648]  (:authority=192.168.1.107:50000;content-type=application/grpc;grpc-accept-encoding=gzip;runtime=Talos;user-agent=grpc-go/1.59.0)
[67730.911513] InvalidArgument [/machine.MachineService/ApplyConfiguration] 3.891938ms unary rpc error: code = InvalidArgument desc = configuration validation failed: 1 error occurred:
[67730.929653]  * specified install disk does not exist: "/dev/nvme0n1p1"

I tested the NVMe drive with the Ubuntu distro on the same RK1 nodes, and it works fine:

$ sudo nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning			: 0
temperature				: 37 C (310 Kelvin)
available_spare				: 100%
available_spare_threshold		: 10%
percentage_used				: 0%
endurance group critical warning summary: 0
data_units_read				: 95
data_units_written			: 61
host_read_commands			: 1,946
host_write_commands			: 1,004
controller_busy_time			: 0
power_cycles				: 4
power_on_hours				: 2
unsafe_shutdowns			: 1
media_errors				: 0
num_err_log_entries			: 1
Warning Temperature Time		: 0
Critical Composite Temperature Time	: 0
Thermal Management T1 Trans Count	: 0
Thermal Management T2 Trans Count	: 0
Thermal Management T1 Total Time	: 0
Thermal Management T2 Total Time	: 0

So the question is: How can I get the /dev/nvme0* devices to appear?

Logs

Environment

  • Talos version: 1.6.3
  • Kubernetes version: n/a
  • Platform: TuringPI2 / RK1

Example With Cilium

The asciinema video is great and gives a very good idea of how to deploy this. Perhaps you could consider adding an example of how to deploy this with Cilium + eBPF + kube-proxy replacement and the L2 Advertisement feature.

This is what I did in my cluster:

  1. I use a patch to generate the control-plane config. If you have more than 3 nodes, you will also need to patch/edit the worker config accordingly.
    I add the extensions, and also select the right interface for the VIP advertisement.
    Using the busPath selector is a must, as the interface names are derived from the MAC address and so differ per node.
- op: add
  path: /machine/install/extensions
  value:
    - image: ghcr.io/nberlee/rk3588:v1.6.3

- op: add
  path: /machine/kernel
  value:
    modules:
      - name: rockchip-cpufreq

- op: add 
  path: /machine/install/disk
  value: 
    /dev/mmcblk0

- op: add
  path: /machine/network
  value:
    interfaces:
    - deviceSelector:
        busPath: "fe1c0000.ethernet"
      dhcp: true
      vip:
        ip:  192.168.0.2

- op: add 
  path: /cluster/network/cni
  value:
    name: none

- op: add 
  path: /cluster/proxy
  value:
    disabled: true

- op: add 
  path: /cluster/allowSchedulingOnControlPlanes
  value:
    true
  2. Generate the configs and customize them:
talosctl gen config turing https://<VIP>:6443 --config-patch-control-plane @cp.patch.yaml
cp  controlplane.yaml rk1-1.yaml
cp  controlplane.yaml rk1-2.yaml
cp  controlplane.yaml rk1-3.yaml
  3. Edit the files and set the hostname:
machine:
    network:
        hostname: rk1-1
  4. Bootstrap the cluster as per the video (a sketch of the commands follows):
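A minimal sketch of the apply/bootstrap commands, assuming placeholder node IPs 192.168.0.10-12:
talosctl apply-config --insecure -n 192.168.0.10 --file rk1-1.yaml
talosctl apply-config --insecure -n 192.168.0.11 --file rk1-2.yaml
talosctl apply-config --insecure -n 192.168.0.12 --file rk1-3.yaml
talosctl --talosconfig talosconfig -e 192.168.0.10 -n 192.168.0.10 bootstrap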
  5. The cluster now comes up without a CNI, as I have disabled it, but we can simply deploy Cilium with Helm:
helm install \
    cilium \
    cilium/cilium \
    --version 1.14.6 \
    --namespace kube-system \
    --set ipam.mode=kubernetes \
    --set=kubeProxyReplacement=true \
    --set=securityContext.capabilities.ciliumAgent="{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}" \
    --set=securityContext.capabilities.cleanCiliumState="{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}" \
    --set=cgroup.autoMount.enabled=false \
    --set=cgroup.hostRoot=/sys/fs/cgroup \
    --set l2announcements.enabled=true \
    --set loadBalancer.acceleration=native \
    --set k8sServiceHost=127.0.0.1  \
    --set k8sServicePort=7445 \
    --set bpf.masquerade=true
  6. Follow the Cilium L2 advertisement guide to expose services with the new L2 functionality (a sketch of the objects involved follows).
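
For reference, a minimal sketch of the two Cilium objects that guide creates (names and the address range are placeholders; CRD fields as in Cilium 1.14):

apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: l2-pool
spec:
  cidrs:
    - cidr: 192.168.0.240/28   # range handed out to LoadBalancer services
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-policy
spec:
  loadBalancerIPs: true        # announce LoadBalancer IPs via ARP from the nodes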

Missing CPU frequency

Bug Report

Description

talosctl dashboard reports 0 MHz for all 8 cores on RK1 nodes. This might be related to the big.LITTLE architecture, but I see the same issue on my homogeneous SOQuartz node. Not sure if upstream cpufreq issues are relevant either.

Logs

Nothing in dmesg; is there somewhere else to look?

Environment

  • Talos version: v1.6.4 (upgrading to v1.6.5 now)
  • Kubernetes version: v1.28.0
  • Platform: 2x Turing RK1 16GB on Turing Pi 2

Unable to build a disk image with the Gasket-driver extension

Bug Report

Attempting to build a disk image using the following command fails:

docker run --rm -t -v $PWD/_out:/out -v /dev:/dev --privileged ghcr.io/nberlee/imager:v1.6.7 metal --system-extension-image ghcr.io/siderolabs/gasket-driver:09385d4-v1.6.7@sha256:1add5eaa6da69397ed19d9af744e0564d36443fa75859f562dd9418376ce522a
profile ready:
arch: arm64
platform: metal
secureboot: false
version: v1.6.7
input:
  kernel:
    path: /usr/install/arm64/vmlinuz
  initramfs:
    path: /usr/install/arm64/initramfs.xz
  dtb:
    path: /usr/install/arm64/dtb
  uBoot:
    path: /usr/install/arm64/u-boot
  rpiFirmware:
    path: /usr/install/arm64/raspberrypi-firmware
  baseInstaller:
    imageRef: ghcr.io/nberlee/installer:v1.6.7
  systemExtensions:
    - imageRef: ghcr.io/siderolabs/gasket-driver:09385d4-v1.6.7@sha256:1add5eaa6da69397ed19d9af744e0564d36443fa75859f562dd9418376ce522a
output:
  kind: image
  imageOptions:
    diskSize: 1306525696
    diskFormat: raw
  outFormat: .xz
◰ copying kernel modules from /tmp/imager2988444800/extensions/0/rootfs/lib/modules failed: stat /tmp/imager2988444800/extensions/0/rootfs/lib/modules/6.6.22-talos: no such file or directory

Description

I can create images using other extensions, just not the gasket-driver. I can also use the official Sidero Labs imager with the gasket-driver extension.

Logs

Environment

  • Talos version: [talosctl version --nodes <problematic nodes>]
  • Kubernetes version: [kubectl version --short]
  • Platform:

Talos 1.6.4 broke USB2

Bug Report

The USB 2.0 port (which can be assigned to one of the nodes on the Turing Pi 2) is not recognised in Talos 1.6.4 only.

Unable to flash Talos 1.7.5 on RK1 modules in Turing Pi 2

Bug Report

Description

Hey there!

I built a Talos cluster on the Turing Pi 2 with 4 RK1 nodes a few weeks ago with Talos 1.6.3 using the image from this fork. For reasons (mainly me screwing up), I broke the cluster and wanted to start fresh, install version 1.7.5, and then use talhelper to set up Talos with Cilium without kube-proxy.

I downloaded the image and flashed it using the command tpi --user root --password turing flash -i metal-turing_rk1-arm64.raw -n 1, then ran tpi power on, and then tried to get the IP address using tpi uart get -n1 (with the user/pass as well), but I get no answer. I tried to nmap my network but can't see the nodes there either.

Any idea what's wrong and where I can go from here to get things going again?

Thanks in advance for your insight!

Cheers

P.S.: I tried to flash 1.6.3 again, but got the same result... 😦

Logs

Environment

  • Talos version: 1.7.5
  • Kubernetes version: N/A
  • Platform: RK1

Upgrade fails

Bug Report

Description

I'm trying to upgrade my TuringPi2/RK1 cluster from v1.6.5 to v1.6.7 (before going to v1.7.x), but it fails.

Logs

⨠ talosctl upgrade -i ghcr.io/nberlee/installer:v1.6.7-rk3588 --debug -n 10.76.1.41 
◰ watching nodes: [10.76.1.41]
    * 10.76.1.41: 1 error(s) occurred:
    sequence error: sequence failed: error running phase 11 in upgrade sequence: task 1/1: failed, task "upgrade" failed: exit code 1
console logs for nodes ["10.76.1.41"]:
10.76.1.41: user: warning: [2024-05-06T06:33:04.535372153Z]: [talos] upgrade request received: preserve false, staged false, force false, reboot mode DEFAULT
10.76.1.41: user: warning: [2024-05-06T06:33:04.546523153Z]: [talos] validating "ghcr.io/nberlee/installer:v1.6.7-rk3588"
10.76.1.41: user: warning: [2024-05-06T06:33:22.151084153Z]: [talos] etcd upgrade mutex locked with session ID 35438f4a731b3049
10.76.1.41: user: warning: [2024-05-06T06:33:22.251605153Z]: [talos] upgrade sequence: 15 phase(s)
10.76.1.41: user: warning: [2024-05-06T06:33:22.256962153Z]: [talos] phase drain (1/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:33:22.262503153Z]: [talos] task cordonAndDrainNode (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:33:22.268916153Z]: [talos] task cordonAndDrainNode (1/1): waiting for node to be cordoned
10.76.1.41: user: warning: [2024-05-06T06:33:22.277534153Z]: [talos] etcd upgrade mutex unlocked and session closed
10.76.1.41: user: warning: [2024-05-06T06:33:22.522347153Z]: [talos] skipping DaemonSet pod rook-ceph/rook-discover-hchlz
10.76.1.41: user: warning: [2024-05-06T06:33:22.530095153Z]: [talos] skipping mirror pod kube-system/kube-scheduler-cp1
10.76.1.41: user: warning: [2024-05-06T06:33:22.537518153Z]: [talos] skipping DaemonSet pod kube-system/cilium-5sm5c
10.76.1.41: user: warning: [2024-05-06T06:33:22.544659153Z]: [talos] skipping mirror pod kube-system/kube-apiserver-cp1
10.76.1.41: user: warning: [2024-05-06T06:33:22.552073153Z]: [talos] skipping mirror pod kube-system/kube-controller-manager-cp1
10.76.1.41: user: warning: [2024-05-06T06:33:22.560326153Z]: [talos] skipping DaemonSet pod rook-ceph/csi-rbdplugin-7q2hw
10.76.1.41: user: warning: [2024-05-06T06:33:22.567909153Z]: [talos] skipping DaemonSet pod loki/loki-canary-9kj7c
10.76.1.41: user: warning: [2024-05-06T06:33:22.574849153Z]: [talos] skipping DaemonSet pod loki/promtail-fp4v8
10.76.1.41: user: warning: [2024-05-06T06:33:22.581969153Z]: [talos] skipping DaemonSet pod prometheus/kube-prometheus-stack-prometheus-node-exporter-mxbvt
10.76.1.41: user: warning: [2024-05-06T06:33:22.592937153Z]: [talos] skipping DaemonSet pod rook-ceph/csi-cephfsplugin-xh87d
10.76.1.41: user: warning: [2024-05-06T06:34:22.609958153Z]: [talos] WARNING: failed to evict pod: failed waiting on pod rook-ceph/rook-ceph-exporter-cp1-7cbc6447dd-p2hv4 to be deleted: 2 error(s) occurred:
10.76.1.41: user: warning: [2024-05-06T06:34:22.625971153Z]:  pod is still running on the node
10.76.1.41: user: warning: [2024-05-06T06:34:22.630997153Z]:  timeout
10.76.1.41: user: warning: [2024-05-06T06:34:22.634210153Z]: [talos] task cordonAndDrainNode (1/1): done, 1m0.375860217s
10.76.1.41: user: warning: [2024-05-06T06:34:22.641880153Z]: [talos] phase drain (1/15): done, 1m0.388986407s
10.76.1.41: user: warning: [2024-05-06T06:34:22.648294153Z]: [talos] phase cleanup (2/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:34:22.653878153Z]: [talos] task removeAllPods (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:34:22.659633153Z]: [talos] task removeAllPods (1/1): waiting for kubelet lifecycle finalizers
10.76.1.41: user: warning: [2024-05-06T06:34:22.677348153Z]: [talos] task removeAllPods (1/1): shutting down kubelet gracefully
10.76.1.41: user: warning: [2024-05-06T06:34:52.693757153Z]: [talos] service[kubelet](Stopping): Sending SIGTERM to task kubelet (PID 5041, container kubelet)
10.76.1.41: user: warning: [2024-05-06T06:34:52.865332153Z]: [talos] service[kubelet](Finished): Service finished successfully
10.76.1.41: user: warning: [2024-05-06T06:34:52.915614153Z]: [talos] removing pod loki/loki-canary-9kj7c with network mode "POD"
10.76.1.41: user: warning: [2024-05-06T06:34:52.924194153Z]: [talos] removing pod loki/promtail-fp4v8 with network mode "POD"
10.76.1.41: user: warning: [2024-05-06T06:34:52.932183153Z]: [talos] removing pod rook-ceph/rook-discover-hchlz with network mode "POD"
10.76.1.41: user: warning: [2024-05-06T06:34:52.941098153Z]: [talos] removing pod rook-ceph/rook-ceph-exporter-cp1-7cbc6447dd-p2hv4 with network mode "POD"
10.76.1.41: user: warning: [2024-05-06T06:34:52.951969153Z]: [talos] removing container loki/promtail-fp4v8:promtail
10.76.1.41: user: warning: [2024-05-06T06:34:52.965844153Z]: [talos] removed container loki/promtail-fp4v8:promtail
10.76.1.41: user: warning: [2024-05-06T06:35:13.075928153Z]: [talos] removed pod loki/loki-canary-9kj7c
10.76.1.41: user: warning: [2024-05-06T06:35:13.084004153Z]: [talos] removed pod rook-ceph/rook-discover-hchlz
10.76.1.41: user: warning: [2024-05-06T06:35:13.125148153Z]: [talos] removed pod rook-ceph/rook-ceph-exporter-cp1-7cbc6447dd-p2hv4
10.76.1.41: user: warning: [2024-05-06T06:35:13.185509153Z]: [talos] removed pod loki/promtail-fp4v8
10.76.1.41: user: warning: [2024-05-06T06:35:13.196790153Z]: [talos] removing pod rook-ceph/csi-cephfsplugin-xh87d with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.206358153Z]: [talos] removing pod kube-system/kube-scheduler-cp1 with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.215693153Z]: [talos] removing pod rook-ceph/csi-rbdplugin-7q2hw with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.225637153Z]: [talos] removing pod kube-system/kube-controller-manager-cp1 with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.235822153Z]: [talos] removing pod kube-system/cilium-5sm5c with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.244498153Z]: [talos] removing pod kube-system/kube-apiserver-cp1 with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.253723153Z]: [talos] removing pod prometheus/kube-prometheus-stack-prometheus-node-exporter-mxbvt with network mode "NODE"
10.76.1.41: user: warning: [2024-05-06T06:35:13.266069153Z]: [talos] removing container kube-system/kube-scheduler-cp1:kube-scheduler
10.76.1.41: user: warning: [2024-05-06T06:35:13.274987153Z]: [talos] removed pod rook-ceph/csi-cephfsplugin-xh87d
10.76.1.41: user: warning: [2024-05-06T06:35:13.282085153Z]: [talos] removing container kube-system/kube-controller-manager-cp1:kube-controller-manager
10.76.1.41: user: warning: [2024-05-06T06:35:13.293155153Z]: [talos] removed pod rook-ceph/csi-rbdplugin-7q2hw
10.76.1.41: user: warning: [2024-05-06T06:35:13.299709153Z]: [talos] removing container kube-system/kube-apiserver-cp1:kube-apiserver
10.76.1.41: user: warning: [2024-05-06T06:35:13.308638153Z]: [talos] removed pod kube-system/cilium-5sm5c
10.76.1.41: user: warning: [2024-05-06T06:35:13.315565153Z]: [talos] removed pod prometheus/kube-prometheus-stack-prometheus-node-exporter-mxbvt
10.76.1.41: user: warning: [2024-05-06T06:35:13.325642153Z]: [talos] removed container kube-system/kube-scheduler-cp1:kube-scheduler
10.76.1.41: user: warning: [2024-05-06T06:35:13.335000153Z]: [talos] removed container kube-system/kube-controller-manager-cp1:kube-controller-manager
10.76.1.41: user: warning: [2024-05-06T06:35:13.345684153Z]: [talos] removed container kube-system/kube-apiserver-cp1:kube-apiserver
10.76.1.41: user: warning: [2024-05-06T06:35:13.355247153Z]: [talos] removed pod kube-system/kube-scheduler-cp1
10.76.1.41: user: warning: [2024-05-06T06:35:13.361943153Z]: [talos] removed pod kube-system/kube-controller-manager-cp1
10.76.1.41: user: warning: [2024-05-06T06:35:13.470614153Z]: [talos] removed pod kube-system/kube-apiserver-cp1
10.76.1.41: user: warning: [2024-05-06T06:35:13.477807153Z]: [talos] task removeAllPods (1/1): done, 50.827400351s
10.76.1.41: user: warning: [2024-05-06T06:35:13.484669153Z]: [talos] phase cleanup (2/15): done, 50.839866746s
10.76.1.41: user: warning: [2024-05-06T06:35:13.491128153Z]: [talos] phase dbus (3/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:13.496432153Z]: [talos] task stopDBus (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:13.502189153Z]: [talos] task stopDBus (1/1): done, 5.719684ms
10.76.1.41: user: warning: [2024-05-06T06:35:13.508517153Z]: [talos] phase dbus (3/15): done, 17.347771ms
10.76.1.41: user: warning: [2024-05-06T06:35:13.514673153Z]: [talos] phase leave (4/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:13.520286153Z]: [talos] task leaveEtcd (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:13.559613153Z]: [talos] service[etcd](Stopping): Sending SIGTERM to task etcd (PID 5076, container etcd)
10.76.1.41: user: warning: [2024-05-06T06:35:13.570877153Z]: [talos] removed static pod {"component": "controller-runtime", "controller": "k8s.StaticPodServerController", "id": "kube-scheduler"}
10.76.1.41: user: warning: [2024-05-06T06:35:13.585706153Z]: [talos] removed static pod {"component": "controller-runtime", "controller": "k8s.StaticPodServerController", "id": "kube-apiserver"}
10.76.1.41: user: warning: [2024-05-06T06:35:13.600403153Z]: [talos] removed static pod {"component": "controller-runtime", "controller": "k8s.StaticPodServerController", "id": "kube-controller-manager"}
10.76.1.41: user: warning: [2024-05-06T06:35:14.720085153Z]: [talos] service[etcd](Finished): Service finished successfully
10.76.1.41: user: warning: [2024-05-06T06:35:14.865642153Z]: [talos] task leaveEtcd (1/1): done, 1.345468732s
10.76.1.41: user: warning: [2024-05-06T06:35:14.872214153Z]: [talos] phase leave (4/15): done, 1.357664733s
10.76.1.41: user: warning: [2024-05-06T06:35:14.878498153Z]: [talos] phase stopServices (5/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:14.884784153Z]: [talos] task stopServicesForUpgrade (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:14.891567153Z]: [talos] service[udevd](Stopping): Sending SIGTERM to Process(["/sbin/udevd" "--resolve-names=never"])
10.76.1.41: user: warning: [2024-05-06T06:35:14.903110153Z]: [talos] service[cri](Stopping): Sending SIGTERM to Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"])
10.76.1.41: user: warning: [2024-05-06T06:35:14.920808153Z]: [talos] service[trustd](Stopping): Sending SIGTERM to task trustd (PID 4992, container trustd)
10.76.1.41: user: warning: [2024-05-06T06:35:14.931725153Z]: [talos] service[udevd](Finished): Service finished successfully
10.76.1.41: user: warning: [2024-05-06T06:35:14.939560153Z]: [talos] service[cri](Finished): Service finished successfully
10.76.1.41: user: warning: [2024-05-06T06:35:15.040910153Z]: [talos] service[trustd](Finished): Service finished successfully
10.76.1.41: user: warning: [2024-05-06T06:35:15.048876153Z]: [talos] task stopServicesForUpgrade (1/1): done, 164.101859ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.056569153Z]: [talos] phase stopServices (5/15): done, 178.076274ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.063443153Z]: [talos] phase unmountUser (6/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.069445153Z]: [talos] task unmountUserDisks (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.075542153Z]: [talos] task unmountUserDisks (1/1): done, 6.098293ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.082409153Z]: [talos] phase unmountUser (6/15): done, 18.973049ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.089047153Z]: [talos] phase unmount (7/15): 2 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.094630153Z]: [talos] task unmountPodMounts (2/2): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.100820153Z]: [talos] task unmountOverlayFilesystems (1/2): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.108138153Z]: [talos] task unmountPodMounts (2/2): unmounting /var/lib/kubelet/pods/7c1b1e50-adf5-413f-ab45-f0c9b3192cc6/volumes/kubernetes.io~secret/config
10.76.1.41: user: warning: [2024-05-06T06:35:15.124296153Z]: [talos] task unmountPodMounts (2/2): unmounting /var/lib/kubelet/pods/7c1b1e50-adf5-413f-ab45-f0c9b3192cc6/volumes/kubernetes.io~projected/kube-api-access-p5d6d
10.76.1.41: user: warning: [2024-05-06T06:35:15.141972153Z]: [talos] task unmountPodMounts (2/2): done, 47.329448ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.150447153Z]: [talos] task unmountOverlayFilesystems (1/2): done, 55.548595ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.158433153Z]: [talos] phase unmount (7/15): done, 69.380374ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.164734153Z]: [talos] phase unmountBind (8/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.170866153Z]: [talos] task unmountSystemDiskBindMounts (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.178297153Z]: [talos] task unmountSystemDiskBindMounts (1/1): unmounting /system/state
10.76.1.41: kern:  notice: [2024-05-06T06:35:15.187330153Z]: XFS (mmcblk0p5): Unmounting Filesystem a41c083d-f8b8-40e1-b017-1475b399a125
10.76.1.41: user: warning: [2024-05-06T06:35:15.203359153Z]: [talos] task unmountSystemDiskBindMounts (1/1): unmounting /var
10.76.1.41: kern:  notice: [2024-05-06T06:35:15.447600153Z]: XFS (mmcblk0p6): Unmounting Filesystem 2c949d80-249e-4f7e-b427-081031015ed9
10.76.1.41: user: warning: [2024-05-06T06:35:15.490590153Z]: [talos] task unmountSystemDiskBindMounts (1/1): done, 319.74368ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.498640153Z]: [talos] phase unmountBind (8/15): done, 333.940236ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.506266153Z]: [talos] phase unmountSystem (9/15): 2 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.512554153Z]: [talos] task unmountStatePartition (2/2): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.519468153Z]: [talos] task unmountEphemeralPartition (1/2): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.526933153Z]: [talos] task unmountStatePartition (2/2): done, 7.220119ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.534368153Z]: [talos] task unmountEphemeralPartition (1/2): done, 14.983651ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.542312153Z]: [talos] phase unmountSystem (9/15): done, 36.062385ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.549226153Z]: [talos] phase verifyDisk (10/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.555302153Z]: [talos] task verifyDiskAvailability (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.562864153Z]: [talos] task verifyDiskAvailability (1/1): done, 7.560517ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.570390153Z]: [talos] phase verifyDisk (10/15): done, 21.167699ms
10.76.1.41: user: warning: [2024-05-06T06:35:15.577088153Z]: [talos] phase upgrade (11/15): 1 tasks(s)
10.76.1.41: user: warning: [2024-05-06T06:35:15.582900153Z]: [talos] task upgrade (1/1): starting
10.76.1.41: user: warning: [2024-05-06T06:35:15.612788153Z]: [talos] task upgrade (1/1): performing upgrade via "ghcr.io/nberlee/installer:v1.6.7-rk3588"
10.76.1.41: user: warning: [2024-05-06T06:35:15.628704153Z]: [talos] pulling extension "ghcr.io/nberlee/rk3588:v1.6.5"
10.76.1.41: user: warning: [2024-05-06T06:35:18.205612153Z]: 2024/05/06 06:35:21 running Talos installer v1.6.7
10.76.1.41: user: warning: [2024-05-06T06:35:18.212183153Z]: 2024/05/06 06:35:21 WARNING: config validation:
10.76.1.41: kern:  notice: [2024-05-06T06:35:18.215701153Z]: XFS (mmcblk0p3): Mounting V5 Filesystem 832c26ad-0e47-4d3d-afdd-7bb231b93a87
10.76.1.41: user: warning: [2024-05-06T06:35:18.218438153Z]: 2024/05/06 06:35:21   .machine.install.extensions is deprecated, please see https://www.talos.dev/latest/talos-guides/install/boot-assets/
10.76.1.41: kern:    info: [2024-05-06T06:35:18.267291153Z]: XFS (mmcblk0p3): Ending clean mount
10.76.1.41: kern:  notice: [2024-05-06T06:35:18.275582153Z]: XFS (mmcblk0p3): Unmounting Filesystem 832c26ad-0e47-4d3d-afdd-7bb231b93a87
10.76.1.41: user: warning: [2024-05-06T06:35:18.297333153Z]: 2024/05/06 06:35:21 running pre-flight checks
10.76.1.41: user: warning: [2024-05-06T06:35:18.304430153Z]: 2024/05/06 06:35:21 host Talos version: v1.6.5
10.76.1.41: user: warning: [2024-05-06T06:35:18.321734153Z]: 2024/05/06 06:35:21 host Kubernetes versions: kubelet: 1.29.3, kube-apiserver: 1.29.3, kube-scheduler: 1.29.3, kube-controller-manager: 1.29.3
10.76.1.41: user: warning: [2024-05-06T06:35:18.337297153Z]: 2024/05/06 06:35:21 all pre-flight checks successful
10.76.1.41: user: warning: [2024-05-06T06:35:18.344035153Z]: 2024/05/06 06:35:21 discovered system extensions:
10.76.1.41: user: warning: [2024-05-06T06:35:18.350491153Z]: 2024/05/06 06:35:21 NAME             VERSION   AUTHOR
10.76.1.41: user: warning: [2024-05-06T06:35:18.357332153Z]: 2024/05/06 06:35:21 rk3588-drivers   v1.6.5    Nico Berlee
10.76.1.41: user: warning: [2024-05-06T06:35:18.364657153Z]: 2024/05/06 06:35:21 validating system extensions
10.76.1.41: user: warning: [2024-05-06T06:35:18.371032153Z]: 2024/05/06 06:35:21 preparing to run depmod to generate kernel modules dependency tree
10.76.1.41: user: warning: [2024-05-06T06:35:23.847023153Z]: Error: copying kernel modules from /system/extensions/000.ghcr.io-nberlee-rk3588-v1.6.5/rootfs/lib/modules failed: stat /system/extensions/000.ghcr.io-nberlee-rk3588-v1.6.5/rootfs/lib/modules/6.6.22-talos: no such file or directory
10.76.1.41: user: warning: [2024-05-06T06:35:23.871241153Z]: Usage:
10.76.1.41: user: warning: [2024-05-06T06:35:23.873506153Z]:   installer install [flags]
10.76.1.41: user: warning: [2024-05-06T06:35:23.877822153Z]: 
10.76.1.41: user: warning: [2024-05-06T06:35:23.879557153Z]: Flags:
10.76.1.41: user: warning: [2024-05-06T06:35:23.881809153Z]:   -h, --help   help for install
10.76.1.41: user: warning: [2024-05-06T06:35:23.886492153Z]: 
10.76.1.41: user: warning: [2024-05-06T06:35:23.888160153Z]: Global Flags:
10.76.1.41: user: warning: [2024-05-06T06:35:23.891098153Z]:       --arch string                    The target architecture (default "arm64")
10.76.1.41: user: warning: [2024-05-06T06:35:23.900545153Z]:       --board string                   The value of talos.board (default "none")
10.76.1.41: user: warning: [2024-05-06T06:35:23.909996153Z]:       --bootloader                     Deprecated: no op (default true)
10.76.1.41: user: warning: [2024-05-06T06:35:23.918570153Z]:       --config string                  The value of talos.config
10.76.1.41: user: warning: [2024-05-06T06:35:23.926478153Z]:       --disk string                    The path to the disk to install to
10.76.1.41: user: warning: [2024-05-06T06:35:23.935252153Z]:       --extra-kernel-arg stringArray   Extra argument to pass to the kernel
10.76.1.41: user: warning: [2024-05-06T06:35:23.944214153Z]:       --force                          Indicates that the install should forcefully format the partition
10.76.1.41: user: warning: [2024-05-06T06:35:23.956000153Z]:       --meta metaValueSlice            A key/value pair for META (default [])
10.76.1.41: user: warning: [2024-05-06T06:35:23.965172153Z]:       --platform string                The value of talos.platform
10.76.1.41: user: warning: [2024-05-06T06:35:23.973265153Z]:       --upgrade                        Indicates that the install is being performed by an upgrade
10.76.1.41: user: warning: [2024-05-06T06:35:23.984455153Z]:       --zero                           Indicates that the install should write zeros to the disk before installing
10.76.1.41: user: warning: [2024-05-06T06:35:23.997206153Z]: 
10.76.1.41: user: warning: [2024-05-06T06:35:23.998881153Z]: copying kernel modules from /system/extensions/000.ghcr.io-nberlee-rk3588-v1.6.5/rootfs/lib/modules failed: stat /system/extensions/000.ghcr.io-nberlee-rk3588-v1.6.5/rootfs/lib/modules/6.6.22-talos: no such file or directory
10.76.1.41: user: warning: [2024-05-06T06:35:24.081845153Z]: [talos] task upgrade (1/1): failed: task "upgrade" failed: exit code 1
10.76.1.41: user: warning: [2024-05-06T06:35:24.090616153Z]: [talos] phase upgrade (11/15): failed
10.76.1.41: user: warning: [2024-05-06T06:35:24.096109153Z]: [talos] upgrade sequence: failed

Environment

  • Talos version:
Client:
	Tag:         v1.6.5
	SHA:         22803bc5
	Built:       
	Go version:  go1.21.6 X:loopvar
	OS/Arch:     linux/amd64
Server:
	NODE:        10.76.1.41
	Tag:         v1.6.5
	SHA:         523c4966
	Built:       
	Go version:  go1.21.6 X:loopvar
	OS/Arch:     linux/arm64
	Enabled:     RBAC
  • Kubernetes version: v1.29.4
  • Platform: TuringPi 2 with 4x RK1
