Tinkerbell

License

Tinkerbell is licensed under the Apache License, Version 2.0. See LICENSE for the full license text. Some of the projects used by the Tinkerbell project may be governed by a different license; please refer to their specific licenses.

Tinkerbell is a CNCF project.

Community

The Tinkerbell community meets bi-weekly on Tuesdays. The meeting details can be found here.

What's Powering Tinkerbell?

The Tinkerbell stack consists of several microservices and a gRPC API:

Tink

Tink is the short-hand name for tink-server and tink-worker. tink-worker and tink-server communicate over gRPC and are responsible for processing workflows. The CLI is the user-interactive piece for creating workflows and their building blocks: templates and hardware data.

Smee

Smee is Tinkerbell's DHCP server. It handles DHCP requests, hands out IPs, and serves up iPXE. It uses the Tinkerbell client to pull and push hardware data. It only responds to a predefined set of MAC addresses so it can be deployed in an existing network without interfering with existing DHCP infrastructure.

Hegel

Hegel is the metadata service used by Tinkerbell and OSIE. It collects data from both and transforms it into a JSON format to be consumed as metadata.

OSIE

OSIE is Tinkerbell's default in-memory installation environment for bare metal. It installs operating systems and handles deprovisioning.

Hook

Hook is the newly introduced alternative to OSIE. It's the next iteration of the in-memory installation environment to handle operating system installation and deprovisioning.

PBnJ

PBnJ is an optional microservice that can communicate with baseboard management controllers (BMCs) to control power and boot settings.

Building

Use make help. The most interesting targets are make all (or just make) and make images. make all builds all the binaries for your host OS and CPU to enable running directly. make images will build all the binaries for Linux/x86_64 and build docker images with them.
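
For example:

make help    # list available targets
make all     # build binaries for the host OS and CPU (default target)
make images  # build Linux/x86_64 binaries and their Docker images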

Configuring OpenTelemetry

Rather than adding a bunch of command-line options or a config file, OpenTelemetry is configured via environment variables. The most relevant ones are below; for others, see https://github.com/equinix-labs/otel-init-go.

Currently this covers only tracing; metrics support still needs to be discussed with the community.

Env Variable                 Required  Default
OTEL_EXPORTER_OTLP_ENDPOINT  no        localhost
OTEL_EXPORTER_OTLP_INSECURE  no        false
OTEL_LOG_LEVEL               no        info

To work with a local opentelemetry-collector, try the following. For examples of how to set up the collector to relay to various services, take a look at otel-cli.

export OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317
export OTEL_EXPORTER_OTLP_INSECURE=true
./cmd/tink-server/tink-server <stuff>

Website

For complete documentation, please visit the Tinkerbell project hosted at tinkerbell.org.

tink's Issues

Tinkerbell Docs Updates

  • We need to include what the docs assume we have running already, namely Terraform and Docker

  • Note what the project id is (hash after projects in the url) on this page

  • Set an expectation on provisioning. It took nearly 6 minutes and we didn't know if everything was moving as expected. (Will be nice to write: “it takes average 5minutes”)

  • What comes after Tinkerbell is provisioned? Now that Tinkerbell is provisioned and everything is working, what happens next? You have to press the arrow on the left to find the next step; it would be nice to write it at the end of the page.

  • When you run the setup script it asks for:
    Following network interfaces found on the system:
    eno1
    eno2
    enp1s0f0
    enp1s0f1
    Which one would you like to use with Tinkerbell? As a side note, @gianarb picked the wrong one and now I don't know how to change it, so I ran terraform apply again. (The answer looks to be enp1s0f1.)

  • The "SSH into the provisioner for the following steps:" instruction is easy to overlook. Could we make it bold?

  • We need to include troubleshooting for the "Error: The facility sjc1 has no provisionable c3.small.x86 servers matching your criteria." error. You have to change the target datacenter in main.tf, in three places: the vlan and both machines. It would also be great if we had a CLI command or endpoint we could query to find out about availability.

Mark couldn't find any c3.small.x86 in ams1, ewr1, nrt1, or sjc1. Switching to c1.small.x86 in Amsterdam didn't work either. The docs should specify that it has to be c3.small.x86 for Tinkerbell to work.

Provisioner stays active if the worker fails.

When the datacenter didn’t have hosts for both provisioner and worker, it spent over 3 minutes creating the provisioner before failing; the worker wasn’t started. After it failed, the provisioner was still there until I ran terraform apply with the new datacenter details. If the worker fails, it would be nice for the provisioner to shut down.

Provide release binary of tink CLI

A statically-linked release binary of the tink CLI should be made available. Users shouldn't have to download Go to build the CLI, or wonder where to get it from; I couldn't find docs explaining where to find it. There also are no release tags of tink, but there should be, even if the tag is 0.1.0.

For an example, see how we do this with arkade or inlets

Build config for Travis/Make:

https://github.com/alexellis/arkade/blob/master/.travis.yml
https://github.com/alexellis/arkade/blob/master/Makefile

Install script:

https://github.com/alexellis/arkade/blob/master/get.sh

action images are not available publicly yet?

I'm trying to pull your action images, but I keep getting this:

root@ubuntu:/home/student# docker pull quay.io/tinkerbell/ubuntu:base
Error response from daemon: unauthorized: access to the requested resource is not authorized
root@ubuntu:/home/student# docker pull quay.io/tinkerbell/disk-wipe:v3
Error response from daemon: unauthorized: access to the requested resource is not authorized
root@ubuntu:/home/student# docker pull quay.io/tinkerbell/disk-partition:v3
Error response from daemon: unauthorized: access to the requested resource is not authorized

Are they public yet?

Creating targets based on ip_addr field does not work

Additionally, it seems to provide a confusing error message:

$ tink target create '{"targets": {"machine2": {"ip_addr": "192.168.1.12"}}}'
2020/05/06 20:39:22 invalid key "ip_addr" in data. it should be "mac_addr" or "ip_addr" only

Configurable top-level directories

We should allow users to tailor where our directories and files get put down. It doesn't need to be very fine-grained; I think supporting $PREFIX or something along those lines in setup.sh would be fine.
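
A hypothetical sketch of what $PREFIX support in setup.sh could look like (the default path is made up):

# honor $PREFIX if set, otherwise fall back to a default install root
PREFIX="${PREFIX:-/opt/tinkerbell}"
mkdir -p "$PREFIX/certs" "$PREFIX/state"
echo "installing into $PREFIX"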

tink target create --help contains invalid invocation example

# tink target create --help
create a target

Usage:
  tink target create [flags]

Examples:
tinkerbell target create '{"targets": {"machine1": {"mac_addr": "02:42:db:98:4b:1e"},"machine2": {"ipv4_addr": "192.168.1.5"}}}'

Flags:
  -h, --help   help for create

Global Flags:
  -f, --facility string   used to build grcp and http urls

It should be:

tink target create '{"targets": {"machine1": {"mac_addr": "02:42:db:98:4b:1e"},"machine2": {"ipv4_addr": "192.168.1.5"}}}'

not:

tinkerbell target create '{"targets": {"machine1": {"mac_addr": "02:42:db:98:4b:1e"},"machine2": {"ipv4_addr": "192.168.1.5"}}}'

tink hardware push returns no useful data

tink hardware push returns no useful data. Surely we can output something like "Inserted..OK", or some other feedback for users? The exit code can be checked, but that alone is confusing.

tink hardware push "`cat /tmp/hardware.json`"
echo $?
0

Also, can stdin be used to pipe the file in, or not?

cat /tmp/hardware.json | tink hardware push

This is unclear, but it should be a supported workflow; putting the string in via args is not UNIX-like.
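
For illustration, the UNIX-style invocations being asked for (hypothetical today):

# desired behaviour: read hardware data from stdin
tink hardware push < /tmp/hardware.json
cat /tmp/hardware.json | tink hardware push
# ...and print a confirmation such as "Inserted..OK" on success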

cc @gauravgahlot

Getting Started Experience

The current getting-started UX involves several steps. Instead, it should be as simple as running a single script that encapsulates everything else. The script can take input from users if needed.

curl https://raw.githubusercontent.com/tinkerbell/tink/master/setup.sh | bash

Use persistent volume to hold registry data

Current
The private registry created with the docker-compose.yml used to set up the workflow environment stores all the data in an anonymous volume.

Expected
The registry should use persistent volumes.
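
A minimal sketch of the change, assuming the standard registry:2 image (which keeps its data under /var/lib/registry):

services:
  registry:
    image: registry:2
    volumes:
      - registry-data:/var/lib/registry  # named volume instead of an anonymous one

volumes:
  registry-data: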

Website is out of date

The website http://tinkerbell.org only has two pages and is missing a lot of the information that is in docs/ in this repo. It is OK for the website to eventually have much more detailed docs, but it should either clearly have no real guides and point to the right guide in the repo, or have a good but very simple "getting started" guide. Even better if it aligns with #49.

Cut release tags from tink

Git enables releases to be cut as tags, which are shown on the GitHub Releases page.

Tink currently has none; I would suggest that the project start using them, even if they are RC tags or very early alpha tags.

For instance:

0.1.0
0.1.1

and so forth...

These tags can be checked out, show progress being made to internal and external stakeholders, and allow users to reason about "versions".
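
Cutting and publishing an annotated tag is a two-command job, e.g.:

git tag -a v0.1.0 -m "first alpha release"
git push origin v0.1.0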

Example: https://github.com/alexellis/arkade/releases

If you install the Derek app on this repo, it will automatically generate release notes with all the PRs and commits.

Add ability to reboot the machine after workflow is finished

For workflows that provision an OS, it would be nice if the workflow itself could reboot the machine after it's done, so the machine can boot into the target OS and the upper orchestration layer (e.g. the person monitoring the provisioning process, or some logic using IPMI) doesn't need to care about that.

Things to consider:

  • a worker can be part of multiple workflows; perhaps the reboot should only happen once all workflows have finished successfully.
  • a workflow could indicate that a reboot is needed after it finishes, e.g. by setting a reboot parameter to true (see the sketch below).
  • an action or task can't trigger the reboot by itself, as that would shut down the worker before it can report that the reboot task succeeded.
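
A hedged sketch of the second idea, using today's template shape with a hypothetical top-level reboot flag (the reboot key does not exist; the image name is borrowed from the examples above):

# hypothetical: tell tink-server to power-cycle the worker once every task succeeds
version: "0.1"
name: provision_and_reboot
global_timeout: 600
reboot: true
tasks:
  - name: "os-installation"
    worker: "{{.device_1}}"
    actions:
      - name: "install"
        image: quay.io/tinkerbell/ubuntu:base
        timeout: 300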

Make lint, vet and fmtcheck part of Makefile and part of CI run

We definitely should be running golint as part of our CI and failing the build otherwise. It should be in the Makefile, so anyone can run it.

We also should be running go vet.

Finally, we should be running a fmt check. We should not update the files as part of the fmt check (as in gofmt -w -s <file>) but only verify them. It is usually useful to have a Makefile target called fmtcheck that does this and is executed as part of CI, plus a separate fmt target that fixes the files. That makes it easy for someone to fmt all files locally in a single command, and for CI to check the status.
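
A sketch of what those targets could look like, assuming the standard Go tooling plus golint on PATH (recipe lines must be tab-indented):

fmt:        ## rewrite files in place
	gofmt -w -s .

fmtcheck:   ## fail if any file needs reformatting
	@test -z "$$(gofmt -l -s .)" || (gofmt -l -s .; exit 1)

lint:
	golint -set_exit_status ./...

vet:
	go vet ./...

ci: fmtcheck lint vet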

Add documentation for passing cloud-init metadata

For service provisioning, customers need to pass cloud-init info to create users, install packages, etc. Tink should handle cloud-init via metadata, but it's not clear how to pass cloud-init data to the metadata server and retrieve it during boot.
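
A hedged example of the retrieval half, assuming Hegel keeps an HTTP metadata endpoint reachable from the worker (the port and path here are assumptions; adjust to the actual deployment):

# hypothetical: fetch metadata from Hegel during boot
curl -s http://<hegel-ip>:50061/metadata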

README file is missing basic information

Update the README to include, at minimum:

  • one line project description
  • link to web site
  • one paragraph about the project
  • examples of use and guides to getting started
  • current status of project maturity, at some level that doesn't need daily changes

Looks like a bunch of this might be answered in depth by #20

Is the compose file to build it or to run it?

There is a compose file in this repo, but it isn't completely clear whether it is for building Tinkerbell or running it. If it is for running it, there are dependencies on git and git-lfs which make it difficult to "just run".

Add version subcommand

With the tink binary, it should be possible to check the version of both the CLI tool and, ideally, the server, similar to kubectl version. Currently there is no way to reference which version of the binary is in use when reporting issues.
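
A minimal sketch of such a subcommand, assuming the CLI keeps using spf13/cobra; injecting the version string at build time via -ldflags is a common Go convention, not something the repo does today:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// version is meant to be overridden at build time, e.g.:
//   go build -ldflags "-X main.version=v0.1.0" .
var version = "dev"

func newVersionCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "version",
		Short: "print the client version",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Printf("tink version %s\n", version)
		},
	}
}

func main() {
	root := &cobra.Command{Use: "tink"}
	root.AddCommand(newVersionCmd())
	if err := root.Execute(); err != nil {
		fmt.Println(err)
	}
}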

add init support for centos7

The setup script is not working on a CentOS 7 machine:

[root@localhost tink]# ./setup_with_docker_compose.sh
grep: /etc/network/interfaces: No such file or directory
This is network interface
grep: /etc/network/interfaces: No such file or directory

It should also have set -e to stop execution if there are errors.
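
A hedged sketch of a more portable interface check that works on both Debian- and RHEL-family systems, instead of parsing /etc/network/interfaces:

#!/usr/bin/env bash
set -euo pipefail  # stop on errors, as suggested above

# enumerate interfaces from the kernel rather than a distro-specific file
ip -o link show | awk -F': ' '{print $2}' | grep -v '^lo$'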

Insecure defaults from the setup instructions

After following the setup instructions, all services appear to be directly exposed on the Internet, which is never a good idea; Kibana, for instance, has no authentication or TLS configured.

Please consider removing this default in favour of accessing the dashboards via SSH tunnels or by using inlets.

Do the nodes even need a public IP? I hear that Packet now supports nodes with no public IPs, perhaps the terraform could provision a tiny Type0 to run Nginx as a reverse proxy with auth?

These are the ports that are open: [screenshot of the port scan omitted]

I do not consider this to be a security advisory that needs to be handled via backchannels or email; this is probably just an oversight. If the team feels otherwise, I can remove the issue.

Worker should not exit once all workflows are finished

Currently the worker polls for new workflows while other workflows are running, but once they are all finished, it exits. That seems counter-intuitive and requires the user to reboot the machine to run more workflows, which may not even be possible if a workflow has already installed the OS and the machine will boot from disk.

It seems the worker should wait indefinitely for new workflows. See also #71.
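
A sketch of the suggested behaviour (the fetch function is a hypothetical stand-in for the worker's real gRPC call to tink-server):

package main

import (
	"fmt"
	"time"
)

// fetchPendingWorkflows stands in for the real call to tink-server.
func fetchPendingWorkflows() []string { return nil }

func main() {
	// Never exit when the queue drains; keep polling on an interval.
	for {
		for _, wf := range fetchPendingWorkflows() {
			fmt.Println("executing workflow:", wf)
		}
		time.Sleep(5 * time.Second)
	}
}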

Do we need git-lfs?

What do we use git-lfs for? Based on the .gitattributes file in boots here, it looks like

  • 3 .efi files
  • 1 .kpxe file
  • 1 .go file; how large could a go file be and what is in it?

More importantly, do these need to be in LFS? Are they actually versioned source, or just generated binaries that we use, which could possibly move to a saner store for large data that does not itself require versioning, or at least is not subject to PR-style changes and tracking like normal source code?

Yes, I am thinking about OCI image repositories, which do quite well with things like this, but either way, the current setup creates complexity in git and a dependency on git and git-lfs just to get started.

expose endpoints

We need to expose endpoints for the CRUD operations so that external users can interact via means other than the CLI.

tink hardware create/list/delete
tink template create/list/delete
tink workflow create/list/delete

Suggest we start with implementing a gRPC server for these actions.
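
A hedged sketch of what the gRPC surface for one of these resources might look like (all names and message shapes are illustrative, not the actual API):

syntax = "proto3";

package tinkerbell;

// hypothetical CRUD surface for hardware; template and workflow would mirror it
service Hardware {
  rpc Create (HardwareData) returns (Empty);
  rpc List (Empty) returns (stream HardwareData);
  rpc Delete (HardwareID) returns (Empty);
}

message HardwareData { string json = 1; }
message HardwareID { string id = 1; }
message Empty {}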

Tinkerbell vision & mission

Hi everyone, I wanted to know a bit more about the vision & mission for Tinkerbell.

I've just started an awesome-bare-metal repo which lists Tinkerbell alongside some classic projects and products. There's also some writing and analysis I'm planning to do in a very short period of time.

Could you help me with some of these questions? I don't mind taking them over email.

What's the USP vs prior art?

What are the USPs that Tinkerbell brings in the context of prior art? How does it compare to the incumbent projects and products? What are the main use-cases and pros/cons?

Are there certain things that are not in scope, e.g. IPMI? Is there a story for that?

In what ways is Tinkerbell similar to the Packet API? What lessons were learned and then applied?

Key concepts

Workflows seem to be important. What are the alternatives to a workflow?

What was the reason that Tinkerbell was created?

Why was Tinkerbell created? Does it do things in a different way? Is that a novel technique, and why?

Who's the main sponsor for the project? Are other companies involved or coming on board to contribute / test / build it?

Community / ethos / vision

What is the ethos of the project? I see that Go, gRPC, and docker-compose are used. Are there values or experiences that led to these choices?

If a new developer wanted to contribute, what values or decisions could be codified and written up to help them make a meaningful contribution?

What would the ideal community look like, 6 months from now? Is there a GA / 1.0 release on the horizon?

Thanks a lot 👍

Alex

Errors when spinning up the provisioner and worker

When the datacenter didn’t have hosts for both provisioner and worker, it spent over 3 minutes creating the provisioner before failing; the worker wasn’t started. After it failed, the provisioner was still there until I ran terraform apply with the new datacenter details.

It would be great if we had a CLI command or endpoint we could query to find out about availability.

Mark couldn’t find any c3.small.x86 in ams1, ewr1, nrt1, or sjc1 and only knew there was an issue after trying to spin up.

Does Tinkerbell have to run on c3.small.x86? When Mark switched to c1.small.x86 in Amsterdam, it worked at first, then broke with:

Error: POST https://api.packet.net/ports/ac8144a2-5656-47a2-a954-30b61dafd10e/disbond: 422 This device is not enabled for Layer 2. Please contact support for more details.

on main.tf line 19, in resource "packet_device" "tf-provisioner":
  19: resource "packet_device" "tf-provisioner" {

Error: POST https://api.packet.net/ports/fd9e24e7-0c63-4b90-a2f4-218447a6d8b7/disbond: 422 Hardware invalid server type

on main.tf line 30, in resource "packet_device" "tf-worker":
  30: resource "packet_device" "tf-worker" {

He tried again with a c3.small.x86 for the worker.

Error: POST https://api.packet.net/ports/c0eea99b-7d89-40f5-bcc7-c6dee46f226f/disbond: 422 This device is not enabled for Layer 2. Please contact support for more details.

on main.tf line 19, in resource "packet_device" "tf-provisioner":
  19: resource "packet_device" "tf-provisioner" {

Can I run it with nothing more than compose?

Related to #51: is there a way I can just run it with a single command, i.e. I get a single config file (docker-compose.yml is fine) and it launches with no additional dependencies? That would be the second-fastest way to get started (the fastest being #49, of course, which would just wrap this).

workflow creation - missing steps

Hi there,

I set up Tinkerbell in my local environment and all seems to be operational. I tried to set up a workflow following the instructions from https://github.com/tinkerbell/tink/blob/master/docs/writing-workflow.md, but ended up with the following error:

~ # tink hardware all
~ # tink target list
+--------------------------------------+--------------------------------------------------------------+
| TARGET ID | TARGET DATA |
+--------------------------------------+--------------------------------------------------------------+
| 2597b263-0de1-422c-8aca-7738d903e550 | {"targets": {"machine1": {"mac_addr": "52:54:00:f9:79:28"}}} |
+--------------------------------------+--------------------------------------------------------------+
~ # tink template list
+--------------------------------------+---------------+-------------------------------+-------------------------------+
| TEMPLATE ID | TEMPLATE NAME | CREATED AT | UPDATED AT |
+--------------------------------------+---------------+-------------------------------+-------------------------------+
| 5d8f9e7c-e984-4162-bff0-ad0400fe5b1b | sample | 2020-04-15 14:14:16 +0000 UTC | 2020-04-15 14:14:16 +0000 UTC |
+--------------------------------------+---------------+-------------------------------+-------------------------------+
~ # tink workflow create -t 5d8f9e7c-e984-4162-bff0-ad0400fe5b1b -r 2597b263-0de1-422c-8aca-7738d903e550
2020/04/15 14:15:13 rpc error: code = Unknown desc = Failed to insert in workflow_state: Target mentioned with refernece 52:54:00:f9:79:28 not found

From the logs, it looks like I am missing entries in the hardware table in the db. But in the documentation, I haven't seen any steps or instructions on how to add hardware entries to the db.

clarifications for setup page - what is inputenv?

The setup page has no info on what to provide in inputenv. The file only has a few elements, but some could be read in multiple ways:

  • what is host IP vs nginx IP?
  • Why are there two?
  • Do these need to be on the same network?
  • I assume broad_ip means broadcast, but it isn't self-evident.
  • What is the cidr for? Is it the netmask for the above (coming back to the earlier question: do they need to be on the same network)?

Simplified stack version needed

I'd love to run tinkerbell over the internet to help my small project (https://devpost.com/software/help-education-sector-to-produce-more-laptops-for-students), with significantly reduced stack components.

I have a vision of running 3-4 fixed, catch-all-MAC workflows. Here's how I see it:

  • get rid of most of the logic for TFTP/DHCP/BOOTP, so no boots, cacher, or hegel needed in my case
  • keep OSIE's tree, especially the kernels and vmlinuz images: a great concept
  • keep the way you describe workflow steps in a single template file, but somehow detach that logic from the components I've cut: another great concept

From my analysis, I conclude that a potential killer feature to make this all happen would be for the worker helper/container to be able to read the workflow definition from an HTTPS source. Would you please think it through and point me to how this could be implemented (I can try it myself)?
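
For illustration, the proposed flow might look something like this (the flag is hypothetical; nothing like it exists today):

# hypothetical: worker fetches its workflow definition directly over HTTPS
tink-worker --workflow-url https://example.com/workflows/laptop-refresh.yaml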

I'm open to discuss this idea further if you wanted to.

Style issue with error strings

According to idiomatic Go style, error strings should not begin with a capital letter.

Error strings should not be capitalized (unless beginning with proper nouns or acronyms) or end with punctuation, since they are usually printed following other context.

https://github.com/golang/go/wiki/CodeReviewComments#error-strings

Almost all of the error strings found in the workflow engine do not follow this rule:

https://github.com/tinkerbell/tink/blob/master/executor/executor.go#L26
https://github.com/tinkerbell/tink/blob/master/executor/executor.go#L89

I don't know where it is on the list, but perhaps going forward it might be worth following for new code and when making changes? cc @nathangoulding @deitch

golint may find this.
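
To illustrate why the convention matters once errors are wrapped with context, a small self-contained example:

package main

import (
	"errors"
	"fmt"
)

func main() {
	// capitalized: reads badly once wrapped with context
	bad := errors.New("Failed to insert in workflow_state")
	// lowercase, no trailing punctuation: composes cleanly
	good := errors.New("failed to insert in workflow_state")

	fmt.Println(fmt.Errorf("creating workflow: %w", bad))
	fmt.Println(fmt.Errorf("creating workflow: %w", good))
}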

Support provisioning RPi

We should support an easy homelab deployment model for Tinkerbell, such as supporting RPi with UEFI (https://rpi4-uefi.dev/) or PXE (doc). The goal would be to run the core services on a local laptop or computer and lifecycle RPi 3 or RPi 4 devices.

Make --help work without environment variables set

Currently, running tink --help gives me a not-so-nice panic:

$ tink --help
{"level":"panic","ts":1587130851.119304,"caller":"rollbar/rollbar.go:20","msg":"required envvar is unset","service":"github.com/tinkerbell/tink","pkg":"log","envvar":"ROLLBAR_TOKEN"}
panic: required envvar is unset

goroutine 1 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00049e2c0, 0xc0002afd80, 0x1, 0x2)
        /home/invidian/go/pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:229 +0x547
go.uber.org/zap.(*SugaredLogger).log(0xc00028c398, 0x4, 0x55873649a79c, 0x18, 0x0, 0x0, 0x0, 0xc0003dfa58, 0x2, 0x2)
        /home/invidian/go/pkg/mod/go.uber.org/[email protected]/sugar.go:234 +0x102
go.uber.org/zap.(*SugaredLogger).Panicw(...)
        /home/invidian/go/pkg/mod/go.uber.org/[email protected]/sugar.go:204
github.com/packethost/pkg/log/internal/rollbar.Setup(0xc00028c398, 0x55873649bbe5, 0x1a, 0x2)
        /home/invidian/go/pkg/mod/github.com/packethost/[email protected]/log/internal/rollbar/rollbar.go:20 +0x44d
github.com/packethost/pkg/log.configureLogger(0xc00040a780, 0x55873649bbe5, 0x1a, 0x55873648bace, 0x4, 0x55873648b4dc, 0x3, 0x55873648c9da, 0x5)
        /home/invidian/go/pkg/mod/github.com/packethost/[email protected]/log/log.go:39 +0x26c
github.com/packethost/pkg/log.Init(0x55873649bbe5, 0x1a, 0x558736699a00, 0xc0002b1100, 0x100, 0x30, 0x30, 0xc0003eccc0)
        /home/invidian/go/pkg/mod/github.com/packethost/[email protected]/log/log.go:73 +0x228
main.main()
        /home/invidian/go/pkg/mod/github.com/tinkerbell/[email protected]/main.go:17 +0x6c

tink-cli should validate hardware data

By mistake, I created bad hardware data and tink-cli allowed me to push it into the database. In my opinion, there should be a validation mechanism, whether on the client or server side, preventing users from pushing wrong data. As a consequence of my mistake, I observed boots panic and crash: tinkerbell/smee#28.

setup support for CentOS 8

The issue summary is pretty descriptive: add support for setting this up on CentOS 8 in addition to CentOS 7.

As part of this, we should consider stopping the script if it's on an unsupported OS.

Give known configuration for tinkerbell on a Packet host

I would like to see a known configuration for the installer script in #62 which provides everything needed without any guess-work or troubleshooting.

I.e. "run these commands" and it results in Tinkerbell being up.

  • Packet machine type and region
  • OS (if you can use Ubuntu, that would be easier)
  • Exact env-vars required to get this to work (I tried the supplied values and the script crashes)

cc @gauravgahlot

A comment on this issue should be sufficient to get started, but the above should be in the docs eventually.

tink-worker for arm64

Hey there,

Currently there is no valid build of tink-worker for arm64, so any execution on, for example, a Raspberry Pi fails:

pi@raspberrypi:~ $ docker run -ti --entrypoint /bin/sh quay.io/tinkerbell/tink-worker
standard_init_linux.go:211: exec user process caused "exec format error"
failed to resize tty, using default size

It would be nice if you could have multi-architecture images in your registry.

unify hardware/target data model

We need to unify the hardware and target data model, and standardize on the changes that @kdeng3849 introduced to switch to the new, easier to comprehend model. This should also remove cacher as a dependency.

tink hardware doesn't have delete action

The tink hardware command doesn't have a delete action, unlike other tink commands. I know we can push an empty JSON body to get the "delete" result, but it would be nice to have an explicit delete action (see the example below the command list).

Examples:
tink hardware [command]

Available Commands:
  all         Get all known hardware for facility
  id          Get hardware by id
  ingest      Trigger tinkerbell to ingest
  ip          Get hardware by any associated ip
  mac         Get hardware by any associated mac
  push        Push new hardware to tinkerbell
  watch       Register to watch an id for any changes
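
The requested UX would presumably mirror the other subcommands, e.g. (hypothetical):

tink hardware delete <hardware-id>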

Provide real-world example with Ubuntu LTS

As an end-user I want to provision a known OS such as Ubuntu 18.04 LTS.

This should be documented in the first steps instead of or as well as hello-world.

For each step in the task to install, there should be sources for the Dockerfiles so that these can be hacked on and adjusted.

I see this referenced in the documentation as an example of what could be done, but it's not in the examples folder, so I am not sure if the source is published for it.

It would also be extremely useful to have an example using cloud-init and the Ubuntu cloud image - https://cloud-images.ubuntu.com/bionic/current/

To recap, when completed, the following would be available on GitHub, with instructions:

  1. Working Ubuntu workflow with sources for each Dockerfile step for a Packet host (preferably the one used in the Terraform setup on the www site)
  2. cloud-init example workflow for the worker in the Terraform setup on the www site (a minimal user-data sketch follows)
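
For reference, a minimal cloud-config user-data document of the kind item 2 would feed to the Ubuntu cloud image (the user name, key, and package list are placeholders):

#cloud-config
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder
packages:
  - curl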
