
Harvester


Harvester is a modern, open, interoperable, hyperconverged infrastructure (HCI) solution built on Kubernetes. It is an open-source alternative designed for operators seeking a cloud-native HCI solution. Harvester runs on bare metal servers and provides integrated virtualization and distributed storage capabilities. In addition to traditional virtual machines (VMs), Harvester supports containerized environments automatically through integration with Rancher. It offers a solution that unifies legacy virtualized infrastructure while enabling the adoption of containers from core to edge locations.

[Screenshot: Harvester UI]

Overview

Harvester is an enterprise-ready, easy-to-use infrastructure platform that leverages local, direct-attached storage instead of complex external SANs. It uses the Kubernetes API as a unified automation language across container and VM workloads. Key features of Harvester include:

  1. Easy to install: Since Harvester ships as a bootable appliance image, you can install it directly on a bare metal server with the ISO image or automatically install it using iPXE scripts.
  2. VM lifecycle management: Easily create, edit, clone, and delete VMs, including SSH-Key injection, cloud-init, and graphic and serial port console.
  3. VM live migration support: Move a VM to a different host or node with zero downtime.
  4. VM backup, snapshot, and restore: Back up your VMs from NFS, S3 servers, or NAS devices. Use your backup to restore a failed VM or create a new VM on a different cluster.
  5. Storage management: Harvester supports distributed block storage and tiering. Volumes represent storage; you can easily create, edit, clone, or export a volume.
  6. Network management: Supports using a virtual IP (VIP) and multiple Network Interface Cards (NICs). If your VMs need to connect to the external network, create a VLAN or untagged network.
  7. Integration with Rancher: Access Harvester directly within Rancher through Rancher’s Virtualization Management page and manage your VM workloads alongside your Kubernetes clusters.

The following diagram outlines a high-level architecture of Harvester:

[Architecture diagram: architecture.svg]

  • Longhorn is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes.
  • KubeVirt is a virtual machine management add-on for Kubernetes.
  • Elemental for SLE-Micro 5.3 is an immutable Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster.

Hardware Requirements

To get the Harvester server up and running, the following minimum hardware is required:

  • CPU: x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production.
  • Memory: 32 GB minimum; 64 GB or above required for production.
  • Disk capacity: 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production.
  • Disk performance: 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (the first three nodes) must be fast enough for etcd.
  • Network card: 1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet required for production.
  • Network switch: Port trunking is required for VLAN support.

We recommend server-class hardware for best results. Laptops and nested virtualization are not officially supported.

Quick start

You can use the ISO to install Harvester directly on a bare-metal server to form a Harvester cluster, then add one or more nodes to join the existing cluster.

To get the Harvester ISO, download it from the GitHub releases.

During the installation, you can either choose to create a new Harvester cluster or join the node to an existing Harvester cluster.

  1. Mount the Harvester ISO file and boot the server by selecting the Harvester Installer option.
  2. Use the arrow keys to choose an installation mode. By default, the first node will be the management node of the cluster.
    • Create a new Harvester cluster: Select this option to create an entirely new Harvester cluster.
    • Join an existing Harvester cluster: Select this option to join an existing Harvester cluster. You need the VIP and cluster token of the cluster you want to join.
    • Install Harvester binaries only: If you choose this option, additional setup is required after the first bootup.
  3. Choose the installation disk you want to install the Harvester cluster on and the data disk you want to store VM data on. By default, Harvester uses the GUID Partition Table (GPT) partitioning scheme for both UEFI and BIOS. If you use BIOS boot, you will have the option to select the Master Boot Record (MBR).
    • Installation disk: The disk to install the Harvester cluster on.
    • Data disk: The disk to store VM data on. Choosing a separate disk to store VM data is recommended.
    • Persistent size: If you only have one disk or use the same disk for both OS and VM data, you need to configure persistent partition size to store system packages and container images. The default and minimum persistent partition size is 150 GiB. You can specify a size like 200Gi or 153600Mi.
  4. Configure the HostName of the node.
  5. Configure network interface(s) for the management network. By default, Harvester creates a bonded NIC named mgmt-bo, and the IP address can either be configured via DHCP or statically assigned.
  6. (Optional) Configure the DNS Servers. Use commas as a delimiter to add more DNS servers. Leave blank to use the default DNS server.
  7. Configure the virtual IP (VIP) by selecting a VIP Mode. This VIP is used to access the cluster or for other nodes to join the cluster.
  8. Configure the cluster token. This token will be used for adding other nodes to the cluster.
  9. Configure and confirm a Password to access the node. The default SSH user is rancher.
  10. Configure NTP servers to make sure all nodes' times are synchronized. This defaults to 0.suse.pool.ntp.org. Use commas as a delimiter to add more NTP servers.
  11. (Optional) If you need to use an HTTP proxy to access the outside world, enter the proxy URL address here. Otherwise, leave this blank.
  12. (Optional) You can choose to import SSH keys by providing an HTTP URL. For example, your GitHub public keys at https://github.com/<username>.keys can be used.
  13. (Optional) If you need to customize the host with a Harvester configuration file, enter the HTTP URL here.
  14. Review and confirm your installation options. After confirming the installation options, Harvester will be installed on your host. The installation may take a few minutes to complete.
  15. Once the installation is complete, your node restarts. After the restart, the Harvester console displays the management URL and status. The default URL of the web interface is https://your-virtual-ip. You can press F12 to switch from the Harvester console to the shell, and type exit to return to the Harvester console.
  16. You will be prompted to set the password for the default admin user when logging in for the first time.
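The persistent-size field in step 3 accepts binary-suffixed values such as 200Gi or 153600Mi. As a rough illustration of that convention (a sketch, not Harvester's actual installer code), a Go function that converts such strings to bytes:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBinarySize converts strings like "200Gi" or "153600Mi" into bytes.
// Illustrative sketch only; Harvester's real installer validation differs.
func parseBinarySize(s string) (int64, error) {
	multipliers := map[string]int64{
		"Mi": 1 << 20,
		"Gi": 1 << 30,
	}
	for suffix, mult := range multipliers {
		if strings.HasSuffix(s, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
			if err != nil {
				return 0, fmt.Errorf("invalid size %q: %w", s, err)
			}
			return n * mult, nil
		}
	}
	return 0, fmt.Errorf("size %q must end in Mi or Gi", s)
}

func main() {
	gi, _ := parseBinarySize("150Gi")
	mi, _ := parseBinarySize("153600Mi")
	fmt.Println(gi == mi) // both equal 150 GiB, the default minimum
}
```

Under this convention, the default 150 GiB minimum can equivalently be written as 153600Mi.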

Releases

NOTE:

  • <version>* means the release branch is under active support and will have periodic follow-up patch releases.
  • Latest release means the version is the latest release of the newest release branch.
  • Stable release means the version is stable and has been widely adopted by users.
  • EOL means that the software has reached the end of its useful life and no further code-level maintenance will be provided. You may continue to use the software within the terms of the licensing agreement.

https://github.com/harvester/harvester/releases

Release | Version | Type | Release Note (Changelog) | Upgrade Note
1.3* | 1.3.1 | Latest | 🔗 | 🔗
1.2* | 1.2.2 | Stable | 🔗 | 🔗
1.1* | 1.1.3 | EOL | 🔗 | 🔗

Documentation

Find more documentation here.

Demo

Check out this demo to get a quick overview of the Harvester UI.

Source code

Harvester is 100% open-source software. The project source code is spread across a number of repos:

Name Repo Address
Harvester https://github.com/harvester/harvester
Harvester Dashboard https://github.com/harvester/dashboard
Harvester Installer https://github.com/harvester/harvester-installer
Harvester Network Controller https://github.com/harvester/harvester-network-controller
Harvester Cloud Provider https://github.com/harvester/cloud-provider-harvester
Harvester Load Balancer https://github.com/harvester/load-balancer-harvester
Harvester CSI Driver https://github.com/harvester/harvester-csi-driver
Harvester Terraform Provider https://github.com/harvester/terraform-provider-harvester

Community

If you need any help with Harvester, please join us in our Slack #harvester channel or the forums, where most of our team hangs out.

If you have any feedback or questions, feel free to file an issue.

License

Copyright (c) 2024 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

harvester's People

Contributors

aiwantaozi, bk201, catherineluse, chrisho, frankyang0529, fsdaniel, futuretea, gitlawr, guangbochen, ibrokethecloud, joshmeranda, jubalh, lucidd, m-ildefons, masteryyh, orangedeng, shangma, sheng-liang, simonflood, skalt, starbops, vasiliy-ul, vicente-cheng, votdev, w13915984028, webberhuang1118, weihanglo, yaocw2020, yasker, yu-jack


harvester's Issues

Delay the verification of VM storage service endpoint

Improvement:

Verification of the VM storage service endpoint currently blocks Harvester from starting; the verification should be delayed, or customizable during bootstrapping.

Reference:

The error report from verification:

add vm image management crd and controller

This is a feature request:

  • add an image CRD to store VM images by URL
  • support Minio as the backend server
  • add an action handler that allows users to upload an image directly from the local computer [low priority]
  • add an image secret reference

add vm lifecycle action handler

Description:

Add action handlers to the kubevirt virtual machine object.

  • start VM action handler
  • stop VM action handler
  • restart VM action handler
  • pause VM action handler
  • unpause VM action handler
    - [ ] terminate the VM action handler (e.g., permanently delete the VM after 2 minutes?) [TBD]

Clarify the env names of VM storage service in Harvester

Improvement:

The meaning of the env names of the VM storage service is not clear enough; they should be compatible with other S3 (object storage) services.

Reference:

Clarify the env names of the VM storage service; the following is a feasible proposal:

  • MINIO_ACCESS_KEY -> VM_STORAGE_ACCESS_KEY
  • MINIO_SECRET_KEY -> VM_STORAGE_SECRET_KEY
  • MINIO_URL -> VM_STORAGE_ENDPOINT

TODO:

  • Adjust Harvester.
  • Change the envs of installation charts.

UI design

A few harvester-UI designs and feature changes:

  1. vm delete logo
  2. vm adjust the location of the namespace
  3. networking todo
  4. rootdisk can get source: image ,size: imageSize
  5. vm add ssh key default select
  6. start vm default true
  7. template delete namespace (set default namespace)
  8. ssh key add a search input
  9. ssh key name set the default name
  10. vm template option is hidden by default (rancher rke)
  11. ssh key The content disappears after input
  12. image action launch
  13. add disk spell error
  14. template describe position adjustment
  15. select all position
  16. storageclass needs to get the first default through the API; it cannot be hardcoded to hostpath

kubevirt CDI CRD reconcile loop issue

Issue:
kubevirt's datavolumes and cdiconfigs keep updating their managedFields.time field, which leads to steve continuously refreshing its loaded schemas.

Reproduce steps:
Run kubectl get crds -w and observe that the following two CRDs keep updating:

datavolumes.cdi.kubevirt.io                                     2020-07-15T05:52:53Z
cdiconfigs.cdi.kubevirt.io                                      2020-07-15T05:52:53Z

Check the VM server logs; a refreshing-schema message is printed every 2-3 seconds:

INFO[0013] Refreshing all schemas                       
INFO[0014] Refreshing all schemas                       
INFO[0014] APIVersion /v1 Kind Bindin
...

Expected:
there are two ways to address this issue:

  1. since managedFields.time was introduced in k8s 1.18 and is only used as object metadata, it should not be updated continuously - preferred.
  2. ensure steve only refreshes the schema of the updated CRD instead of all schemas.
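To illustrate the first (preferred) fix from the consumer side, a sketch of comparing objects with the volatile managedFields.time field stripped before deciding whether anything really changed. Plain string maps stand in for typed Kubernetes objects; steve's real comparison differs:

```go
package main

import "fmt"

// specChanged reports whether two object maps differ once volatile metadata
// such as managedFields.time is stripped. Sketch only; real code would
// operate on typed Kubernetes objects, not flat maps.
func specChanged(prev, cur map[string]string) bool {
	strip := func(m map[string]string) map[string]string {
		out := map[string]string{}
		for k, v := range m {
			if k == "managedFields.time" {
				continue // volatile metadata; ignore for change detection
			}
			out[k] = v
		}
		return out
	}
	// fmt.Sprint prints map keys in sorted order, so this is deterministic.
	return fmt.Sprint(strip(prev)) != fmt.Sprint(strip(cur))
}

func main() {
	prev := map[string]string{"spec": "replicas=1", "managedFields.time": "2020-07-15T05:52:53Z"}
	cur := map[string]string{"spec": "replicas=1", "managedFields.time": "2020-07-15T05:55:00Z"}
	fmt.Println(specChanged(prev, cur)) // false: only volatile metadata changed
}
```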

VM create page issues

  • 1. Create a VM with no image selected; the Create button changes to "created" - Validated

  • 2. Create a VM with only an image selected; the page title disappears - Validated

  • 3. An image can only be found with an exact search term; e.g., searching with the keyword cent only returns items that contain cent exactly (case sensitive) - Validated

  • 4. In Choose a Size, the CPU and Memory textboxes accept non-numeric values when a user types with a Chinese input method - Validated

  • 5. In Choose a Size, the CPU and Memory textboxes accept extremely large numbers, which eventually turn into Infinity - Validated

  • 6. Add a disk: there is no length limit on the Name, so users can enter a very long string; Size has no limit either. After clicking Show advanced, the "show advanced" toggle is still displayed. Editing a disk still pops up an "Add a disk" prompt - Validated

  • 7. Add a network interface: there is no length limit on the Name or Network Name, so users can enter a very long string; Size has no limit either. Editing a network interface still pops up an "Add a networking interface" prompt. An invalid MAC address can still be saved - Validated

  • 8. Add an SSH key: the length limit for the Name should be enforced in the frontend; the SSH-Key label overlaps the content in the textbox - Will keep tracking it

improve image file validation

Description:
Improve image file validation; check whether CDI or QEMU provides tools to validate the image type and content.

allow vm to reboot

Description:

allow the user to manage the VM through subresource APIs - e.g., pause, unpause, reboot, rename, guestosinfo

  • allow the VM to reboot through an API action handler.
  • allow the VM to pause/unpause through an API action handler.

add image metadata info to the DV annotations

Add the image metadata ns:name to the dataVolume template annotations when creating the VirtualMachine, so the user/UI can tell which image the URL relates to.

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
spec:
  dataVolumeTemplates:
  - metadata:
      annotations:
        harvester.cattle.io/imageId: default:myimage ## add image metadata annotation with `ns:name`
      name: lawr-ubuntu
    spec:
      pvc: {...}
      source:
        http:
          url: http://minio.default:9000/vm-images/xxxxx
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: lawr-ubuntu
    spec: {...}

Provide image format info

Be able to show the format when a VM image is imported.
The incentive is to handle the root volume differently for ISO images.

websocket continue to push deleted image

Problem:

When a user deletes an image from the UI, the API shows the object is successfully removed, but the WebSocket continues to push the deleted object to the frontend.

Expected Solution:

The WebSocket should not continue pushing deleted objects.
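A sketch of the expected behavior: once a delete event for a key has been seen, later stale events for that key are dropped before being pushed. The types here are illustrative; the real fix belongs in the websocket push layer:

```go
package main

import "fmt"

type event struct {
	kind string // "create", "update", or "delete"
	key  string // e.g. "namespace/name"
}

// filterDeleted drops events that arrive for an object after its delete
// event has been seen, so the frontend never receives a resurrected object.
// Illustrative sketch only.
func filterDeleted(in []event) []event {
	deleted := map[string]bool{}
	var out []event
	for _, e := range in {
		if deleted[e.key] && e.kind != "delete" {
			continue // stale push for an already-deleted object
		}
		if e.kind == "delete" {
			deleted[e.key] = true
		}
		out = append(out, e)
	}
	return out
}

func main() {
	events := []event{
		{"create", "default/img1"},
		{"delete", "default/img1"},
		{"update", "default/img1"}, // stale push after deletion
	}
	fmt.Println(len(filterDeleted(events))) // 2
}
```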

A virtual machine cannot be created by using the generated hostname with uppercase

Steps to replicate:

  1. Select one of the images, in this case, select CentOS-7-x86_64-GenericCloud.qcow2.xz
  2. Fill up other information and save it

Result:
The virtual machine cannot be created because of the invalid hostname.

Suggestion:
It makes no sense that a user cannot create a virtual machine with the default generated name simply because hostnames are not allowed to contain uppercase characters.

If there is no way to save this kind of name, the frontend should convert uppercase characters to lowercase before sending the name to the backend.
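The suggested frontend conversion can be sketched as a small sanitizer that lowercases the generated name and replaces characters invalid in an RFC 1123 hostname label (illustrative only; real validation must also cap the label at 63 characters):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeHostname lowercases a generated name and replaces characters that
// are invalid in an RFC 1123 hostname label (only a-z, 0-9, and '-').
// Sketch of the conversion suggested above, not Harvester's actual code.
func sanitizeHostname(name string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(name) {
		switch {
		case r >= 'a' && r <= 'z', r >= '0' && r <= '9', r == '-':
			b.WriteRune(r)
		default:
			b.WriteRune('-') // replace '.', '_', etc.
		}
	}
	return strings.Trim(b.String(), "-") // labels cannot start/end with '-'
}

func main() {
	fmt.Println(sanitizeHostname("CentOS-7-x86_64-GenericCloud.qcow2.xz"))
	// centos-7-x86-64-genericcloud-qcow2-xz
}
```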

backend crash

goroutine 590 [running]:
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1800d20, 0x2b39d20)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1800d20, 0x2b39d20)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/harvester/pkg/controller/master/template.(*templateHandler).OnChanged(0xc000415320, 0xc00213a0a0, 0x18, 0x0, 0x7265736572703a66, 0x203000, 0x2273646c6569466e)
	/go/src/github.com/rancher/harvester/pkg/controller/master/template/template_controller.go:16 +0x26
github.com/rancher/harvester/pkg/generated/controllers/vm.cattle.io/v1alpha1.FromTemplateHandlerToHandler.func1(0xc00213a0a0, 0x18, 0x0, 0x0, 0x19b07e0, 0x697461746f6e6e61, 0x7b3a22736e6f, 0x60)
	/go/src/github.com/rancher/harvester/pkg/generated/controllers/vm.cattle.io/v1alpha1/template.go:103 +0xdb
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.SharedControllerHandlerFunc.OnChange(0xc0003f2b80, 0xc00213a0a0, 0x18, 0x0, 0x0, 0x65697061225c3a22, 0x408c5b, 0x1afa3a0, 0xc000217320)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/sharedcontroller.go:29 +0x4e
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*sharedHandler).OnChange(0xc0007140f0, 0xc00213a0a0, 0x18, 0x0, 0x0, 0x42d200, 0x0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/sharedhandler.go:65 +0x150
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).syncHandler(0xc000324fd0, 0xc00213a0a0, 0x18, 0xc000a794d0, 0xc000bdad30)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:208 +0x122
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).processSingleItem(0xc000324fd0, 0x176d800, 0xc00271e610, 0x0, 0x0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:193 +0xf0
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).processNextWorkItem(0xc000324fd0, 0xc000ba2e40)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:170 +0x51
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).runWorker(0xc000324fd0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:159 +0x2b
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007494b0)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5e
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007494b0, 0x1c8c640, 0xc000648000, 0xc000bd4001, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007494b0, 0x3b9aca00, 0x0, 0x3a2265706f637301, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0007494b0, 0x3b9aca00, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).run
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:130 +0x35e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x130 pc=0x15b1066]
goroutine 590 [running]:
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x1800d20, 0x2b39d20)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/harvester/pkg/controller/master/template.(*templateHandler).OnChanged(0xc000415320, 0xc00213a0a0, 0x18, 0x0, 0x7265736572703a66, 0x203000, 0x2273646c6569466e)
	/go/src/github.com/rancher/harvester/pkg/controller/master/template/template_controller.go:16 +0x26
github.com/rancher/harvester/pkg/generated/controllers/vm.cattle.io/v1alpha1.FromTemplateHandlerToHandler.func1(0xc00213a0a0, 0x18, 0x0, 0x0, 0x19b07e0, 0x697461746f6e6e61, 0x7b3a22736e6f, 0x60)
	/go/src/github.com/rancher/harvester/pkg/generated/controllers/vm.cattle.io/v1alpha1/template.go:103 +0xdb
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.SharedControllerHandlerFunc.OnChange(0xc0003f2b80, 0xc00213a0a0, 0x18, 0x0, 0x0, 0x65697061225c3a22, 0x408c5b, 0x1afa3a0, 0xc000217320)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/sharedcontroller.go:29 +0x4e
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*sharedHandler).OnChange(0xc0007140f0, 0xc00213a0a0, 0x18, 0x0, 0x0, 0x42d200, 0x0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/sharedhandler.go:65 +0x150
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).syncHandler(0xc000324fd0, 0xc00213a0a0, 0x18, 0xc000a794d0, 0xc000bdad30)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:208 +0x122
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).processSingleItem(0xc000324fd0, 0x176d800, 0xc00271e610, 0x0, 0x0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:193 +0xf0
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).processNextWorkItem(0xc000324fd0, 0xc000ba2e40)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:170 +0x51
github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).runWorker(0xc000324fd0)
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:159 +0x2b
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007494b0)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5e
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007494b0, 0x1c8c640, 0xc000648000, 0xc000bd4001, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007494b0, 0x3b9aca00, 0x0, 0x3a2265706f637301, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0007494b0, 0x3b9aca00, 0xc0000e2420)
	/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller.(*controller).run
	/go/src/github.com/rancher/harvester/vendor/github.com/rancher/lasso/pkg/controller/controller.go:130 +0x35e
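The trace points at templateHandler.OnChanged dereferencing a nil object. Wrangler-style handlers are invoked with a nil object when the resource has been deleted, so the handler needs a nil guard before any field access. A sketch with stand-in types (not the actual Harvester handler):

```go
package main

import "fmt"

// Template is a stand-in for the real template resource type.
type Template struct{ Name string }

// onChanged mimics a wrangler-style controller handler: the framework calls
// it with a nil object on deletion, so the nil check must come before any
// field access. Stand-in types; not the actual Harvester handler.
func onChanged(key string, obj *Template) (*Template, error) {
	if obj == nil {
		return nil, nil // object was deleted; nothing to reconcile
	}
	fmt.Println("reconciling template", obj.Name)
	return obj, nil
}

func main() {
	onChanged("default/t1", &Template{Name: "t1"})
	onChanged("default/t1", nil) // would have panicked without the guard
}
```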

image status is not showing correctly

Description:
In the UI, the image status shows as active even though the image upload failed, e.g.:

status:
  appliedUrl: https://launchpad.net/cirros/trunk/0.3.2/+download/cirros-0.3.2-source.tar.gz
  conditions:
  - lastUpdateTime: "2020-08-18T10:43:46+08:00"
    message: started image importing
    reason: 'Get http://example.com:9000/vm-images/?location=: dial tcp 93.184.216.34:9000:
      i/o timeout'
    status: "False"
    type: imported
  downloadUrl: ""
  progress: 0

Expected:
Change the status to failed and display the error message.
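The expected mapping can be sketched as: an imported condition with status "False" and a populated reason should surface as failed. Condition and status strings below are illustrative, based on the YAML above:

```go
package main

import "fmt"

type condition struct {
	condType string
	status   string
	reason   string
}

// displayStatus derives a UI status from the image's conditions: an
// "imported" condition with status "False" and a populated reason means the
// import failed. Sketch of the expected behavior, not the actual UI code.
func displayStatus(conds []condition) string {
	for _, c := range conds {
		if c.condType == "imported" {
			switch c.status {
			case "True":
				return "active"
			case "False":
				if c.reason != "" {
					return "failed" // surface the reason as the error message
				}
				return "importing"
			}
		}
	}
	return "unknown"
}

func main() {
	conds := []condition{{condType: "imported", status: "False", reason: "dial tcp: i/o timeout"}}
	fmt.Println(displayStatus(conds)) // failed
}
```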

duplicate display name of the os image

Issue:
Currently, the display name is auto-generated from its URL, and we don't validate whether the name is duplicated; the backend should add validation for this.

Solution:
The backend API should validate the display name and auto-increment duplicates by 1, e.g., os-aaacc to os-aaacc1.
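A sketch of the proposed auto-increment (illustrative only, not the actual backend code):

```go
package main

import (
	"fmt"
	"strconv"
)

// uniqueDisplayName appends the smallest numeric suffix that makes the name
// unique among existing names, e.g. "os-aaacc" -> "os-aaacc1" as proposed
// above. Sketch only.
func uniqueDisplayName(name string, existing map[string]bool) string {
	if !existing[name] {
		return name
	}
	for i := 1; ; i++ {
		candidate := name + strconv.Itoa(i)
		if !existing[candidate] {
			return candidate
		}
	}
}

func main() {
	existing := map[string]bool{"os-aaacc": true, "os-aaacc1": true}
	fmt.Println(uniqueDisplayName("os-aaacc", existing)) // os-aaacc2
}
```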

rancher 2.4 integration with harvester chart

Description:

  1. users should be able to install the harvester chart from the rancher catalog (helm v3)
  2. users should be able to access the harvester UI through the rancher magic proxy
  3. allow users to enable/disable each component, i.e., KubeVirt, CDI, and Minio
  4. after deleting the harvester chart from the catalog apps, it should clean up all of its components, such as CRDs and CRs
  5. when Minio is disabled, the s3 server endpoint and secret configuration should be required (need to validate that it works with s3-compatible storage servers, e.g., AWS S3 or GCS)
  6. Minio persistent storage should be enabled by default

required flags not set

When trying to run ./bin/harvester, the --image-storage flags are not set, and there is little explanation of what they should be set to.

What should they be set to in order to run harvester?

Add unit tests and integration tests

Improvement:

It's necessary to add tests to control the quality of Harvester.

Reference:

  • Construct unit test cases based on the fake generated clientset. (TDD style)
  • Construct integration test cases based on Ginkgo. (BDD style)

TODO:

  • Adjust the dapper test script to support both kinds of tests.
  • Add unit test cases.
  • Add integration test cases.

add ISO image unmount action when CD-ROM disks is attached

Description:
When the user chooses to boot the VM from an .iso image, by default the CD-ROM is set as the first bootable disk in the VM config if the root disk is empty; after the OS is installed successfully, we should allow unmounting the CD-ROM disk easily through an action handler.

manual process:

  1. confirm the OS is installed successfully to the VM.
  2. kubectl edit vm and remove the cdrom disk and volume.
  3. delete the VMI to reboot it.
  4. check the VM is running and only the root-disk is enabled.

example of YAML:

  1. before removing the ISO disk
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: gc-ubuntu-server
  namespace: default
spec:
  dataVolumeTemplates:
  - metadata:
      name: gc-simple-iso
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
      source:
        http:
          url: http://172.16.0.254:8080/ubuntu-18.04.4-live-server-amd64.iso
  - metadata:
      name: gc-simple-root
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
      source:
        blank: {}
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - cdrom:
              bus: sata
              readonly: true
            name: cdrom-disk
            bootOrder: 2
          - disk:
              bus: virtio
            name: datavolumedisk1
            bootOrder: 1
        resources:
          requests:
            memory: 2048M
      volumes:
      - dataVolume:
          name: gc-simple-iso
        name: cdrom-disk
      - dataVolume:
          name: gc-simple-root
        name: datavolumedisk1
  2. after removing the ISO disk
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: gc-ubuntu-server
  namespace: default
spec:
  dataVolumeTemplates:
  - metadata:
      name: gc-simple-root
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
      source:
        blank: {}
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: q35
        resources:
          requests:
            memory: 2048M
      volumes:
      - dataVolume:
          name: gc-simple-root
        name: datavolumedisk1

add ssh-key management

This is a feature request:

Add a CRD to store user-customized SSH keys; the user will need to specify the key name and the public SSH key.

Unable to import image when content-length is missing from remote url

Import an image from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

Result:

ERRO[4747] error importing image from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img: strconv.Atoi: parsing "": invalid syntax

The server hosting the image file may return a chunked response without a Content-Length header. In this case, the total file size is unknown until the import finishes, so percentage progress is unavailable to users. We might need to show the downloaded size instead (similar to Chrome's download behavior).

crd naming standard

Description:

  1. use an object-oriented CRD name instead of a generic name.
  2. rename API to use harvester.cattle.io instead of vm.cattle.io.

add rancher vm ui

add rancher VM UI with the following scope

  • add an image management page that allows users to view, upload, and edit OS images
  • add an SSH-key management page that allows users to view, create, and edit SSH keys
  • add VM listing/view detail page
  • add a creating page of VM
  • add PV/PVC management page
  • add VM template page
  • add Graphical Console (VNC)

add vm template

This is a feature request:
Description:
users should be able to add a VM template with a customized virtual machine spec; it should also be possible to add an image that belongs to the template.

  • add a VM template that stores the kubevirt virtualMachine spec
  • the VM template will contain an image reference (i.e., image namespace:name)

add harvester UI integration and embedded build

TODO:

  1. add harvester UI integration, loaded from the CDN.
  2. add an embedded UI to the harvester docker image; users can configure the embedded UI through the ui-index setting with the value local
