hostercore's Issues

Introduce `mbuffer` in ZFS replication

mbuffer is a tool that provides buffering and rate limiting capabilities, making it useful for controlling data transfers. It will help us implement rate limiting for ZFS replication in cases where it's needed (e.g. replication over a WAN, slow uplinks, slow storage drives, etc.).
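
For illustration, the replication pipeline could insert mbuffer between zfs send and ssh. A minimal sketch (the mbuffer buffer size and rate value are placeholders, and the final flag names may differ):

    // buildReplicationPipeline is a sketch: it wraps an existing "zfs send ..."
    // command with an optional mbuffer stage before piping the stream over SSH
    // to "zfs receive" on the remote host. rateLimit is an mbuffer rate string
    // such as "50M" (bytes per second); empty means "no limit".
    func buildReplicationPipeline(zfsSendCmd, endpoint, remoteDataset, rateLimit string) string {
        pipeline := zfsSendCmd
        if rateLimit != "" {
            // -q: no progress output, -m: in-memory buffer size, -r: read rate limit
            pipeline += " | mbuffer -q -m 128M -r " + rateLimit
        }
        pipeline += " | ssh " + endpoint + " zfs receive " + remoteDataset
        return pipeline
    }

The resulting string could then be run via /bin/sh -c, or written into the temporary replication script the same way it is done today.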

Implement `-kill` command

-stop is not sufficient if the underlying OS can't accept short ACPI signals (for example during a system boot, or if a VM is stuck in a reboot loop). For this purpose I'll add a -kill command (a rough sketch of the kill loop follows the list):

  • It will ignore the machine's "online" status and attempt to clean up everything regardless
  • Unlike -stop, the new kill function will start a "process kill" loop and prevent the VM from coming back to life (there are some edge cases where bhyve revives the VM even after it was stopped and its network interface removed; -kill will make sure the VM stays down)
  • Sometimes it's useful to -kill a VM instantly, avoiding the delay involved in a graceful shutdown
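
A rough sketch of what the kill loop could look like (bhyvectl --destroy is the usual way to tear down a bhyve instance; findBhyvePid is a hypothetical helper, not an existing function):

    import (
        "os/exec"
        "strconv"
        "strings"
        "syscall"
        "time"
    )

    // killLoop is a minimal sketch of the planned -kill behaviour: keep killing any
    // bhyve process that matches the VM name so the VM cannot come back to life,
    // and destroy the VM instance itself on every pass.
    func killLoop(vmName string, attempts int) {
        for i := 0; i < attempts; i++ {
            if pid := findBhyvePid(vmName); pid > 0 {
                _ = syscall.Kill(pid, syscall.SIGKILL)
            }
            // bhyvectl --destroy removes the VM instance even if bhyve tries to revive it
            _ = exec.Command("bhyvectl", "--destroy", "--vm="+vmName).Run()
            time.Sleep(time.Second)
        }
    }

    // findBhyvePid returns the PID of the bhyve process for vmName, or 0 if none is
    // running (bhyve processes typically carry the title "bhyve: <vm name>").
    func findBhyvePid(vmName string) int {
        out, err := exec.Command("pgrep", "-f", "bhyve: "+vmName).Output()
        if err != nil {
            return 0
        }
        pid, _ := strconv.Atoi(strings.TrimSpace(strings.Split(string(out), "\n")[0]))
        return pid
    }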

Implement `HA mode`

Some ideas for the HA mode implementation (subject to change):

  • Node self-monitoring using REST API and ha-watchdog process

  • A Raft-like consensus algorithm: no matter the cluster size, there are always 3 candidate nodes that control the whole cluster, and among these 3 candidates there is 1 manager that makes failover decisions

  • All 3 candidate nodes must be specified manually, in the ha_config.json

  • All worker nodes are dynamically added to and removed from the cluster

  • In case of a node failure:

    • the ha-watchdog process will reboot the node it's running on, which serves as a simple fencing mechanism
    • based on the failover strategy, the manager will start VMs from the failed node on other nodes in the cluster, prioritising nodes with the freshest VM snapshot
  • Notify cluster admins about the outage, including the list of VMs that were failed over and/or ignored

  • Keep a log of things that happen over time, in plain text and JSON formats, for later representation by the Hoster REST API and/or WebUI

The CLI flags to use (subject to change):

To start using HA, execute hoster api start --ha-mode (and make sure you are running the latest dev release).

--ha-mode - start the REST API server, and activate the HA mode
--ha-debug - only log actions, and do not actually perform them - useful for the initial cluster setup and troubleshooting
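
For illustration only, the self-monitoring part could look roughly like this (the /health path, port and failure threshold are assumptions, not the final design):

    import (
        "net/http"
        "os/exec"
        "time"
    )

    // haWatchdog is a sketch of the node self-monitoring idea: poll the local REST
    // API and, after several consecutive failures, reboot the node as a simple
    // self-fencing step. The /health path, port and thresholds are hypothetical.
    func haWatchdog() {
        failures := 0
        for {
            resp, err := http.Get("http://127.0.0.1:3000/health")
            if err == nil {
                resp.Body.Close()
            }
            if err != nil || resp.StatusCode != http.StatusOK {
                failures++
            } else {
                failures = 0
            }
            if failures >= 3 {
                _ = exec.Command("reboot").Run()
                return
            }
            time.Sleep(5 * time.Second)
        }
    }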

Implement VM snapshot list output using `hoster vm snapshot-list`

To improve system management further, it would be nice to have a list of VM snapshots, output as a CLI table or JSON for further automation or REST API usage. The output would include the following (a sketch of collecting this data follows the list):

  • Snapshot number (a simple running count in the table output)
  • VM Name
  • Snapshot date
  • Snapshot size
  • ZFS location/full ZFS snapshot name (in case you'd want to revert back or clone the snapshot using the underlying ZFS tools)
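
One possible way to collect this data is to parse zfs list output; a minimal sketch, assuming one ZFS dataset per VM:

    import (
        "os/exec"
        "strings"
    )

    // listVmSnapshots is a sketch: it returns one "name used creation" line per
    // snapshot under the VM's dataset, ready to be rendered as a table or JSON.
    func listVmSnapshots(vmDataset string) ([]string, error) {
        // -H: no header, -p: parseable numbers, -r: recurse, -t snapshot: snapshots only
        out, err := exec.Command("zfs", "list", "-H", "-p", "-r",
            "-t", "snapshot", "-o", "name,used,creation", vmDataset).Output()
        if err != nil {
            return nil, err
        }
        return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
    }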

Implement safety checks for over-provisioning of the VMs

There is a need to implement some guard rails against over-provisioning of VMs, because at the moment, even if your host has 1 socket, 24 logical CPU cores, and 32G of RAM in total, you can still set your VM to have 5 sockets, 100 CPU cores and 200G of RAM.

The most basic safety implementation for vm deploy would be to check whether the new VM's resources are within the host limits. vm start is a whole different story, as the VM will have to be checked not only against the overall host limits, but also against up-to-date free system resources, like the amount of currently available RAM.
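
A minimal sketch of the vm deploy check, reading the host limits from sysctl (hw.ncpu and hw.physmem); the requested values would come from the VM config:

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    // sysctlUint64 reads a numeric sysctl value such as hw.ncpu or hw.physmem.
    func sysctlUint64(name string) (uint64, error) {
        out, err := exec.Command("sysctl", "-n", name).Output()
        if err != nil {
            return 0, err
        }
        return strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
    }

    // checkVmFitsHost is a sketch of the basic "vm deploy" guard rail: the requested
    // vCPU count and RAM (in bytes) must not exceed what the host physically has.
    func checkVmFitsHost(requestedCpus, requestedRamBytes uint64) error {
        hostCpus, err := sysctlUint64("hw.ncpu")
        if err != nil {
            return err
        }
        hostRam, err := sysctlUint64("hw.physmem")
        if err != nil {
            return err
        }
        if requestedCpus > hostCpus {
            return fmt.Errorf("requested %d vCPUs, but the host only has %d", requestedCpus, hostCpus)
        }
        if requestedRamBytes > hostRam {
            return fmt.Errorf("requested %d bytes of RAM, but the host only has %d", requestedRamBytes, hostRam)
        }
        return nil
    }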

Implement `vm deploy --dns-server` flag

At the moment, the network gateway address is used as the DNS server. This works in most cases, unless you have to deploy something specific, like a DNS-based ad blocker or a Windows AD-related service.

--dns-server will allow us to specify an alternative DNS server during the VM deployment. The flag will also be available in vm cireset.

Add `rtclocaltime="NO"` to the rc.conf using `node_init.sh`

It appears that VMs reset their time/date settings on every hoster reboot, before they can contact an external NTP server to update the time. Setting rtclocaltime="NO" in rc.conf should help resolve this.

The bug only happens if you are outside of the GMT+0 time zone, due to hardcoded TZ settings within the public VM images (there is no easy way to implement a dynamic TZ setting, so I have to leave it as is on the image side of things). Thank you @leyoda for reporting it :)

Allow dynamic ZFS dataset configuration

At the moment the ZFS dataset list is hard-coded, which needs to change. So instead of this:

	var zfsDatasets []string
	var configFileName = "/vm_config.json"
	zfsDatasets = append(zfsDatasets, "zroot/vm-encrypted")
	zfsDatasets = append(zfsDatasets, "zroot/vm-unencrypted")

logic must be implemented to read from the host config file:

    "active_datasets": [
        "zroot/vm-encrypted",
        "zroot/vm-unencrypted"
    ],
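
A sketch of the corresponding Go-side change, reading the active_datasets key shown above (the struct name and config path handling are illustrative):

    import (
        "encoding/json"
        "os"
    )

    // hostConfig models only the part of the host config file we need here.
    type hostConfig struct {
        ActiveDatasets []string `json:"active_datasets"`
    }

    // getZfsDatasets is a sketch of the dynamic replacement for the hard-coded list:
    // it reads "active_datasets" from the host config file instead.
    func getZfsDatasets(hostConfigPath string) ([]string, error) {
        data, err := os.ReadFile(hostConfigPath)
        if err != nil {
            return nil, err
        }
        var conf hostConfig
        if err := json.Unmarshal(data, &conf); err != nil {
            return nil, err
        }
        return conf.ActiveDatasets, nil
    }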

Add `--wait-time` flag to `start-all` command

At the moment start-all uses a straightforward approach to the wait time between the start-up of each VM:

	sleepTime := 5
	for i, vm := range getAllVms() {
		vmConfigVar := vmConfig(vm)
		if vmConfigVar.ParentHost != GetHostName() {
			continue
		} else if vmConfigVar.LiveStatus == "production" || vmConfigVar.LiveStatus == "prod" {
			if i != 0 {
				time.Sleep(time.Second * time.Duration(sleepTime))
			}
			vmStart(vm)
			if sleepTime < 30 {
				sleepTime = sleepTime + 1
			}
		} else {
			continue
		}
	}

This works well on most systems, especially those with slightly slower storage. But there is no way to specify a static wait period on much faster systems with "all-flash" storage arrays. This improvement will implement such a flag.
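
A sketch of how the flag could be wired up with cobra, which the project already uses (startAllCmd is a stand-in for the actual command definition):

    // Illustrative cobra wiring for the new flag; "startAllCmd" stands in for the
    // project's actual start-all command definition.
    var waitTime int

    func init() {
        startAllCmd.Flags().IntVar(&waitTime, "wait-time", 5,
            "static wait time (in seconds) between VM start-ups")
    }

The loop above would then use waitTime instead of the hard-coded sleepTime := 5, and could skip the incremental back-off when the flag is set explicitly.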

Change the default (hard coded) domain search list: `gateway-it.internal`

The code block below should be changed to pick up the system hostname and add an .internal.lan suffix, to make it more dynamic.

const ciNetworkConfigTemplate = `version: 2
ethernets:
  interface0:
     match:
       macaddress: "{{ .MacAddress }}"
     
     set-name: eth0
     addresses:
     - {{ .IpAddress }}/{{ .NakedSubnet }}
     
     gateway4: {{ .Gateway }}
     
     nameservers:
       search: [ gateway-it.internal, ]
       addresses: [ {{ .Gateway }}, ]
`
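
A sketch of deriving the search domain dynamically, as described above:

    import (
        "os"
        "strings"
    )

    // defaultSearchDomain is a sketch: it derives the DNS search domain from the
    // host's short hostname and appends the .internal.lan suffix.
    func defaultSearchDomain() string {
        hostname, err := os.Hostname()
        if err != nil || hostname == "" {
            return "internal.lan"
        }
        shortName := strings.Split(hostname, ".")[0]
        return shortName + ".internal.lan"
    }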

Implement `hoster proxy` sub-command

hoster proxy will be responsible for traefik process control and basic template generation for the reverse proxy resources. I will publish more information during the actual implementation.

Implement new `vm replicate-all --filter` flag

In the HA-integrated setup that is being actively developed now, there is a need for this particular flag, so it's easier to spread VM replication between multiple hosts in the cluster. The final version of this command would look like this (subject to change):

hoster vm replicate-all --filter "vm1, vm2, vm3" --endpoint 192.168.1.1

or

hoster vm replicate-all --filter "vm1,vm2,vm3" --endpoint 192.168.1.1

Add CPU temperature module `kldload` to `hoster init`

Because there are already mechanisms in place to get and filter the correct CPU information, it would be nice to load the CPU temperature kernel module on hoster init if it hasn't been loaded yet, so it can later be integrated into the overall system monitoring.

Just a reminder to self:

  • for Intel platforms: kldload coretemp
  • for AMD platforms: kldload amdtemp

To check the temperature:

sysctl dev.cpu | grep -i temperature
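
A sketch of the init-time check, picking the module from the CPU model string and skipping the load if kldstat already lists it:

    import (
        "os/exec"
        "strings"
    )

    // loadCpuTempModule is a sketch: pick coretemp or amdtemp based on the CPU model
    // reported by sysctl, and load it only if kldstat does not already list it.
    func loadCpuTempModule() error {
        model, err := exec.Command("sysctl", "-n", "hw.model").Output()
        if err != nil {
            return err
        }
        module := "coretemp" // Intel platforms
        if strings.Contains(strings.ToLower(string(model)), "amd") {
            module = "amdtemp" // AMD platforms
        }
        loaded, err := exec.Command("kldstat").Output()
        if err != nil {
            return err
        }
        if strings.Contains(string(loaded), module+".ko") {
            return nil // already loaded
        }
        return exec.Command("kldload", module).Run()
    }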

Implement `vm deploy --from-iso`

To make it easier to produce native bhyve/hoster images, it would be nice to mount an ISO during the VM deployment process. This will require me to implement absolute file path linking for disk images (be it ISOs or raw disk images), because so far I only had to mount disks from within the VM directory.

Also, --from-iso will disable/cancel out some of the other integration flags (such as CloudInit), as it is only intended to get the image prepared for use with hoster.

Implement `hoster init` check

I want to implement a feature that will check whether hoster was initialised after boot. This will improve the overall user experience, because sometimes one can simply forget to run init after the host has been rebooted.

The check will be integrated into every function, and if hoster hasn't been init-ed yet, the binary will exit with code 101 and a message stating that you need to execute hoster init first.
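
A sketch of the check itself; exit code 101 comes from the paragraph above, while the marker file path is purely a placeholder:

    import (
        "fmt"
        "os"
    )

    // checkInitDone is a sketch of the planned guard: every command would call it
    // first and bail out with exit code 101 if "hoster init" has not been run since
    // boot. The marker file location is a hypothetical placeholder.
    func checkInitDone() {
        if _, err := os.Stat("/var/run/hoster_init_done"); err != nil {
            fmt.Println("Please execute `hoster init` first")
            os.Exit(101)
        }
    }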

Implement vCPU vs. pCPU ratio

One of the customers is interested in the vCPU to pCPU ratio (virtual vs. physical processors). I will be implementing it for both the console and the WebUI. The console ratio will be displayed in the host table, as a new section, V2P Ratio. The WebUI ratio will be displayed on the main dashboard and will include all servers, to give the user a better overview and understanding of their setup.

Add 2 additional tables to `hoster` command output

Now that ZFS datasets and VM networks are dynamic, it's time to create 2 additional output tables:

  • ZFS dataset list and info
  • Network list and info

This will help with showing the full picture for the cluster and node admins.

Implement `vm deploy --ip-address` flag

Sometimes you don't want to use an auto-generated IP. That's why I need to implement this new basic flag for vm deploy. The flag will be used like so: hoster vm deploy --ip-address 192.168.0.1 --network-name external.

Move VNC credentials to their own output table, using `vm secrets` command

Having VNC credentials in the main information table is okay, until you have to share it with someone who has access to the same LAN. At that point it would be wise to create a separate table, shown by running hoster vm secrets vm-name, that also includes the gwitsuper user password and the root password.

Add latest snapshot timestamp to the backup VMs description

After the vm snapshot-list is implemented, it would be nice to add 💾⏩ hoster-test-0102 🕔 2023-06-11 13:49:49 to the backup VM description in CLI table output. I'll probably even add some colours as well:

  • if the backup is older than 2-5 days, the text will be displayed in a yellow colour
  • if the backup is older than 5-10 days, the text will be displayed in an orange colour
  • anything older than 10 days will be in red
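
A sketch of the colour selection by backup age, following the thresholds above (actual colour rendering depends on the table library in use):

    import "time"

    // backupAgeColour is a sketch of the proposed colour coding for the latest
    // backup snapshot timestamp shown in the VM description column.
    func backupAgeColour(lastSnapshot time.Time) string {
        age := time.Since(lastSnapshot)
        switch {
        case age > 10*24*time.Hour:
            return "red"
        case age > 5*24*time.Hour:
            return "orange"
        case age > 2*24*time.Hour:
            return "yellow"
        default:
            return "none"
        }
    }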

`hoster vm disk-expand` error

The bug happens when you try to expand a disk of a VM that does not exist or whose name is misspelled.

The error return is not handled and we get the following message:

vmConfig Function Error: open /vm_config.json: no such file or directory
panic: unexpected end of JSON input

goroutine 1 [running]:
hoster/cmd.vmConfig({_, _})
	/root/Git/HosterCore/cmd/vm_list.go:265 +0x14b
hoster/cmd.diskExpandOffline({0x820da82f6, 0x4}, {0x9bf600, 0x9}, 0x0?)
	/root/Git/HosterCore/cmd/vm_disk_expand.go:34 +0x7b
hoster/cmd.glob..func17(0xe715e0?, {0x850203050?, 0x3?, 0x3?})
	/root/Git/HosterCore/cmd/vm_disk_expand.go:24 +0x46
github.com/spf13/cobra.(*Command).execute(0xe715e0, {0x850203020, 0x3, 0x3})
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:876 +0x67b
github.com/spf13/cobra.(*Command).ExecuteC(0xe71d60)
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:990 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/root/go/pkg/mod/github.com/spf13/[email protected]/command.go:918
hoster/cmd.Execute()
	/root/Git/HosterCore/cmd/root.go:21 +0x25
main.main()
	/root/Git/HosterCore/main.go:9 +0x17

Other commands handle this correctly and return an error of the form VM is not found in the system.
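
A sketch of the missing guard; vmExists is a stand-in helper built on top of the existing getAllVms:

    import "fmt"

    // vmExists reports whether the given VM is present on this system; getAllVms is
    // the helper the codebase already uses elsewhere (assumed to return VM names).
    func vmExists(vmName string) bool {
        for _, vm := range getAllVms() {
            if vm == vmName {
                return true
            }
        }
        return false
    }

    // checkVmExists is a sketch of the guard missing from diskExpandOffline and
    // friends: return a normal error instead of letting vmConfig panic on a
    // missing vm_config.json.
    func checkVmExists(vmName string) error {
        if !vmExists(vmName) {
            return fmt.Errorf("VM is not found in the system: %s", vmName)
        }
        return nil
    }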

Implement `disk add`

There is a need to implement vm disk add across all Hoster management interfaces. The VM will have to be offline for it, as bhyve can't pick up disk changes on the fly yet.

P.S. as a side effect, I'll move vm disk-expand into a separate command + subcommand to make things more streamlined and logical:
vm disk expand

Implement VM clone command: `hoster vm clone`

The VM clone feature might be a bit controversial with ZFS, for the reasons below:

  • The child is permanently linked to the parent snapshot, so the parent VM cannot be destroyed
  • If you want to make the child independent there are 2 ways to do it: replicate the VM to another system, or create a new dataset and manually copy all of the VM files into that dataset
  • If the child VM changes very rapidly (think of a file server or a very active DB server), the parent snapshot will grow in size just as rapidly or even quicker; it doesn't matter whether you add or remove data, it's still a change relative to the parent snapshot, which cannot be removed

On the other hand, the benefits are clear:

  • VM testing -> you can very easily test OS/software updates, or anything else really, to know if it will work well with your production system, and then simply destroy the clone and perform the same changes on your production VM
  • VM and software templating -> you might create a custom image with WordPress or NextCloud that is ready to go, and then simply clone it for every new customer, because it's much quicker and more efficient than spinning up a new VM and performing a fresh deployment
  • Spinning up a clone of a VM that is running on a different node in the cluster, in order to scale up your application quickly and efficiently (the parent will have to be renamed so it's not overwritten by the replication process, but that's a problem for the future implementation; maybe a new flag will be responsible for this, something like hoster vm clone --quick)
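
Under the hood, the clone operation itself boils down to two ZFS commands; a rough sketch (dataset and snapshot naming are assumptions):

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // cloneVmDataset is a sketch of the ZFS side of `hoster vm clone`: take a
    // snapshot of the parent VM's dataset and clone it into a new dataset for the
    // child VM. Dataset and snapshot naming below are assumptions.
    func cloneVmDataset(parentDataset, childDataset string) error {
        snapshot := fmt.Sprintf("%s@clone_%s", parentDataset, time.Now().Format("2006-01-02_15-04-05"))
        if err := exec.Command("zfs", "snapshot", snapshot).Run(); err != nil {
            return err
        }
        // The child stays linked to this snapshot until it is promoted or copied out
        return exec.Command("zfs", "clone", snapshot, childDataset).Run()
    }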

Allow multiple replication jobs at the same time

Right now, the replication function creates a temporary shell script at /tmp/replication.sh, so only 1 replication job can be active at a time (the file name is hard-coded and its existence is checked before each replication job). The HA/clustered setup requires multiple jobs running at the same time. For this purpose, I am planning to create a /var/run/replication-endpoint-address-hash.sh file instead.

This way we can keep the original design of never "overrunning" a currently running operation and failing in some unpredictable way, potentially damaging the data, while at the same time allowing parallel execution to different endpoints.
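
A sketch of deriving the per-endpoint file name from a hash of the endpoint address:

    import (
        "crypto/sha256"
        "fmt"
    )

    // replicationScriptPath is a sketch of the proposed per-endpoint file name:
    // hashing the endpoint address keeps one replication job per endpoint while
    // allowing jobs to different endpoints to run in parallel.
    func replicationScriptPath(endpoint string) string {
        hash := sha256.Sum256([]byte(endpoint))
        return fmt.Sprintf("/var/run/replication-%x.sh", hash[:8])
    }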

Improve the error output for `vm deploy`

vm deploy errors only show the exit code of the underlying system command (zfs snapshot in this case, see the output below), but ignore the STDOUT. This needs to be fixed before the 0.2a release.

hoster vm deploy -n vm-to-be-created -d zroot/vm-encrypted

 🟢 INFO:    🕔 2023-05-03 10:34:56: 📄 Deploying new VM: vm-to-be-created
 🔷 DEBUG:   🕔 2023-05-03 10:34:56: 📄 OS type used: debian11

2023/05/03 10:34:56 could not execute zfs snapshot: exit status 2
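
A sketch of the fix: capture the command's combined STDOUT/STDERR and include it in the returned error instead of just the exit status:

    import (
        "fmt"
        "os/exec"
    )

    // runZfsSnapshot is a sketch of the improved error handling: CombinedOutput
    // captures both STDOUT and STDERR, so the real reason `zfs snapshot` failed
    // ends up in the log instead of just "exit status 2".
    func runZfsSnapshot(snapshotName string) error {
        out, err := exec.Command("zfs", "snapshot", snapshotName).CombinedOutput()
        if err != nil {
            return fmt.Errorf("could not execute zfs snapshot: %v: %s", err, string(out))
        }
        return nil
    }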

Add DNS servers and DNS domain keys to the `network_config.json`

To accommodate some custom network configs, I need to add 2 new keys to the network_config.json file:

[
    {
        "network_name": "internal",
        "network_gateway": "10.0.100.254",
        "network_subnet": "10.0.100.0/24",
        "network_range_start": "10.0.100.10",
        "network_range_end": "10.0.100.200",
        "bridge_interface": "None",
        "apply_bridge_address": true,
        "dns_servers": ["10.0.100.254", "1.1.1.1"],
        "dns_domain": "custom.dns-domain.lan",
        "comment": "Internal Network"
    }
]
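
On the Go side, the two new keys would map to something like this (struct and field names are illustrative; the json tags follow the file above, and the existing fields are trimmed for brevity):

    // networkConfig is an illustrative mapping of one entry in network_config.json,
    // including the two new keys.
    type networkConfig struct {
        NetworkName string   `json:"network_name"`
        Gateway     string   `json:"network_gateway"`
        DnsServers  []string `json:"dns_servers"`
        DnsDomain   string   `json:"dns_domain"`
        Comment     string   `json:"comment"`
    }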

Fix `vm replicate` bug (unpredictable behaviour if VM doesn't have any snapshots)

A couple of days ago I discovered a bug where the initial VM replication would succeed, but then fail the next time, because the VM didn't have any snapshots. To fix this bug, I'll add a check: if the VM has fewer than 2 snapshots, take 2 custom snapshots on the spot.

For the time being, as a workaround before I get to this bug, just execute hoster vm snapshot for your VM twice before replicating, if it doesn't have any snapshots yet.

Implement new command `vm change parent`

There is a need to implement a new vm change parent command in order to support the upcoming release of the HosterHA configuration. The command will simply switch the current parent in the VM's config file, so that the HA watchdog can fence the flaky host using PF and start that VM on a new host, with zero config changes for super fast failover.

Rewrite `nebula-control-plane` in Golang and publish it on GitLab

As of now, nebula-control-plane (aka nebula-server-api) is written in Python using FastAPI. It was initially written that way to support fast prototyping. It needs to be rewritten in Golang and published on our GitLab.

Here is the plan:

  • Migrate from Python3 to Golang
  • Integrate Swagger docs, so it's easier to use for the broader public
  • Implement the auto-update mechanism, to get the latest Nebula binaries from the upstream repo
  • Implement new POST actions to remotely update the server database if needed
  • Implement a new route/function that allows manually downloading the config, in order to run Nebula on client nodes (like laptops, workstations or bastion nodes)
  • Implement a new function on the client side to periodically check for config changes, and to reload them on the fly if required
  • Implement a file locking mechanism to avoid YAML/JSON host database file damage over time
  • Implement an optional file backup mechanism, where the old DB file will be copied to db.<date>.yaml on every file change
  • Implement a process supervisor on the client side to control the upstream Nebula binary. It will watch the config changes, and start/restart/reload the main Nebula process as needed.
  • Publish documentation on how to deploy the nebula-control-plane and how to use it with hoster
  • Write some client related docs for Windows and Linux clients that need to be integrated into the cluster network
  • Add an include_custom_ips directive to the config file to allow additional IP addresses that could be used for VALE or VxLAN switching

Also, publishing it on GitLab will give hoster users more confidence in their own cluster security.

Separate CPU and RAM table columns to support "Sockets/Cores" output

At the moment CPU and RAM are joined in one column of the CLI output table. But because VNC credentials will be moved to a separate table soon, that makes some room for separate CPU and RAM columns, which will improve visibility into the CPU configuration in terms of the CPU sockets and cores assigned to the VM.

Fix `vm deploy` to accept any network, not just `internal`

There is a hardcoded value internal in vm_deploy.go, which needs to be changed to a dynamic value (line 408):

    "networks": [
        {
            "network_adaptor_type": "virtio-net",
            "network_bridge": "internal",
            "network_mac": "{{ .MacAddress }}",
            "ip_address": "{{ .IpAddress }}",
            "comment": "Internal Network"
        }
    ],

This will make it possible to implement a new --use-network flag in the vm deploy command; by default it will pick up the first available network from the list in the network config file.

Add snapshot size to the `vm replicate` log output

At the moment, the vm replicate log output looks like this:

 🔶 CHANGED: 🕔 2023-05-16 11:13:50: 📄 Took a new snapshot: zroot/vm-encrypted/hosterDocs@replication_2023-05-16_11-13-50
 🔶 CHANGED: 🕔 2023-05-16 11:13:50: 📄 Removed an old snapshot: zroot/vm-encrypted/hosterDocs@replication_2023-04-29_20-07-00
 🟢 INFO:    🕔 2023-05-16 11:13:50: 📄 Working with this remote dataset: zroot/vm-encrypted/hosterDocs
 🔷 DEBUG:   🕔 2023-05-16 11:13:50: 📄 Sending incremental snapshot: zroot/vm-encrypted/hosterDocs@custom_2023-03-05_20-51-23
 📤 Sending incremental snapshot || zroot/vm-encrypted/hosterDocs@custom_2023-03-05_20-51-23 ||  100% |████████████████████████████████████████| 
 🔶 CHANGED: 🕔 2023-05-16 11:13:50: 📄 Incremental snapshot sent: zroot/vm-encrypted/hosterDocs@custom_2023-03-05_20-51-23

But it would be nice to add the snapshot size to the log output as well (maybe as a separate log line?), so that if the snapshot transfer takes longer than expected, it's easier to investigate.
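
The size itself is easy to obtain from ZFS; a minimal sketch:

    import (
        "os/exec"
        "strings"
    )

    // snapshotSizeBytes is a sketch: ask ZFS for the snapshot's "used" property in
    // parseable (raw byte) form, ready to be pretty-printed in the log line.
    func snapshotSizeBytes(snapshotName string) (string, error) {
        // -H: no header, -p: raw numbers, -o value: just the property value
        out, err := exec.Command("zfs", "get", "-H", "-p", "-o", "value", "used", snapshotName).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }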

Replace the default `virtio-blk` storage driver with `nvme`

The enhancement needs a little bit of testing to check that all supported OS templates are 100% compatible with this change, but in the end it will bring improved storage read and write performance, which is always good.

So this:

        {
            "disk_type": "virtio-blk",
            "disk_location": "internal",
            "disk_image": "disk0.img",
            "comment": "OS Drive"
        },

Will become this:

        {
            "disk_type": "nvme",
            "disk_location": "internal",
            "disk_image": "disk0.img",
            "comment": "OS Drive"
        },

Create a basic Jail management interface

One of our customers has requested a feature that will help them manage FreeBSD Jails in a similar fashion to VMs: automated provisioning, ZFS clones as a templating mechanism, automatic snapshots and replication, parent/backup detection, prod vs test, REST API functions, access to existing VM network bridges, default and VM-like resource limits, Jail uptime monitoring, start/stop/snapshot using our WebUI, automatic HA failover, etc.

Jails will never be a primary focus of our project, but having even a basic Jail management system will definitely help us in the long run.

I don't plan on publishing any app-based or pre-packaged Jails. Instead I would focus on creating cross-platform app deployment approaches for Jails, FreeBSD VMs or FreeBSD bare-metal hosts using Ansible, shell scripts, installation tutorials, and so on. This way it can be useful to a broader community of admins and users out there, and will boost FreeBSD's exposure.

Process self-destruction mechanism for `vm-supervisor`

There is no exact number to name, but for some weird reason, after issuing 4-6 reboot commands from within the VM, the vm-supervisor process can't start a new bhyve child.

To resolve this issue I'll need to create a self-destruction mechanism for the vm-supervisor parent process whenever it sees that the VM was rebooted, and then simply start a new detached process that executes hoster vm start vm-name again to bring the VM back up.

Create new `vm deploy` flag: `--init-script`

vm deploy would benefit from a way to specify a "startup script" link, which will help with VM provisioning. The flag will accept a link to any shell script on the internet, which will be downloaded and executed on the OS in question (all using the normal Cloud Init mechanisms).

P.S. If you want to run such a script only on the first boot, create something like an /etc/provisioned file in the file system after the first successful execution. And don't forget to sleep for some time before execution, so that the VM can pick up networking.

Set VM name character restrictions for `vm deploy`

At the moment there are no bounds checks on the VM name length when using vm deploy, but this needs to change. The new requirements would be as follows (a validation sketch follows the list):

  • min -> 5 chars
  • max -> 21 chars
  • name must start with a letter
  • no special characters allowed, apart from - and _
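
These rules translate directly into a single regular expression; a sketch of the validation:

    import (
        "fmt"
        "regexp"
    )

    // vmNameRegex encodes the rules above: starts with a letter, 5-21 characters
    // in total, and only letters, digits, `-` and `_` are allowed.
    var vmNameRegex = regexp.MustCompile(`^[a-zA-Z][a-zA-Z0-9_-]{4,20}$`)

    // validateVmName is a sketch of the planned bounds check for `vm deploy`.
    func validateVmName(name string) error {
        if !vmNameRegex.MatchString(name) {
            return fmt.Errorf("invalid VM name %q: must start with a letter, be 5-21 characters long, and contain only letters, digits, '-' or '_'", name)
        }
        return nil
    }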

Create a custom scheduler to improve the automated snapshot/replication user experience

cron has worked fine so far for the automated snapshots and replication, but we are starting to hit some limitations on production clusters:

  • error handling is mostly non-existent
  • hard to schedule the replicate-all to multiple nodes at once (have to juggle the snapshot times and replication times, to avoid conflicts)
  • hard to integrate with a monitoring system like Zabbix, to do proactive checks
  • and some other client specific issues I am not allowed to share here

All in all, I am aiming to release a new binary (probably named backup_scheduler) with a JSON-based declarative config, which will be autostarted on hoster init.

Add `--ssh-keys-file` flag to `vm deploy` and `vm cireset`

At some point we'll need an --ssh-keys-file flag for the vm deploy and vm cireset commands, to avoid manually editing config files to include new SSH keys. This feature will also enable better VM multi-tenancy and migration capabilities.

Implement dynamic versioning for the compiled `Hoster` version

As of now, Hoster development is very rapid, and most of the time I simply forget to bump the version (which can cause issues if you want to keep track of the versions installed across all your hosts). I will look into a way to dynamically set the Hoster version when it's compiled on the end user's system.
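
One common approach (a sketch, not necessarily what will be adopted) is to keep the version in a variable and override it at build time via the Go linker's -X flag:

    import "fmt"

    // version is meant to be overridden at build time, for example:
    //   go build -ldflags "-X main.version=$(git describe --tags --always)"
    var version = "dev"

    func printVersion() {
        fmt.Println("Hoster version:", version)
    }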

Release a separate DNS server binary

Unbound is a really powerful piece of software, but for hoster's use case it's like using a shotgun to kill a fly. I will release a separate dns_service binary that will be fine-tuned to work with hoster specifically.

This will also allow users to tinker with their resolver configuration without the fear of it being overwritten by hoster at some point.

Implement `vm ci-iso mount/unmount` to improve security

CloudInit ISO images are really helpful when something needs to be changed/deployed/updated. But once a VM is in production we need a way to unmount the CloudInit ISO to improve the VM's security, as it contains sensitive information about the system users and the scripts executed. This is completely fine in a single-user-per-VM situation, but becomes a problem in a multi-user scenario.

Once implemented, hoster vm ci-iso unmount will mount an empty ISO in place of the real CloudInit ISO. This will help us keep the device order intact and make the setup more predictable, especially for Windows-related stuff.

There will also be hoster vm ci-iso mount to mount the real CloudInit ISO back, in case the user needs it.

`hoster vm secrets` doesn't check if VM exists

Somehow I failed to implement a basic "VM exists" check in the last hoster vm secrets implementation, which needs to be fixed now. If you call the command below with a VM name that is not present on the given system, it panics:

(command)

hoster vm secrets nonExistentVm

(output)

vmConfig Function Error:  open /vm_config.json: no such file or directory
panic: unexpected end of JSON input

goroutine 1 [running]:
hoster/cmd.vmConfig({_, _})
        /root/Git/HosterCore/cmd/vm_list.go:283 +0x14b
hoster/cmd.vmSecretsTableOutput({0x820f41d0a, 0x9})
        /root/Git/HosterCore/cmd/vm_secrets.go:36 +0x3e
hoster/cmd.glob..func39(0xe54fa0?, {0x85006d750, 0x1, 0x1?})
        /root/Git/HosterCore/cmd/vm_secrets.go:27 +0x85
github.com/spf13/cobra.(*Command).execute(0xe54fa0, {0x85006d720, 0x1, 0x1})
        /root/go/pkg/mod/github.com/spf13/[email protected]/command.go:876 +0x67b
github.com/spf13/cobra.(*Command).ExecuteC(0xe54320)
        /root/go/pkg/mod/github.com/spf13/[email protected]/command.go:990 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
        /root/go/pkg/mod/github.com/spf13/[email protected]/command.go:918
hoster/cmd.Execute()
        /root/Git/HosterCore/cmd/root.go:28 +0x25
main.main()
        /root/Git/HosterCore/main.go:9 +0x17

Improve `vm cireset` to only change the necessary parts of `vm_config.json`

As of now vm cireset generates a completely new VM config file, which is less than ideal in most situations. Here is the list of improvements that have to be implemented:

  • Read vm_config.json file into a dynamic map instead of the default VmConfig struct
  • Replace:
    • SSH keys
    • MAC for network with index zero (first on the list)
    • IP address for network with index zero (first on the list)
    • VNC port and password
    • Default user passwords
  • Keep:
    • CPU and RAM configuration where possible, or log that the host isn't compatible with the config and use the defaults of 2 CPU cores and 2G of RAM
    • VM network name where possible, or log that network x doesn't exist and will be replaced with network y

The new Cloud Init ISO will be generated as per usual, using the same flags and configs as vm deploy.

Implement `hoster node-exporter` sub-command

The node-exporter command will be responsible for a couple of things:

  • Supervising 2 node_exporter processes: the general-purpose node_exporter and our own implementation called node_exporter_custom
  • Starting/restarting/stopping the supervised processes using the start, stop and reload/restart subcommands
  • Integrating with hoster init to start the monitoring processes after system reboot or similar
