
vagrant-caasp's Introduction

vagrant-caasp -- BETA

An automated deployment of SUSE CaaS Platform (Kubernetes) v4.5 for testing.

This project is a work in progress and will be cleaned up after some testing and feedback. Feel free to open issues and/or submit PRs.

What you get

  • (1-2) Load balancers
  • (1-3) Masters
  • (1-5) Workers
  • (1) Storage node setup with an NFS export for the nfs-client storage provisioner
  • (1) Kubernetes Dashboard deployment
  • (1) MetalLB instance
  • (1) Optional Rook / Ceph / SES setup

ASSUMPTIONS

  • You're running openSUSE Tumbleweed or Leap 15+
  • You have at least 8GB of RAM to spare
  • You have the ability to run VMs with KVM
  • You have an internet connection (container images are pulled from the internet; the box comes from download.suse.de)
  • DNS works on your system hosting the virtual machines (if getent hosts `hostname -s` hangs, you will encounter errors)
  • You enjoy troubleshooting :P
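
A quick way to sanity-check the assumptions above before starting (a sketch; run on the openSUSE host):

# at least 8GB of RAM free
free -g
# KVM is available to you
ls -l /dev/kvm && lsmod | grep -E '^kvm'
# DNS for the local hostname resolves and does not hang
getent hosts $(hostname -s)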

INSTALLATION (As root)

sysctl -w net.ipv6.conf.all.disable_ipv6=1 # rubygems.org has had issues pulling via IPv6
git clone https://github.com/sigsteve/vagrant-caasp
cd vagrant-caasp
# Install dependent packages and configure vagrant-libvirt
./libvirt_setup/openSUSE_vagrant_setup.sh

NETWORK SETUP (As root)

# Make sure ip forwarding is enabled for the proper interfaces
# Fresh vagrant-libvirt setup
virsh net-create ./libvirt_setup/vagrant-libvirt.xml
# _OR_ if you already have the vagrant-libvirt network
./libvirt_setup/add_hosts_to_net.sh
# Update host firewall (if applicable)
./libvirt_setup/update_firewall.sh
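
To verify the IP forwarding part (a sketch; how you persist sysctl settings may differ on your host):

# should report net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
# enable it for the running system if it is 0
sysctl -w net.ipv4.ip_forward=1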

ADD BOX (As root)

# Find the latest box at http://download.suse.de/ibs/home:/sbecht:/vc-test:/SLE-15-SP1/images/
vagrant box add vagrant-caasp \
    http://download.suse.de/ibs/home:/sbecht:/vc-test:/SLE-15-SP1/images/<box>
# _OR_
# wget/curl the box and 'vagrant box add vagrant-caasp </path/to/box>'
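
To confirm the box is registered under the expected name (vagrant-caasp, as added above):

vagrant box list | grep vagrant-caasp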

OPTIONAL -- running as a user other than root

# Become root (su), then
echo "someuser ALL=(ALL) NOPASSWD: ALL" >/etc/sudoers.d/someuser
visudo -c -f /etc/sudoers.d/someuser
# Add user to libvirt group
usermod --append --groups libvirt someuser
su - someuser
vagrant plugin install vagrant-libvirt
# ssh-keygen if you don't have one already
ssh-copy-id root@localhost
# Add any boxes (if you have boxes installed as other users, you'll need to add them here)
vagrant box add [boxname] /path/to/boxes
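
A quick check that the non-root user is set up correctly (a sketch):

# run as someuser
groups | grep libvirt
virsh -c qemu:///system list --all
vagrant plugin list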

USAGE

Examine config.yml to choose an appropriately sized model for your system. config.yml sets the amount of RAM and the number of CPUs for each VM type, as well as how many VMs of each type are created: masters, workers, load balancers and storage.

The current models are minimal, small, medium and large.

The deploy_caasp.sh script must be run as either the root or the sles user.

# Initial deployment
cd vagrant-caasp
./deploy_caasp.sh -m <model> [--full] [-a]
# -a will deploy air-gap/registry mirror settings prior to SUSE CaaSP cluster deployment
# --full will attempt to bring the machines up and deploy the cluster.
# Please adjust your memory settings in the config.yml for each machine type.
# Do not run vagrant up, unless you know what you're doing and want the result
Usage deploy_caasp.sh [options..]
-m, --model <model>   Which config.yml model to use for vm sizing
                      Default: "minimal"
-f, --full            attempt to bring the machines up and deploy the cluster
-a, --air-gapped      Setup CaaSP nodes with substitute registries (for deployment and/or private image access)
-i, --ignore-memory   Don't prompt when over allocating memory
-t, --test            Do a dry run, don't actually deploy the vms
-v, --verbose [uint8] Verbosity level to pass to skuba -v (default is 1)
-h,-?, --help         Show help
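
Typical invocations (using the options documented above):

# dry run: show what a medium deployment would do without creating VMs
./deploy_caasp.sh -m medium --test
# bring the machines up and deploy the cluster in one shot, with more verbose skuba output
./deploy_caasp.sh -m small --full -v 3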

Once you have a CaaSP cluster provisioned you can start and stop that cluster by using the cluster.sh script

Usage cluster.sh [options..] [command]
-v, --verbose       Make the operation more talkative
-h,-?, --help       Show help and exit

start               start a previously provisioned cluster
stop                stop a running cluster

dashboardInfo       get Dashboard IP, PORT and Token
monitoringInfo      get URLs and credentials for monitoring stack
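
For example:

# stop a provisioned cluster and bring it back later
./cluster.sh stop
./cluster.sh start
# print the Kubernetes Dashboard IP, port and token
./cluster.sh dashboardInfo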

INSTALLING CAASP (one step at a time)

After running deploy_caasp.sh -m <model> without the --full option, do the following.

vagrant ssh caasp4-master-1
sudo su - sles
cd /vagrant/deploy
# source this
source ./00.prep_environment.sh
# skuba init
./01.init_cluster.sh
# skuba bootstrap (setup caasp4-master-1)
./02.bootstrap_cluster.sh
# add extra masters (if masters > 1)
./03.add_masters.sh
# add workers
./04.add_workers.sh
# setup helm
./05.setup_helm.sh
# wait for tiller to come up... Can take a few minutes.
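# (one way to watch for it, assuming helm init placed tiller-deploy in kube-system:
#  kubectl -n kube-system rollout status deploy/tiller-deploy)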
# add NFS storage class (via helm)
./06.add_k8s_nfs-sc.sh
# add Kubernetes Dashboard
./07.add_dashboard.sh
# add MetalLB
./08.add_metallb.sh

INSTALLING CAASP (all at once)

vagrant ssh caasp4-master-1
sudo su - sles
cd /vagrant/deploy
./99.run-all.sh
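
Once the scripts finish, a quick way to confirm the cluster is up (standard kubectl, run as the sles user on caasp4-master-1):

kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl get storageclass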

Rook + SES / Ceph

# For rook, you must deploy with a model that has a tag with _rook.
# See config.yml large_rook for example.
# This will handle all setup and configuration for you.
# Currently the default storage class will remain NFS.
#
# To make SES your default storage class:
/vagrant/rook/switch_default_sc_to_ses.sh
# To see status:
/vagrant/rook/rook_status.sh

# To use CephFS you must create pools and a filesystem associated.
# To quickly set it up for use and testing you can execute this script
/vagrant/rook/rook_cephfs_setup.sh

# Example cephfs app at /vagrant/rook/examples/test-cephfs-webserver.yaml
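
To try the example app (a sketch; the manifest path comes from the comment above):

kubectl apply -f /vagrant/rook/examples/test-cephfs-webserver.yaml
kubectl get pods,pvc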

OPENSTACK

(details to be documented)

CAP

(details to be documented)

EXAMPLES

  • FULL DEPLOY (asciicast)

  • INSTALL

  • DESTROY

./destroy_caasp.sh

NOTES

vagrant-caasp's People

Contributors

007romoore, ajaeger, alexrenna, chasecrum, hemna, jodavis-suse, mook-as, ne0777, sigsteve, swebarre


vagrant-caasp's Issues

Upgrade to helm 3

The syntax for helm 3 has changed and we will need to update scripts that deploy with it.
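
For reference, the biggest change is that Helm 3 drops Tiller and takes the release name as a positional argument; roughly (a sketch with a placeholder chart name, not the exact commands in the scripts):

# helm 2 style, as used by the current scripts
helm init
helm install --name my-release stable/some-chart
# helm 3 style: no tiller / helm init, release name is positional
helm install my-release stable/some-chart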

Enable swap accounting

Enabling the swapaccount=1 boot option is a prerequisite for CAP, at least when installing with the Diego scheduler, so can this be added to the vagrant-caasp box?
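
Until it is baked into the box, enabling it inside the VM would look roughly like this (a hedged sketch for a GRUB2-based SLE/openSUSE guest):

# as root in the guest: append swapaccount=1 to the kernel command line
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 swapaccount=1"/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot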

Helm Mirror : add options to the deploy wrapper script

Opening this per our recent internal discussion. @sigsteve

We need to add some options for people who want to get their system up to a variety of "levels". The "--full" option deploys the nodes and then deploys CaaSP, but we might also want to add options that are friendly to air-gapped environments or even bandwidth restricted. Things like RMT, container image or chart mirrors.

Perhaps a "--create-helm-mirror" to include an automated deployment of a helm mirror.

See other open issue for Registry mirror.

firewalld documentation

The documentation lacks instructions noting that firewalld must be configured to accept NFS connections (or that firewalld should be disabled).

Cheers
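
For reference, opening NFS on a firewalld host would look roughly like this (standard firewalld service names; adjust the zone to your setup):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload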

Need podman and buildah

Podman is needed for various docker CLI needs, including logging into private registries.

Buildah isn't supported inside the cluster but it is needed to build images from Dockerfiles in CaaSPv4.

zypper version comparison doesn't work in libvirt_setup/openSUSE_vagrant_setup.sh

$ cat zypper-version
#!/bin/bash

#zypper_version=($(zypper -V))
zypper_version='1.14.27'
if [[ ${zypper_version[1]} < '1.14.4' ]]
then
echo "<"
else
echo ">"
fi

$ ./zypper-version
<
which is wrong

this would work, but requires rpmdev-vercmp provided by rpmdevtools from openSUSE:Backports:SLE-15-SP1 repo

[...]
zypper_version=($(rpm -q --qf 'zypper-%{V}\n' zypper))

#https://unix.stackexchange.com/questions/163702/bash-script-to-verify-that-an-rpm-is-at-least-at-a-given-version
rpmdev-vercmp $zypper_version 'zypper-1.14.4'
if [[ $? == 11 ]]
then
zypper --no-gpg-checks in -y https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_x86_64.rpm
[...]

As /usr/bin/rpmdev-vercmp is a Python script, you could simply copy it to /usr/local/bin/.
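
A dependency-free alternative would be to let sort -V do the version comparison instead of the lexicographic [[ < ]]; a sketch:

zypper_version=$(rpm -q --qf '%{VERSION}' zypper)
# if 1.14.4 sorts first (or they are equal), zypper_version is >= 1.14.4
if [ "$(printf '%s\n' '1.14.4' "$zypper_version" | sort -V | head -n1)" = '1.14.4' ]; then
    echo "zypper >= 1.14.4"
else
    echo "zypper < 1.14.4"
fi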

Enhancement ideas

I'd like to use rook with ceph for storage and therefore need an extra disk on each worker node. Could this be easily added?

Also, these nodes have no subscriptions; an easy way to add them would be great so that we can install packages and updates.

Otherwise, great tool - love it! Thanks

Registry Mirror : add options to the deploy wrapper script

Opening this per our recent internal discussion. @sigsteve

We need to add some options for people who want to get their system up to a variety of "levels". The "--full" option deploys the nodes and then deploys CaaSP, but we might also want to add options that are friendly to air-gapped environments or even bandwidth restricted.

Perhaps a "--create-registry-mirror" to include an automated deployment of a registry-mirror.

See other open issue for Helm mirror.

00.prep_environment.sh fails

Just stood up the nodes with deploy_caasp.sh
vagrant ssh caasp4-master-1
sudo su - sles
source /vagrant/deploy/00.prep_environment.sh

sles@caasp4-master-1:/vagrant/deploy> source 00.prep_environment.sh
Agent pid 6164
/vagrant/cluster/caasp4-id: Permission denied
sles@caasp4-master-1:/vagrant/deploy> whoami
sles
sles@caasp4-master-1:/vagrant/deploy> ls -al /vagrant/cluster/
total 16
drwxr-xr-x 2 root    root    4096 Sep 10 13:07 .
drwxr-xr-x 8 vagrant vagrant 4096 Sep 10 13:07 ..
-rw------- 1 root    root    1843 Sep 10 13:07 caasp4-id
-rw-r--r-- 1 root    root     414 Sep 10 13:07 caasp4-id.pub
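
Based on that listing, the private key is root-owned with mode 600, so the sles user cannot read it. A possible workaround until the ownership is fixed (a sketch):

# as a user with sudo on caasp4-master-1, give sles a readable copy of the key
sudo cp /vagrant/cluster/caasp4-id /home/sles/caasp4-id
sudo chown sles /home/sles/caasp4-id
sudo chmod 600 /home/sles/caasp4-id
# then, as sles
ssh-add /home/sles/caasp4-id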

06.add_k8s_nfs-sc.sh fails

After manually adding the missing public key to the worker node, I was able to run 05.setup_helm.sh, then tried 06.add_k8s_nfs-sc.sh, which failed

sles@caasp4-master-1:/vagrant/deploy> ./05.setup_helm.sh
Setting up helm...
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Creating /home/sles/.helm
Creating /home/sles/.helm/repository
Creating /home/sles/.helm/repository/cache
Creating /home/sles/.helm/repository/local
Creating /home/sles/.helm/plugins
Creating /home/sles/.helm/starters
Creating /home/sles/.helm/cache/archive
Creating /home/sles/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/sles/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
deployment.extensions/tiller-deploy patched
sles@caasp4-master-1:/vagrant/deploy> ./06.add_k8s_nfs-sc.sh
Adding NFS storage class...
Error: could not find a ready tiller pod

Installing macOS Catalina 10.15 in Windows 64 bit

I have followed all the steps trying to install macOS Catalina using VirtualBox on Windows 10 64-bit. I get the following error thrown in the UEFI shell:
UEFI Interactive Shell v21.2
EDK II
UEFI v2.70 (EDK II, 0x000100000)

Mapping table
BLKO: Alias(s):
PCIRoot (0x0)/Pci(0xC,0x0)/USB (0x8,0x0)
BLK1: Alias(s):
PCIRoot(0x0)/Pci(0xD,0x0/Sata(0x0,0xFFFF,0x0)
Press ESC in 1 seconds to skip startup.nsh. or any other key to continue.
Shell>_

Any help would be appreciated.

Add builtin support for creating/managing users

It would be great if the vagrant-caasp setup had a simple identity provider built-in, so we can have multiple users with different privileges.

Right now all operations are performed using cluster-admin bindings, which largely bypasses the whole RBAC idea.

Releasing the vagrant box for SLES15 SP1 or CaaSP

Hi there,

we at B1 Systems would be highly interested in the vagrant box you use being provided as an official download or as a kiwi template (if no official download is possible).

Or better yet, if there was a vagrant box for SUSE CaaSP directly, that would be really awesome and would ease setting up training environments or test setups a lot.

Kind Regards,
Johannes

Vagrant 2.2.6 on openSUSE

Hi,
today my openSUSE upgraded Vagrant to 2.2.6.
When I try to install vagrant-libvirt, this is the error:
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Traceback (most recent call last):
	18: from /usr/share/vagrant/gems/bin/vagrant:23:in `<main>'
	17: from /usr/share/vagrant/gems/bin/vagrant:23:in `load'
	16: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/bin/vagrant:166:in `<top (required)>'
	15: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/environment.rb:290:in `cli'
	14: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/cli.rb:66:in `execute'
	13: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/root.rb:66:in `execute'
	12: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:69:in `execute'
	11: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:69:in `each'
	10: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:70:in `block in execute'
	9: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/base.rb:14:in `action'
	8: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/runner.rb:102:in `run'
	7: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/util/busy.rb:19:in `busy'
	6: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/runner.rb:102:in `block in run'
	5: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/builder.rb:116:in `call'
	4: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/warden.rb:50:in `call'
	3: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
	2: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/warden.rb:50:in `call'
	1: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/action/install_gem.rb:30:in `call'
/usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/plugin/manager.rb:156:in `install_plugin': undefined method `name' for nil:NilClass (NoMethodError)

02.bootstrap auth error

Bootstrapping cluster...
+ skuba -v8 node bootstrap --user sles --sudo --target caasp4-master-1 caasp4-master-1
I0910 16:18:43.737952    2615 config.go:38] loading configuration from "kubeadm-init.conf"
I0910 16:18:43.740529    2615 states.go:35] === applying state kubernetes.install-node-pattern ===
W0910 16:18:43.803161    2615 ssh.go:306]
The authenticity of host '127.0.0.1:22' can't be established.
ECDSA key fingerprint is 33:8c:92:15:f5:63:7e:51:d4:58:45:67:22:b3:84:bc.
I0910 16:18:43.803276    2615 ssh.go:307] accepting SSH key for "caasp4-master-1:22"
I0910 16:18:43.803377    2615 ssh.go:308] adding fingerprint for "caasp4-master-1:22" to "known_hosts"
E0910 16:18:43.818124    2615 ssh.go:237] ssh authentication error: please make sure you have added to your ssh-agent a ssh key that is authorized in "caasp4-master-1".
F0910 16:18:43.818191    2615 bootstrap.go:49] error bootstraping node: failed to apply state kubernetes.install-node-pattern: failed to initialize client: authentication error
+ skuba cluster status
E0910 16:18:43.877490    2625 status.go:34] unable to get cluster status: unable to get admin client set: could not load admin kubeconfig file: failed to load admin kubeconfig: open admin.conf: no such file or directory
+ set +x
mkdir: cannot create directory ‘/home/sles/.kube’: File exists
+ kubectl get nodes -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ set +x



sles@caasp4-master-1:/vagrant/deploy> ssh-add -l
2048 SHA256:Azz4dBfdtn7Lan0EWGNDvEEyZdgRxlMzL5HgyPq+8uM sles@caasp4-master-1 (RSA)

02.bootstrap_cluster.sh error

I'm getting the following error at stage 02:

sles@caasp4-master-1:/vagrant/deploy> ./00.prep_environment.sh
++ ssh-agent -s

+ eval 'SSH_AUTH_SOCK=/tmp/ssh-BSd5LlV0dD2G/agent.6053;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=6054;' export 'SSH_AGENT_PID;' echo Agent pid '6054;'
++ SSH_AUTH_SOCK=/tmp/ssh-BSd5LlV0dD2G/agent.6053
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=6054
++ export SSH_AGENT_PID
++ echo Agent pid 6054
Agent pid 6054
+ ssh-add /vagrant/cluster/caasp4-id
Identity added: /vagrant/cluster/caasp4-id ([email protected])
sles@caasp4-master-1:/vagrant/deploy> ./01.init_cluster.sh
Initializing cluster...
+ skuba cluster init --control-plane caasp4-lb-1 caasp4-cluster
** This is a BETA release and NOT intended for production usage. **
[init] configuration files written to /vagrant/cluster/caasp4-cluster
+ set +x
sles@caasp4-master-1:/vagrant/deploy> ./02.bootstrap_cluster.sh
Bootstrapping cluster...
+ skuba node bootstrap --user sles --sudo --target caasp4-master-1 caasp4-master-1
** This is a BETA release and NOT intended for production usage. **
[bootstrap] updating init configuration with target information
F0707 03:31:56.188933 6075 bootstrap.go:48] error bootstraping node: unable to add target information to init configuration: could not retrieve OS release information: failed to initialize client: SSH_AUTH_SOCK is undefined. Make sure ssh-agent is running
+ skuba cluster status
** This is a BETA release and NOT intended for production usage. **
E0707 03:31:56.229288 6084 status.go:33] unable to get cluster status: unable to get admin client set: could not load admin kubeconfig file: failed to load admin kubeconfig: open admin.conf: no such file or directory
+ set +x
+ kubectl get nodes -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ set +x
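
Note that the step-by-step section of the README says to source 00.prep_environment.sh; running it as ./00.prep_environment.sh (as in the log above) starts ssh-agent in a subshell, so SSH_AUTH_SOCK never reaches the shell that later runs skuba. A likely fix:

# run the prep script in the current shell so the ssh-agent variables persist
source /vagrant/deploy/00.prep_environment.sh
./02.bootstrap_cluster.sh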

Documentation update needed

I think this sentence "Once you have a CaaSP cluster provisioned you can start and stop that cluster by using the cluster.sh script" would be better right above "Usage cluster.sh [options..] [command]"

It seems to split up the ./deploy_caasp.sh reference and doesn't call out the ./cluster.sh portion.

vagrant setup script produces error in Leap 15.1

I encountered the error below and was asked to report an issue. I am using Leap 15.1

When executing sudo ./libvirt_setup/openSUSE_vagrant_setup.sh, I get the following error:
chasecrum@linux-yblm:~/code/vagrant-caasp> sudo ./libvirt_setup/openSUSE_vagrant_setup.sh

+ type rpmdev-vercmp
++ rpm -qa zypper
+ rpmdev-vercmp zypper-1.14.4 zypper-1.14.27-lp151.1.2.x86_64
WARNING: hyphen in release2: 1.14.27-lp151.1.2.x86_64

rpmdev-vercmp
rpmdev-vercmp
rpmdev-vercmp # with no arguments, prompt

Exit status is 0 if the EVR's are equal, 11 if EVR1 is newer, and 12 if EVR2
is newer. Other exit statuses indicate problems.

As a workaround, this part of the script was commented out and the following command worked:

zypper --no-gpg-checks in -y https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_x86_64.rpm

After this command successfully completed, the vagrant install script (with commented out section) ran successfully to completion.

Add log levels, pass through to skuba

Currently the default log levels in skuba are used, which is not a lot of information if you hit a problem. It would be helpful to be able to pass through a log level (-v [0-10] is currently supported) without having to edit each script in the deploy/ dir.

04.add_workers.sh auth failure

Step 04 fails now:

sles@caasp4-master-1:/vagrant/deploy> source 00.prep_environment.sh
Agent pid 12255
Identity added: /vagrant/cluster/caasp4-id ([email protected])
sles@caasp4-master-1:/vagrant/deploy> ./04.add_workers.sh
Adding workers...
++ seq 1 1
+ for NUM in $(seq 1 $NWORKERS)
+ skuba node join --role worker --user sles --sudo --target caasp4-worker-1 caasp4-worker-1
W0911 15:19:37.896159   12263 ssh.go:306]
The authenticity of host '192.168.121.130:22' can't be established.
ECDSA key fingerprint is d5:73:2c:f5:7d:b2:fd:09:95:c7:ca:0f:b8:39:90:af.
I0911 15:19:37.896243   12263 ssh.go:307] accepting SSH key for "caasp4-worker-1:22"
I0911 15:19:37.896253   12263 ssh.go:308] adding fingerprint for "caasp4-worker-1:22" to "known_hosts"
E0911 15:19:37.916194   12263 ssh.go:237] ssh authentication error: please make sure you have added to your ssh-agent a ssh key that is authorized in "caasp4-worker-1".
F0911 15:19:37.916268   12263 join.go:51] error joining node caasp4-worker-1: failed to apply state kubernetes.install-node-pattern: failed to initialize client: authentication error
+ set +x
Waiting for masters to be ready
Waiting for workers to be ready........
