A Puppet Enterprise Proof-of-concept kit
PEPOC uses Puppetizer to automatically install Puppet Enterprise masters and agents as described in the inventory file.
PEPOC comes with a virtual lab of VMs for your convenience, but it can also be configured to use real VMs on a public or private cloud. The idea is that the workstation you're running the puppetizer tool from reaches out over the network using SSH and sets up Puppet according to your requirements.
- Vagrant lab
- Inventory file describing Vagrant lab
- Gemfile to load correct versions of puppetizer, etc.
- Instructions
In a nutshell:
- Set up your workstation
- Set up your POC VMs
- Built-in Vagrant lab
- Real VMs
- Use Puppetizer to install masters and agents
- Customization workflows
- Install the OpenSSH client, ssh-agent and md5sum (`brew install openssl md5sha1sum`)
- Install Ruby
```shell
sudo gem install bundler
git clone https://github.com/puppetlabs/pepoc.git
cd pepoc
bundle install
```
- You might need to install additional OS libraries for `bundle install` to succeed
- Install VirtualBox
- Install Vagrant
Mac users will need to install Xcode (from the App Store) and run `xcode-select --install`.

To create `/etc/hosts` entries for the Vagrant machines in the lab on the workstation you are running from, install the vagrant-hosts and vagrant-hostsupdater plugins:

```shell
vagrant plugin install vagrant-hosts
vagrant plugin install vagrant-hostsupdater
```

To avoid being prompted for a password when updating `/etc/hosts`, follow the plugin's instructions.
```shell
vagrant up
```
VMs must:
- Be RHEL/CentOS (Windows agents must be set up manually)
- Accept incoming SSH connections from the machine you're running PEPOC from
- Allow one of the following:
- Login as root via SSH using a password
- Login as root via SSH using a public key
- Login as a regular user via SSH and obtain root via passwordless sudo
- Login as a regular user via SSH and obtain root via sudo + password
- Login as a regular user via SSH and obtain root via su + root password
By far the easiest and most secure way to run the POC is to load keys into the SSH agent and not worry about things:

```shell
eval `ssh-agent -s`
ssh-add ssh_keys/id_rsa
```
The Vagrant lab dumps keys into the above directory. If you're using real VMs, you will need to obtain a copy of the private key for the account you're trying to access and then run `ssh-add` against it. Make sure your keys have permission `0400` or they will not be loaded.
If you don't yet have key-based authentication in place, it's easiest to generate a new keypair locally and use ssh-copy-id to install it. Mac users will need to `brew install ssh-copy-id` to obtain the command. See https://valdhaus.co/writings/ansible-post-install/ for a worked example.
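As a sketch, the keypair generation and installation might look like this (the key path, comment and hostname below are examples, not anything PEPOC expects):

```shell
# Generate a fresh RSA keypair for the POC; -N '' gives an empty passphrase
ssh-keygen -t rsa -b 4096 -N '' -C 'pepoc-poc' -f ./poc_key

# ssh-add refuses to load keys that are group- or world-readable
chmod 400 ./poc_key

# Then, per host: install the public key and load the private key into the
# agent (commented out here because it needs a reachable VM):
#   ssh-copy-id -i ./poc_key.pub root@puppet.demo.internal
#   ssh-add ./poc_key
```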
Your Vagrant keys will be regenerated when you reprovision your lab VMs. If you destroy or recreate a key, you must remove the old one from the SSH agent or you will not be able to access lab machines after creating the new one:

```shell
ssh-add -D # remove all keys
```
To stop passwords appearing in the process table, they are passed by exporting the `PUPPETIZER_USER_PASSWORD` variable, e.g.:

```shell
export PUPPETIZER_USER_PASSWORD=t0ps3cret
```
You may encounter the following error if you have SSH keys loaded in the SSH agent:

```
disconnected: Too many authentication failures for root (2) @ #<Net::SSH::Simple::Result exception=#<Net::SSH::Disconnect: disconnected: Too many authentication failures for root (2)> finish_at=2016-09-25 23:54:48 +1000 stderr="" stdout="" success=false>
```

In this case, the fix is to unload all loaded keys:

```shell
ssh-add -D
```
Set up your inventory file. If you are using the Vagrant lab, this is already done for you, although you may want to tweak it. If you are using real VMs, you will need to adjust as follows:
- Under the `[puppetmasters]` heading, list the address(es) of hosts to install as monolithic masters
- Under the `[agents]` heading, list the address(es) of hosts to install as agents
- For each node, specify `pp_role` if you would like to assign a role class via CSR attributes
- For your master, set `deploy_code=true` to check out an R10K control repo
- For the moment, this file has to be edited in place. You might want to fork the pepoc repository to service a new client. Please don't commit customer-specific changes back to the main repo
```shell
bundle exec puppetizer --help
```
Puppetizer attempts to log in as root by default. If you need to log in as another user, pass the username like this:

```shell
--ssh-username username
```
To get stack traces, invoke puppetizer like this:

```shell
puppetizer --verbosity debug
```
If your username is not `root`, puppetizer will attempt to use `sudo` to gain root access. If you need to type a password when using sudo, set the `PUPPETIZER_USER_PASSWORD` shell variable to the password you need to type, e.g.:

```shell
export PUPPETIZER_USER_PASSWORD=t0ps3cret
```
Be sure to always launch puppetizer using Bundler or the wrong libraries may be used:

```shell
bundle exec puppetizer ...
```
Puppetizer assumes the same username and password are used for all machines in the inventory file. This is to avoid building a plaintext file containing all the usernames and passwords for a fleet of machines.
If you find yourself in a situation where lots of different usernames and passwords are needed, your options are:
- Iteratively set `PUPPETIZER_USER_PASSWORD` and comment out hosts in the inventory file
- Use `ssh-copy-id` to manually log in once to the required hosts and then use public key authentication
- Provide passwords in a CSV file
- Think of something better and create a PR :)
You must manually download the PE media from puppet.com and place the compressed tarball in the pepoc directory. This is to ensure that marketing leads are captured. You will need the 64-bit RHEL 7 version.
IMPORTANT: You must use the 2016.4 LTS release of Puppet Enterprise. Quarterly releases are not supported and will result in a failed installation due to changes to the built-in node classifier rules.
```shell
bundle exec puppetizer puppetmasters
```
This will:
- Process any CSR role definitions
- Install PE on the master
- Clone a bare repository from https://github.com/GeoffWilliams/r10k-control/ to `/var/lib/psquared/r10k-control` - this becomes the POC's git server
- Configure the master with a nice shell prompt for root, all known agents, etc.
```shell
bundle exec puppetizer agents
```
This will:
- Process any CSR role definitions
- Attempt to install the puppet agent from the master on each agent listed in the inventory file
- If installation fails on a particular host, puppetizer will continue to the next host
By this stage, you should have a functional puppetmaster and agents and be ready to continue. It's also possible to run both steps in one go:

```shell
bundle exec puppetizer all
```
The steps below outline the common procedure that each of the customization trails follows. `puppet.demo.internal` is the hostname used for the demo environment; update this to suit your needs if you're not using the Vagrant lab:
- Check out a copy of the control repo from the puppet master:
  - On the master: `git clone /var/lib/psquared/r10k-control`
  - On a workstation:
    - Obtain the private key from the master at `/var/lib/psquared/.ssh/[email protected]`, download it to your workstation and `chmod 400` it.
    - Add the key to the SSH agent: `ssh-add [email protected]`
    - Clone the repository somewhere: `git clone ssh://[email protected]/var/lib/psquared/r10k-control/`
- Add your local changes inside the cloned r10k-control directory:
  - The bulk of changes involve customizing the ready-made profiles through their parameters by adding hiera data to `/hieradata`
  - If you decide to go off-trail and make new roles and profiles, put them in the `role` and `profile` directories under `/site`
  - Reference new modules in `/Puppetfile`
- `git commit` ... `git push` - Code Manager will be triggered to update the code on the master on push. The git command will block until Code Manager has finished running, so you know your code is live when you want to run it.
- Run puppet on affected nodes as needed, or wait up to 30 minutes for the next scheduled agent run
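For example, customizing a ready-made profile through its parameters usually amounts to a small hiera override. The class and parameter names below are invented purely for illustration; check the profiles in the control repo for the real ones:

```yaml
# hieradata/common.yaml -- hypothetical parameter overrides for a LAMP profile
profile::lamp::vhost_port: 8080
profile::lamp::docroot: '/var/www/poc'
```

Hiera's automatic class parameter lookup means keys named `classname::parametername` are picked up when the class is applied, without touching the role or profile code itself.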
The following customizations are ready to go so feel free to attempt them:
- Setting up a load balanced GeoServer
- Setting up a monitor server
- Setting up a load balancer (haproxy)
- Setting up a LAMP server
- Setting up a LAMP server with WordPress
- Setting up IIS on Windows
- Setting up the iptables firewall on Linux
- Start (or resume) the lab: `vagrant up`
- Suspend the lab: `vagrant suspend`
- Destroy the lab: `vagrant destroy -f`
- SSH to a lab VM: `vagrant ssh HOSTNAME`, e.g. `vagrant ssh puppet.demo.internal`
Q: What IP addresses do the Vagrant lab VMs use?
A: Check the value of `base_ip` in the Vagrantfile to determine the subnet. The puppetmaster uses the `.10` address and agent nodes are allocated sequentially from `.50` in increments of 10. Hopefully you won't have to worry about the IP addresses too much, since vagrant-hostsupdater will add them to the `/etc/hosts` file on your workstation.
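A quick sketch of that allocation scheme, assuming a hypothetical `base_ip` of `10.20.1` (read the real value from your Vagrantfile):

```shell
BASE_IP="10.20.1"               # example subnet; yours comes from the Vagrantfile
echo "master:  ${BASE_IP}.10"   # the puppetmaster always takes the .10 address
for i in 0 1 2; do
  # agents are allocated from .50 upwards in increments of 10: .50, .60, .70, ...
  echo "agent-${i}: ${BASE_IP}.$((50 + i * 10))"
done
```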
Q: I receive the error: "An internal Escort error has occurred, you should probably report it by creating an issue on github! To get a stacktrace, up the verbosity level to DEBUG and execute your command again"
A: The root cause of the error should be on the line above this one. Errors are not currently handled properly, as this code is still experimental.
Q: I can't SSH! I might have added some VMs to the Vagrantfile too...
A: I ended up having to reboot the laptop and then `vagrant destroy -f` to fix this. Something really screwy was going on around memory, firewalls and duplicate IPs...