
sdwan-devops's Introduction

SDWAN DevOps

This repo contains a set of tools to automate workflows and build CI/CD pipelines for Cisco SDWAN.

Note: The tools in this repo only work from a Unix environment with Docker (e.g. Linux, macOS) due to issues with Ansible and file-permission mapping between Windows and the Linux container used in play.sh. WSL2 may fix this issue and we will revisit when WSL2 is released.

All operations are run out of the sdwan-devops directory: cd sdwan-devops

The folder sdwan-edge allows the deployment of C8000v in AWS, Azure, GCP, OpenStack and VMware.

The folder sdwan-terraform allows the deployment of SDWAN Controllers in AWS, Azure and VMware.

A video demonstration of the use of this repository is available on Vidcast.

Clone repository

Clone the sdwan-devops repo using the main branch (default: origin/main):

git clone --single-branch --recursive https://github.com/ciscodevnet/sdwan-devops.git

Make sure you use --recursive to also clone folders sdwan-edge and terraform-sdwan.
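If the clone was made without --recursive, the submodules can still be initialized afterwards with standard git commands. The mktemp/git init lines below only make the snippet self-contained; inside a real sdwan-devops clone you would just run the last command from the repo root:

```shell
# Recover skipped submodules after a non-recursive clone (demonstrated in a
# throwaway repo so the snippet runs on its own).
repo="$(mktemp -d)"
git init -q "$repo"
git -C "$repo" submodule update --init --recursive
echo "submodules initialized"
```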

OpenSSL version 3

If you are on a Mac: these tools need OpenSSL version 3, but the openssl that ships with macOS is LibreSSL.

Install OpenSSL 3 with Homebrew:

brew install openssl@3
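To confirm which implementation the openssl on your PATH reports, a small check like the following can help (the helper function is ours for illustration, not part of this repo):

```shell
# is_openssl3: succeed only when an "openssl version" string names OpenSSL 3.x
# (macOS's stock binary reports "LibreSSL ..." instead).
is_openssl3() {
  case "$1" in
    OpenSSL\ 3*) return 0 ;;
    *)           return 1 ;;
  esac
}

if is_openssl3 "$(openssl version)"; then
  echo "OpenSSL 3 found"
else
  echo "upgrade needed: $(openssl version)"
fi
```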

Software Dependencies

All software dependencies have been rolled into a Docker container. Ansible playbooks are launched via the container using the play.sh shell script.

All you need is a working installation of Docker on your system.

Note: The Dockerfile included in this repo is used to automatically build the sdwan-devops container image and publish it to the GitHub Container Registry. For a detailed list of the dependencies required to run the playbooks, refer to the Dockerfile.

Running playbooks via the Docker container

To run playbooks in this repo, use the play.sh shell script as shown below:

  • ./play.sh <playbook> <options>

This will start the docker container published in the GitHub Container Registry, run the playbook inside the container, and remove the container once finished.
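Under the hood, wrappers like play.sh typically amount to a docker run with the repo bind-mounted. The sketch below shows the general shape only; the image tag and the /ansible mount point are illustrative assumptions, not the script's actual values:

```shell
# play_cmd: print the docker invocation a play.sh-style wrapper would run.
# Image tag and mount point are assumptions for illustration.
play_cmd() {
  image="ghcr.io/ciscodevnet/sdwan-devops:latest"   # hypothetical tag
  echo docker run -it --rm \
    -v "$PWD:/ansible" -w /ansible \
    "$image" ansible-playbook "$@"
}

play_cmd my-playbook.yml -e "foo=bar"
# Prints the command instead of running it; drop the leading "echo" to execute.
```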

Deploying Controllers on AWS

The sdwan-devops playbooks can be used to instantiate controllers on AWS.


  • Deploys vBond, vSmart and vManage controllers in a VPC
  • Provides bootstrap configuration

Deploying C8000v

C8000v can be deployed in a transit VPC/VNet in AWS, Azure and GCP, and can also be deployed on VMware and OpenStack.


  • Generates bootstrap configuration (cloud-init format)
  • Creates transit VPC if required
  • Deploys C8000v

Simulation

Simulation can be used for developing new deployments as well as testing changes to current deployments. Simulation capabilities are provided by CML2 or VMware. The Ansible CML2 Modules are used to automate deployments in CML2. The Terraform Modules are used to automate deployments in VMware.


sdwan-devops's People

Contributors

chrishocker, jasonking3, jbarozet, jlothian, ljakab, nathandotto, radtrentasei, reismarcelo, sconrod-cisco, stevenca, stmosher, tithomas1


sdwan-devops's Issues

Align export-templates.yml and import-templates.yml

Currently, export-templates.yml exports the device and feature templates with the variable names device_templates and feature_templates, while import-templates.yml is looking for vmanage_device_templates and vmanage_feature_templates. We should modify export-templates.yml to use the vmanage_ naming convention so that import-templates.yml can directly use templates exported with export-templates.yml without modification.
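Until the playbooks are aligned, a one-off rename of the exported top-level keys would make an export directly importable. Sketched below against a stub file; the filename and the exact key layout are assumptions for illustration:

```shell
# Rewrite the exported top-level keys to the vmanage_ names the import
# playbook expects. The stub file stands in for a real export.
cat > /tmp/templates-export.yml <<'EOF'
device_templates: []
feature_templates: []
EOF
sed -i.bak \
  -e 's/^device_templates:/vmanage_device_templates:/' \
  -e 's/^feature_templates:/vmanage_feature_templates:/' \
  /tmp/templates-export.yml
```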

Location of metadata.yaml and implications thereof

Hi, I am working through how to run the code in this repo. I am given to understand that the starting point is in sdwan_config_builder. In that directory, the README says:

By default sdwan_config_build looks for a 'metadata.yaml' file in the directory where it is run.

The only metadata.yaml file that is in the repo is in the config dir, alongside a config.example.yaml.

The minimal_env.sh has:

export CONFIG_BUILDER_METADATA="../config/metadata.yaml"

This implies, then, that the overall starting point must be to run the minimal_env.sh from the bin directory first. I think that step should be documented in the top level README then.

The metadata.yaml refers to:

top_level_config: "../config/config.yaml"

Whereas we have a config.example.yaml in the repo. I think, then, that the config.example.yaml should be copied to a config.yaml and suitably edited.
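Those two inferred steps, sketched end-to-end against stub files in a scratch directory (the paths are the ones quoted above; whether this is the intended workflow is exactly the open question):

```shell
# Reproduce the inferred setup: copy the example config into place, then
# point the builder at metadata.yaml the way bin/minimal_env.sh does.
demo="$(mktemp -d)"
mkdir -p "$demo/config"
printf 'top_level_config: "../config/config.yaml"\n' > "$demo/config/metadata.yaml"
touch "$demo/config/config.example.yaml"

cp "$demo/config/config.example.yaml" "$demo/config/config.yaml"
export CONFIG_BUILDER_METADATA="../config/metadata.yaml"
```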

If this is correct so far, then I can add suitable documentation updates and submit a PR.

Thanks

Nathan

Cleanup docs directory

There are a lot of old and out-of-date files in the docs directory that need to either be updated or deleted.

vmanage export attachments - exports just one variable

Hello

When I tried to export attachments from the box, it exported only the last variable every time.
vManage code 20.1.1.
I will try it again on a production vManage where I have 19.2.2.

Example of such an attachment:

{
  "vmanage_attachments": [
    {
      "host_name": "vEdge1kupperOld",
      "device_type": "vedge",
      "uuid": "11OG637181517",
      "system_ip": "198.18.6.2",
      "site_id": "11000000",
      "template": "vEdge1k_upper_template",
      "variables": {
        "null": "vEdge1kupperOld"
      }
    }
  ]
}

I use CLI templates and definitely have more variables; in this example I had 3 variables.

Fix or remove Windows powershell script (play.ps1)

Right now this script is broken when used with this repo. The issue is with bind mounted volumes in Docker Desktop for Windows. Due to the way Docker Desktop does the bind mount (CIFS volume share), the permissions of files in the repo get set inappropriately and things like dynamic inventory don't work due to not being executable, etc.

Convert hq2 control plane IP addressing to environment variables

Defining the control plane addressing statically in the inventory/hq2/sdwan.yml file is not ideal for deployment into various test/validation systems. Using environment variables to define the IP addressing of the control plane components would allow more flexibility when deploying the automation for test/validation.
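One shape this could take, sketched below; all variable names and addresses are hypothetical, not existing repo variables:

```shell
# Hypothetical pattern: environment overrides with static fallbacks, so test
# and CI systems can inject their own control-plane addressing.
VMANAGE_SYSTEM_IP="${VMANAGE_SYSTEM_IP:-198.18.1.10}"
VBOND_SYSTEM_IP="${VBOND_SYSTEM_IP:-198.18.1.11}"
VSMART_SYSTEM_IP="${VSMART_SYSTEM_IP:-198.18.1.12}"
echo "vmanage=$VMANAGE_SYSTEM_IP vbond=$VBOND_SYSTEM_IP vsmart=$VSMART_SYSTEM_IP"
```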

OPA rules - implications and explanation

In the bin/config_build.sh we have:

# Uncomment the line below if you want to enforce the OPA rules in `config/policy/config.rego`
#set -e

That probably needs some additional explanation, but I am not sure how to explain that. Any ideas please?
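One way to explain it: the rego evaluation exits nonzero when a policy check fails, and set -e makes the surrounding script abort at that point instead of carrying on. The effect can be shown with a stand-in failing command (false stands in for the OPA check here, since running the real evaluation needs opa installed):

```shell
# Without set -e, a failing command is ignored and the script continues;
# with set -e, the shell exits at the first failure.
sh -c 'false; echo continued'                              # prints "continued"
sh -c 'set -e; false; echo continued' || echo "aborted"    # prints "aborted"
```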

Modify play.sh to use Docker Hub container

Now that we have the ansible-sdwan container being built in Docker Hub, we should consider using it in play.sh so that building the container locally is not required. We can create a play-dev.sh that uses a local container, or modify play.sh so that it allows a user to specify the container at runtime.
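A small sketch of the runtime-selection idea (the environment variable name and default image tag below are assumptions, not existing repo values):

```shell
# resolve_image: pick the container image, preferring a caller-supplied
# override so play.sh and play-dev.sh could share one code path.
resolve_image() {
  echo "${SDWAN_DEVOPS_IMAGE:-ciscodevnet/ansible-sdwan:latest}"
}

resolve_image                                          # default published image
SDWAN_DEVOPS_IMAGE=local/ansible-sdwan resolve_image   # local override
```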

check-sdwan always passes, even if host is unreachable

The following output is from vManage without "Rapid" enabled:

Output:

Nping in VPN 1
Starting Nping 0.7.60 ( https://nmap.org/nping ) at 2020-02-20 21:39 UTC
SENT (0.0154s) ICMP [192.168.1.1 > 192.168.2.1 Echo request (type=8/code=0) id=27995 seq=1] IP [ttl=64 id=61444 iplen=28 ]
SENT (1.0155s) ICMP [192.168.1.1 > 192.168.2.1 Echo request (type=8/code=0) id=27995 seq=2] IP [ttl=64 id=61444 iplen=28 ]
RCVD (1.0155s) ICMP [127.1.0.2 > 192.168.1.1 Network 192.168.2.1 unreachable (type=3/code=0) ] IP [ttl=64 id=0 iplen=56 ]
SENT (2.0162s) ICMP [192.168.1.1 > 192.168.2.1 Echo request (type=8/code=0) id=27995 seq=3] IP [ttl=64 id=61444 iplen=28 ]
RCVD (2.0163s) ICMP [127.1.0.2 > 192.168.1.1 Network 192.168.2.1 unreachable (type=3/code=0) ] IP [ttl=64 id=0 iplen=56 ]
SENT (3.0172s) ICMP [192.168.1.1 > 192.168.2.1 Echo request (type=8/code=0) id=27995 seq=4] IP [ttl=64 id=61444 iplen=28 ]
RCVD (3.0173s) ICMP [127.1.0.2 > 192.168.1.1 Network 192.168.2.1 unreachable (type=3/code=0) ] IP [ttl=64 id=0 iplen=56 ]
SENT (4.0182s) ICMP [192.168.1.1 > 192.168.2.1 Echo request (type=8/code=0) id=27995 seq=5] IP [ttl=64 id=61444 iplen=28 ]
RCVD (4.0183s) ICMP [127.1.0.2 > 192.168.1.1 Network 192.168.2.1 unreachable (type=3/code=0) ] IP [ttl=64 id=0 iplen=56 ]
Max rtt: 0.042ms | Min rtt: 0.018ms | Avg rtt: 0.026ms
Raw packets sent: 5 (140B) | Rcvd: 4 (224B) | Lost: 1 (20.00%)
Nping done: 1 IP address pinged in 4.03 seconds

And this is with "Rapid" enabled:

Output:

Nping in VPN 1
!!!!!
Raw packets sent: 5 (0B) | Rcvd: 5 (0B) | Lost: 0 (0%)

Enable DHCP for VMware automation

Right now only static IP address assignment works for VMware automation. This enhancement would add the dynamic inventory bits required to pull IP addressing from terraform.

Update use of the netconf and network_cli connection types

In the scenario where we specify static addressing for control plane components and we specify the mgmt_interface, we should always use that static address as the host address for Ansible and not rely on dynamic inventory to supply that info (because it may pick up some other address). In order to do this, we need to set ansible_host appropriately anytime we use the netconf or network_cli connection types.

Align export template playbook with import template playbook

Currently the export-templates.yml playbook exports the device and feature templates under the headings device_templates and feature_templates in the YAML while the import-templates.yml playbook is looking for vmanage_device_templates and vmanage_feature_templates. This requires user intervention to make exported template files consumable by the import playbook. This is less than desirable.

Update to handle multiple CML client library versions

We could use a way to specify which CML client library version to use when running playbooks. Currently it is set statically in the requirements.txt file and will break when targeting different CML 2.x minor versions (like we do in CI currently).
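One lightweight option: derive the pin from an environment variable instead of hard-coding it in requirements.txt. Sketch below; the variable name is ours, virl2-client is the CML 2 Python client package, and the default version shown is illustrative:

```shell
# cml_requirement: build the pip requirement string for the CML client,
# letting CI override the version per CML 2.x target.
cml_requirement() {
  echo "virl2-client==${CML_CLIENT_VERSION:-2.5.0}"
}

cml_requirement                      # prints the default pin unless overridden
# pip install "$(cml_requirement)"   # then install the selected version
```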

hq1 should be the default inventory

hq1 is the easiest inventory to get working because it does not require the static IP addressing changes that hq2 does. It should be the default.

In ansible.cfg change:

inventory = ./inventory/hq2

to:

inventory = ./inventory/hq1
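The change is a one-liner; sketched here with sed against a stub ansible.cfg so the snippet is self-contained:

```shell
# Flip the default inventory from hq2 to hq1 (the stub file stands in for
# the repo's real ansible.cfg).
printf 'inventory = ./inventory/hq2\n' > /tmp/ansible.cfg
sed -i.bak 's|inventory = ./inventory/hq2|inventory = ./inventory/hq1|' /tmp/ansible.cfg
cat /tmp/ansible.cfg    # inventory = ./inventory/hq1
```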
