
cncf / cnf-testbed

162 stars, 51 forks, 11.46 MB

ARCHIVED: 🧪🛏️Cloud-native Network Function (CNF) Testbed --> See LFN Cloud Native Telecom Initiative https://wiki.lfnetworking.org/pages/viewpage.action?pageId=113213592

Home Page: https://wiki.lfnetworking.org/pages/viewpage.action?pageId=113213592

License: Apache License 2.0

Shell 54.12% Ruby 2.81% Dockerfile 1.30% Python 27.57% HCL 0.35% Roff 3.89% Makefile 0.33% Smarty 0.43% Jinja 9.06% Mustache 0.15%
cloud-native cnf telecom

cnf-testbed's People

Contributors

carstenkoester, dankohn, denverwilliams, electrocucaracha, fkautz, kimmcmahon, linkous8, lixuna, maciekatbgpnu, mackonstan, michaelspedersen, pmikus, rajpratik71, rstarmer, taylor, thewolfpack, virlstd, wavell, wverac


cnf-testbed's Issues

Create vDNS virtual machine for KVM

Tasks:

  • Create vDNS VNF VM for KVM
  • Bring up vDNS VM on VM runtime (e.g. libvirt + KVM)
  • Create smoke test VM for checking DNS
  • Save VM with vDNS software installed
  • Create scripts/Vagrant config for using a saved VM

Run vDNS VNF/CNF tests and collect results

Tasks:

  • Add physical hardware specs
  • Add environment specs
  • Add software specs for testing (e.g. kernel, KVM and Docker versions)
  • Add information regarding running the tests (e.g. clear the system cache, then run this command)
  • Add test results including attachments and details
  • Add/remove cores to see the difference in results for:
    • CNF test
    • VNF test
    • Summarize differences seen (e.g. effect on memory, CPU utilization, throughput)
  • Summarize overall results

Details:

  • Physical server info (e.g. CPU, memory, disk)
  • Host OS used
  • Kernel version
  • Host OS settings like hugepages
  • KVM version
  • libvirt version
  • VM settings
  • Vagrant version
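For the add/remove-cores comparison above, CPU pinning with taskset is one generic way to constrain which cores a test uses (a sketch; the actual test command is not shown here):

```shell
# Show the current CPU affinity of this shell, then run a command pinned to core 0.
# Replace the echo with the actual CNF/VNF test command when comparing core counts.
taskset -pc $$
taskset -c 0 echo "pinned to core 0"
```

Running the same workload pinned to, e.g., cores 0-1 versus 0-3 makes the core-count comparison reproducible.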

Deploy a 3-node Kolla OpenStack environment

Need: 3-node Kolla OpenStack environment deployed to Packet.net

Initial tasks:

  • Grant access to the Packet.net project to Kumulus and the rest of the CNF dev team
  • Research using Kolla for deployment of a 3-node dev cluster
  • Deploy and set up a 3-node Kolla OpenStack cluster on Packet.net
  • Share credentials for access to OpenStack with the CNF dev team
  • List components running on the OpenStack cluster for each machine

Testing:

  • Access OpenStack controller node via SSH
  • Access OpenStack via admin credentials
  • Test CLI access to the OpenStack API from the control node using admin credentials
  • Test instantiating a VM with no public internet IP
    • Test accessing the VM from the controller
  • Test instantiating a VM with a public internet IP
    • Test accessing the VM from the controller
    • Test accessing the VM from the internet

Update vBNG to work in CSIT test environment

In the new CSIT fd.io environment, the vBNG setup needs to be adjusted.

There is some hardware-specific configuration that needs to be set in the NF VPP configuration.

There are also some issues with certain Vagrant Ubuntu box versions that we need to avoid (by pinning specific versions).

  • (Container) Update vBNG VPP configuration

    • Update dpdk whitelist with PCI devices dev 0000:18:00.0 and dev 0000:18:00.1
    • Enable the dpdk no-multi-seg setting
    • Modify the setup.gate configuration with the correct interfaces and MAC addresses
  • (VM) Update vBNG VPP configuration [More details to be added]

    • Modify the Vagrantfile to use a specific version of the Ubuntu box
    • The newest version seems to have a problem with SSH
  config.vm.box = "generic/ubuntu1604"
  config.vm.box_version = "1.8.14"
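Based on the container items above, the dpdk stanza of the VPP startup configuration would look roughly like this (a sketch only; the surrounding startup.conf sections are omitted):

```
dpdk {
  dev 0000:18:00.0
  dev 0000:18:00.1
  no-multi-seg
}
```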

Create vBNG container for Docker

Tasks:

  • Determine if the vDNS solution for multiple network interface requirements works for vBNG. See #25
  • Investigate using PCI passthrough
    • Update Docker config if it's an option
  • If needed, update documentation for the process of creating and starting the vBNG container
    • Create 2nd network
    • Create container and attach network with specific IP
    • Start container
  • Create Dockerfile for building CNF based on VNF
    • Ensure the MAC address used by VPP is set as needed by #34
  • Update install scripts as needed to work with Docker and Vagrant
  • Create an entrypoint.sh for vBNG
  • Bring up CNF on Docker using the process which creates the second interface
  • Do minimal smoke test
    • Document testing procedure

Grant Access to Kolla-based OpenStack environment to Vulk

Please let me know who to pass the admin password on to, or discover it by logging in to the control node (default user is root; host is 147.75.108.179). The OpenStack admin password is available via:

grep keystone_admin /etc/kolla/passwords.yml

Create performance test for vBNG VNF

Goal: Testing Throughput, latency, resource util per unit of performance

VMs:

  • vBNG
  • Packetgen
  • (VXLan VM) may not be needed

Tasks:

  • Investigate ability to do testing with Packetgen + vBNG without the need for VXLAN
    • Remove VXLAN as a requirement in this issue if that is possible
    • Update vBNG VM configuration to use the needed network setup via ticket #31
  • Create script to use Packetgen for testing the vBNG VM
  • Create top-level comparison script for testing vBNG
    • Create vBNG_vm_test.sh to start VMs and run the test. E.g.:
      • Starts vBNG VM
      • Starts Packetgen VM
      • Runs tests from the Packetgen VM
    • Test script with no VMs running

Create vDNS container for Docker

Tasks:

  • Create Dockerfile for building CNF based on VNF
  • Bring up CNF on Docker
  • Update install files to work with Docker and Vagrant
  • Create an entrypoint.sh for vDNS to start bind9
    • Leave DHCP service startup as is
  • Smoke test
  • Determine solution for multiple network interface requirement
  • Update and document process for creating and starting vDNS container
    • Create 2nd network
    • Create container and attach network with specific IP
    • Start container
  • Update container build to use the MAC address used by VPP in #26

Issues:

  • RESOLVED: Multiple network interfaces are needed for testing, based on VPP packet-gen tests.

Deploy non-ONAP development OpenStack environment for VNF testing

Goal: Segment testing of non-ONAP VNF work from ONAP

This is Part 2: Get VNFs running on OpenStack without ONAP from https://docs.google.com/document/d/1GrdMynJA738fmkOPvTrkZAGz04eC4TcvmLRYLW4wbCs/edit#

Time to provision OpenStack on Virtualbox (using Vagrant) on Packet

Here is the summary of time to provision OpenStack on Virtualbox (using Vagrant) on Packet.

  • (When we have OpenStack provisioning directly to Packet we can provide those timings in ticket #11.)

Set-up:

  • 3 Nodes are provisioned: 1 control and 2 compute nodes
  • Control node is provisioned first, then the 2 compute nodes are provisioned simultaneously

Provision time:

  • Control node total time = 00:27:33
  • Longest compute node total time = 00:13:19

  • Total time for all 3 nodes (control, compute 1, compute 2) = 00:54:06
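For comparing timings across runs, the hh:mm:ss values can be converted to seconds with a small helper (a generic sketch, not part of the testbed scripts):

```shell
# Convert an hh:mm:ss duration to seconds.
to_seconds() { echo "$1" | awk -F: '{ print $1*3600 + $2*60 + $3 }'; }

to_seconds 00:27:33   # control node  → 1653
to_seconds 00:54:06   # all 3 nodes   → 3246
```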

Configure benchmark environment on fd.io CSIT testbench

We will be using Testbed22 (TG/SUT) for benchmarks. Details can be found here: https://wiki.fd.io/view/CSIT/fdio_csit_lab_ext_lld_draft

  • [Partially done] Get access to private management network (fd.io)
    • Create GPG key and get it signed
    • Request access through fd.io
    • Configure OpenVPN using provided "access package"


  • Configure BIOS on 'SUT' testbench to allow PCI-Passthrough
    • Enable Intel VT for Directed I/O (VT-d)
    • Using KVM (via IPMI interface)

Update CSIT host with fd.io recommended optimizations

  • Do host optimizations to match CSIT environment
    • Make changes persistent across reboot

Run-time (non-persistent)

$ for l in `ls /proc/irq`; do echo 1 | sudo tee /proc/irq/$l/smp_affinity; done
$ for i in `pgrep rcu[^c]` ; do sudo taskset -pc 0 $i ; done
$ echo 1 | sudo tee /sys/bus/workqueue/devices/writeback/cpumask
# Modifications do not persist through reboot

TBD: persistent changes:
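One option for the TBD persistent changes (an assumption, not the project's chosen approach) is a oneshot systemd unit that reapplies the tuning at boot:

```
# /etc/systemd/system/irq-affinity.service -- hypothetical unit and script names
[Unit]
Description=Reapply IRQ, RCU and writeback affinity tuning
After=multi-user.target

[Service]
Type=oneshot
# apply-affinity.sh would contain the three non-persistent commands above.
ExecStart=/usr/local/sbin/apply-affinity.sh

[Install]
WantedBy=multi-user.target
```

Enable with systemctl enable irq-affinity.service so it runs on every boot.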

[Part 2] Workspace volume is read-only on cnf dev1 system

Issue: file system not writable

  • Vagrant/VirtualBox not functioning because the disk is read-only, which caused OpenStack failures

The dmesg log shows device-mapper requeuing since May 10th. On May 29th, starting at 2:40:46 UTC, a connection failure caused a full failure, with multipath failures to the network storage.

[Tue May 29 02:40:46 2018] connection2:0: ping timeout of 3 secs expired, recv timeout 3, last rx 5317134756, last ping 5317135507, now 5317136321

This continued with more failures; as the kernel noticed the device was offline, the filesystem was remounted read-only at 02:41:19 UTC.

[Tue May 29 02:40:59 2018] sd 8:0:0:0: [sdh] tag#0 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK
[Tue May 29 02:41:18 2018] sd 7:0:0:0: rejecting I/O to offline device
[Tue May 29 02:41:19 2018] EXT4-fs (dm-1): Remounting filesystem read-only

The network interface to storage went down, and the bonded network interface then disabled the failed interface:

[Tue May 29 02:44:13 2018] mlx4_en: enp2s0d1: Link Down
[Tue May 29 02:44:13 2018] bond0: link status definitely down for interface enp2s0d1, disabling it

At 02:46:04 the network interface came back up, and the bond attempted to re-enable it. Unfortunately, the bonding failed:

[Tue May 29 02:46:04 2018] mlx4_en: enp2s0d1: Link Up
[Tue May 29 02:46:04 2018] bond0: link status up for interface enp2s0d1, enabling it in 200 ms
[Tue May 29 02:46:04 2018] mlx4_en: enp2s0d1: Fail to bond device
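When diagnosing this failure mode, a quick way to list filesystems the kernel has remounted read-only (a generic check, not taken from the original ticket):

```shell
# Print device and mount point for any filesystem mounted with the "ro" option.
# Field 4 of /proc/mounts is the comma-separated option list.
awk '$4 ~ /(^|,)ro(,|$)/ { print $1, $2 }' /proc/mounts
```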

Create vBNG virtual machine for KVM

Goal: vBNG VNF VM for KVM

Tasks:

  • Update vBNG build/runtime code to omit the need for vAAA and vDHCP
    • vAAA dependent code removed
    • vDHCP dependent code removed
  • DEFERRED: Investigate PCI pass-through (if large effort, create a new ticket)
    • SR-IOV or Passthrough Mode
    • DEFERRING. Will do another round of tests after this is figured out.
  • Update original vBNG install/build to work in the KVM/Vagrant environment
  • Bring up vBNG VM on VM runtime (e.g. libvirt + KVM)
  • Create smoke test VM for checking that packets pass through the BNG VM
  • DEFERRED: Save VM with vBNG software installed
    • If needed, update scripts/Vagrant config for using a saved VM
    • Document saving of vBNG VM (if anything is not generic)
    • Create minimal script for saving vBNG VM
  • Validate test requirements
    • Investigate need for VXLAN
    • Investigate VMs required for testing

Test and review using Digital Rebar to deploy Kubernetes clusters on Packet.net

Goal: Deploy a Kubernetes cluster to bare metal using a PXE boot install method on Packet.net

This should cover starting with no Packet nodes all the way to a running Kubernetes cluster which supports deploying apps via Helm charts.

A user should be able to start with nothing more than a Packet.net account.

The steps would look something like:

  1. User has a Packet API key
  2. User goes to a public github repo
  3. User clones repo
  4. User adds Packet API key to environment or some config file
  5. User runs script or copies and pastes commands to get a k8s cluster

Could start with SSH to a brand-new Packet instance, launched from the Packet dashboard, and run everything else from the command line.

Test and review using Digital Rebar to deploy OpenStack devstack on Packet.net

Goal: Deploy an OpenStack devstack to bare metal using a PXE boot install method on Packet.net

This should cover starting with no Packet nodes all the way to a running single-node devstack install of OpenStack.

A user should be able to start with nothing more than a Packet.net account.

The steps would look something like:

  1. User has a Packet API key
  2. User goes to a public github repo
  3. User clones repo
  4. User adds Packet API key to environment or some config file
  5. User runs script or copies and pastes commands to get a devstack install
  6. Could start with SSH to a brand-new Packet instance, launched from the Packet dashboard, and run everything else from the command line

Setup Pktgen to run directly on host (vs container)

  • Update Pktgen to run directly on host (NFVbench & TRex still run in containers)
    • Modify NFVbench configuration to use the correct PCI devices
    • Modify NFVbench configuration to use more cores (7)
    • Make sure enough hugepages are available on the host
$ echo 5120 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ echo 5120 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  • Once running, update trex-cfg in the container to set 5120 hugepages per socket instead of 512
    • File location in container: /opt/trex/<trex_version>/trex-cfg
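As a sanity check on the reservation above: 5120 two-megabyte pages per socket works out to 10 GiB per NUMA node, so a two-socket host needs at least 20 GiB set aside for hugepages:

```shell
# 5120 pages x 2048 kB per page, per NUMA node.
pages=5120
page_kb=2048
echo "$(( pages * page_kb / 1024 / 1024 )) GiB per NUMA node"
```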

Create performance test for vDNS CNF equivalent to VNF test

Goal: Testing Throughput, latency, resource util per unit of performance

Tasks:

  • Create vDNSgen container based on the VNF from #24
  • Document network requirements and steps to create the network for the Docker environment
  • Update host system as needed to support networking and test needs
    • Document any host system requirements
  • Update vDNS CNF as needed to support networking/testing needs => #25
  • Add support for specifying the rate for the DNS testing at runtime
  • Read packet counter via tap interface counter vs. hardware counter
  • Pin cores used by container

Investigate SUSE bare metal deployment of OpenStack (Crowbar) to see if the result is vanilla OpenStack

Request: Investigate SUSE bare metal deployment of OpenStack (Crowbar) to see if the result is vanilla OpenStack

Timebox: 2-3 hours for initial investigation

To determine: Will the crowbar path work to deploy OpenStack on Packet (MVP)?

  • if yes, estimate for time to deliver solution

  • if yes, documentation on how it works

  • if no, documentation on why it won't work and the next approach to try

Chosen solution should:

  • Use an open source license
  • Support deployments completely from the command line
  • Install vanilla OpenStack on bare metal
  • Not use Kubernetes to install OpenStack
  • Not use specialized Web UI deployments of OpenStack onto Packet.net

Investigate SR-IOV with Mellanox ConnectX-4 NICs

  • Configure SR-IOV on instance equipped with ConnectX-4
  • Build VPP with support for mlx5_pmd
  • Configure VPP with VFs

It is possible to add PCI addresses of VFs in VPP, and have the interfaces show up. However, when putting a VF interface in "up" state VPP hangs with no useful information logged (no debug information available).

More details on the steps needed and useful references are added below.

(Update) A working solution has been added using VPP v18.04

Create performance test for vDNS VNF

Goal: Testing Throughput, latency, resource util per unit of performance

Tasks:

  • Create vDNSgen VM with test software for vDNS VNF
  • Update Vagrant config as needed to support network requirements
  • Update host system as needed to support networking and test needs
    • Added Hugepages support to host system
    • updated system install script for prereqs
    • Document any host system requirements
  • Update vDNS VNF as needed to support networking/testing needs
    • Check performance impact of core isolation (isolcpus, rcu_nocbs, nohz_full) in guest system

[06-18-2018] vG failing with vpp.service error

New bug ticket - followup from #16

Fix vG VNF VM SSE4.2 run time error - vpp.service errors on vG: This binary requires CPU with SSE4.2 extensions.

VGW is failing to start with the below error, which leads me to believe we are running into errors with nested virtualization and/or VPP not being enabled in our OpenStack deployment.

root@zdcpe1cpe01gw01:~# journalctl -u vpp
-- Logs begin at Mon 2018-06-18 22:48:32 UTC, end at Mon 2018-06-18 23:03:05 UTC. --
Jun 18 22:49:30 zdcpe1cpe01gw01 systemd[1]: Starting vector packet processing engine...
Jun 18 22:49:33 zdcpe1cpe01gw01 systemd[1]: Started vector packet processing engine.
Jun 18 22:49:35 zdcpe1cpe01gw01 vpp[1115]: ERROR: This binary requires CPU with SSE4.2 extensions.
Jun 18 22:49:35 zdcpe1cpe01gw01 systemd[1]: vpp.service: Main process exited, code=exited, status=1/FAILURE
Jun 18 22:49:35 zdcpe1cpe01gw01 systemd[1]: vpp.service: Unit entered failed state.
Jun 18 22:49:35 zdcpe1cpe01gw01 systemd[1]: vpp.service: Failed with result 'exit-code'.
Jun 18 22:49:36 zdcpe1cpe01gw01 systemd[1]: vpp.service: Service hold-off time over, scheduling restart.
Jun 18 22:49:36 zdcpe1cpe01gw01 systemd[1]: Stopped vector packet processing engine.
Jun 18 22:49:37 zdcpe1cpe01gw01 systemd[1]: Starting vector packet processing engine...
Jun 18 22:49:38 zdcpe1cpe01gw01 systemd[1]: Started vector packet processing engine.
Jun 18 22:49:39 zdcpe1cpe01gw01 vpp[1163]: ERROR: This binary requires CPU with SSE4.2 extensions.
Jun 18 22:49:38 zdcpe1cpe01gw01 systemd[1]: vpp.service: Main process exited, code=exited, status=1/FAILURE
Jun 18 22:49:39 zdcpe1cpe01gw01 systemd[1]: vpp.service: Unit entered failed state.
Jun 18 22:49:39 zdcpe1cpe01gw01 systemd[1]: vpp.service: Failed with result 'exit-code'.
Jun 18 22:49:39 zdcpe1cpe01gw01 systemd[1]: vpp.service: Service hold-off time over, scheduling restart.
Jun 18 22:49:40 zdcpe1cpe01gw01 systemd[1]: Stopped vector packet processing engine.
Jun 18 22:49:40 zdcpe1cpe01gw01 systemd[1]: Starting vector packet processing engine...
Jun 18 22:49:42 zdcpe1cpe01gw01 systemd[1]: Started vector packet processing engine.

Create performance test for vBNG CNF equivalent to VNF test

Goal: Testing Throughput, latency, resource util per unit of performance for the vBNG container

Tasks:

  • Validate that the Packetgen container based on the VNF from #32 is usable for this test
  • Update host system as needed to support networking and test needs (see VXLAN items in #32)
    • Document any host system requirements
  • Update vBNG CNF as needed to support networking/testing needs => #33
  • Used

Add Terraform configuration for creating a Packet system for the box-by-box comparison

Allow user to easily create the host testing system using terraform apply

Tasks:

  • Add terraform configuration to create a Packet system for the box-by-box comparison
  • Install software pre-reqs by running the install scripts
  • Create example environment file.
  • Test deploying a new system
  • Verify docker works as expected on the new system. See #20
  • Verify vagrant works as expected on the new system. See #19
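A minimal sketch of what the Terraform configuration might contain, using the packet provider's packet_device resource (all values here are hypothetical; plan, facility, and project ID would need to match the actual comparison setup):

```
# Hypothetical values -- adjust to the real project before use.
resource "packet_device" "comparison_host" {
  hostname         = "cnf-comparison"
  plan             = "m2.xlarge.x86"
  facilities       = ["ewr1"]
  operating_system = "ubuntu_16_04"
  billing_cycle    = "hourly"
  project_id       = var.project_id
}
```

Running terraform apply with this, plus a provisioner that calls the install scripts, would cover the first two tasks.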

Summary of networking issues found with NF testing


Write-up on networking issues with NF testing, including but not limited to:

  • Network configuration requirements for Packet
  • Networking requirements for VPP/DPDK NFs, with the benefits of those specific requirements
  • Create Google doc with draft summary
  • Add issues in this ticket
