
vrnetlab's Introduction

vrnetlab - VR Network Lab

This is a fork of the original plajjan/vrnetlab project, created specifically to make vrnetlab-based images runnable by containerlab.

The documentation in this fork covers only the parts that differ from the upstream project. For a general overview of the vrnetlab project itself, consider reading the docs of the upstream repo.

What is this fork about?

At containerlab we needed a way to run virtual routers alongside containerized Network Operating Systems.

Vrnetlab provides excellent machinery for packaging the most common routing VMs into containers. What upstream vrnetlab doesn't do, though, is create the datapath between the VMs in a "container-native" way.
Upstream vrnetlab relies on a separate vr-xcon container to stitch together the sockets exposed by each container, and that doesn't play well with the regular ways of interconnecting container workloads.

This fork adds a new option to the launch.py script of the supported VMs called connection-mode. This option lets you choose how vrnetlab creates the datapath for the launched VMs.

By adding a few values that connection-mode can be set to, we made it possible to run vrnetlab containers with networking that doesn't require a separate container and is native to tools like Docker.

Container-native networking?

Yes, the term is bloated. What it actually means is that with the changes made in this fork, it is possible to add interfaces to the container that hosts a qemu VM, and vrnetlab will recognize those interfaces and stitch them to the VM's interfaces.

With this you can just add, say, veth pairs between the containers as you would normally do, and vrnetlab will make sure that these ports get mapped to your router's ports. In essence, that allows you to work with vrnetlab containers like with normal containers and get the datapath working in the same "native" way, as in the sketch below.
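For illustration only (containerlab does this wiring for you), two running vrnetlab containers could be connected with a veth pair roughly like this; the container names r1/r2 and the interface name eth1 are hypothetical:

# look up the PIDs that identify the containers' network namespaces
pid1=$(docker inspect -f '{{.State.Pid}}' r1)
pid2=$(docker inspect -f '{{.State.Pid}}' r2)

# create a veth pair and move one end into each container as eth1
ip link add veth-r1 type veth peer name veth-r2
ip link set veth-r1 netns $pid1 name eth1
ip link set veth-r2 netns $pid2 name eth1

# bring both ends up inside their namespaces
nsenter -t $pid1 -n ip link set eth1 up
nsenter -t $pid2 -n ip link set eth1 up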

Although the changes we made here are general purpose and you can run vrnetlab routers with the docker CLI or any other container runtime, the purpose of this work was to couple vrnetlab with containerlab.
With that said, we recommend readers start their journey from this documentation entry, which shows how easy it is to run routers in a containerized setting.

Connection modes

As mentioned above, the major change this fork brings is the ability to run vrnetlab containers without requiring vr-xcon, by using container-native networking instead.

The default option that containerlab uses for this setting is connection-mode=tc. In this mode we use tc-mirred redirects to stitch the container's eth1+ interfaces to the ports of the qemu VM running inside.

tc

Using tc redirection we get a transparent pipe between the container's interfaces and the VM's, as sketched below.

We went through many alternatives, which I described in this post, but tc redirection works best of them all.
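Roughly, the tc mode boils down to something like the following (interface names eth1 and tap1 are illustrative; the exact commands issued by launch.py may differ):

# ingress qdiscs on both the container interface and the VM's tap interface
tc qdisc add dev eth1 ingress
tc qdisc add dev tap1 ingress

# redirect everything arriving on eth1 to tap1, and vice versa
tc filter add dev eth1 parent ffff: protocol all matchall action mirred egress redirect dev tap1
tc filter add dev tap1 parent ffff: protocol all matchall action mirred egress redirect dev eth1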

Other connection-mode values are listed below (an example of selecting a mode follows the list):

  • bridge - creates a linux bridge and attaches eth and tap interfaces to it. Can't pass LACP traffic.
  • ovs-bridge - same as a regular bridge, but uses OvS. Can pass LACP traffic.
  • macvtap
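For instance, assuming the image's entrypoint is the launch.py script (as in upstream vrnetlab) so that trailing docker run arguments are passed to it, a mode could be selected like this; containerlab sets this for you, and the exact argument plumbing shown here is an assumption:

docker run -d --privileged --name my-sros-router vrnetlab/vr-sros:20.5.R2 --connection-mode tc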

Which vrnetlab routers are supported?

Since the changes we made in this fork are VM-specific, we added support for a few popular routing products:

  • Arista vEOS
  • Cisco XRv9k
  • Cisco XRv
  • Cisco FTDv
  • Juniper vMX
  • Juniper vSRX
  • Juniper vJunos-switch
  • Juniper vJunos-router
  • Juniper vJunosEvolved
  • Nokia SR OS
  • OpenBSD
  • FreeBSD
  • Ubuntu

The rest are left untouched and can be contributed back by the community.

Does the build process change?

No. You build the images exactly as before.
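For reference, the flow is the same as upstream; a sketch assuming the hellt/vrnetlab fork location and a hypothetical SR OS image file name:

git clone https://github.com/hellt/vrnetlab && cd vrnetlab/sros
cp ~/images/sros-vm-20.5.R2.qcow2 .   # your qcow2 file; the name here is hypothetical
make
docker images | grep vrnetlab/vr-sros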

vrnetlab's People

Contributors

akielaries axxyhtrx carlmontanari crankynetman dpnetca emjemj exhar fredsod grahamneville gusman gusman12 hellt hendriksthomas jbemmel jcpvdm jgcumming kaelemc mirceaulinic mzagozen nlgotz noifp plajjan rfc2516 robotwalk sdktr sonicepk ssasso stimmerman tiago-amado vista-


vrnetlab's Issues

add boot_delay metadata

Starting all the routers at the same time might lead to a system halt.
The proposal is to implement a boot-delay metadata parameter that delays the start of the VMs to spread the load; a possible invocation is sketched below.
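A possible shape for this, assuming the delay is exposed as a BOOT_DELAY environment variable (the variable name is illustrative of the proposal, not a documented interface):

# start the second router 30 seconds later to spread the host load
docker run -d --privileged --name r1 vrnetlab/vr-sros:20.5.R2
docker run -d --privileged --name r2 -e BOOT_DELAY=30 vrnetlab/vr-sros:20.5.R2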

Issue with VQFX build/boot

Dear @hellt,

We've been trying to follow the guide to build vQFX, but sadly it didn't work. The build process seems to complete OK:

vrnetlab/vqfx$ sudo make
ls: cannot access '*-re-*.vmdk': No such file or directory
Makefile:23: warning: overriding recipe for target 'docker-pre-build'
../makefile.include:18: warning: ignoring old recipe for target 'docker-pre-build'
Makefile:30: warning: overriding recipe for target 'docker-build-common'
../makefile.include:24: warning: ignoring old recipe for target 'docker-build-common'
for IMAGE in vqfx-20.2R1.10-re-qemu.qcow2; do \
        echo "Making $IMAGE"; \
        make IMAGE=$IMAGE docker-build; \
done
Making vqfx-20.2R1.10-re-qemu.qcow2
make[1]: Entering directory '/mnt/pv0/vrnetlab/vqfx'
ls: cannot access '*-re-*.vmdk': No such file or directory
Makefile:23: warning: overriding recipe for target 'docker-pre-build'
../makefile.include:18: warning: ignoring old recipe for target 'docker-pre-build'
Makefile:30: warning: overriding recipe for target 'docker-build-common'
../makefile.include:24: warning: ignoring old recipe for target 'docker-build-common'
rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso
echo "pfe     vqfx-20.2R1-2019010209-pfe-qemu.qcow"
pfe     vqfx-20.2R1-2019010209-pfe-qemu.qcow
cp vqfx*-pfe*.qcow* docker/
echo "image   vqfx-20.2R1.10-re-qemu.qcow2"
image   vqfx-20.2R1.10-re-qemu.qcow2
echo "version 20.2R1.10"
version 20.2R1.10
Building docker image using vqfx-20.2R1.10-re-qemu.qcow2 as vrnetlab/vr-vqfx:20.2R1.10
cp ../common/* docker/
make IMAGE=$IMAGE docker-build-image-copy
make[2]: Entering directory '/mnt/pv0/vrnetlab/vqfx'
ls: cannot access '*-re-*.vmdk': No such file or directory
Makefile:23: warning: overriding recipe for target 'docker-pre-build'
../makefile.include:18: warning: ignoring old recipe for target 'docker-pre-build'
Makefile:30: warning: overriding recipe for target 'docker-build-common'
../makefile.include:24: warning: ignoring old recipe for target 'docker-build-common'
cp vqfx-20.2R1.10-re-qemu.qcow2* docker/
make[2]: Leaving directory '/mnt/pv0/vrnetlab/vqfx'
(cd docker; docker build --build-arg http_proxy= --build-arg https_proxy= --build-arg RE_IMAGE=vqfx-20.2R1.10-re-qemu.qcow2 --build-arg PFE_IMAGE=vqfx-20.2R1-2019010209-pfe-qemu.qcow -t vrnetlab/vr-vqfx:20.2R1.10 .)
[+] Building 23.6s (12/12) FINISHED                                                                                                                                                                                         docker:default
 => [internal] load build definition from Dockerfile                                                                                                                                                                                  0.0s
 => => transferring dockerfile: 531B                                                                                                                                                                                                  0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                     0.0s
 => => transferring context: 2B                                                                                                                                                                                                       0.0s
 => [internal] load metadata for docker.io/library/ubuntu:20.04                                                                                                                                                                       1.6s
 => [internal] load build context                                                                                                                                                                                                    22.0s
 => => transferring context: 1.44GB                                                                                                                                                                                                  22.0s
 => [1/7] FROM docker.io/library/ubuntu:20.04@sha256:f5c3e53367f142fab0b49908550bdcdc4fb619d2f61ec1dfa60d26e0d59ac9e7                                                                                                                 0.0s
 => CACHED [2/7] RUN apt-get update -qy    && apt-get upgrade -qy    && apt-get install -y    bridge-utils    iproute2    python3-ipy    socat    qemu-kvm    procps    tcpdump    && rm -rf /var/lib/apt/lists/*                     0.0s
 => CACHED [3/7] COPY vqfx-20.2R1.10-re-qemu.qcow2 /                                                                                                                                                                                  0.0s
 => CACHED [4/7] COPY vqfx-20.2R1-2019010209-pfe-qemu.qcow /                                                                                                                                                                          0.0s
 => CACHED [5/7] COPY healthcheck.py /                                                                                                                                                                                                0.0s
 => CACHED [6/7] COPY vrnetlab.py /                                                                                                                                                                                                   0.0s
 => CACHED [7/7] COPY launch.py /                                                                                                                                                                                                     0.0s
 => exporting to image                                                                                                                                                                                                                0.0s
 => => exporting layers                                                                                                                                                                                                               0.0s
 => => writing image sha256:25d809f4a01d43ce0388803aec9fc4eb34fe47203aaaa084945099bd8490965d                                                                                                                                          0.0s
 => => naming to docker.io/vrnetlab/vr-vqfx:20.2R1.10                                                                                                                                                                                 0.0s
make[1]: Leaving directory '/mnt/pv0/vrnetlab/vqfx'

And we see the image created:

$ sudo docker image ls
REPOSITORY                        TAG              IMAGE ID       CREATED          SIZE
vrnetlab/vr-vqfx                  20.2R1.10        25d809f4a01d   30 minutes ago   1.85GB

However, when we reference it in the containerlab topology file:

topology:
  nodes:
    leaf01:
      kind: juniper_vqfx
      image: vrnetlab/vr-vqfx:20.2R1.10
      startup-config: startup_configs/leaf01.txt

And launch the topology, we see the following log:

$ sudo docker container logs -f lab-test-leaf01
[sudo] password for hubadmin: 
2023-12-11 21:12:02,185: vrnetlab   DEBUG    Creating overlay disk image
2023-12-11 21:12:02,209: vrnetlab   DEBUG    Creating overlay disk image
2023-12-11 21:12:02,702: vrnetlab   DEBUG    Starting vrnetlab VQFX
2023-12-11 21:12:02,703: vrnetlab   DEBUG    VMs: [<__main__.VQFX_vcp object at 0x7f3ea2ca1ca0>, <__main__.VQFX_vpfe object at 0x7f3ea2ca1c70>]
2023-12-11 21:12:02,713: vrnetlab   DEBUG    VM not started; starting!
2023-12-11 21:12:02,713: vrnetlab   INFO     Starting VQFX_vcp
2023-12-11 21:12:02,713: vrnetlab   DEBUG    number of provisioned data plane interfaces is 4
2023-12-11 21:12:02,713: vrnetlab   DEBUG    waiting for provisioned interfaces to appear...
2023-12-11 21:12:07,717: vrnetlab   DEBUG    highest allocated interface id determined to be: 4...
2023-12-11 21:12:07,717: vrnetlab   DEBUG    interfaces provisioned, continuing...
2023-12-11 21:12:07,719: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 2048 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=vqfx-20.2R1.10-re-qemu-overlay.qcow2 -device pci-bridge,chassis_nr=1,id=pci.1 -device e1000,netdev=p00,mac=0C:00:4d:cc:80:00 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443 -device e1000,netdev=vcp-int,mac=0C:00:6d:7c:97:01 -netdev tap,ifname=vcp-int,id=vcp-int,script=no,downscript=no -device e1000,netdev=dummy0,mac=0C:00:a2:f7:c1:01 -netdev tap,ifname=dummy0,id=dummy0,script=no,downscript=no -device e1000,netdev=p01,mac=0C:00:a3:bb:e9:01,bus=pci.1,addr=0x2 -netdev tap,id=p01,ifname=tap1,script=/etc/tc-tap-ifup,downscript=no -device e1000,netdev=p02,mac=0C:00:8d:4f:38:02,bus=pci.1,addr=0x3 -netdev tap,id=p02,ifname=tap2,script=/etc/tc-tap-ifup,downscript=no -device e1000,netdev=p03,mac=0C:00:3d:f5:15:03,bus=pci.1,addr=0x4 -netdev tap,id=p03,ifname=tap3,script=/etc/tc-tap-ifup,downscript=no -device e1000,netdev=p04,mac=0C:00:d7:58:04:04,bus=pci.1,addr=0x5 -netdev tap,id=p04,ifname=tap4,script=/etc/tc-tap-ifup,downscript=no
2023-12-11 21:12:09,728: vrnetlab   INFO     Unable to connect to qemu monitor (port 5000), retrying in a second (attempt 1)
2023-12-11 21:12:10,730: vrnetlab   INFO     Unable to connect to qemu monitor (port 5000), retrying in a second (attempt 2)
2023-12-11 21:12:15,740: launch     TRACE    OUTPUT VCP: Loading /boot/loader
Consoles: serial port  
BIOS drive A: is disk0
BIOS drive C: is disk1
BIOS 639kB/2096000kB available memory

FreeBSD/i386 bootstrap loader, Revision 1.2
(builder@qnc-jre-emake1t, Thu Dec 19 03:50:25  2019)
Can't open /boot/init.4th.
Loading /boot/defaults/loader.conf 
/kernel text=0xcf62b8 -
2023-12-11 21:12:15,740: vrnetlab   DEBUG    VM not started; starting!
2023-12-11 21:12:15,741: vrnetlab   INFO     Starting VQFX_vpfe
2023-12-11 21:12:15,741: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4001,server,nowait -m 2048 -serial telnet:0.0.0.0:5001,server,nowait -drive if=ide,file=vqfx-20.2R1-2019010209-pfe-qemu-overlay.qcow -device e1000,netdev=mgmt,mac=0C:00:e1:26:04:00 -netdev user,id=mgmt,net=10.0.0.0/24 -device e1000,netdev=vpfe-int,mac=0C:00:f7:4f:42:00 -netdev tap,ifname=vpfe-int,id=vpfe-int,script=no,downscript=no
2023-12-11 21:12:22,756: launch     TRACE    OUTPUT VCP: data=0x879f4+0x11a155c syms=[0x4+0xc44b0+0x4+0x130837]
/boot/modules/virtio.ko text=0x20cc data=0x204 syms=[0x4+0x7a0+0x4+0x900]
/boot/modules/virtio_pci.ko text=0x2d8c data=0x1fc+0x8 syms=[0x4+0x8a0+0x4+0xaa3]
/boot/modules/virtio_blk.ko text=0x28ac data=0x1ec+0xc syms=[0x4+0x890+0x4+0x906]
/boot/modules/if_vtnet.ko text=0x604c data=0x354+0x10 syms=[0x4+0xcf0+0x4+0xde5]
/boot/modules/virtio_console.ko text=0x35a0 data=0x188+0xc syms=[0x4+0x8c0+0x4+
2023-12-11 21:12:25,761: launch     TRACE    OUTPUT VCP: 0x955]


Hit [Enter] to boot immediately, or space bar for command prompt.
Booting [/kernel]...               
/
Simulating VIRTUAL ELIT!!

vQFX_serial_number: VM5F3D5FF6E7
Serial Number: VM5F3D5FF6E7
Product Name: vQFX-TVP PC (i440FX + PIIX, 1996)
Version: 1.0i440fx-4.2
Board Version: 1GDB: debug ports: sio
GDB: current port: sio
KDB: debugger backends: ddb gdb kdm
KDB: current backend: ddb

2023-12-11 21:12:31,768: launch     TRACE    OUTPUT VCP: Copyright (c) 1996-2019, Juniper Networks, Inc.
All rights reserved.
Copyright (c) 1992-2007 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
JUNOS 19.4R1.10 #0: 2019-12-19 03:54:05 UTC
    builder@qnc-jre-emake1t:/volume/build/junos/19.4/release/19.4R1.10/obj/i386/junos/bsd/kernels/JUNIPER-QFX/kernel

2023-12-11 21:12:34,773: launch     TRACE    OUTPUT VCP: can't re-use a leaf (fe_storm_timeout)!
can't re-use a leaf (alt_break_to_debugger)!
can't re-use a leaf (break_to_debugger)!
can't re-use a leaf (nssu_upgraded)!
acpi_alloc_wakeup_handler: can't alloc wake memory
ACPI APIC Table: <BOCHS  BXPCAPIC>
Timecounter "i8254" frequency 1193182 Hz quality 0

2023-12-11 21:12:37,777: launch     TRACE    OUTPUT VCP: CPU: QEMU Virtual CPU version 2.5+ (2041.99-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0x663  Stepping = 3
  Features=0x783fbfd<FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2>
  Features2=0x80202001<SSE3,CX16,x2APIC,<b31>>
  AMD Features=0x20100800<SYSCALL,NX,LM>
  AMD Features2=0x1<LAHF>
real memory  = 2147483648 (2048 MB)
avail memory = 1936330752 (1846 MB)
Security policy loaded: Junos MAC/veriexec (mac_veriexec)
MAC/veriexec fingerprint module loaded: SHA256
MAC/veriexec fingerprint module loaded: SHA1
ioapic0 <Version 1.1> irqs 0-23 on motherboard
netisr_init: forcing maxthreads from 4 to 1
ETHERNET SOCKET BRIDGE initialising
random: <Software, Yarrow> initialized
fpga driver loaded, 0 (null)
FXPCI warning: hw.pci.enable_static_config is not set.  
Creating PCI Scan thread
Initializing DCF  platform properties ..
Calling dcf_prds_hw_init for platform hw vecs initialization
acpi0: <BOCHS BXPCRSDT> on motherboard
acpi0: Power Button (fixed)
Timecounter "ACPI-safe" frequency 3579545 Hz quality 1000
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x608-0x60b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
Correcting Natoma config for non-SMP
isab0: <PCI-ISA bridge> at device 1.0 on pci0
isa0: <ISA bus> on isab0
atapci0: <Intel PIIX3 WDMA2 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xd0c0-0xd0cf at device 1.1 on pci0
ata0: <ATA channel 0> on atapci0
ata1: <ATA channel 1> on atapci0
smb0: <Intel 82371AB SMB controller> irq 9 at device 1.3 on pci0
pci0: <display, VGA> at device 2.0 (no driver attached)
pcib1: <ACPI PCI-PCI bridge> mem 0xfebf1000-0xfebf10ff irq 11 at device 3.0 on pci0
pci1: <ACPI PCI bus> on pcib1
pci1: <network, ethernet> at device 2.0 (no driver attached)
pci1: <network, ethernet> at device 3.0 (no driver attached)
pci1: <network, ethernet> at device 4.0 (no driver attached)
pci1: <network, ethernet> at device 5.0 (no driver attached)
pci0: <network, ethernet> at device 4.0 (no driver attached)
pci0: <network, ethernet> at device 5.0 (no driver attached)
pci0: <network, ethernet> at device 6.0 (no driver attached)
cpu0: <ACPI CPU> on acpi0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
sio0: <16550A-compatible COM port> port 0x3f8-0x3ff irq 4 flags 0x90 on acpi0
sio0: type 16550A, console

2023-12-11 21:12:40,780: launch     TRACE    OUTPUT VCP: orm0: <ISA Option ROM> at iomem 0xe9800-0xeffff on isa0
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
sio1: configured irq 5 not in bitmap of probed irqs 0
sio1: port may not be enabled
sio2: configured irq 3 not in bitmap of probed irqs 0
sio2: port may not be enabled
sio3: configured irq 7 not in bitmap of probed irqs 0
sio3: port may not be enabled
Initializing product: 175 ..
prds_tok_init: Token init was successfly done
Timecounter "TSC" frequency 2041992001 Hz quality 800
Loading the NETPFE fc module
Registering tcp_platform_dependent = tcp_handle_special_ports
ad0: 4095MB <QEMU HARDDISK 2.5+> at ata0-master WDMA2 
random: unblocking device.
Trying to mount root from ufs:/dev/ad0s1a
Kernel thread "kdm_kdb" (pid 51) exited prematurely.
Kernel thread "wkupdaemon" (pid 53) exited prematurely.

2023-12-11 21:12:43,785: launch     TRACE    OUTPUT VCP: Attaching /packages/jbase via /dev/mdctl...
Mounted jbase package on /dev/md0...


2023-12-11 21:12:46,787: launch     TRACE    OUTPUT VCP: Mounted jkernel package on /dev/md1...
Mounted jpfe package on /dev/md2...
Mounted jdocs package on /dev/md3...
Mounted jroute package on /dev/md4...
Executing /packages/mnt/jroute-qfx-x86-32-19.4R1.10/mount.post..
ln: /var/chroot/rest-api: Read-only file system
Mounted jcrypto package on /dev/md5...
Mounted jsd package on /dev/md6...
Mounted jsdn-i386 package on /dev/md7...

2023-12-11 21:12:49,789: launch     TRACE    OUTPUT VCP: Mounted jswitch package on /dev/md8...
Mounted jweb package on /dev/md9...
Executing /packages/mnt/jweb-qfx-19.4R1.10/mount.post..
mkdir: /var/jail/etc: Read-only file system
mkdir: /var/jail/run: Read-only file system
mkdir: /var/jail/tmp: Read-only file system
mkdir: /var/jail/sess: Read-only file system
mkdir: /var/jail/jweb-app: Read-only file system
chown: /var/jail/etc: No such file or directory
chown: /var/jail/run: No such file or directory
chown: /var/jail/sess: No such file or directory
mount_nullfs: /var/jail/etc: No such file or directory
Failed to mount null file system for HTTPD on /packages/mnt/jweb-qfx-19.4R1.10/jail/var/etc
Mounted py-base-i386 package on /dev/md10...
Mounted py-base2-i386 package on /dev/md11...
Mounted py-extensions-i386 package on /dev/md12...
Mounted py-extensions2-i386 package on /dev/md13...
swapon: adding /dev/ad0s1b as swap device
Automatic reboot in progress...
** Last Mounted on /
** Root file system
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
6631 files, 303254 used, 934445 free (25 frags, 233605 blocks, 0.0% fragmentation)
** Last Mounted on /config
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
27 files, 33 used, 309046 free (6 frags, 77260 blocks, 0.0% fragmentation)
sysctl: unknown oid 'dev.ad.1'
Marker feature is not supportedMarker feature is not supported
2023-12-11 21:12:55,797: launch     TRACE    OUTPUT VCP: Creating initial configuration...
2023-12-11 21:13:04,809: launch     TRACE    OUTPUT VCP: mgd: commit complete
Setting initial options:  debugger_on_panic=NO debugger_on_break=YES.
Starting optional daemons:  kdmmachdep.do_minidump: 1 -> 1
kernel dumps on /dev/ad0s1b
Check for coredumps
savecore: Router rebooting after a normal shutdown...
savecore: Router rebooting after a normal shutdown...
savecore: no dumps found
.
Doing initial network setup:.
Initial interface configuration:

2023-12-11 21:13:07,813: launch     TRACE    OUTPUT VCP: additional daemons:.
Enhanced arp scale is disabled
Additional routing options:kern.module_path: /boot//kernel;/boot/modules -> /boot/modules;/modules/peertype;/modules/ifpfe_drv;/modules/ifpfe_media;/modules/platform;/modules;
 re_fpga kld
re-fpga module loadednot a LPC device id
kld netpfe media: ifpfem_otnkld netpfe drv: ifpfed_eia530 ifpfed_ep ifpfed_irb ifpfed_lt ifpfed_ppeer ifpfed_svcs ifpfed_vtkld platform: bcm bcmxxx dc_ifpfeLoading the DC Platform NETPFE module
 fdcsio1: configured irq 5 not in bitmap of probed irqs 0
sio1: port may not be enabled
sio2: configured irq 3 not in bitmap of probed irqs 0
sio2: port may not be enabled
sio3: configured irq 7 not in bitmap of probed irqs 0
sio3: port may not be enabled
fdc1: <floppy drive controller (FDE)> port 0x3f2-0x3f5,0x3f7 irq 6 drq 2 on acpi0
fdc1: does not respond
device_attach: fdc1 attach returned 6
 if_bge
2023-12-11 21:13:10,817: launch     TRACE    OUTPUT VCP:  if_emem0: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xd000-0xd03f mem 0xfeb80000-0xfeb9ffff irq 11 at device 4.0 on pci0
em0: Memory Access and/or Bus Master bits were not set!
em1: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xd040-0xd07f mem 0xfeba0000-0xfebbffff irq 10 at device 5.0 on pci0
em1: Memory Access and/or Bus Master bits were not set!
em2: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xd080-0xd0bf mem 0xfebc0000-0xfebdffff irq 10 at device 6.0 on pci0
em2: Memory Access and/or Bus Master bits were not set!
em3: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xc000-0xc03f mem 0xfe800000-0xfe81ffff irq 10 at device 2.0 on pci1
em3: Memory Access and/or Bus Master bits were not set!
em4: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xc040-0xc07f mem 0xfe820000-0xfe83ffff irq 10 at device 3.0 on pci1
em4: Memory Access and/or Bus Master bits were not set!
em5: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xc080-0xc0bf mem 0xfe840000-0xfe85ffff irq 11 at device 4.0 on pci1
em5: Memory Access and/or Bus Master bits were not set!
em6: <Intel(R) PRO/1000 Network Connection Version - 3.2.18> port 0xc0c0-0xc0ff mem 0xfe860000-0xfe87ffff irq 11 at device 5.0 on pci1
em6: Memory Access and/or Bus Master bits were not set!
em6: ERROR Invalid mac address, using default mac.
em6: bus=1, device=5, func=0, Ethernet address 0c:00:d7:58:04:04

2023-12-11 21:13:13,820: launch     TRACE    OUTPUT VCP: em5: ERROR Invalid mac address, using default mac.
em5: bus=1, device=4, func=0, Ethernet address 0c:00:3d:f5:15:03
em4: ERROR Invalid mac address, using default mac.
em4: bus=1, device=3, func=0, Ethernet address 0c:00:8d:4f:38:02
em3: ERROR Invalid mac address, using default mac.
em3: bus=1, device=2, func=0, Ethernet address 0c:00:a3:bb:e9:01
em2: ERROR Invalid mac address, using default mac.
em2: bus=0, device=6, func=0, Ethernet address 0c:00:a2:f7:c1:01
em1: ERROR Invalid mac address, using default mac.
em1: bus=0, device=5, func=0, Ethernet address 0c:00:6d:7c:97:01
em0: ERROR Invalid mac address, using default mac.
em0: bus=0, device=4, func=0, Ethernet address 0c:00:4d:cc:80:00
 if_vcp ixgbekld peertype: peertype_fxpc peertype_hcm peertype_sfi peertype_slavere grat_arp_on_ifup=YES: net.link.ether.inet.grat_arp_on_ifup: 1 -> 1
 ipsec kldcryptosoft0: <software crypto> on motherboard
 kats kldIPsec: Initialized Security Association Processing.
.
Doing additional network setup:.
Starting final network daemons:.
 chassis.ko loaded Loading JUNOS chassis module
chassis_init_hw_chassis_startup_time: chassis startup time 0.000000
machdep.bootsuccess: 0 -> 0

2023-12-11 21:13:16,824: launch     TRACE    OUTPUT VCP: hw.dcf.auto_upgrade_enabled: 0 -> 1
Configuring IP (169.254.0.2) and mtu (9500) for em1.0
setting ldconfig path: /usr/lib /opt/lib
starting standard daemons: cron.
Initial rc.i386 initialization: microcode kld.

 Lock Manager
RDM Embedded 7 [04-Aug-2006] http://www.birdstep.com
Copyright (c) 1992-2006 Birdstep Technology, Inc.  All Rights Reserved.

Unix Domain sockets Lock manager
Lock manager 'lockmgr' started successfully.

Database Initialization Utility
RDM Embedded 7 [04-Aug-2006] http://www.birdstep.com
Copyright (c) 1992-2006 Birdstep Technology, Inc.  All Rights Reserved.

Profile database initialized
Local package initialization: kdmmachdep.do_minidump: 1 -> 1
kernel dumps on /dev/ad0s1b
Check for coredumps
savecore: Router rebooting after a normal shutdown...
savecore: Router rebooting after a normal shutdown...
savecore: no dumps found
.
starting local daemons:set cores for group access
.
Mon Dec 11 21:13:14 UTC 2023

2023-12-11 21:13:36,850: launch     INFO     matched login prompt
2023-12-11 21:13:36,850: launch     DEBUG    writing to serial console: root
2023-12-11 21:13:39,854: launch     TRACE    OUTPUT VCP:  root

2023-12-11 21:13:45,862: launch     TRACE    OUTPUT VCP: Password:
2023-12-11 21:15:45,000: launch     INFO     matched login prompt
2023-12-11 21:15:45,000: launch     DEBUG    writing to serial console: root
2023-12-11 21:15:48,002: launch     TRACE    OUTPUT VCP:  root
Password:
2023-12-11 21:17:45,073: launch     INFO     matched login prompt
2023-12-11 21:17:45,074: launch     DEBUG    writing to serial console: root
2023-12-11 21:17:48,078: launch     TRACE    OUTPUT VCP:  root
Password:

So it looks like launch.py cannot connect to the VCP VM. Could you please advise?

P.S. We've also noticed that the vrnetlab.py file doesn't appear in the docker directory during the build, so it looks like the file isn't copied into the image.

P.P.S. We've tried to build other images, such as Juniper vMX and Cisco XRv9K, and both of them worked well.

Kind regards,
Berkut Cloud

Bootup process hangs on waiting for '>'

Hello,

Super confused about this problem. For some reason, any version of CSR1000v I've tried on this specific host just hangs forever at the waiting for '>' stage:

2023-07-15 06:27:16,024: vrnetlab   DEBUG    Starting vrnetlab CSR
2023-07-15 06:27:16,024: vrnetlab   DEBUG    VMs: [<__main__.CSR_vm object at 0x7fed33753d30>]
2023-07-15 06:27:16,027: vrnetlab   DEBUG    VM not started; starting!
2023-07-15 06:27:16,027: vrnetlab   INFO     Starting CSR_vm
2023-07-15 06:27:16,027: vrnetlab   DEBUG    number of provisioned data plane interfaces is 0
2023-07-15 06:27:16,027: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 4096 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/csr1000v-universalk9.17.03.02-serial-overlay.qcow2 -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=0C:00:e5:00:0e:00 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443
2023-07-15 06:27:22,036: launch     TRACE    OUTPUT: 
  Booting `CSR1000v - packages.conf'


BOOT CMD: /packages.conf rw root=/dev/ram max_loop=64 HARDWARE=virtual quiet
console= SR_BOOT=bootflash:packages.conf
Calculating SHA-1 hash...done
SHA-1 hash:
        calculated   ac8b9d82:75cfab7a:706f71ff:13c393cd:47f4877b
        expected     ac8b9d82:75cfab7a:706f71ff:13c393cd:47f4877b
package header rev 3 structure detected
IOSXE version 17.3.02 detected
Calculating SHA-1 hash...
2023-07-15 06:27:28,045: launch     TRACE    OUTPUT: done
SHA-1 hash:
        calculated   74564399:fe87a753:c692f47a:650837d7:49a219b5
        expected     74564399:fe87a753:c692f47a:650837d7:49a219b5
Package type:0x7531, flags:0x0
linux image, size=0x682dc8

2023-07-15 06:27:30,048: launch     TRACE    OUTPUT: linux isord, size=0x278dfbc


2023-07-15 06:27:32,051: launch     TRACE    OUTPUT: %IOSXEBOOT-4-PART_VERIFY: (local/local): Verifying partition table for device /dev/bootflash...
%IOSXEBOOT-4-PART_VERIFY: (local/local): Selected MBR v2 partition layout.

2023-07-15 06:27:52,080: launch     TRACE    OUTPUT: 
*Jul 15 06:27:50.811: %IOSXEBOOT-4-BOOT_SRC: (rp/0): Checking for grub upgrade

*Jul 15 06:27:51.024: %IOSXEBOOT-4-BOOT_SRC: (rp/0): Checking grub versions 2.2 vs 2.2

*Jul 15 06:27:51.026: %IOSXEBOOT-4-BOOT_SRC: (rp/0): Bootloader upgrade not necessary.

2023-07-15 06:28:06,100: launch     TRACE    OUTPUT: Jul 15 06:28:05.980: %BOOT-5-OPMODE_LOG: R0/0: binos: System booted in AUTONOMOUS mode

2023-07-15 06:28:40,151: launch     TRACE    OUTPUT: 
              Restricted Rights Legend

Use, duplication, or disclosure by the Government is
subject to restrictions as set forth in subparagraph
(c) of the Commercial Computer Software - Restricted
Rights clause at FAR sec. 52.227-19 and subparagraph
(c) (1) (ii) of the Rights in Technical Data and Computer
Software clause at DFARS sec. 252.227-7013.

           Cisco Systems, Inc.
           170 West Tasman Drive
           San Jose, California 95134-1706



Cisco IOS Software [Amsterdam], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 17.3.2, RELEASE SOFTWARE (fc3)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2020 by Cisco Systems, Inc.
Compiled Sat 31-Oct-20 13:16 by mcpre


This software version supports only Smart Licensing as the software licensing mechanism.


PLEASE READ THE FOLLOWING TERMS CAREFULLY. INSTALLING THE LICENSE OR
LICENSE KEY PROVIDED FOR ANY CISCO SOFTWARE PRODUCT, PRODUCT FEATURE,
AND/OR SUBSEQUENTLY PROVIDED SOFTWARE FEATURES (COLLECTIVELY, THE
"SOFTWARE"), AND/OR USING SUCH SOFTWARE CONSTITUTES YOUR FULL
ACCEPTANCE OF THE FOLLOWING TERMS. YOU MUST NOT PROCEED FURTHER IF YOU
ARE NOT WILLING TO BE BOUND BY ALL THE TERMS SET FORTH HEREIN.

Your use of the Software is subject to the Cisco End User License Agreement
(EULA) and any relevant supplemental terms (SEULA) found at
http://www.cisco.com/c/en/us/about/legal/cloud-and-software/software-terms.html.

You hereby acknowledge and agree that certain Software and/or features are
licensed for a particular term, that the license to such Software and/or
features is valid only for the applicable term and that such Software and/or
features may be shut down or otherwise terminated by Cisco after expiration
of the applicable license term (e.g., 90-day trial period). Cisco reserves
the right to terminate any such Software feature electronically or by any
other means available. While Cisco may provide alerts, it is your sole
responsibility to monitor your usage of any such term Software feature to
ensure that your systems and networks are prepared for a shutdown of the
Software feature.



2023-07-15 06:28:44,157: launch     TRACE    OUTPUT: 
All TCP AO KDF Tests Pass
cisco CSR1000V (VXE) processor (revision VXE) with 2072007K/3075K bytes of memory.
2023-07-15 06:28:46,160: launch     TRACE    OUTPUT: 
Processor board ID 9A6QOVZ9BCQ
Router operating mode: Autonomous
1 Gigabit Ethernet interface
32768K bytes of non-volatile configuration memory.
3978420K bytes of physical memory.
6188032K bytes of virtual hard disk at bootflash:.

2023-07-15 06:28:51,925: launch     DEBUG    matched, Press RETURN to get started.
2023-07-15 06:28:51,925: launch     INFO     applying bootstrap configuration
2023-07-15 06:28:51,925: vrnetlab   TRACE    waiting for '>' on serial console

No errors during make docker-image, and I can successfully build and run images using the same qcow2 files on another host. I can also build and run, for example, n9kv on this host.

Not even sure where to look for clues, would appreciate any ideas. Thanks!

TFTP configuration for SROS License

Hi,

I was testing a simple VSR deployment and couldn't manage to retrieve the license.
docker run -d --privileged --name my-sros-router vrnetlab/vr-sros:20.5.R2

I identified the root cause in the following configuration:

root@41529a4f35f9:/# cat /etc/default/tftpd-hpa
/etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

The tftp server's directory points to /srv/tftp, while the license is stored under /tftpboot.
Did I miss anything? Or should the tftp server's directory point to /tftpboot? (A possible workaround is sketched after the bof output below.)

A:vSIM# show bof
===============================================================================
BOF (Memory)
===============================================================================
    primary-image    cf3:\timos\
    primary-config   tftp://172.31.255.29/config.txt
    license-file     tftp://172.31.255.29/license.txt
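A possible workaround sketch, untested here and assuming tftpd-hpa is the daemon serving the license (as the configuration above suggests): either copy the license to where the daemon already looks, or re-point the daemon at /tftpboot and restart it.

# copy the license file into the directory tftpd-hpa serves
docker exec my-sros-router cp /tftpboot/license.txt /srv/tftp/

# ...or re-point tftpd-hpa at /tftpboot and restart it (assumes the init script is present)
docker exec my-sros-router bash -c \
  'sed -i "s|^TFTP_DIRECTORY=.*|TFTP_DIRECTORY=\"/tftpboot\"|" /etc/default/tftpd-hpa && service tftpd-hpa restart'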

Kind regards,
Bastien

vMX make fails under Ubuntu 21.10

When trying to build the vMX docker image, the script always fails at step 4.
The command '/bin/sh -c apt-get update -qy && apt-get upgrade -qy && apt-get install -y bridge-utils iproute2 python3-ipy socat qemu-kvm tcpdump procps openvswitch-switch && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

The commands mentioned in step 4 can all be executed successfully separately (after prepending sudo) and the mentioned domain names are all resolvable and reachable.

I am using a clean install of Ubuntu 21.10 (it apparently works on Rocky Linux 8.0):
Linux clab1-lab 5.13.0-28-generic #31-Ubuntu SMP Thu Jan 13 17:41:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

make-output.txt

add vmx/vqfx terminal massage

Long commands may break as the terminal length (supposedly) truncates them.
The following is copied from scrapli's Junos session preparation; the commands need to be executed with write_wait("cmd", ">"):

set cli screen-length 0
set cli screen-width 511
set cli complete-on-space off

config engine bad command on SR-OS 22.7 R1

EDIT: I configured something wrong...

Running SR OS 22.7.R1, I saw the following error in the docker logs output during the boot process:

2023-09-19T18:19:41.454195154Z 2023-09-19 18:19:41,454: vrnetlab   TRACE    read from serial console: '/configure system management-interface configuration-mode model-driven
2023-09-19T18:19:41.454209245Z          ^
2023-09-19T18:19:41.454213707Z Error: Bad command.

Config:

name: vr01
topology:
  nodes:
    sros:
      kind: vr-sros
      image: vrnetlab/vr-sros:22.7.R1
      type: sr-1
      startup-config: |
        /configure system location "I am an embedded config"
     license: license-sros.txt
2023-09-19T18:19:41.366214626Z 2023-09-19 18:19:41,366: vrnetlab   DEBUG    writing to serial console: '      '
2023-09-19T18:19:41.366336362Z 2023-09-19 18:19:41,366: vrnetlab   TRACE    waiting for '# ' on serial console
2023-09-19T18:19:41.410104941Z 2023-09-19 18:19:41,409: vrnetlab   TRACE    read from serial console: '
2023-09-19T18:19:41.410123826Z A:vSIM# '
2023-09-19T18:19:41.410136004Z 2023-09-19 18:19:41,410: vrnetlab   DEBUG    writing to serial console: '/configure system management-interface configuration-mode model-driven'
2023-09-19T18:19:41.410191373Z 2023-09-19 18:19:41,410: vrnetlab   TRACE    waiting for '# ' on serial console
2023-09-19T18:19:41.454195154Z 2023-09-19 18:19:41,454: vrnetlab   TRACE    read from serial console: '/configure system management-interface configuration-mode model-driven
2023-09-19T18:19:41.454209245Z          ^
2023-09-19T18:19:41.454213707Z Error: Bad command.
2023-09-19T18:19:41.454218052Z A:vSIM# '
2023-09-19T18:19:41.454221929Z 2023-09-19 18:19:41,454: vrnetlab   DEBUG    writing to serial console: '/logout'
2023-09-19T18:19:41.454684341Z 2023-09-19 18:19:41,454: launch     INFO     Startup complete in: 0:01:41.277085

ipinfusion OcNOS 6.4.1 not working

Hi,

I'm getting the following error when I try to make a docker image from an IP Infusion OcNOS image.

I'm not sure I understand what's wrong.

[root@containerlab ocnos]# make docker-image
for IMAGE in OcNOS-SP-MPLS-x86-6.4.1-37-GA.qcow2; do \
        echo "Making $IMAGE"; \
        make IMAGE=$IMAGE docker-build; \
done
Making OcNOS-SP-MPLS-x86-6.4.1-37-GA.qcow2
make[1]: Entering directory '/root/vrnetlab/ocnos'
rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso
Building docker image using OcNOS-SP-MPLS-x86-6.4.1-37-GA.qcow2 as vrnetlab/vr-ocnos:
cp ../common/* docker/
make IMAGE=$IMAGE docker-build-image-copy
make[2]: Entering directory '/root/vrnetlab/ocnos'
cp OcNOS-SP-MPLS-x86-6.4.1-37-GA.qcow2* docker/
make[2]: Leaving directory '/root/vrnetlab/ocnos'
(cd docker; docker build --build-arg http_proxy= --build-arg https_proxy= --build-arg IMAGE=OcNOS-SP-MPLS-x86-6.4.1-37-GA.qcow2 -t vrnetlab/vr-ocnos: .)
[+] Building 0.0s (0/0)                                          docker:default
ERROR: invalid tag "vrnetlab/vr-ocnos:": invalid reference format
make[1]: *** [../makefile.include:29: docker-build-common] Error 1
make[1]: Leaving directory '/root/vrnetlab/ocnos'
make: *** [../makefile.include:9: docker-image] Error 2
[root@containerlab ocnos]#

add docker native networking configuration with macvtap, OvS and linux bridges

Summary

OVS dataplane works with LACP and that will be it.
macvtap screwed me over.

based on the work in vrnetlab#188

and comments here https://twitter.com/networkop1/status/1350006925023449088

1 linux bridge

added in #2

2 bridge with OvS (help wanted)

added in 238cb57

3 macvtap (not working, help wanted)

added macvtap in #22 but it still doesn't pass LACP

prereqs

Kernel 4.7+ to have the fixes for netns aware macvtaps

To update the kernel on centos7:

sudo yum -y install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# list available kernels
yum list available --disablerepo='*' --enablerepo=elrepo-kernel

# kernel-ml is a mainline release, whereas kernel-lt is a long term support release
sudo yum -y --enablerepo=elrepo-kernel install kernel-ml

# this sets the new kernel to be chosen on boot
sudo sed -i 's/GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

macvtap articles

https://gist.github.com/networkop/4e04ef70b8c5f96d20cdf73ea32900d1
https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking/#macvtap
https://suhu0426.github.io/Web/Presentation/20150203/index.html
https://gist.github.com/mcastelino/43cc733e53d65ef67452ecaf78e936c2
https://ahelpme.com/linux/howto-do-qemu-full-virtualization-with-macvtap-networking/
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-attch-nic-physdev
https://www.qemu.org/docs/master/system/invocation.html#hxtool-5

vQFX factory default login has changed

On newer vQFX code, the factory default login has changed. It is no longer root/Juniper; instead, it is root with no password (simply a carriage return). Because of this, the container initial setup eventually fails. The bringup stalls at:

2022-02-03 14:43:57,110: launch     INFO     matched login prompt
2022-02-03 14:43:57,110: launch     DEBUG    writing to serial console: root
2022-02-03 14:43:57,110: launch     TRACE    Waiting for Password:

In earlier code (like 19.4), it would proceed correctly with root/Juniper:

2022-02-03 15:41:56,742: launch     INFO     matched login prompt
2022-02-03 15:41:56,742: launch     DEBUG    writing to serial console: root
2022-02-03 15:41:56,743: launch     TRACE    Waiting for Password:
2022-02-03 15:42:21,835: launch     TRACE    Read:  root
Password:
2022-02-03 15:42:21,835: launch     DEBUG    writing to serial console: Juniper
2022-02-03 15:42:24,839: launch     TRACE    OUTPUT VCP:

2022-02-03 15:42:33,067: launch     DEBUG    writing to serial console: cli
2022-02-03 15:42:33,068: launch     TRACE    Waiting for >
2022-02-03 15:42:38,651: launch     TRACE    Read:  cli
{master:0}
root@vqfx-re>
2022-02-03 15:42:38,651: launch     DEBUG    writing to serial console: configure
2022-02-03 15:42:38,651: launch     TRACE    Waiting for #
2022-02-03 15:42:38,723: launch     TRACE    Read:  configure

Can this new factory default login be accommodated please, as this essentially breaks vQFX in containerlab (for any new release)?

vMX vFP failed to boot

Hello all,
I have followed the instructions for building the vMX image. I wonder if I missed any step. The vMX vCP boots up fine, but I can't seem to get the vFPC booted up. Any help would be greatly appreciated. I'm using an Ubuntu 20.04 VM on VirtualBox with nested virtualization.

admin@vmx> show interfaces ge-0/0/0
error: device ge-0/0/0 not found

admin@vmx>
admin@vmx> ping 128.0.0.16 routing-instance __juniper_private1__
PING 128.0.0.16 (128.0.0.16): 56 data bytes
ping: sendto: No route to host
ping: sendto: No route to host
ping: sendto: No route to host
ping: sendto: No route to host
^C
--- 128.0.0.16 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

Thanks.

Startup-config doesn't work on n9kv

Hello,

At the end of applying the default n9kv config, the current running config is being saved to startup-config:

2023-07-22 06:08:58,662: vrnetlab   TRACE    read from serial console:  feature netconf
feature grpc
2023 Jul 22 06:08:55 spine01 %$ VDC-1 %$ %SECURITYD-2-FEATURE_NXAPI_ENABLE: Feature nxapi is being enabled on HTTPS.
spine01(config)#
2023-07-22 06:08:58,662: vrnetlab   DEBUG    writing to serial console: exit
2023-07-22 06:08:58,662: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,614: vrnetlab   TRACE    read from serial console:  feature grpc
spine01(config)#
2023-07-22 06:09:01,614: vrnetlab   DEBUG    writing to serial console: copy running-config startup-config

The way nexus devices indicate copy progress looks like this:

switch# copy r s
[########################################] 100%
Copy complete, now saving to disk (please wait)...
Copy complete.

So when the custom startup config is applied, the previous copy is still in progress, and every # from that progress bar is erroneously treated as the switch being ready to accept the next command:

2023-07-22 06:09:01,614: vrnetlab   DEBUG    writing to serial console: copy running-config startup-config
2023-07-22 06:09:01,614: launch     TRACE    Startup config file /config/startup-config.cfg exists
2023-07-22 06:09:01,614: launch     TRACE    Parsed startup config file /config/startup-config.cfg
2023-07-22 06:09:01,615: launch     INFO     Writing lines from /config/startup-config.cfg
2023-07-22 06:09:01,615: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,655: vrnetlab   TRACE    read from serial console:  exit
spine01#
2023-07-22 06:09:01,655: vrnetlab   DEBUG    writing to serial console: configure terminal
2023-07-22 06:09:01,655: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,728: vrnetlab   TRACE    read from serial console:  copy running-config startup-config
configure terminal
[#
2023-07-22 06:09:01,728: vrnetlab   DEBUG    writing to serial console: username admin password secret role network-admin
2023-07-22 06:09:01,728: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,824: vrnetlab   TRACE    read from serial console:                                        ]   0%username admin password Hangup3-Upswing-Feast role network-admin
[#
2023-07-22 06:09:01,824: vrnetlab   DEBUG    writing to serial console: vrf context custom
2023-07-22 06:09:01,824: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console:                                        ]   1%vrf context ndfc
[#
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console:   address-family ipv4 unicast
2023-07-22 06:09:01,867: vrnetlab   TRACE    waiting for '#' on serial console
[#23-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console:                                        ]   2%
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console: interface Eth1/1
2023-07-22 06:09:01,867: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console: #
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console:   no switchport
2023-07-22 06:09:01,867: vrnetlab   TRACE    waiting for '#' on serial console
[#23-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console:                                       ]   3%
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console:   vrf member custom
2023-07-22 06:09:01,867: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console: #
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console:   ip address dhcp
2023-07-22 06:09:01,867: vrnetlab   TRACE    waiting for '#' on serial console
[#23-07-22 06:09:01,867: vrnetlab   TRACE    read from serial console:                                       ]   4%
2023-07-22 06:09:01,867: vrnetlab   DEBUG    writing to serial console:   no shutdown
2023-07-22 06:09:01,868: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,868: vrnetlab   TRACE    read from serial console: #
2023-07-22 06:09:01,868: vrnetlab   DEBUG    writing to serial console: end
2023-07-22 06:09:01,868: vrnetlab   TRACE    waiting for '#' on serial console
2023-07-22 06:09:01,868: vrnetlab   TRACE    read from serial console: #
2023-07-22 06:09:01,868: vrnetlab   DEBUG    writing to serial console: copy running-config startup-config
2023-07-22 06:09:01,868: launch     INFO     Startup complete in: 0:07:05.629641

Because of this, sometimes the first few commands from the custom startup-config manage to squeeze in and the rest are ignored, and sometimes all of them are ignored, depending on latency and luck.

Unable to connect to qemu monitor (port 5000), retrying in a second

I cannot get this to work. The SR OS version used is not listed as supported, but my colleague has it up and running, on both Ubuntu 20.04 and 22.04 (I've tried both). I have containerlab 0.38.0, and a matching vrnetlab version is not listed in the compatibility matrix, but I have tried a plain git clone and also "git checkout v0.9.0", which was the latest listed.

It complains in docker about modules, but the modules exist and are loaded (I can run both ip6tables -t nat -L -n -v and ip6tables -L -n -v).

Attaching full output from "docker logs clab-vr-sros-r1"

2023-03-15 19:18:19,029: vrnetlab INFO Unable to connect to qemu monitor (port 5000), retrying in a second (attempt 60)
Traceback (most recent call last):
  File "/launch.py", line 1059, in <module>
    ia.start(add_fwd_rules=False)
  File "/vrnetlab.py", line 711, in start
    vm.work()
  File "/vrnetlab.py", line 642, in work
    self.check_qemu()
  File "/vrnetlab.py", line 656, in check_qemu
    self.start()
  File "/vrnetlab.py", line 208, in start
    raise QemuBroken(
vrnetlab.QemuBroken: Unable to connect to qemu monitor on port 5000

logs.txt
ip.txt

add support for broadcom-sonic/sonic-vs

I think we should add support for the full-fat SONiC VM (the non-P4 variant) to containerlab and, by proxy, vrnetlab.

Broadcom have cut a VM here https://github.com/Broadcom/sonic-VirtualSwitch and for some reason the sonic upstream Jenkins pipeline now asks me for GitHub auth to continue to consume it (I might raise this in parallel).

I'm unsure which is better to support, but it would be nice to use at least one variant in conjunction with both lab projects.

vJunos-switch fails to create initial config, if startup-config is none

version

vrnetlab : v0.12.0
vJunos-switch : 23.2R1.14

config

startup-config is None.

name: vJunos-switch_lab

topology:
  nodes:
    R1:
      kind: vr-vjunosswitch
      image: vrnetlab/vr-vjunosswitch:23.2R1.14
    R2:
      kind: vr-vjunosswitch
      image: vrnetlab/vr-vjunosswitch:23.2R1.14
    R3:
      kind: vr-vjunosswitch
      image: vrnetlab/vr-vjunosswitch:23.2R1.14
    R4:
      kind: vr-vjunosswitch
      image: vrnetlab/vr-vjunosswitch:23.2R1.14

  links:
    - endpoints: ["R1:eth1", "R2:eth1"]
    - endpoints: ["R1:eth2", "R3:eth1"]
    - endpoints: ["R2:eth2", "R4:eth1"]
    - endpoints: ["R3:eth2", "R4:eth2"]

issue

The docker logs are below.

2023-09-21 08:13:58,874: vrnetlab   DEBUG    Creating overlay disk image
2023-09-21 08:13:58,890: launch     TRACE    Startup config file /config/startup-config.cfg is not found
mv: missing file operand
Try 'mv --help' for more information.
cp: cannot stat 'juniper.conf': No such file or directory
Formatting 'config.img', fmt=qcow2 size=1048576 cluster_size=65536 lazy_refcounts=off refcount_bits=16
losetup: config.img: Warning: file does not fit into a 512-byte sector; the end of the file will be ignored.

...<snip>

2023-09-21 08:15:31,292: launch     TRACE    OUTPUT: Creating initial configuration:  ...

2023-09-21 08:15:33,295: launch     TRACE    OUTPUT: mgd: error: Cannot open configuration file: /config/juniper.conf ; No such file or directory
mgd: warning: activating factory configuration

2023-09-21 08:15:51,321: launch     TRACE    OUTPUT: mgd: commit complete
@ 1695284149 [2023-09-21 08:15:49 UTC] mgd done

 Lock Manager

...<snip>

2023-09-21 08:16:13,320: vrnetlab   TRACE    read from serial console: ' (Amnesiac) (ttyu0)

login:'
2023-09-21 08:16:13,321: vrnetlab   DEBUG    writing to serial console: 'admin'
2023-09-21 08:16:13,321: vrnetlab   TRACE    waiting for 'Password:' on serial console
2023-09-21 08:16:13,390: vrnetlab   TRACE    read from serial console: '

FreeBSD/amd64 (Amnesiac) (ttyu0)

login:

FreeBSD/amd64 (Amnesiac) (ttyu0)

login: admin
Password:'
2023-09-21 08:16:13,391: vrnetlab   DEBUG    writing to serial console: 'admin@123'
'023-09-21 08:16:13,391: vrnetlab   DEBUG    writing to serial console: '
2023-09-21 08:16:13,391: launch     INFO     Login completed
2023-09-21 08:16:13,391: launch     INFO     Startup complete in: 0:02:14.259745

The initial config is not set (the root user's password is empty, not "admin@123").

$ telnet clab-vJunos-switch_lab-R1 5000
Trying 2001:172:20:20::3...
Trying 172.20.20.3...
Connected to clab-vJunos-switch_lab-R1.
Escape character is '^]'.


FreeBSD/amd64 (Amnesiac) (ttyu0)

login: admin
Password:
Login incorrect
login:


login: root
Last login: Thu Jun 22 15:59:36 on ttyu0

--- JUNOS 23.2R1.14 Kernel 64-bit  JNPR-12.1-20230613.7723847_buil
root@:~ #
root@:~ #
root@:~ # cli
root>

root> show configuration | display set
set version 23.2R1.14
set system commit factory-settings
set system arp aging-timer 5
set system syslog file interactive-commands interactive-commands any
set system syslog file messages any notice
set system syslog file messages authorization info
set system processes dhcp-service traceoptions file dhcp_logfile
set system processes dhcp-service traceoptions file size 10m
set system processes dhcp-service traceoptions level all
set system processes dhcp-service traceoptions flag packet
set chassis auto-image-upgrade
set interfaces fxp0 unit 0 family inet dhcp vendor-id Juniper-ex9214-VM650BFBD7BF
set interfaces fxp0 unit 0 family inet6 dhcpv6-client client-type stateful
set interfaces fxp0 unit 0 family inet6 dhcpv6-client client-ia-type ia-na
set interfaces fxp0 unit 0 family inet6 dhcpv6-client client-identifier duid-type duid-ll
set interfaces fxp0 unit 0 family inet6 dhcpv6-client vendor-id Juniper:ex9214:VM650BFBD7BF
set multi-chassis mc-lag consistency-check
set protocols router-advertisement interface fxp0.0 managed-configuration
set protocols lldp interface all
set protocols lldp-med interface all

socat vs iptables

Initially it seemed like a good idea to use iptables rules for mgmt traffic steering instead of multiple socat forwarders.
That way a single rule could forward all traffic from a source to the destination.

Ref vrnetlab#191 (comment)
Socat rules can be substituted with an iptables rule

But after running this for a year, we've seen many issues with iptables/nftables support, specifically on RHEL-based derivatives.
Either nftables is not installed, or the DNAT kernel module is not present.

This kind of makes me steer back toward socat again, and give iptables/nftables another 10 years to settle =)
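To make the trade-off concrete, a rough illustration of the two styles (addresses, ports and rules here are illustrative only, not the actual forwarding rules vrnetlab installs):

# socat style: one forwarding process per exposed port
socat TCP-LISTEN:22,fork TCP:10.0.0.15:22 &
socat TCP-LISTEN:830,fork TCP:10.0.0.15:830 &

# iptables/nftables style: a single DNAT rule can steer all ports toward the VM,
# but it needs the nat table and the DNAT target available in the host kernel
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 10.0.0.15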

vJunos-switch fails to establish connection

Hi,

I'm currently testing the vJunos-switch device, but it appears that the Telnet connection is not being established properly. I have made sure that sufficient CPU and RAM resources are available.

Docker logs:

2023-11-07 12:44:30,702: vrnetlab   DEBUG    Creating overlay disk image
'juniper.conf' -> '/var/tmp/tmp.j04QIfeoh5/config/juniper.conf'
Formatting 'config.img', fmt=qcow2 size=1048576 cluster_size=65536 lazy_refcounts=off refcount_bits=16
losetup: config.img: Warning: file does not fit into a 512-byte sector; the end of the file will be ignored.
mkfs.fat: warning - lowercase labels might not work properly with DOS or Windows
mkfs.fat 4.1 (2017-01-24)
/dev/loop8 has 64 heads and 32 sectors per track,
hidden sectors 0x0000;
logical sector size is 512,
using 0xf8 media descriptor, with 384 sectors;
drive number 0x80;
filesystem has 2 12-bit FATs and 4 sectors per cluster.
FAT size is 1 sector, and provides 87 clusters.
There is 1 reserved sector.
Root directory contains 512 slots and uses 32 sectors.
Volume ID is 12ecdf9e, volume label vmm-data   .
Copying file(s) to config disk config.img
./
./config/
./config/juniper.conf
Cleaning up...
removed '/var/tmp/tmp.j04QIfeoh5/config/juniper.conf'
removed directory '/var/tmp/tmp.j04QIfeoh5/config'
removed directory '/var/tmp/tmp.j04QIfeoh5'
removed directory '/var/tmp/tmp.3sPCSTaEBo'
Config disk config.img created
2023-11-07 12:44:30,961: vrnetlab   DEBUG    Starting vrnetlab VJUNOSSWITCH
2023-11-07 12:44:30,961: vrnetlab   DEBUG    VMs: [<__main__.VJUNOSSWITCH_vm object at 0x7f9ff850f430>]
2023-11-07 12:44:30,975: vrnetlab   DEBUG    VM not started; starting!
2023-11-07 12:44:30,975: vrnetlab   INFO     Starting VJUNOSSWITCH_vm
2023-11-07 12:44:30,976: vrnetlab   DEBUG    number of provisioned data plane interfaces is 0
2023-11-07 12:44:30,976: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 5120 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/vJunos-switch-23.2R1.14-overlay.qcow2 -smp 4,sockets=1,cores=4,threads=1 -cpu IvyBridge,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,arch-capabilities=on,pdpe1gb=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,bmi1=off,avx2=off,bmi2=off,erms=off,invpcid=off,rdseed=off,adx=off,smap=off,xsaveopt=off,abm=off,svm=off -drive if=none,id=config_disk,file=/config.img,format=raw -device virtio-blk-pci,drive=config_disk -overcommit mem-lock=off -display none -no-user-config -nodefaults -boot strict=on -machine pc-i440fx-focal,usb=off,dump-guest-core=off,accel=kvm -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -smbios "type=1,product=VM-VEX" -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=0C:00:c0:45:1c:00 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443

XRv9K is not booting

Hi Team,

I was able to build the image, but when I tried to start the container I am seeing the error below:
2023-03-20 14:32:14,372: launch ERROR no more credentials to try

Can you please check and let me know if I missed something?

snippet from logs:
2023-03-20 14:29:10,134: vrnetlab DEBUG VM not started; starting!
2023-03-20 14:29:10,134: vrnetlab INFO Starting XRV_vm
2023-03-20 14:29:10,135: vrnetlab DEBUG number of provisioned data plane interfaces is 0
2023-03-20 14:29:10,135: vrnetlab DEBUG ['qemu-system-x86_64', '-enable-kvm', '-display', 'none', '-machine', 'pc', '-monitor', 'tcp:0.0.0.0:4000,server,nowait', '-m', '16384', '-serial', 'telnet:0.0.0.0:5000,server,nowait', '-drive', 'if=ide,file=/xrv9k-fullk9-x.vrr-7.5.2-overlay.qcow2', '-cpu', 'host', '-smp', 'cores=2,threads=1,sockets=1', '-machine', 'smm=off', '-boot', 'order=c', '-serial', 'telnet:0.0.0.0:5001,server,nowait', '-serial', 'telnet:0.0.0.0:5002,server,nowait', '-serial', 'telnet:0.0.0.0:5003,server,nowait', '-device', 'pci-bridge,chassis_nr=1,id=pci.1', '-device', 'pci-bridge,chassis_nr=2,id=pci.2', '-device', 'pci-bridge,chassis_nr=3,id=pci.3', '-device', 'pci-bridge,chassis_nr=4,id=pci.4', '-device', 'pci-bridge,chassis_nr=5,id=pci.5', '-device', 'virtio-net-pci,netdev=mgmt,mac=52:54:00:92:b8:00', '-netdev', 'user,id=mgmt,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::17400-10.0.0.15:57400', '-device', 'virtio-net-pci,netdev=ctrl-dummy,id=ctrl-dummy,mac=52:54:00:ce:0e:00', '-netdev', 'tap,ifname=ctrl-dummy,id=ctrl-dummy,script=no,downscript=no', '-device', 'virtio-net-pci,netdev=dev-dummy,id=dev-dummy,mac=52:54:00:c6:35:00', '-netdev', 'tap,ifname=dev-dummy,id=dev-dummy,script=no,downscript=no']
2023-03-20 14:29:10,135: vrnetlab DEBUG joined cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 16384 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/xrv9k-fullk9-x.vrr-7.5.2-overlay.qcow2 -cpu host -smp cores=2,threads=1,sockets=1 -machine smm=off -boot order=c -serial telnet:0.0.0.0:5001,server,nowait -serial telnet:0.0.0.0:5002,server,nowait -serial telnet:0.0.0.0:5003,server,nowait -device pci-bridge,chassis_nr=1,id=pci.1 -device pci-bridge,chassis_nr=2,id=pci.2 -device pci-bridge,chassis_nr=3,id=pci.3 -device pci-bridge,chassis_nr=4,id=pci.4 -device pci-bridge,chassis_nr=5,id=pci.5 -device virtio-net-pci,netdev=mgmt,mac=52:54:00:92:b8:00 -netdev user,id=mgmt,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::17400-10.0.0.15:57400 -device virtio-net-pci,netdev=ctrl-dummy,id=ctrl-dummy,mac=52:54:00:ce:0e:00 -netdev tap,ifname=ctrl-dummy,id=ctrl-dummy,script=no,downscript=no -device virtio-net-pci,netdev=dev-dummy,id=dev-dummy,mac=52:54:00:c6:35:00 -netdev tap,ifname=dev-dummy,id=dev-dummy,script=no,downscript=no
2023-03-20 14:32:13,360: launch DEBUG matched login prompt
2023-03-20 14:32:13,360: launch DEBUG trying to log in with admin / admin
2023-03-20 14:32:13,360: vrnetlab DEBUG writing to serial console: admin
2023-03-20 14:32:13,369: vrnetlab DEBUG writing to serial console: admin
2023-03-20 14:32:13,369: launch DEBUG logged in with admin / admin
2023-03-20 14:32:14,371: launch DEBUG matched login prompt
2023-03-20 14:32:14,372: launch ERROR no more credentials to try

Regards
Harsha

vSRX Unable to connect to qemu monitor

I am using the latest build of vrnetlab and am running into some issues running a vSRX container: the container's logs show the message Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 1). Recent issues suggested that insufficient resources could cause this error, but that doesn't seem to be my problem. QEMU reports that no overlay file is being created from my source image. I ran into no issues with vMX and was able to create a simple containerlab topology, but vSRX is giving me trouble. To my knowledge, all that is needed for the vSRX container to run is to put the .qcow2 image from https://support.juniper.net into the vsrx directory and type make; am I missing something? Any help is appreciated.

2023-07-07 23:44:18,117: vrnetlab   DEBUG    Creating overlay disk image
qemu-img: /junos-vsrx3-x86-64-23.2R1.13-overlay.qcow2: Backing file specified without backing format
Detected format of qcow2.
2023-07-07 23:44:18,124: vrnetlab   DEBUG    Starting vrnetlab VSRX
2023-07-07 23:44:18,124: vrnetlab   DEBUG    VMs: [<__main__.VSRX_vm object at 0x7fa2bd06fdd0>]
2023-07-07 23:44:18,125: vrnetlab   DEBUG    VM not started; starting!
2023-07-07 23:44:18,125: vrnetlab   INFO     Starting VSRX_vm
2023-07-07 23:44:18,125: vrnetlab   DEBUG    number of provisioned data plane interfaces is 1
2023-07-07 23:44:18,126: vrnetlab   DEBUG    waiting for provisioned interfaces to appear...
2023-07-07 23:44:23,126: vrnetlab   DEBUG    highest allocated interface id determined to be: 1...
2023-07-07 23:44:23,127: vrnetlab   DEBUG    interfaces provisioned, continuing...
2023-07-07 23:44:23,127: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 4096 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/junos-vsrx3-x86-64-23.2R1.13-overlay.qcow2 -smp 2 -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=0C:00:d4:8a:01:00 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443 -device virtio-net-pci,netdev=p01,mac=0C:00:b9:9e:bc:01,bus=pci.1,addr=0x2 -netdev tap,id=p01,ifname=tap1,script=/etc/tc-tap-ifup,downscript=no
2023-07-07 23:44:23,154: vrnetlab   INFO     STDOUT:
2023-07-07 23:44:23,154: vrnetlab   INFO     STDERR: qemu-system-x86_64: -drive if=ide,file=/junos-vsrx3-x86-64-23.2R1.13-overlay.qcow2: Could not open '/junos-vsrx3-x86-64-23.2R1.13-overlay.qcow2': No such file or directory

2023-07-07 23:44:23,156: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 1)
2023-07-07 23:44:24,156: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 2)
2023-07-07 23:44:25,157: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 3)
2023-07-07 23:44:26,157: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 4)
2023-07-07 23:44:27,158: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 5)
...
2023-07-07 23:45:21,184: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 59)
2023-07-07 23:45:22,185: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 60)
Traceback (most recent call last):
  File "/launch.py", line 145, in <module>
    vr.start()
  File "/vrnetlab.py", line 735, in start
    vm.work()
  File "/vrnetlab.py", line 666, in work
    self.check_qemu()
  File "/vrnetlab.py", line 680, in check_qemu
    self.start()
  File "/vrnetlab.py", line 193, in start
    raise QemuBroken(
vrnetlab.QemuBroken: Unable to connect to qemu monitor on port 4000
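
For reference, the "Backing file specified without backing format" warning is emitted when qemu-img create is called without an explicit backing format, which newer qemu-img versions complain about. A minimal sketch of creating such an overlay with the format spelled out (paths illustrative, not the actual vrnetlab code):

import subprocess


def create_overlay(base="/junos-vsrx3-x86-64-23.2R1.13.qcow2",
                   overlay="/junos-vsrx3-x86-64-23.2R1.13-overlay.qcow2"):
    """Create a qcow2 overlay on top of a read-only base image."""
    subprocess.check_call([
        "qemu-img", "create",
        "-f", "qcow2",   # format of the overlay itself
        "-F", "qcow2",   # explicit backing format, avoids the warning above
        "-b", base,      # backing (base) image
        overlay,
    ])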

add support for nexus 9000v (not titanium)

Hi, currently only titanium seems to be supported for Nexus 9000v; however, all the latest images for the N9000v are free to download from cisco.com, so I'm wondering if full support for this can be brought in. These are qcow2 images as well. 'make docker-image' fails with these images as of now:

root@aninchat-ubuntu:~/vrnetlab/nxos# make docker-image
for IMAGE in nexus9300v.9.3.9.qcow2; do
echo "Making $IMAGE";
make IMAGE=$IMAGE docker-build;
done
Making nexus9300v.9.3.9.qcow2
make[1]: Entering directory '/root/vrnetlab/nxos'
rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso
ERROR: Incorrect version string (nexus9300v.9.3.9.qcow2). The regexp for extracting version information is likely incorrect, check the regexp in the Makefile or open an issue at https://github.com/plajjan/vrnetlab/issues/new including the image file name you are using.
make[1]: *** [../makefile.include:25: docker-build-common] Error 1
make[1]: Leaving directory '/root/vrnetlab/nxos'
make: *** [../makefile.include:9: docker-image] Error 2

Not able to build vSRX image

I am not able to build a vSRX image. I am running "make docker-image" on an Ubuntu machine with the latest version of vrnetlab. Can you please check the output below?

for IMAGE in junos-media-vsrx-x86-64-vmdisk-22.4R2.8.qcow2; do \
	echo "Making $IMAGE"; \
	make IMAGE=$IMAGE docker-build; \
done
Making junos-media-vsrx-x86-64-vmdisk-22.4R2.8.qcow2
make[1]: Entering directory '/mnt/data/vrnetlab/vsrx'
rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso
Building docker image using junos-media-vsrx-x86-64-vmdisk-22.4R2.8.qcow2 as vrnetlab/vr-vsrx:junos-media-vsrx-x86-64-vmdisk-22.4R2.8
cp ../common/* docker/
make IMAGE=$IMAGE docker-build-image-copy
make[2]: Entering directory '/mnt/data/vrnetlab/vsrx'
cp junos-media-vsrx-x86-64-vmdisk-22.4R2.8.qcow2* docker/
make[2]: Leaving directory '/mnt/data/vrnetlab/vsrx'
(cd docker; docker build --build-arg http_proxy= --build-arg https_proxy= --build-arg IMAGE=junos-media-vsrx-x86-64-vmdisk-22.4R2.8.qcow2 -t vrnetlab/vr-vsrx:junos-media-vsrx-x86-64-vmdisk-22.4R2.8 .)
[+] Building 12.3s (6/8)                                                                                                                               docker:default
 => [internal] load .dockerignore                                                                                                                                0.1s
 => => transferring context: 2B                                                                                                                                  0.0s
 => [internal] load build definition from Dockerfile                                                                                                             0.1s
 => => transferring dockerfile: 541B                                                                                                                             0.0s
 => [internal] load metadata for docker.io/library/debian:bullseye                                                                                               2.8s
 => [1/4] FROM docker.io/library/debian:bullseye@sha256:630454da4c59041a2bca987a0d54c68962f1d6ea37a3641bd61db42b753234f2                                         6.5s
 => => resolve docker.io/library/debian:bullseye@sha256:630454da4c59041a2bca987a0d54c68962f1d6ea37a3641bd61db42b753234f2                                         0.1s
 => => sha256:630454da4c59041a2bca987a0d54c68962f1d6ea37a3641bd61db42b753234f2 1.85kB / 1.85kB                                                                   0.0s
 => => sha256:2c407480ad7c98bdc551dbb38b92acb674dc130c8298f2e0fa2ad34da9078637 529B / 529B                                                                       0.0s
 => => sha256:35073ea3b0b7ab55ea55e1a111a8ded11a58ef2b6b3ae0e706160520e612df6f 1.46kB / 1.46kB                                                                   0.0s
 => => sha256:9a9e034800a1747ea288f38f6087c036acac99dd3ec5255bf7798abd8c23a304 55.06MB / 55.06MB                                                                 4.1s
 => => extracting sha256:9a9e034800a1747ea288f38f6087c036acac99dd3ec5255bf7798abd8c23a304                                                                        2.2s
 => CANCELED [internal] load build context                                                                                                                       9.3s
 => => transferring context: 1.34GB                                                                                                                              9.3s
 => ERROR [2/4] RUN apt-get update -qy    && apt-get upgrade -qy    && apt-get install -y    bridge-utils    iproute2    python3-ipy    socat    qemu-system-x8  2.8s
------
 > [2/4] RUN apt-get update -qy    && apt-get upgrade -qy    && apt-get install -y    bridge-utils    iproute2    python3-ipy    socat    qemu-system-x86=1:5.2+dfsg-11+deb11u2    qemu-utils=1:5.2+dfsg-11+deb11u2    && rm -rf /var/lib/apt/lists/*:
2.363 Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
2.527 Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
2.586 Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
2.594 Err:1 http://deb.debian.org/debian bullseye InRelease
2.594   At least one invalid signature was encountered.
2.680 Err:2 http://deb.debian.org/debian-security bullseye-security InRelease
2.680   At least one invalid signature was encountered.
2.768 Err:3 http://deb.debian.org/debian bullseye-updates InRelease
2.768   At least one invalid signature was encountered.
2.770 Reading package lists...
2.776 W: GPG error: http://deb.debian.org/debian bullseye InRelease: At least one invalid signature was encountered.
2.776 E: The repository 'http://deb.debian.org/debian bullseye InRelease' is not signed.
2.776 W: GPG error: http://deb.debian.org/debian-security bullseye-security InRelease: At least one invalid signature was encountered.
2.776 E: The repository 'http://deb.debian.org/debian-security bullseye-security InRelease' is not signed.
2.776 W: GPG error: http://deb.debian.org/debian bullseye-updates InRelease: At least one invalid signature was encountered.
2.776 E: The repository 'http://deb.debian.org/debian bullseye-updates InRelease' is not signed.
------
Dockerfile:6
--------------------
   5 |
   6 | >>> RUN apt-get update -qy \
   7 | >>>    && apt-get upgrade -qy \
   8 | >>>    && apt-get install -y \
   9 | >>>    bridge-utils \
  10 | >>>    iproute2 \
  11 | >>>    python3-ipy \
  12 | >>>    socat \
  13 | >>>    qemu-system-x86=1:5.2+dfsg-11+deb11u2 \
  14 | >>>    qemu-utils=1:5.2+dfsg-11+deb11u2 \
  15 | >>>    && rm -rf /var/lib/apt/lists/*
  16 |
--------------------
ERROR: failed to solve: process "/bin/sh -c apt-get update -qy    && apt-get upgrade -qy    && apt-get install -y    bridge-utils    iproute2    python3-ipy    socat    qemu-system-x86=1:5.2+dfsg-11+deb11u2    qemu-utils=1:5.2+dfsg-11+deb11u2    && rm -rf /var/lib/apt/lists/*" did not complete successfully: exit code: 100
make[1]: *** [../makefile.include:29: docker-build-common] Error 1
make[1]: Leaving directory '/mnt/data/vrnetlab/vsrx'
make: *** [../makefile.include:9: docker-image] Error 2

NX-OS 7.x and 10.x Nexus 9300v images not correctly tagged

When building a vr-n9kv image based upon an NX-OS 10.x Nexus 9300v image (such as nexus9300v64.10.2.2.F.qcow2) or an NX-OS 7.x Nexus 9000v image (such as nxosv-final.7.0.3.I7.9.qcow2), the version the Docker image is tagged with is incorrect. An example is shown below with NX-OS 10.2(2)F:

christopher@ubuntu-vm:~/vrnetlab/n9kv$ cp ~/nexus9300v64.10.2.2.F.qcow2 ./
christopher@ubuntu-vm:~/vrnetlab/n9kv$ make docker-image
for IMAGE in nexus9300v64.10.2.2.F.qcow2; do \
        echo "Making $IMAGE"; \
        make IMAGE=$IMAGE docker-build; \
done
Making nexus9300v64.10.2.2.F.qcow2
make[1]: Entering directory '/home/christopher/vrnetlab/n9kv'
rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso
Building docker image using nexus9300v64.10.2.2.F.qcow2 as vrnetlab/vr-n9kv:0.2.2    <<<
cp ../common/* docker/
make IMAGE=$IMAGE docker-build-image-copy
make[2]: Entering directory '/home/christopher/vrnetlab/n9kv'
cp nexus9300v64.10.2.2.F.qcow2* docker/
make[2]: Leaving directory '/home/christopher/vrnetlab/n9kv'
(cd docker; docker build --build-arg http_proxy= --build-arg https_proxy= --build-arg IMAGE=nexus9300v64.10.2.2.F.qcow2 -t vrnetlab/vr-n9kv:0.2.2 .)
Sending build context to Docker daemon  1.966GB
Step 1/12 : FROM ubuntu:20.04
 ---> ff0fea8310f3
Step 2/12 : LABEL maintainer="Kristian Larsson <[email protected]>"
 ---> Using cache
 ---> 149f67641a60
Step 3/12 : LABEL maintainer="Roman Dodin <[email protected]>"
 ---> Using cache
 ---> 658fcae4ed30
Step 4/12 : ARG DEBIAN_FRONTEND=noninteractive
 ---> Using cache
 ---> 54bff64c2e0f
Step 5/12 : RUN apt-get update -qy  && apt-get upgrade -qy  && apt-get install -y     bridge-utils     iproute2     python3-ipy     socat     qemu-kvm     tcpdump     tftpd-hpa     ssh     inetutils-ping     dnsutils     openvswitch-switch     iptables     telnet  && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 4b8e900df4ce
Step 6/12 : ARG IMAGE
 ---> Using cache
 ---> 989c3f1d76d2
Step 7/12 : COPY $IMAGE* /
 ---> 099c10eb485c
Step 8/12 : COPY OVMF.fd /
 ---> 864feb0f47b1
Step 9/12 : COPY *.py /
 ---> 342d22d0a336
Step 10/12 : EXPOSE 22 80 161/udp 443 830 5000 6030 10000-10099 57400
 ---> Running in 63901ca2cb55
Removing intermediate container 63901ca2cb55
 ---> bd9dc041eb36
Step 11/12 : HEALTHCHECK CMD ["/healthcheck.py"]
 ---> Running in 848fb43260f6
Removing intermediate container 848fb43260f6
 ---> 34391472a968
Step 12/12 : ENTRYPOINT ["/launch.py"]
 ---> Running in 97274b1a93ab
Removing intermediate container 97274b1a93ab
 ---> 1017c43e2fb1
Successfully built 1017c43e2fb1
Successfully tagged vrnetlab/vr-n9kv:0.2.2    <<<
make[1]: Leaving directory '/home/christopher/vrnetlab/n9kv'

christopher@ubuntu-vm:~/vrnetlab/n9kv$ docker image ls | grep vr-n9kv
vrnetlab/vr-n9kv       0.2.2      1017c43e2fb1   14 minutes ago      2.39GB
vrnetlab/vr-n9kv       9.3.9      7eb770996827   21 minutes ago      2.41GB

The root cause of this is the regular expression pattern here: https://github.com/hellt/vrnetlab/blob/master/n9kv/Makefile#L7

This pattern does not handle multi-digit NX-OS versions. Images named nexus9300v64.10.2.2.F.qcow2 or nexus9300v.9.3.10.qcow2 (which has not yet been released, but is anticipated to come out in the future) will not work with this pattern. Furthermore, 7.x versions of NX-OS with a filename like nxosv-final.7.0.3.I7.9.qcow2 or nxosv-final.7.0.3.I7.5a.qcow2 will also not work with this pattern.
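
For illustration only, a Python regex (not the actual sed expression in the Makefile) that would extract the full version string from all of the image names mentioned above:

import re

# Matches the 9300v, 9300v64 and nxosv-final naming schemes and captures
# everything between the platform prefix and the .qcow2 suffix as the version.
VERSION_RE = re.compile(
    r"^(?:nexus9[35]00v(?:64)?|nxosv-final)\.(?P<version>.+)\.qcow2$"
)

for name in [
    "nexus9300v.9.3.9.qcow2",
    "nexus9300v.9.3.10.qcow2",
    "nexus9300v64.10.2.2.F.qcow2",
    "nxosv-final.7.0.3.I7.9.qcow2",
    "nxosv-final.7.0.3.I7.5a.qcow2",
]:
    m = VERSION_RE.match(name)
    print(name, "->", m.group("version") if m else "NO MATCH")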

Startup config file not found

Hi,

Is there something specific that I need to do to get the containers to have a persistent config?
I have CSR and vMX images in containerlab, and when I destroy and deploy the lab again, the config is always wiped back to the basic one written by netlab.
According to the CSR boot trace:
2023-07-07 15:50:12,785: launch TRACE Startup config file /config/startup-config.cfg is not found

Other images, e.g. EOS, have no problems.
Docker version 24.0.3
clab version: 0.42.0

Br,
Heikki

allow setting control plane and line card resources for SR OS

by reading env vars (see the sketch after the list):

  • CP_CPU - number of vcpus to start qemu control plane image with
  • CP_RAM - number of GB of RAM to start qemu control plane image with
  • LC_CPU - number of vcpus to start qemu line card image with
  • LC_RAM - number of GB of RAM to start qemu line card image with
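
A minimal sketch of how these env vars could be read (assumed code, not the current SR OS launch.py; the defaults below are placeholders):

import os

# Placeholder defaults; the real defaults live in the SR OS launch script.
cp_cpu = int(os.getenv("CP_CPU", "2"))   # vCPUs for the control plane VM
cp_ram = int(os.getenv("CP_RAM", "4"))   # GB of RAM for the control plane VM
lc_cpu = int(os.getenv("LC_CPU", "4"))   # vCPUs for the line card VM
lc_ram = int(os.getenv("LC_RAM", "4"))   # GB of RAM for the line card VM

# qemu takes -m in MB, so the GB values would be converted before use:
cp_qemu_args = ["-smp", str(cp_cpu), "-m", str(cp_ram * 1024)]
lc_qemu_args = ["-smp", str(lc_cpu), "-m", str(lc_ram * 1024)]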

Low rates of throughput CSR1000V

Hi,

I'm testing the throughput of the csr1000v router built with hellt/vrnetlab and I'm getting very low rates; in certain intervals I even get zero values. Has this been tested? Would you know what could be related to such low throughput?

The topology I'm testing is two csr1000v routers, each connected to a host. The throughput test is being done with iperf3: host-1 is the iperf server and host-2 is the iperf client.

iperf3 -c 10.0.1.2 -t 20
Connecting to host 10.0.1.2, port 5201
[ 5] local 10.0.2.2 port 46488 connected to 10.0.1.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 749 KBytes 6.13 Mbits/sec 0 109 KBytes
[ 5] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0 115 KBytes
[ 5] 2.00-3.00 sec 255 KBytes 2.09 Mbits/sec 0 120 KBytes
[ 5] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0 126 KBytes
[ 5] 4.00-5.00 sec 255 KBytes 2.09 Mbits/sec 0 144 KBytes
[ 5] 5.00-6.00 sec 318 KBytes 2.61 Mbits/sec 0 173 KBytes
[ 5] 6.00-7.00 sec 445 KBytes 3.65 Mbits/sec 0 214 KBytes
[ 5] 7.00-8.00 sec 0.00 Bytes 0.00 bits/sec 0 270 KBytes
[ 5] 8.00-9.00 sec 573 KBytes 4.69 Mbits/sec 0 328 KBytes
[ 5] 9.00-10.00 sec 0.00 Bytes 0.00 bits/sec 0 385 KBytes
[ 5] 10.00-11.00 sec 827 KBytes 6.78 Mbits/sec 0 441 KBytes
[ 5] 11.00-12.00 sec 0.00 Bytes 0.00 bits/sec 0 499 KBytes
[ 5] 12.00-13.00 sec 1.12 MBytes 9.39 Mbits/sec 0 556 KBytes
[ 5] 13.00-14.00 sec 0.00 Bytes 0.00 bits/sec 0 611 KBytes
[ 5] 14.00-15.00 sec 0.00 Bytes 0.00 bits/sec 0 667 KBytes
[ 5] 15.00-16.00 sec 0.00 Bytes 0.00 bits/sec 0 724 KBytes
[ 5] 16.00-17.00 sec 0.00 Bytes 0.00 bits/sec 0 779 KBytes
[ 5] 17.00-18.00 sec 0.00 Bytes 0.00 bits/sec 0 836 KBytes
[ 5] 18.00-19.00 sec 0.00 Bytes 0.00 bits/sec 0 891 KBytes
[ 5] 19.00-20.00 sec 0.00 Bytes 0.00 bits/sec 38 807 KBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-20.00 sec 4.46 MBytes 1.87 Mbits/sec 38 sender
[ 5] 0.00-25.04 sec 2.21 MBytes 739 Kbits/sec receiver


CSR1000v not connected via ssh

I have built the image of csr1000v-universalk9.17.04.03-serial.qcow2 as indicated in this guide https://github.com/hellt/vrnetlab/tree/master/csr, but when I try to access it via SSH I get the following error:

ssh [email protected]
kex_exchange_identification: Connection closed by remote host

This is the image I obtained:

REPOSITORY TAG IMAGE ID CREATED SIZE
vrnetlab/vr-csr 17.03.06 731f49cc6a8e 51 minutes ago 1.88GB

I have seen in the logs that it fails to make the telnet connection but I don't know the reason for this:

docker logs f8c3aed22a77
2023-07-17 11:37:16,577: vrnetlab DEBUG Starting vrnetlab CSR
2023-07-17 11:37:16,577: vrnetlab DEBUG VMs: [<main.CSR_vm object at 0x7f162843b0a0>]
2023-07-17 11:37:16,590: vrnetlab DEBUG VM not started; starting!
2023-07-17 11:37:16,591: vrnetlab INFO Starting CSR_vm
2023-07-17 11:37:16,591: vrnetlab DEBUG number of provisioned data plane interfaces is 0
2023-07-17 11:37:16,593: vrnetlab DEBUG qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 4096 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/csr1000v-universalk9.17.03.06-serial-overlay.qcow2 -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=0C:00:1c:ea:b8:00 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443 -device virtio-net-pci,netdev=p02,mac=0C:00:d2:9d:af:02,bus=pci.1,addr=0x3 -netdev tap,id=p02,ifname=tap2,script=/etc/tc-tap-ifup,downscript=no -device virtio-net-pci,netdev=p03,mac=0C:00:0c:ab:da:03,bus=pci.1,addr=0x4 -netdev tap,id=p03,ifname=tap3,script=/etc/tc-tap-ifup,downscript=no -device virtio-net-pci,netdev=p04,mac=0C:00:9c:c5:79:04,bus=pci.1,addr=0x5 -netdev tap,id=p04,ifname=tap4,script=/etc/tc-tap-ifup,downscript=no -device virtio-net-pci,netdev=p05,mac=0C:00:9e:c7:85:05,bus=pci.1,addr=0x6 -netdev tap,id=p05,ifname=tap5,script=/etc/tc-tap-ifup,downscript=no
2023-07-17 11:40:32,132: launch DEBUG matched, Press RETURN to get started.
2023-07-17 11:40:32,132: launch INFO applying bootstrap configuration
2023-07-17 11:42:05,725: vrnetlab ERROR Telnet session was disconnected, restarting
2023-07-17 11:42:06,030: vrnetlab INFO Starting CSR_vm

Am I making a mistake? How can I fix it?

Add user-defined SR OS Bof commands

Currently users have no way to provide commands for the BOF file. The workaround with good-boot-exec explained in the docs for the SR OS kind is not the easiest to implement and doesn't allow using partial configs.

The proposal is for the vrnetlab code to read commands from /tftpboot/user.bof and apply them on top of the default BOF configuration. That way users may add to and override the default BOF settings.
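
A minimal sketch of the proposal (assumed code, not the actual vrnetlab implementation; the default BOF lines below are placeholders):

import os

# Placeholder defaults, illustrative only.
DEFAULT_BOF = [
    "primary-image cf3:/timos/",
    "license-file tftp://10.0.0.2/license.txt",
]


def build_bof(user_bof="/tftpboot/user.bof"):
    """Return the default BOF lines plus any user-provided lines appended
    after them, so user commands can add to or override the defaults."""
    lines = list(DEFAULT_BOF)
    if os.path.exists(user_bof):
        with open(user_bof) as f:
            lines.extend(line.strip() for line in f if line.strip())
    return lines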

xrv9k : interfaces not detected

Hi,

I've built the xrv9k docker image, but it does not boot up; it keeps restarting because the dataplane interfaces are not found (Gi0/0/0/0 not found in show interfaces description):
2022-09-13 11:13:25,017: vrnetlab DEBUG writing to serial console: show interfaces description
2022-09-13 11:13:35,028: vrnetlab DEBUG writing to serial console: show interfaces description
2022-09-13 11:13:45,030: vrnetlab DEBUG writing to serial console: show interfaces description
2022-09-13 11:13:55,037: vrnetlab DEBUG writing to serial console: show interfaces description
2022-09-13 11:14:05,045: launch ERROR Gi0/0/0/0 not found in show interfaces description
2022-09-13 11:14:05,046: launch DEBUG bootstrap_config failed, restarting device
2022-09-13 11:14:05,747: vrnetlab INFO Starting XRV_vm
2022-09-13 11:14:05,749: vrnetlab DEBUG number of provisioned data plane interfaces is 0

To troubleshoot, I modified launch.py to skip the interfaces check, and it boots up without dataplane interfaces:
RP/0/RP0/CPU0:vr-xrv9k#show interfaces description
Tue Sep 13 12:19:53.608 UTC

Interface Status Protocol Description

Nu0 up up
Mg0/RP0/CPU0/0 up up

Comparing with vrnetlab/vrnetlab, the following additional devices are missing from the qemu cmd:
'-device', 'e1000,netdev=p01,mac=52:54:00:84:e6:01,bus=pci.1,addr=0x2', '-netdev', 'socket,id=p01,listen=:10001',
'-device', 'e1000,netdev=p02,mac=52:54:00:ed:bd:02,bus=pci.1,addr=0x3', '-netdev', 'socket,id=p02,listen=:10002',
'-device', 'e1000,netdev=p03,mac=52:54:00:8b:2d:03,bus=pci.1,addr=0x4', '-netdev', 'socket,id=p03,listen=:10003',
'-device', 'e1000,netdev=p04,mac=52:54:00:97:fc:04,bus=pci.1,addr=0x5', '-netdev', 'socket,id=p04,listen=:10004',
'-device', 'e1000,netdev=p05,mac=52:54:00:dd:e9:05,bus=pci.1,addr=0x6', '-netdev', 'socket,id=p05,listen=:10005',
'-device', 'e1000,netdev=p06,mac=52:54:00:46:dc:06,bus=pci.1,addr=0x7', '-netdev', 'socket,id=p06,listen=:10006',
'-device', 'e1000,netdev=p07,mac=52:54:00:51:68:07,bus=pci.1,addr=0x8', '-netdev', 'socket,id=p07,listen=:10007',
'-device', 'e1000,netdev=p08,mac=52:54:00:24:37:08,bus=pci.1,addr=0x9', '-netdev', 'socket,id=p08,listen=:10008',
'-device', 'e1000,netdev=p09,mac=52:54:00:5f:cb:09,bus=pci.1,addr=0xa', '-netdev', 'socket,id=p09,listen=:10009',
'-device', 'e1000,netdev=p10,mac=52:54:00:be:39:0a,bus=pci.1,addr=0xb', '-netdev', 'socket,id=p10,listen=:10010',
'-device', 'e1000,netdev=p11,mac=52:54:00:76:85:0b,bus=pci.1,addr=0xc', '-netdev', 'socket,id=p11,listen=:10011',
'-device', 'e1000,netdev=p12,mac=52:54:00:37:b5:0c,bus=pci.1,addr=0xd', '-netdev', 'socket,id=p12,listen=:10012',
'-device', 'e1000,netdev=p13,mac=52:54:00:6b:20:0d,bus=pci.1,addr=0xe', '-netdev', 'socket,id=p13,listen=:10013',
'-device', 'e1000,netdev=p14,mac=52:54:00:04:38:0e,bus=pci.1,addr=0xf', '-netdev', 'socket,id=p14,listen=:10014',
'-device', 'e1000,netdev=p15,mac=52:54:00:3a:2d:0f,bus=pci.1,addr=0x10', '-netdev', 'socket,id=p15,listen=:10015',
'-device', 'e1000,netdev=p16,mac=52:54:00:60:1a:10,bus=pci.1,addr=0x11', '-netdev', 'socket,id=p16,listen=:10016',
'-device', 'e1000,netdev=p17,mac=52:54:00:7b:98:11,bus=pci.1,addr=0x12', '-netdev', 'socket,id=p17,listen=:10017',
'-device', 'e1000,netdev=p18,mac=52:54:00:75:ef:12,bus=pci.1,addr=0x13', '-netdev', 'socket,id=p18,listen=:10018',
'-device', 'e1000,netdev=p19,mac=52:54:00:d8:2d:13,bus=pci.1,addr=0x14', '-netdev', 'socket,id=p19,listen=:10019',
'-device', 'e1000,netdev=p20,mac=52:54:00:d0:6d:14,bus=pci.1,addr=0x15', '-netdev', 'socket,id=p20,listen=:10020',
'-device', 'e1000,netdev=p21,mac=52:54:00:80:fe:15,bus=pci.1,addr=0x16', '-netdev', 'socket,id=p21,listen=:10021',
'-device', 'e1000,netdev=p22,mac=52:54:00:dc:35:16,bus=pci.1,addr=0x17', '-netdev', 'socket,id=p22,listen=:10022',
'-device', 'e1000,netdev=p23,mac=52:54:00:ca:f2:17,bus=pci.1,addr=0x18', '-netdev', 'socket,id=p23,listen=:10023',
'-device', 'e1000,netdev=p24,mac=52:54:00:ae:9c:18,bus=pci.1,addr=0x19', '-netdev', 'socket,id=p24,listen=:10024'

I guess this is handled by the gen_nics() function in vrnetlab.py and something is probably not working there, but I didn't investigate deeper... could you please check and let me know how to move forward?

thanks
D

Daemon to align oper state between container's `ethX` and `tapX` interfaces

The issue with the datapath stitching we do with tc mirred redirect between the VM's tap interfaces and the container's eth interfaces is that the oper state of the ethX interfaces does not propagate to the tapX interfaces.

This prevents users from simulating network outages where you shut down a link on one end and see it go oper down on the other end.

What we can do is package a daemon inside the container that watches the oper state of each tapX<->ethX pair and keeps them in sync.

So when tapX goes down, ethX should be down and vice versa.
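
A rough sketch of such a daemon (assumptions: the pairs are named ethN/tapN, polling sysfs once a second is acceptable, and only the ethX -> tapX direction is shown for brevity):

import pathlib
import subprocess
import time

PAIRS = [(f"eth{i}", f"tap{i}") for i in range(1, 17)]  # arbitrary upper bound


def operstate(ifname):
    """Return the oper state of an interface, or None if it doesn't exist."""
    path = pathlib.Path(f"/sys/class/net/{ifname}/operstate")
    return path.read_text().strip() if path.exists() else None


def sync_pair(eth, tap):
    """Propagate the ethX oper state to the matching tapX interface."""
    eth_state, tap_state = operstate(eth), operstate(tap)
    if eth_state is None or tap_state is None or eth_state == tap_state:
        return
    if eth_state == "down":
        subprocess.run(["ip", "link", "set", tap, "down"], check=False)
    elif eth_state == "up":
        subprocess.run(["ip", "link", "set", tap, "up"], check=False)


if __name__ == "__main__":
    while True:
        for eth, tap in PAIRS:
            sync_pair(eth, tap)
        time.sleep(1)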

CSR1000v make fails due to f-string in vrnetlab.py

The CSR1000v Dockerfile is set to use debian:stretch, which installs Python 3.5.3 and therefore does not support f-strings (f-string support was added in Python 3.6, I believe). As f-strings are used in vrnetlab.py, this causes the build to fail with a syntax error:

Step 10/10 : ENTRYPOINT ["/launch.py"]
 ---> Using cache
 ---> 6388d8f13b7b
Successfully built 6388d8f13b7b
Successfully tagged vrnetlab/vr-csr:16.11.01b
docker run --cidfile cidfile --privileged vrnetlab/vr-csr:16.11.01b --trace --install
Traceback (most recent call last):
  File "/launch.py", line 13, in <module>
    import vrnetlab
  File "/vrnetlab.py", line 51
    logging.getLogger().info(f"Delaying VM boot of by {delay} seconds")
                                                                     ^
SyntaxError: invalid syntax
make[1]: *** [Makefile:19: docker-build] Error 1
make[1]: Leaving directory '/home/denis/vrnetlab/csr'

Changing the Dockerfile to use debian:buster resolves that issue, but then I get errors about conn_mode not existing on the CSR class...
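
For reference, a Python 3.5-compatible rewrite of the offending line would use str.format() instead of an f-string (the delay value below is illustrative):

import logging

logging.basicConfig(level=logging.INFO)
delay = 30  # illustrative value only

# f-string form (requires Python 3.6+), as seen in the traceback above:
logging.getLogger().info(f"Delaying VM boot of by {delay} seconds")

# str.format() form that Python 3.5 also accepts:
logging.getLogger().info("Delaying VM boot of by {} seconds".format(delay))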

main differences between this fork and src

I am looking to work on options to run Juniper's vJunosEvolved and vJunos-switch products in containerlab, similar to how vSRX gets spun up, but I am curious about the core differences between hellt/vrnetlab and its source, vrnetlab/vrnetlab. It appears this repo has support for vSRX while the source repo does not, among other miscellaneous changes. Is hellt/vrnetlab the official dependency for containerlab?

allow for multiple line card variants for SR OS

Need a way to support >1 line card for distributed SR OS.

for example:

Lc#1 in slot=1:

lc: cpu=4 ram=4 max_nics=8 chassis=sr-2s slot=1 card=xcm-2s xiom/x1=iom-s-1.5t mda/x1/1=ms2-400gb-qsfpdd+2-100gb-qsfp28

 
lc#2 in slot=2 would look like so:

lc: cpu=4 ram=4 max_nics=8 chassis=sr-2s slot=2 card=xcm-2s xiom/x1=iom-s-1.5t mda/x1/1=ms2-400gb-qsfpdd+2-100gb-qsfp28
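
A minimal sketch (assumed, not the actual SR OS launch code) of how several such lc: lines could be parsed into per-slot definitions:

def parse_lc_lines(lines):
    """Turn "lc: key=value ..." lines into one dict per line card."""
    cards = []
    for line in lines:
        if not line.startswith("lc:"):
            continue
        fields = dict(kv.split("=", 1) for kv in line[len("lc:"):].split())
        cards.append(fields)
    return cards


variant = [
    "lc: cpu=4 ram=4 max_nics=8 chassis=sr-2s slot=1 card=xcm-2s xiom/x1=iom-s-1.5t mda/x1/1=ms2-400gb-qsfpdd+2-100gb-qsfp28",
    "lc: cpu=4 ram=4 max_nics=8 chassis=sr-2s slot=2 card=xcm-2s xiom/x1=iom-s-1.5t mda/x1/1=ms2-400gb-qsfpdd+2-100gb-qsfp28",
]

for lc in parse_lc_lines(variant):
    print("slot", lc["slot"], "->", lc["card"], "with", lc["cpu"], "vcpus")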
