
trex-core's Introduction

TRex Low-Cost, High-Speed Stateful Traffic Generator

TRex is a traffic generator for Stateful and Stateless use cases.

Traditionally, network infrastructure devices have been tested using commercial traffic generators, while performance was measured using metrics like packets per second (PPS) and No Drop Rate (NDR). As network infrastructure functionality has become more complex, stateful traffic generators have become necessary in order to test with more realistic application traffic pattern scenarios. Realistic, stateful traffic generators are needed in order to:

  • Test and provide more realistic performance numbers

  • Design and architect SW and HW based on realistic use cases

Current Challenges

  • Cost: Commercial stateful traffic generators are expensive

  • Scale: Bandwidth does not scale up well with feature complexity

  • Standardization: Lack of standardization of traffic patterns and methodologies

  • Flexibility: Commercial tools lack agility when flexibility and rapid changes are needed

Implications

  • High capital expenditure (capEx) spent by different teams

  • Testing at low scale and extrapolating has become common practice; it is not accurate and hides real-life bottlenecks and quality issues

  • Different benchmarking and results methodologies across feature/platform teams

  • Delays in development and testing due to dependency on test-tool features

  • Resource and effort investment in developing different ad hoc tools and test methodologies

TRex addresses these problems through an innovative and extendable software implementation, by leveraging standard and open software, and by running on COTS x86/ARM servers.

TRex Stateful/Stateless in a Nutshell

  • Fueled by DPDK

  • Generates L3-L7 traffic and provides, in one tool, capabilities offered by commercial tools.

  • Stateful/Stateless traffic generator.

  • Scales to 200 Gb/sec

  • Python automation API

  • Low cost

  • Virtualization support. Enables TRex to be used in a fully virtual environment without physical NICs, for example:

    • Amazon AWS

    • TRex on your laptop

    • Docker

    • Self-contained packaging

  • Cisco Pioneer Award Winner 2015

Currently supported TRex DPDK interfaces

  • Supports physical DPDK 1/2.5/10/25/40/50/100 Gbps interfaces (Broadcom/Intel/Mellanox/Cisco VIC/Napatech/Amazon ENA)

  • Virtualization interfaces support (virtio/VMXNET3/E1000)

  • SR-IOV support for best performance

Current Stateful TRex Feature sets (STF)

This feature set targets stateful DUT features that inspect traffic (a usage sketch follows this list).

  • High scale of realistic traffic (number of clients, number of servers, bandwidth)

  • Latency/Jitter measurements

  • Flow ordering checks

  • NAT, PAT dynamic translation learning

  • Learn TCP SYN sequence randomization - vASA/Firepower use case

  • Cluster mode for Controller tests

  • IPV6 inline replacement

  • Some cross-flow support (e.g. RTSP/SIP)
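The stateful mode is driven by a YAML traffic profile (several such cap2/*.yaml profiles appear in the issues further down) and is usually launched with the t-rex-64 binary; it can also be driven from Python through the stateful client. Below is a minimal, hedged sketch: the module path and method names (trex_stf_lib.trex_client.CTRexClient, start_trex, sample_until_finish) are assumptions based on typical usage and may differ between releases.

# Hedged sketch: driving the stateful (STF) mode from Python.
# Module path and method names are assumptions and may vary between releases.
from trex_stf_lib.trex_client import CTRexClient

c = CTRexClient('trex-host.example.com')   # hypothetical TRex server hostname

# Run a YAML traffic profile for 60 seconds with 4 cores and a rate multiplier of 1.
c.start_trex(f='cap2/http_simple.yaml', c=4, m=1.0, d=60)

result = c.sample_until_finish()           # block until the run completes
print(result)                              # aggregated run statistics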

TRex Stateless feature sets (STL)

This feature set targets stateless devices that do routing/switching, e.g. Cisco VPP/OVS. It is more packet-based.

  • Large-scale - Supports about 10-30 million packets per second (Mpps) per core, scalable with the number of cores

  • Profile can support multiple streams, scalable to 10K parallel streams

  • Supported for each stream:

    • Packet template - ability to build any packet (including malformed) using Scapy (example: MPLS/IPv4/IPv6/GRE/VXLAN/NSH)

    • Field engine program

      • Ability to change any field inside the packet (example: src_ip = 10.0.0.1-10.0.0.255)

      • Ability to change the packet size (example: random packet size 64-9K)

    • Mode - Continuous/Burst/Multi-burst support

    • Rate can be specified as:

      • Packets per second (example: 14MPPS)

      • L1/L2 bandwidth (example: 500Mb/sec)

      • Interface link percentage (example: 10%)

    • Support for basic HLTAPI-like profile definition

    • Action - a stream can trigger another stream

  • Interactive support - Fast Console, GUI

  • Statistics per interface

  • Statistics per stream done in hardware/software

  • Latency and Jitter per stream

  • Blazingly fast Python automation API (see the sketch after this list)

  • L2 emulation Python event-driven framework with examples of ARP/ICMP/ICMPv6/IPv6 ND/DHCP and more. The framework can be extended with new protocols

  • Capture/Monitor traffic with BPF filters - no need for Wireshark

  • Capture network traffic by redirecting the traffic to Wireshark

  • Functional tests

  • PCAP file import/export

  • Huge pcap file transmission (e.g. 1TB pcap file) for DPI

  • Multi-user support

  • Routing protocol support (BGP/OSPF/RIP) using BIRD integration
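As a rough illustration of the Python automation API mentioned above, the sketch below (assuming the trex.stl.api package shipped under automation/trex_control_plane/interactive) connects to a running TRex server, injects a simple continuous UDP stream on port 0, waits for completion, and reads per-port statistics. Exact keys in the stats dictionary may vary between releases.

from trex.stl.api import STLClient, STLStream, STLPktBuilder, STLTXCont
from scapy.all import Ether, IP, UDP

c = STLClient(server='127.0.0.1')   # assumption: TRex server runs locally
try:
    c.connect()
    c.reset(ports=[0])

    base_pkt = Ether() / IP(src='16.0.0.1', dst='48.0.0.1') / UDP(dport=12) / ('x' * 20)
    stream = STLStream(packet=STLPktBuilder(pkt=base_pkt),
                       mode=STLTXCont(pps=1000))   # continuous mode at 1 Kpps

    c.add_streams(stream, ports=[0])
    c.start(ports=[0], duration=10)
    c.wait_on_traffic(ports=[0])

    stats = c.get_stats()
    print(stats[0])                  # per-port counters for port 0
finally:
    c.disconnect()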

The following example shows three streams configured for Continuous, Burst, and Multi-burst traffic.

Figure 1: STL streams example.
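A Python sketch of a profile along the lines of Figure 1, with one continuous, one single-burst, and one multi-burst stream, plus a small field-engine program that walks the source IP over a range. Class names are those of the STL Python API; treat addresses, rates, and counts as placeholders.

from trex.stl.api import (STLStream, STLPktBuilder, STLTXCont, STLTXSingleBurst,
                          STLTXMultiBurst, STLScVmRaw, STLVmFlowVar,
                          STLVmWrFlowVar, STLVmFixIpv4)
from scapy.all import Ether, IP, UDP

# Field engine: increment the source IP over 10.0.0.1-10.0.0.255 and fix the IPv4 checksum.
vm = STLScVmRaw([
    STLVmFlowVar(name='src', min_value='10.0.0.1', max_value='10.0.0.255',
                 size=4, op='inc'),
    STLVmWrFlowVar(fv_name='src', pkt_offset='IP.src'),
    STLVmFixIpv4(offset='IP'),
])

pkt = STLPktBuilder(pkt=Ether() / IP(dst='48.0.0.1') / UDP(dport=12) / ('x' * 20),
                    vm=vm)

streams = [
    STLStream(packet=pkt, mode=STLTXCont(pps=1000)),                          # continuous
    STLStream(packet=pkt, mode=STLTXSingleBurst(pps=1000, total_pkts=500)),   # single burst
    STLStream(packet=pkt, mode=STLTXMultiBurst(pps=1000, pkts_per_burst=100,
                                               count=5, ibg=1000.0)),         # multi-burst
]
# These streams could then be passed to STLClient.add_streams() as in the previous sketch.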

A new JSON-RPC2 architecture provides support for interactive mode.

Figure 2: TRex interactive (JSON-RPC2) architecture.
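For the curious, the interactive channel is plain JSON-RPC2 carried over ZeroMQ. The sketch below sends a single request with pyzmq; the port number (4501) and the method name ('get_version') are assumptions about common defaults rather than something guaranteed by this README.

import json
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect('tcp://127.0.0.1:4501')      # assumed default TRex RPC port

request = {
    'jsonrpc': '2.0',
    'id': 1,
    'method': 'get_version',              # assumed method name, for illustration only
    'params': {},
}
sock.send_string(json.dumps(request))
print(json.loads(sock.recv_string()))     # JSON-RPC2 response from the server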

More information can be found in the Documentation.

TRex Advanced Stateful feature set (ASTF)

With the new advanced scalable TCP/UDP support, TRex uses a TCP/UDP layer to generate the L7 data. This opens up the following new capabilities (a profile sketch follows the list):

  • Ability to work when the DUT terminates the TCP stack (e.g. compress/uncompress). In this case, there is a different TCP session on each side, but the L7 data is almost the same.

  • Ability to work in either client mode or server mode. This way TRex client side could be installed in one physical location on the network and TRex server in another.

  • Performance and scale

    • High bandwidth - 200 Gb/sec with many realistic flows (not a single elephant flow)

    • High connection rate - on the order of millions of connections per second (MCPS)

    • Scale to millions of active established flows

  • Emulates L7 applications, e.g. HTTP/HTTPS/Citrix - there is no need to implement the exact protocol.

  • Accurate TCP implementation

  • Ability to change fields in the L7 application - for example, change HTTP User-Agent field
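A minimal ASTF profile sketch in Python (assuming the trex.astf.api package). It emulates a tiny HTTP-like exchange on top of the built-in TCP stack and marks the place where an L7 field such as User-Agent could be varied; exact constructor arguments may differ between releases, so treat this as a sketch rather than a verified profile.

from trex.astf.api import (ASTFProfile, ASTFProgram, ASTFIPGen, ASTFIPGenDist,
                           ASTFTCPClientTemplate, ASTFTCPServerTemplate,
                           ASTFTemplate, ASTFAssociationRule)

http_req = ('GET / HTTP/1.1\r\n'
            'Host: example.com\r\n'
            'User-Agent: trex-astf-sketch\r\n\r\n')     # the L7 field you might vary
http_resp = 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n'

prog_c = ASTFProgram()          # client side: send the request, wait for the reply
prog_c.send(http_req)
prog_c.recv(len(http_resp))

prog_s = ASTFProgram()          # server side: wait for the request, send the reply
prog_s.recv(len(http_req))
prog_s.send(http_resp)

ip_gen = ASTFIPGen(dist_client=ASTFIPGenDist(ip_range=['16.0.0.1', '16.0.0.255']),
                   dist_server=ASTFIPGenDist(ip_range=['48.0.0.1', '48.0.255.255']))

profile = ASTFProfile(
    default_ip_gen=ip_gen,
    templates=ASTFTemplate(
        client_template=ASTFTCPClientTemplate(program=prog_c, ip_gen=ip_gen, cps=100),
        server_template=ASTFTCPServerTemplate(program=prog_s,
                                              assoc=ASTFAssociationRule(port=80))))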

More information can be found in the ASTF documentation.

What you can do with it

Stateful

  • Benchmark/stress stateful features:

    • NAT

    • DPI

    • Load Balancer

    • Network cache devices

    • FireWall

    • IPS/IDS

  • Mixing Application level traffic/profile (HTTP/SIP/Video)

  • Unlimited concurrent flows, limited only by memory

Stateless

  • Benchmark/Stress vSwitch RFC2544

Documentation

Wiki

Internal Wiki

How to build

Internal Wiki

YouTrack

YouTrack is our old bug/feature-request tool. It is better to use GitHub issues.

Blogs

Blogs can be found on the TRex blog.

Stateless Client GUI

  • Cross-Platform - runs on Windows, Linux, Mac OS X

  • Written in JavaFX, using the TRex RPC API

  • Scapy-based packet builder to build any type of packet using the GUI

    • Very easy to add new protocol builders (using Scapy)

  • Open and edit PCAP files, replay and save back

  • Visual latency/jitter/per-stream statistics

  • Free

The GitHub repository is trex-stateless-gui.

Figure 3: TRex Stateless GUI.

TRex EMU

The objective is to implement client-side protocols, i.e. ARP, IPv6 ND, MLD, IGMP, in order to simulate clients and servers at scale. This project is not limited to client protocols, but they are a good start. The project provides a framework to implement and use client protocols.

The framework is fast enough for control-plane protocols and works with the TRex server; very fast L7 applications (on top of TCP/UDP) run on the TRex server itself. A single TRex-EMU thread can achieve a high rate of client creation/teardown. Each of the aforementioned protocols is implemented as a plugin. These plugins are self-contained and can signal events to one another, or to the framework, using an event bus (e.g. DHCP signals that it has a new IPv6 address). The framework has an event-driven architecture, which allows it to scale. The framework also provides infrastructure to protocol plugins, for example RPC, timers, packet parsers, simulation and more.

The main properties:

  • Fast client creation/teardown. ~3K/sec for one thread.

  • The number of active clients/namespaces is limited only by the memory on the server.

  • Packets per second (PPS) in the range of 3-5 Mpps.

  • Python 2.7/3.0 Client API exposed through JSON-RPC.

  • Interactive support - Integrated with the TRex console.

  • Modular design. Each plugin is self-contained and can be tested on its own.

  • TRex-EMU supports the following protocols:

Plug-in   Description
-------   -----------
ARP       RFC 826
ICMP      RFC 777
DHCPv4    RFC 2131, client side
IGMP      IGMP v3/v2/v1, RFC 3376
IPv6      IPv6 ND, RFC 4443, RFC 4861, RFC 4862; MLD and MLDv2, RFC 3810
DHCPv6    RFC 8415, client side
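What an EMU profile might look like in Python is sketched below. The class names (EMUProfile, EMUNamespaceObj, EMUNamespaceKey, EMUClientObj, Mac, Ipv4) and constructor arguments are assumptions modelled on the emu example profiles and may not match the current API exactly; see trex-emu for the authoritative reference.

# Hedged sketch of an EMU profile: one namespace on virtual port 0 with a single
# client running the ARP and ICMP plugins. Class names and constructor arguments
# are assumptions and may differ from the actual API.
from trex.emu.api import (EMUProfile, EMUNamespaceObj, EMUNamespaceKey,
                          EMUClientObj, Mac, Ipv4)

ns_key = EMUNamespaceKey(vport=0)                      # namespace on port 0, no VLAN

client = EMUClientObj(mac=Mac('00:00:00:70:00:01').V(),
                      ipv4=Ipv4('1.1.1.3').V(),
                      ipv4_dg=Ipv4('1.1.1.1').V(),
                      plugs={'arp': {}, 'icmp': {}})   # per-client plugin set

ns = EMUNamespaceObj(ns_key=ns_key, clients=[client])

profile = EMUProfile(ns=ns, def_ns_plugs={'arp': {}})  # default namespace plugins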

More information is available at trex-emu.

TRex BIRD integration

BIRD (BIRD Internet Routing Daemon) is a project that develops a fully functional dynamic IP routing daemon for Linux. It was integrated into TRex, running alongside it, in order to exploit its features together with the Python automation API.

  • Runs on top of IPv4 and IPv6 (using kernel veth)

    • BGP (eBGP/iBGP), RPKI (RFC 6480/RFC 6483); supported record types are ipv4, ipv6, vpn4, vpn6, multicast, flow4, flow6

    • RFC 4271 - Border Gateway Protocol 4 (BGP)

    • RFC 1997 - BGP Communities Attribute

    • RFC 2385 - Protection of BGP Sessions via TCP MD5 Signature

    • RFC 2545 - Use of BGP Multiprotocol Extensions for IPv6

    • RFC 2918 - Route Refresh Capability

    • RFC 3107 - Carrying Label Information in BGP

    • RFC 4360 - BGP Extended Communities Attribute

    • RFC 4364 - BGP/MPLS IPv4 Virtual Private Networks

    • RFC 4456 - BGP Route Reflection

    • RFC 4486 - Subcodes for BGP Cease Notification Message

    • RFC 4659 - BGP/MPLS IPv6 Virtual Private Networks

    • RFC 4724 - Graceful Restart Mechanism for BGP

    • RFC 4760 - Multiprotocol extensions for BGP

    • RFC 4798 - Connecting IPv6 Islands over IPv4 MPLS

    • RFC 5065 - AS confederations for BGP

    • RFC 5082 - Generalized TTL Security Mechanism

    • RFC 5492 - Capabilities Advertisement with BGP

    • RFC 5549 - Advertising IPv4 NLRI with an IPv6 Next Hop

    • RFC 5575 - Dissemination of Flow Specification Rules

    • RFC 5668 - 4-Octet AS Specific BGP Extended Community

    • RFC 6286 - AS-Wide Unique BGP Identifier

    • RFC 6608 - Subcodes for BGP Finite State Machine Error

    • RFC 6793 - BGP Support for 4-Octet AS Numbers

    • RFC 7311 - Accumulated IGP Metric Attribute for BGP

    • RFC 7313 - Enhanced Route Refresh Capability for BGP

    • RFC 7606 - Revised Error Handling for BGP UPDATE Messages

    • RFC 7911 - Advertisement of Multiple Paths in BGP

    • RFC 7947 - Internet Exchange BGP Route Server

    • RFC 8092 - BGP Large Communities Attribute

    • RFC 8203 - BGP Administrative Shutdown Communication

    • RFC 8212 - Default EBGP Route Propagation Behavior without Policies

    • OSPF (v2/v3) RFC 2328/ RFC 5340

    • RIP - RIPv1 (RFC 1058), RIPv2 (RFC 2453), RIPng (RFC 2080), and RIP cryptographic authentication (RFC 4822).

  • Scales to millions of routes (depending on the protocol, e.g. BGP) in a few seconds

  • Integration with Multi-RX software model (-software and -c higher than 1) to support dynamic filters for BIRD protocols while keeping high rates of traffic

  • Can support up to 10K veths (virtual interfaces), each with a different QinQ/VLAN configuration

  • Simple Python automation API for pushing configuration and reading statistics (see the sketch below)
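What pushing a BGP configuration into the embedded BIRD and reading protocol state back might look like from Python is sketched below. The module path, client class, and method names (PyBirdClient, connect/acquire, set_config, get_protocols_info) are assumptions used purely for illustration; consult the BIRD integration documentation for the actual API.

# Hedged sketch only: module path, class, and method names are assumptions.
from trex.pybird.pybird_zmq_client import PyBirdClient   # assumed module path

bird_cfg = """
protocol bgp bgp1 {
    local 1.1.1.3 as 65000;
    neighbor 1.1.1.1 as 65001;
    ipv4 { import all; export all; };
}
"""

client = PyBirdClient(ip='127.0.0.1')   # assumed constructor
client.connect()
client.acquire()

client.set_config(new_cfg=bird_cfg)     # push the BGP section (assumed method)
print(client.get_protocols_info())      # read back protocol/session state (assumed)

client.release()
client.disconnect()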

Sandbox for evaluation

Try the new DevNet sandbox: TRex Sandbox.

Who is using TRex?

trex-core's People

Contributors

ahothan, bdollma, beubanks, bkuhre, cyborn-minjae-lee, dablock, danklein10, egorblagov, egwakim, ejaecki, ejangle, elados93, hedjuo, hhaim, ibarnea, imarom, jsmoon, juliusmilan, junsubhan4rmericsson, kisel, mamonney, mcallaghan-sandvine, rhymmor, rjarry, syaakov, teknoraver, viacheslavo, ybrustin, yonghwan007, yskoh-mellanox


trex-core's Issues

NameError: name 'TRexTimeoutError' is not defined (trex_client.py)

Hi,

We encountered a bug while calling the STL api in custom Python scripts.
Bug appears in Python2 and Python3.

OS : Ubuntu 14.04 LTS (recommended)
Trex version : 2.43
Bug summary: when the "wait_on_traffic" method is called on an STLClient instance and it tries to raise TRexTimeoutError, a NameError is raised instead because the exception is not defined.
It is defined in /automation/trex_control_plane/interactive/trex/common/trex_exceptions.py, but not in __all__.

To fix it, change "trex_exceptions.py:8" :
__all__ = ["TRexError", "TRexArgumentError", "TRexTypeError"]
To :
__all__ = ["TRexError", "TRexArgumentError", "TRexTypeError", "TRexTimeoutError"]

And maybe add the "TRexConsoleError" and "TRexConsoleNoAction" to this list.

Backtrace :
c.wait_on_traffic(ports = conf_port, timeout = 2*duration)
File "/home/trex/trex/v2.43/automation/trex_control_plane/interactive/trex/common/trex_api_annotators.py", line 51, in wrap2
ret = f(*args, **kwargs)
File "/home/trex/trex/v2.43/automation/trex_control_plane/interactive/trex/stl/trex_stl_client.py", line 602, in wait_on_traffic
TRexClient.wait_on_traffic(self, ports, timeout)
File "/home/trex/trex/v2.43/automation/trex_control_plane/interactive/trex/common/trex_api_annotators.py", line 51, in wrap2
ret = f(*args, **kwargs)
File "/home/trex/trex/v2.43/automation/trex_control_plane/interactive/trex/common/trex_client.py", line 1620, in wait_on_traffic
raise TRexTimeoutError(timeout)
NameError: name 'TRexTimeoutError' is not defined

clarity for underlying Linux operating system support (w.r.t. Ubuntu)

As per https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_supported_versions

At a high level:

  1. Fedora (SUPER OLD) versions are supported.
  2. Ubuntu (SUPER OLD) 14.04 LTS is supported, 16.04 is "not fully supported", and 18.04 is not mentioned
  3. CentOS/RHEL 7.4, is supported (MUST for ConnectX-4)

Here's what we say (as of Jul20 2018):

Fedora 20-23, 64-bit kernel (not 32-bit)
Ubuntu 14.04.1 LTS, 64-bit kernel (not 32-bit)
Ubuntu 16.xx LTS, 64-bit kernel (not 32-bit) — Not fully supported.
CentOS/RHEL 7.4, 64-bit kernel (not 32-bit) — This is the only working option for ConnectX-4.


Does Cisco have any internal CI/regression for TRex that can help easily determine if newer operating systems are supported? It probably simply comes down to driver support and DPDK compatibility right?

We should:

  1. Get more up-to-date support confirmation
  2. If there are caveats/limitations, make this clear

(might be easier to distinguish in a table/matrix too)
Once we get a bit more information, I'd be happy to update the docs with the determinations.

import (creation) of new trex-ansible repository

Good day!

Several weeks ago I created some useful ansible scripts to deploy TRex to a Linux system.
https://github.com/mcallaghan-sandvine/trex-ansible

Had e-mailed Hanoh but I believe he was on vacation during that time and it may have been buried.

My account does not have permissions to create a repository within the cisco-system-traffic-generator project.

I would like to propose that we create "trex-ansible" and import from my base as linked above.

Hoping these Ansible install scripts will be useful to others, and that the community can extend them as needed.

Thanks!

Unable to connect to Trex in v2.18

After extracting trex_client and trying to connect to the TRex chassis using the Python interpreter, we hit an issue:

The TRex client does not include the 32-bit library folder in v2.18: trex_client/external_libs/pyzmq-14.5.0/python3/ucs4/32bit

import trex_stl_lib
Unable to find required module library: 'pyzmq-14.5.0'
Please provide the correct path using TREX_STL_EXT_PATH variable
current path used: '/auto/cenbu/dev-test/ATS_python/regression/Iotssg/lib/trex_client_v2.18/external_libs/pyzmq-14.5.0/python3/ucs4/32bit'
[ATS_python] bgl-xdm-112:248>

ConnectX-3 Two Key Same Value

Hi,

I'm using a Mellanox ConnectX-3 device which provides two ports but exposes a single PCI bus address. I tried playing with the config file; it seems I need to provide interfaces in multiples of 2.

Is there any way to make this work with trex?

Thank You

several traffic profiles include deprecated generator fields (min_clients and clients_per_gb)

as per https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_global_traffic_yaml_section

          clients_per_gb : 201             9
          min_clients    : 101             10

This is one of MANY examples where we reference a deprecated parameter:

  | Deprecated. not used
  | Deprecated. not used

we can also search in the entire code base and find this all over :(

--
can this stuff be removed and cleaned up?

$ pwd
~/projects/trex-core/scripts

$ grep -rins min_clients *
automation/trex_control_plane/client_utils/trex_yaml_gen.py:21:                                    'min_clients': 101,
automation/regression/cfg/imix.yaml:13:          min_clients    : 101
automation/regression/cfg/imix_fast_1g.yaml:14:          min_clients    : 101
avl/sfr_delay_10_1g_asa_nat.yaml:9:          min_clients    : 101
avl/sfr_delay_10_1g.yaml:9:          min_clients    : 101
avl/sfr_delay_10_1g_no_bundeling.yaml:9:          min_clients    : 101
avl/sfr_branch_profile_delay_10.yaml:9:          min_clients    : 101
avl/sfr_delay_10_no_bundeling.yaml:9:          min_clients    : 101
avl/sfr_delay_50_tunnel_no_bundeling.yaml:9:          min_clients    : 101
avl/sfr_delay_10.yaml:9:          min_clients    : 101
cap2/cur_flow.yaml:9:          min_clients    : 101
cap2/cur_flow_single_tw_8.yaml:9:          min_clients    : 101
cap2/http_very_long.yaml:9:          min_clients    : 101
cap2/lb_ex1.yaml:9:          min_clients    : 101
cap2/per_template_gen4.yaml:9:          min_clients    : 101
cap2/per_template_gen5.yaml:9:          min_clients    : 101
cap2/rtsp.yaml:9:          min_clients    : 101
cap2/per_template_gen3.yaml:9:          min_clients    : 101
cap2/per_template_gen1.yaml:9:          min_clients    : 101
cap2/http.yaml:9:          min_clients    : 101
cap2/test_pcap_mode2.yaml:9:          min_clients    : 101
cap2/ipv6_load_balance.yaml:9:          min_clients    : 101
cap2/dns_single_server.yaml:9:          min_clients    : 101
cap2/dns_wlen.yaml:9:          min_clients    : 101
cap2/sip_short2.yaml:9:          min_clients    : 101
cap2/short_tcp.yaml:9:          min_clients    : 101
cap2/rtsp_full2.yaml:9:          min_clients    : 101
cap2/imix_9k.yaml:12:          min_clients    : 101
cap2/rtsp_short3.yaml:9:          min_clients    : 101
cap2/cur_flow_single.yaml:9:          min_clients    : 101
cap2/imix_64.yaml:12:          min_clients    : 101
cap2/imix_64_100k.yaml:12:          min_clients    : 101
cap2/http_simple_ipv6.yaml:9:          min_clients    : 101
cap2/asa_explot1.yaml:9:          min_clients    : 101
cap2/imix_64_fast.yaml:12:          min_clients    : 101
cap2/many_client_example.yaml:9:          min_clients    : 101
cap2/sfr2.yaml:9:          min_clients    : 101
cap2/http_simple.yaml:9:          min_clients    : 101
cap2/dns_wlen1.yaml:9:          min_clients    : 101
cap2/wrong_ip.yaml:9:          min_clients    : 101
cap2/dns_one_server.yaml:9:          min_clients    : 101
cap2/sfr3.yaml:9:          min_clients    : 101
cap2/test_pcap_mode1.yaml:9:          min_clients    : 101
cap2/dyn_pyld1.yaml:9:          min_clients    : 101
cap2/imix_1518.yaml:12:          min_clients    : 101
cap2/per_template_gen2.yaml:9:          min_clients    : 101
cap2/http_short.yaml:9:          min_clients    : 101
cap2/dns_wlen2.yaml:9:          min_clients    : 101
cap2/jumbo.yaml:9:          min_clients    : 101
cap2/limit_multi_pkt.yaml:12:          min_clients    : 101
cap2/ipv4_load_balance.yaml:9:          min_clients    : 101
cap2/nat_test.yaml:10:          min_clients    : 101
cap2/sfr.yaml:9:          min_clients    : 101
cap2/dns_tw.yaml:9:          min_clients    : 101
cap2/sfr4.yaml:9:          min_clients    : 101
cap2/dns_no_delay.yaml:9:          min_clients    : 101
cap2/rtsp_short1.yaml:9:          min_clients    : 101
cap2/rtsp_short1_slow.yaml:9:          min_clients    : 101
cap2/sip_short1.yaml:9:          min_clients    : 101
cap2/dns.yaml:9:          min_clients    : 101
cap2/imix_9k_burst_10.yaml:12:          min_clients    : 101
cap2/ipv6.yaml:9:          min_clients    : 101
cap2/http_plugin.yaml:9:          min_clients    : 101
cap2/sfr_agg_tcp14_udp11_http200msec_new_high_new_nir_profile_ipg_mix.yaml:9:          min_clients    : 101
cap2/dns_wlength.yaml:9:          min_clients    : 101
cap2/limit_single_pkt.yaml:12:          min_clients    : 101
cap2/rtsp_short2.yaml:9:          min_clients    : 101
cap2/tuple_gen.yaml:7:  min_clients    : 101

$ grep -rins clients_per_gb *
automation/trex_control_plane/client_utils/trex_yaml_gen.py:17:                                    'clients_per_gb': 201,
automation/regression/cfg/imix.yaml:12:          clients_per_gb : 201
automation/regression/cfg/imix_fast_1g.yaml:13:          clients_per_gb : 201
avl/sfr_delay_10_1g_asa_nat.yaml:8:          clients_per_gb : 201
avl/sfr_delay_10_1g.yaml:8:          clients_per_gb : 201
avl/sfr_delay_10_1g_no_bundeling.yaml:8:          clients_per_gb : 201
avl/sfr_branch_profile_delay_10.yaml:8:          clients_per_gb : 201
avl/sfr_delay_10_no_bundeling.yaml:8:          clients_per_gb : 201
avl/sfr_delay_50_tunnel_no_bundeling.yaml:8:          clients_per_gb : 201
avl/sfr_delay_10.yaml:8:          clients_per_gb : 201
cap2/cur_flow.yaml:8:          clients_per_gb : 201
cap2/cur_flow_single_tw_8.yaml:8:          clients_per_gb : 201
cap2/http_very_long.yaml:8:          clients_per_gb : 201
cap2/lb_ex1.yaml:8:          clients_per_gb : 201
cap2/per_template_gen4.yaml:8:          clients_per_gb : 201
cap2/per_template_gen5.yaml:8:          clients_per_gb : 201
cap2/rtsp.yaml:8:          clients_per_gb : 201
cap2/per_template_gen3.yaml:8:          clients_per_gb : 201
cap2/per_template_gen1.yaml:8:          clients_per_gb : 201
cap2/http.yaml:8:          clients_per_gb : 201
cap2/test_pcap_mode2.yaml:8:          clients_per_gb : 201
cap2/ipv6_load_balance.yaml:8:          clients_per_gb : 201
cap2/dns_single_server.yaml:8:          clients_per_gb : 201
cap2/dns_wlen.yaml:8:          clients_per_gb : 201
cap2/sip_short2.yaml:8:          clients_per_gb : 201
cap2/short_tcp.yaml:8:          clients_per_gb : 201
cap2/rtsp_full2.yaml:8:          clients_per_gb : 201
cap2/imix_9k.yaml:11:          clients_per_gb : 201
cap2/rtsp_short3.yaml:8:          clients_per_gb : 201
cap2/cur_flow_single.yaml:8:          clients_per_gb : 201
cap2/imix_64.yaml:11:          clients_per_gb : 201
cap2/imix_64_100k.yaml:11:          clients_per_gb : 201
cap2/http_simple_ipv6.yaml:8:          clients_per_gb : 201
cap2/asa_explot1.yaml:8:          clients_per_gb : 201
cap2/imix_64_fast.yaml:11:          clients_per_gb : 201
cap2/many_client_example.yaml:8:          clients_per_gb : 201
cap2/sfr2.yaml:8:          clients_per_gb : 201
cap2/http_simple.yaml:8:          clients_per_gb : 201
cap2/dns_wlen1.yaml:8:          clients_per_gb : 201
cap2/wrong_ip.yaml:8:          clients_per_gb : 201
cap2/dns_one_server.yaml:8:          clients_per_gb : 201
cap2/sfr3.yaml:8:          clients_per_gb : 201
cap2/test_pcap_mode1.yaml:8:          clients_per_gb : 201
cap2/dyn_pyld1.yaml:8:          clients_per_gb : 201
cap2/imix_1518.yaml:11:          clients_per_gb : 201
cap2/per_template_gen2.yaml:8:          clients_per_gb : 201
cap2/http_short.yaml:8:          clients_per_gb : 201
cap2/dns_wlen2.yaml:8:          clients_per_gb : 201
cap2/jumbo.yaml:8:          clients_per_gb : 201
cap2/limit_multi_pkt.yaml:11:          clients_per_gb : 201
cap2/ipv4_load_balance.yaml:8:          clients_per_gb : 201
cap2/nat_test.yaml:9:          clients_per_gb : 201
cap2/sfr.yaml:8:          clients_per_gb : 201
cap2/dns_tw.yaml:8:          clients_per_gb : 201
cap2/sfr4.yaml:8:          clients_per_gb : 201
cap2/dns_no_delay.yaml:8:          clients_per_gb : 201
cap2/rtsp_short1.yaml:8:          clients_per_gb : 201
cap2/rtsp_short1_slow.yaml:8:          clients_per_gb : 201
cap2/sip_short1.yaml:8:          clients_per_gb : 201
cap2/dns.yaml:8:          clients_per_gb : 201
cap2/imix_9k_burst_10.yaml:11:          clients_per_gb : 201
cap2/ipv6.yaml:8:          clients_per_gb : 201
cap2/http_plugin.yaml:8:          clients_per_gb : 201
cap2/sfr_agg_tcp14_udp11_http200msec_new_high_new_nir_profile_ipg_mix.yaml:8:          clients_per_gb : 201
cap2/dns_wlength.yaml:8:          clients_per_gb : 201
cap2/limit_single_pkt.yaml:11:          clients_per_gb : 201
cap2/rtsp_short2.yaml:8:          clients_per_gb : 201
cap2/tuple_gen.yaml:6:  clients_per_gb : 201

rtt is not the same as ipg (and by ipg I think we mean inter-packet-delay)

related to #137 ... (continuation from that story)

as per https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_per_template_section

       ipg : 10000         3
       rtt : 10000         4

, we define these as:

(3) (ipg) = If the global section of the YAML file includes cap_ipg : false, this line sets the inter-packet gap in microseconds.
(4) (rtt) = Should be set to the same value as ipg (microseconds).


This is false.

  1. ipg = inter-packet-gap -- this is actually about IDLE FRAME duration between Ethernet packets. (aka interframe spacing, interframe gap) -- https://en.wikipedia.org/wiki/Interpacket_gap

  2. I think by "ipg", we actually mean "inter-packet delay" -- (how long to delay pkt-to-pkt transmission when RTT is not at play for TCP control flow packets?), this is likely a non-industry accepted term since it's specific to TRex's implementation?

  3. rtt = round-trip-time -- this is different from ipg (and inter-packet delay) -- "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgment of that signal to be received" -- https://en.wikipedia.org/wiki/Round-trip_delay_time


I have tested and confirmed this theory:

====
CASE A) (status quo)

if we send a TCP flow when RTT == IPG == 100ms

ipg : 100000
rtt : 100000

-> here it goes:

16:44:39.662082 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:39.761095 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:44:39.863706 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1006318459, win 57344, length 792
16:44:39.963630 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:44:40.061033 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
16:44:40.161048 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:44:40.261059 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
16:44:40.361073 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:44:40.460085 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:44:40.560104 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:40.660013 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:40.760028 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

, we can see that the timestamp correctly increments by ~100ms for each packet (without much understanding, this is an implicit assumption without knowing enough...)

====
CASE B) rtt > ipd (a MORE realistic situation)

ipg : 10000
rtt : 100000

results in ->

16:49:13.627160 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:49:13.728070 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
         ^
16:49:13.827081 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
         ^
16:49:13.840082 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1006318459, win 57344, length 792
16:49:13.939894 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:49:13.948096 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
          ^
16:49:13.958097 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
          ^
16:49:13.968099 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
16:49:13.978100 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:49:14.078117 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:49:14.178129 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:49:14.188128 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:49:14.288043 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

we can see here for all TCP control packets, the RTT is honoured, with 100ms delay
and for all DATA packets, 10ms increments happen!

====
CASE C) rtt < ipd (awkward, but possible)
(*this is a scenario where the RTT of the network is LESS (faster) than the server or client delay in sending sequential data packets ... fairly unlikely in the real world, but possible if the application artificially delays packets, or perhaps if the OS or network stack is throttling ... etc.)

ipg : 100000
rtt : 10000

and:

16:51:52.940052 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
          ^
16:51:52.949049 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
          ^
16:51:52.959047 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
          ^
16:51:53.064264 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1, win 57344, length 792
16:51:53.072161 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:51:53.170076 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
         ^
16:51:53.269991 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
         ^
16:51:53.370003 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
         ^
16:51:53.469016 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:51:53.479017 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:51:53.489017 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:51:53.590032 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:51:53.600034 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

, as we can see, it's the reverse from CASE (B). RTT TCP control packets adhere to a 100ms delay, and TCP data stream pkts go for 10ms delay

TODO: NEEDS UDP TESTING


SO; I'd like to propose
A> remove all notions that "rtt == ipg"
B> properly explain RTT and what TRex does with it
C> change "ipg" to inter-packet-delay and properly define it (ipg must remain for backwards compatibility I guess...)

Dockerfile fails to build based on 16.10

Attempting to build with the ubuntu/16.10/Dockerfile fails with issues finding yakkety repository files. Ubuntu 16.10 is EOL and I believe (though I haven't confirmed) they pull EOL Docker tags; their hub page doesn't show any EOL tags.

W: The repository 'http://security.ubuntu.com/ubuntu yakkety-security Release' does not have a Release file.
...
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/yakkety/universe/source/Sources  404  Not Found [IP: 91.189.88.162 80]
...
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update' returned a non-zero code: 100

I copied that directory and based it on 16.04 LTS, which works (builds trex and the simulator), at #84. I didn't update the original/docs in that PR as I was unsure whether others' existing build systems would fail, though anyone who hasn't pulled that image prior to its tag disappearing is also in a bad state.

problem to launch t-rex-64

Hi,

I installed TRex with the OVA VM on my ESXi host. I added 2 interfaces to my VM and configured them. I ran the ./dpdk_setup_ports.py -s command and I see my interfaces in Active status. Finally I launch t-rex-64
with this command: ./t-rex-64 -f cap2/dns.yaml -c 4 -m 1 -d 10 -l 1000
It returns me :

mapping values are not allowed here
  in "/etc/trex_cfg.yaml", line 2, column 19
mapping values are not allowed here
  in "/etc/trex_cfg.yaml", line 2, column 19

My trex config file (/etc/trex_cfg.yaml) just below:

<none>
- port_limit      : 2
  version         : 2
#List of interfaces. Change to suit your setup. Use ./dpdk_setup_ports.py -s to see


interfaces    : ["02:03.0", "02:05.0"]
port_info       :  # Port IPs. Change to suit your needs. In case of loopback, you can

         - ip         : 1.1.1.1
           default_gw : 2.2.2.2
         - ip         : 2.2.2.2
           default_gw : 1.1.1.1

Best regards

Victor

Packet bit randomization support

Does TRex support randomizing the bits in a packet?

For example:
I want to randomize the UDP source port (0 to 65535) of my packet when I run the throughput test.

Thanks
Suresh
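For reference, this kind of randomization is normally done with the STL field engine. A hedged sketch (assuming the standard trex.stl.api classes) that writes a random 16-bit value into the UDP source port of every packet:

from trex.stl.api import (STLStream, STLPktBuilder, STLTXCont,
                          STLScVmRaw, STLVmFlowVar, STLVmWrFlowVar)
from scapy.all import Ether, IP, UDP

# Random 16-bit flow variable written into UDP.sport on every packet.
# Note: the UDP checksum is left untouched in this sketch.
vm = STLScVmRaw([
    STLVmFlowVar(name='sport', min_value=0, max_value=65535, size=2, op='random'),
    STLVmWrFlowVar(fv_name='sport', pkt_offset='UDP.sport'),
])

pkt = STLPktBuilder(pkt=Ether() / IP(src='16.0.0.1', dst='48.0.0.1') /
                        UDP(sport=1025, dport=12) / ('x' * 20),
                    vm=vm)

stream = STLStream(packet=pkt, mode=STLTXCont(pps=1000))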

TRex STF silently fails to properly send complete long flow if pkts exceed 9K-bytes JUMBO size

Here's an interesting issue I've stumbled over.

(actually it's not super new, I posted about it on Google Groups some time ago --> https://groups.google.com/forum/#!topic/trex-tgn/87piZtvlgUY)

Have since conducted a bit more investigation to prove that it's not related to "total size" or "too many packets".

To start, consider "what works" - I understand that TRex STF supports up to 65k packets per source flow. I validated that based on a source capture (simple HTTP_GET) with ~50k packets, >400sec duration, and >50MB, TRex handles this fine!

For baseline purposes, here is the tshark analysis of a capture taken on our DUT, from the TRex output of this flow:


======================================
| IO Statistics                      |
|                                    |
| Duration: 414.4 secs               |
| Interval: 414.4 secs               |
|                                    |
| Col 1: Frames and bytes            |
|------------------------------------|
|                |1                  |
| Interval       | Frames |   Bytes  |
|------------------------------------|
|   0.0 <> 414.4 |  57561 | 53800122 |
======================================

Nice right?

(For reference, that 50MB flow is the exact same as this 1MB flow https://github.com/mcallaghan-sandvine/trex-core/blob/issue_143_temp_files/issue_143_temp_files/http_get_1-1mbytes.zip, but with similar data repeat patterns forged into the flow).

OK now to the issue. I was attempting to validate TRex sanity with a real-world, long-lived flow, from a very popular application on the Internet -- Netflix. Took a capture of a real-live system, connecting to Netflix, starting a stream, and stopping a stream.

I then eyeballed (using wireshark) to find the "major" flow which contained the most data, and exported specifically that flow using TCP-follow-stream functionality in wireshark. The duration of that primary stream data was about 50 seconds, contains ~8.7k frames, and totals ~25MB -- all within TRex's capabilities according to the above baseline.

Here it is:
https://github.com/mcallaghan-sandvine/trex-core/blob/issue_146_temp_files/issue_146_temp_files/netflix_mcallaghan_attempt_twoB.pcap

$ tshark -Q -z io,stat,60 -r (SNIP_PATH)/netflix_mcallaghan_attempt_twoB.pcap

====================================
| IO Statistics                    |
|                                  |
| Duration: 50.7 secs              |
| Interval: 50.7 secs              |
|                                  |
| Col 1: Frames and bytes          |
|----------------------------------|
|              |1                  |
| Interval     | Frames |   Bytes  |
|----------------------------------|
|  0.0 <> 50.7 |   8773 | 25567625 |
====================================

As per the google group references I showed, it has a standard "new flow" TCP sequence:

    1   0.000000      1.0.0.6 � 209.148.214.201 TCP 74 42952 � 443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=3715450010 TSecr=0 WS=128
    2   0.019439 209.148.214.201 � 1.0.0.6      TCP 74 443 � 42952 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=4052429818 TSecr=3715450010
    3   0.019470      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=3715450030 TSecr=4052429818
    4   0.019703      1.0.0.6 � 209.148.214.201 SSL 627 Client Hello
    5   0.036503 209.148.214.201 � 1.0.0.6      TCP 66 [TCP Window Update] 443 � 42952 [ACK] Seq=1 Ack=1 Win=1049600 Len=0 TSval=4052429835 TSecr=3715450030
    6   0.037367 209.148.214.201 � 1.0.0.6      TLSv1.2 177 Server Hello, Change Cipher Spec
    7   0.037388      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=562 Ack=112 Win=29312 Len=0 TSval=3715450048 TSecr=4052429836
    8   0.037405 209.148.214.201 � 1.0.0.6      TLSv1.2 111 Hello Request, Hello Request
    9   0.037412      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=562 Ack=157 Win=29312 Len=0 TSval=3715450048 TSecr=4052429836
   10   0.037567      1.0.0.6 � 209.148.214.201 TLSv1.2 117 Change Cipher Spec, Hello Request, Hello Request
...
(snip)
...

And its midflow contains all expected streaming data (TCP+TLSv1.2)

...
(snip)
...
 4991   2.259324      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=9576 Ack=14583032 Win=2025216 Len=0 TSval=3715452270 TSecr=4052432048
 4992   2.259482 209.148.214.201 � 1.0.0.6      TCP 5858 [TCP segment of a reassembled PDU]
 4993   2.259488      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=9576 Ack=14588824 Win=2025216 Len=0 TSval=3715452270 TSecr=4052432048
 4994   2.259710 209.148.214.201 � 1.0.0.6      TCP 2962 [TCP segment of a reassembled PDU]
 4995   2.259717      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=9576 Ack=14591720 Win=2025216 Len=0 TSval=3715452270 TSecr=4052432048
 4996   2.259875 209.148.214.201 � 1.0.0.6      TLSv1.2 2962 Application Data[TCP segment of a reassembled PDU]
 4997   2.259880      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=9576 Ack=14594616 Win=2025216 Len=0 TSval=3715452270 TSecr=4052432048
 4998   2.260039 209.148.214.201 � 1.0.0.6      TCP 4410 [TCP segment of a reassembled PDU]
 4999   2.260046      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=9576 Ack=14598960 Win=2025216 Len=0 TSval=3715452270 TSecr=4052432048
 5000   2.260203 209.148.214.201 � 1.0.0.6      TCP 5858 [TCP segment of a reassembled PDU]
...
(snip)
...

I should also note in the middle of the stream, there are a bunch of TCP duplicate ACKs and other things (but presumably TRex shouldn't care).

And finally, a standard TCP stream close sequence:

...
(snip)
...
 8764  44.913644 209.148.214.201 � 1.0.0.6      TCP 5858 [TCP segment of a reassembled PDU]
 8765  44.913654      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=17753 Ack=24960841 Win=2025216 Len=0 TSval=3715494924 TSecr=4052474700
 8766  44.913840 209.148.214.201 � 1.0.0.6      TLSv1.2 5858 Application Data[TCP segment of a reassembled PDU]
 8767  44.913849      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=17753 Ack=24966633 Win=2025216 Len=0 TSval=3715494924 TSecr=4052474700
 8768  44.914134 209.148.214.201 � 1.0.0.6      TLSv1.2 3565 Application Data
 8769  44.914145      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=17753 Ack=24970132 Win=2025216 Len=0 TSval=3715494925 TSecr=4052474700
 8770  50.671930      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [FIN, ACK] Seq=17753 Ack=24970132 Win=2025216 Len=0 TSval=3715500682 TSecr=4052474700
 8771  50.693945 209.148.214.201 � 1.0.0.6      TCP 66 443 � 42952 [ACK] Seq=24970132 Ack=17754 Win=1049600 Len=0 TSval=4052480488 TSecr=3715500682
 8772  50.694266 209.148.214.201 � 1.0.0.6      TCP 66 443 � 42952 [FIN, ACK] Seq=24970132 Ack=17754 Win=1049600 Len=0 TSval=4052480488 TSecr=3715500682
 8773  50.694274      1.0.0.6 � 209.148.214.201 TCP 66 42952 � 443 [ACK] Seq=17754 Ack=24970133 Win=2025216 Len=0 TSval=3715500705 TSecr=4052480488
...
(snip)
...

So there it is. Now what happens if we send that through TRex STF?

$ cat long_flow.yaml 
- duration : 9999
  generator :
          distribution : "seq"
          clients_start : "4.0.0.1"
          clients_end   : "4.0.0.1"
          servers_start : "5.0.0.1"
          servers_end   : "5.0.20.255"
  cap_ipg  : false
  cap_info :
     - name: netflix_mcallaghan_attempt_twoB.pcap
       w   : 1
       cps : 0.01
       ipg : 10000
       rtt : 10000

(note due to #143 we force ipg/rtt=10ms)

Send it with:

sudo ./t-rex-64 -f ./long_flow.yaml -c 1 -m 1 -d 999

Captured it to a file called trex_netflix_50sec_output.pcap:
https://github.com/mcallaghan-sandvine/trex-core/blob/issue_146_temp_files/issue_146_temp_files/trex_netflix_50sec_output.pcap

And here's the summary output:

$ tshark -Q -z io,stat,60 -r trex_netflix_50sec_output.pcap

===================================
| IO Statistics                   |
|                                 |
| Duration: 0.359 secs            |
| Interval: 0.359 secs            |
|                                 |
| Col 1: Frames and bytes         |
|---------------------------------|
|                |1               |
| Interval       | Frames | Bytes |
|---------------------------------|
| 0.000 <> 0.359 |     27 |  9689 |
===================================

Yikes! It's drastically broken; Missing many pkts and significantly shorter than expected.

$ tshark -r issue_146_temp_files/trex_netflix_50sec_output.pcap 
    1   0.000000      4.0.0.1 → 5.0.0.1      0 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 74 41668 → 443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=3715450010 TSecr=0 WS=128
    2   0.008995      5.0.0.1 → 4.0.0.1      0 Sandvine_19:f8:6b → Sandvine_19:f8:ea TCP 74 443 → 41668 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=4052429818 TSecr=3715450010
    3   0.018995      4.0.0.1 → 5.0.0.1      1 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 41668 → 443 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=3715450030 TSecr=4052429818
    4   0.028996      4.0.0.1 → 5.0.0.1      1 Sandvine_19:f8:6b → Sandvine_19:f8:eb TLSv1 627 Client Hello
    5   0.038997      5.0.0.1 → 4.0.0.1      1 Sandvine_19:f8:6b → Sandvine_19:f8:ea TCP 66 [TCP Window Update] 443 → 41668 [ACK] Seq=1 Ack=1 Win=1049600 Len=0 TSval=4052429835 TSecr=3715450030
    6   0.048999      5.0.0.1 → 4.0.0.1      1 Sandvine_19:f8:6b → Sandvine_19:f8:ea TLSv1.2 177 Server Hello, Change Cipher Spec
    7   0.059001      4.0.0.1 → 5.0.0.1      562 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 41668 → 443 [ACK] Seq=562 Ack=112 Win=29312 Len=0 TSval=3715450048 TSecr=4052429836
    8   0.069002      5.0.0.1 → 4.0.0.1      112 Sandvine_19:f8:6b → Sandvine_19:f8:ea TLSv1.2 111 Encrypted Handshake Message
    9   0.079003      4.0.0.1 → 5.0.0.1      562 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 41668 → 443 [ACK] Seq=562 Ack=157 Win=29312 Len=0 TSval=3715450048 TSecr=4052429836
   10   0.089005      4.0.0.1 → 5.0.0.1      562 Sandvine_19:f8:6b → Sandvine_19:f8:eb TLSv1.2 117 Change Cipher Spec, Encrypted Handshake Message
   11   0.099006      4.0.0.1 → 5.0.0.1      613 Sandvine_19:f8:6b → Sandvine_19:f8:eb TLSv1.2 821 Application Data
   12   0.109008      5.0.0.1 → 4.0.0.1      157 Sandvine_19:f8:6b → Sandvine_19:f8:ea TLSv1.2 642 Application Data
   13   0.119008      5.0.0.1 → 4.0.0.1      733 Sandvine_19:f8:6b → Sandvine_19:f8:ea TCP 1514 [TCP segment of a reassembled PDU]
   14   0.129010      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 41668 → 443 [ACK] Seq=1368 Ack=2181 Win=33280 Len=0 TSval=3715450067 TSecr=4052429852
   15   0.149012      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=9421 Win=47744 Len=0 TSval=3715450067 TSecr=4052429856
   16   0.169015      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=13765 Win=56448 Len=0 TSval=3715450067 TSecr=4052429856
   17   0.179017      5.0.0.1 → 4.0.0.1      13765 Sandvine_19:f8:6b → Sandvine_19:f8:ea TCP 1514 [TCP Previous segment not captured] 443 → 41668 [ACK] Seq=13765 Ack=1368 Win=1049600 Len=1448 TSval=4052429881 TSecr=3715450067 [TCP segment of a reassembled PDU] [TCP segment of a reassembled PDU]
   18   0.199021      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=19557 Win=68096 Len=0 TSval=3715450092 TSecr=4052429881
   19   0.209020      5.0.0.1 → 4.0.0.1      19557 Sandvine_19:f8:6b → Sandvine_19:f8:ea TCP 1514 [TCP Previous segment not captured] 443 → 41668 [ACK] Seq=19557 Ack=1368 Win=1049600 Len=1448 TSval=4052429881 TSecr=3715450067 [TCP segment of a reassembled PDU]
   20   0.229025      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=25349 Win=79616 Len=0 TSval=3715450093 TSecr=4052429881
   21   0.249026      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=31141 Win=91264 Len=0 TSval=3715450093 TSecr=4052429881
   22   0.269029      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=36933 Win=102784 Len=0 TSval=3715450093 TSecr=4052429881
   23   0.289031      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=41277 Win=111488 Len=0 TSval=3715450094 TSecr=4052429881
   24   0.309034      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=44173 Win=117248 Len=0 TSval=3715450114 TSecr=4052429896
   25   0.329037      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=52861 Win=134656 Len=0 TSval=3715450115 TSecr=4052429896
   26   0.349039      4.0.0.1 → 5.0.0.1      1368 Sandvine_19:f8:6b → Sandvine_19:f8:eb TCP 66 [TCP ACKed unseen segment] 41668 → 443 [ACK] Seq=1368 Ack=61549 Win=152064 Len=0 TSval=3715450115 TSecr=4052429896
   27   0.359041      5.0.0.1 → 4.0.0.1      61549 Sandvine_19:f8:6b → Sandvine_19:f8:ea TLSv1.2 1514 [TCP Previous segment not captured] , Ignored Unknown Record

TRex doesn't even complain or anything about this :(

-Per port stats table 
      ports |               0 |               1 
 -----------------------------------------------------------------------------------------
   opackets |              18 |              19      <---- MISSING PKTS OUT!
     obytes |            2635 |           64334   <---- MISSING BYTES OUT!
   ipackets |              19 |              18
     ibytes |           64334 |            2635 
    ierrors |               0 |               0 
    oerrors |               0 |               0 
      Tx Bw |       0.00  bps |       0.00  bps 

-Global stats enabled 
 Cpu Utilization : 0.1  %  0.0 Gb/core 
 Platform_factor : 1.0  
 Total-Tx        :       0.00  bps  
 Total-Rx        :       0.00  bps  
 Total-PPS       :       0.00  pps  
 Total-CPS       :       0.00  cps  

 Expected-PPS    :       0.37  pps  
 Expected-CPS    :       0.01  cps  
 Expected-BPS    :       5.36 Kbps  

 Active-flows    :        0  Clients :        1   Socket-util : 0.0000 %    
 Open-flows      :        1  Servers :     5375   Socket :        0 Socket/Clients :  0.0 
 drop-rate       :       0.00  bps   
 current time    : 12.3 sec  
 test duration   : 986.7 sec  

<CTRL+C>

Would like to determine why TRex chokes on this flow.

This is 100% reproducible

 Version : v2.43
 DPDK version : DPDK 17.11.0
 User    : hhaim
 Date    : Jul 11 2018 , 09:40:05
 Uuid    : 3cb87d62-84d5-11e8-8eba-0006f62b3e88
 Git SHA : e74fc281e57dcd60cac05c2fc43df65967a63671

Compiled with GCC     :   6.2.0
Compiled with glibc   :   2.16 (host: 2.23)

$ uname -r
4.4.0-130-generic
$ cat /etc/issue
Ubuntu 16.04.5 LTS \n \l

Any guidance for how to debug/isolate would be appreciated! Thank you.

TRex wants dest MAC instead of dest IP

Hi (sorry, I made a mistake while writing this issue),

I followed the tutorial "TRex first time configuration" (https://trex-tgn.cisco.com/trex/doc/trex_config_guide.html) and adapted it to my lab. So I launched TRex with ./t-rex-64 -f cap2/dns.yaml -m 1 -d 10 -l 1000, and just below is the output it gives me:

root@76lab-trex-02:/opt/trex/v2.20# sudo ./t-rex-64 -f cap2/dns.yaml -c 1 -m 1 -d 10  -l 1000
Starting  TRex v2.20 please wait  ... 
 set driver name net_vmxnet3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
zmq publisher at: tcp://*:4500
 Number of ports found: 2 
 wait 1 sec .
port : 0 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
port : 1 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
 -------------------------------
RX core uses TX queue number 0 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0  
 -------------------------------
 number of ports         : 2 
 max cores for 2 ports   : 1 
 max queue per port      : 3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
 -- loading cap file cap2/dns.pcap 
Failed resolving dest MAC for default gateway:11.11.11.1 on port 0
root@76lab-trex-02:/opt/trex/v2.20# ./t-rex-64 -f cap2/dns.yaml -m 1  -d 10  -l 1000 --lo --lm 1
Starting  TRex v2.20 please wait  ... 
 set driver name net_vmxnet3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
zmq publisher at: tcp://*:4500
 Number of ports found: 2 
 wait 1 sec .
port : 0 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
port : 1 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
 -------------------------------
RX core uses TX queue number 0 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0  
 -------------------------------
 number of ports         : 2 
 max cores for 2 ports   : 1 
 max queue per port      : 3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
 -- loading cap file cap2/dns.pcap 
Failed resolving dest MAC for default gateway:11.11.11.1 on port 0
root@76lab-trex-02:/opt/trex/v2.20# ./t-rex-64 -f cap2/dns.yaml -m 1  -d 10  -l 1000
Starting  TRex v2.20 please wait  ... 
 set driver name net_vmxnet3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
zmq publisher at: tcp://*:4500
 Number of ports found: 2 
 wait 1 sec .
port : 0 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
port : 1 
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0 
 -------------------------------
RX core uses TX queue number 0 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0  
 -------------------------------
 number of ports         : 2 
 max cores for 2 ports   : 1 
 max queue per port      : 3 
no client generator pool configured, using default pool
no server generator pool configured, using default pool
 -- loading cap file cap2/dns.pcap 
Failed resolving dest MAC for default gateway:11.11.11.1 on port 0
root@76lab-trex-02:/opt/trex/v2.20# 

My TRex /etc/trex_cfg.yaml was generated automatically; I just changed the port_info:


### Config file generated by dpdk_setup_ports.py ###

- port_limit: 2
  version: 2
  interfaces: ['0b:00.0', '1b:00.0']
  port_info:
      - ip: 11.11.11.2
        default_gw: 11.11.11.1
      - ip: 12.12.12.2
        default_gw: 12.12.12.1

  platform:
      master_thread_id: 0
      latency_thread_id: 1
      dual_if:
        - socket: 0
          threads: [2,3,4,5,6,7]

instructions for "basic usage" DNS leverage bp-sim, "-o my.erf" doesn't work

following https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_dns_basic_example

I was quite interested to see the pcap-like output from the example walkthrough!

several minor issues in documentation (going to fix what I can)

but the main problems here are:

  • 32-bit binary no longer exists (switch to 64-bit)
  • "-o" isn't a documented flag in the help output
  • unable to find my.erf anywhere ...

tried:

./bp-sim-64-debug -f cap2/dns_test.yaml -o my.erf -v 3
./bp-sim-64 -f cap2/dns_test.yaml -o my.erf -v 3
./bp-sim-64-debug -f cap2/dns_test.yaml -o ~/my.erf -v 3
./bp-sim-64-debug -f cap2/dns_test.yaml -o my.pcap --pcap -v 3

Also tried w/ and w/out sudo (no difference, since bp-sim doesn't appear to require root permissions).

...
nothing is actually writing the file anyway :(

Here's the full output of the example that SHOULD have worked:

/opt/trex/v2.39]$ ./bp-sim-64-debug -f cap2/dns.yaml -o my.erf -v 3
no client generator pool configured, using default pool
no server generator pool configured, using default pool
 -- loading cap file cap2/dns.pcap
 id name                                     tps      cps       f-pkts f-bytes  duration Mb/sec  MB/sec c-flows PPS    errors flows
 00 cap2/dns.pcap                                1.00      1.00      2      170     0.02    0.00   0.00       0      2    0    1

 00 sum                                          1.00      1.00      2      170     0.02    0.00   0.00       0      2    0    1
 Memory usage
 size_64        : 2
 size_128       : 0
 size_256       : 0
 size_512       : 0
 size_1024      : 0
 size_2048      : 0 
 size_4096      : 0 
 size_8192      : 0 
 size_16384     : 0 
 Total    :     128.00  bytes  376% util due to buckets 
 pkt_id,time,fid,pkt_info,pkt,len,type,is_init,is_last,type,thread_id,src_ip,dest_ip,src_port 
 0 ,0.010000,1,0x22ac5e0,1,73,0,1,0,0,0,10000001,30000001,41668
 1 ,0.020000,1,0x22ab110,2,89,0,0,1,0,0,10000001,30000001,41668
 2 ,2.010000,2,0x22ac5e0,1,73,0,1,0,0,0,10000002,30000002,59073
 3 ,2.020000,2,0x22ab110,2,89,0,0,1,0,0,10000002,30000002,59073
 4 ,3.010000,3,0x22ac5e0,1,73,0,1,0,0,0,10000003,30000003,10942
 5 ,3.019980,3,0x22ab110,2,89,0,0,1,0,0,10000003,30000003,10942
 6 ,4.010000,4,0x22ac5e0,1,73,0,1,0,0,0,10000004,30000004,28347
 7 ,4.019980,4,0x22ab110,2,89,0,0,1,0,0,10000004,30000004,28347
 8 ,5.010000,5,0x22ac5e0,1,73,0,1,0,0,0,10000005,30000005,45752
 9 ,5.019980,5,0x22ab110,2,89,0,0,1,0,0,10000005,30000005,45752
 10 ,6.010000,6,0x22ac5e0,1,73,0,1,0,0,0,10000006,30000006,63157
 11 ,6.019980,6,0x22ab110,2,89,0,0,1,0,0,10000006,30000006,63157
 12 ,7.010000,7,0x22ac5e0,1,73,0,1,0,0,0,10000007,30000007,15026
 13 ,7.019980,7,0x22ab110,2,89,0,0,1,0,0,10000007,30000007,15026
 14 ,8.010000,8,0x22ac5e0,1,73,0,1,0,0,0,10000008,30000008,32431
 15 ,8.019980,8,0x22ab110,2,89,0,0,1,0,0,10000008,30000008,32431
 16 ,9.010000,9,0x22ac5e0,1,73,0,1,0,0,0,10000009,30000009,49836
 17 ,9.020000,9,0x22ab110,2,89,0,0,1,0,0,10000009,30000009,49836

file stats 
=================
 m_total_bytes                           :       1.49 Kbytes 
 m_total_pkt                             :      18.00  pkt 
 m_total_open_flows                      :       9.00  flows
 m_total_pkt                             : 18
 m_total_open_flows                      : 9
 m_total_close_flows                     : 9
 m_total_bytes                           : 1530


normal
-------------
 min_delta  : 10 usec
 cnt        : 0
 high_cnt   : 0
 max_d_time : 0 usec
 sliding_average    : 0 usec
 precent    : -nan %
 histogram
 -----------
 d time = 23l 100l

I note several differences from the documentation, but the major one is that in the docs (using the 32-bit version), the output includes:
Generating erf file ...
whereas in mine, that line does not appear.

T-Rex "Statefullness" issues on Intel X540 ?

First point: I am not sure if this is caused by my NIC cards or by something else, but I also tried the provided examples from the trex install folders, and most of them show similar behaviour of TCP packets passing through the DUT out of sequence.

Background: in preparation for testing firewall throughput, I am building some pcap/yaml files and my own SRT with 30k sessions. Unfortunately, I am running into trouble with the firewall declaring many streams as errors, with out-of-order TCP or non-SYN first-packet errors. Even the most basic hello-world test shipped with trex, 'http_short.yaml', and several others have trouble in my setup. For example, the simulated TCP session malforms even the provided http_get.pcap example, producing something like this:
screenshot of wireshark showing out-of-sequence packets

Here is the pcap file as seen by the tested device (filtered tcp stream 0) with TCP acks out-of-sequence
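To make the ordering easy to inspect outside of Wireshark, here is a minimal sketch (assuming scapy is installed; "capture.pcap" is a hypothetical filename standing in for the capture above) that simply prints each TCP packet's seq/ack and flags so out-of-order ACKs stand out:

# Minimal sketch: dump per-packet TCP seq/ack ordering from a capture.
# Assumes scapy; "capture.pcap" is a hypothetical filename.
from scapy.all import rdpcap, IP, TCP

pkts = rdpcap("capture.pcap")
for i, p in enumerate(pkts):
    if IP in p and TCP in p:
        t = p[TCP]
        print("%4d %.6f %s:%d -> %s:%d seq=%d ack=%d flags=%s" % (
            i, float(p.time), p[IP].src, t.sport, p[IP].dst, t.dport,
            t.seq, t.ack, t.sprintf("%TCP.flags%")))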

Topology: I reduced my topology to two servers connected back-to-back with two 10G Ethernet copper cables (a closed loop of the T-Rex and DUT boxes); one is the T-Rex generator and the second is a simple Linux box with routing enabled, where I capture with tcpdump to see the traffic generated above.

Is there any elementary mistake in my config that would cause T-Rex to ignore proper TCP seq/ack handling and send ACK packets out of port 1 before port 1 has actually received the packet with the corresponding SEQ?

These are my Intel X540 NICs:
0000:06:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic

Here is my YAML:

- duration : 10
  generator :  
          distribution : "seq"
          clients_start : "16.0.0.1"
          clients_end   : "16.0.0.255"
          servers_start : "48.0.0.1"
          servers_end   : "48.0.255.255"
          clients_per_gb : 201
          min_clients    : 101
          dual_port_mask : "1.0.0.0" 
          tcp_aging      : 10
          udp_aging      : 10
  cap_ipg    : true
  cap_info : 
     - name: cap2/http_get.pcap
       cps : 1
       ipg : 1000
       rtt : 1000
       w   : 1
       limit: 1

Here is my ports config:

- port_limit: 2
  version: 2
  interfaces: ['09:00.0', '06:00.0']
  port_info:
      - ip: 4.4.4.100
        default_gw: 4.4.4.5
      - ip: 192.168.0.100
        default_gw: 192.168.0.5

  platform:
      master_thread_id: 0
      latency_thread_id: 6
      dual_if:
        - socket: 0
          threads: [1,2,3,4,5,12,13,14,15,16,17]

T-rex starting issue

Hi there

I have set up a new server instance for t-rex.
I installed Ubuntu 16.04 Server 64-bit and CentOS 7.2 Minimal 64-bit for t-rex for testing.
After the NIC bind script successfully built the configuration, I tried to start t-rex. I get the following errors on both operating systems:

[root@rcd-tlg-01 scripts]# ./t-rex-64 -c
./t-rex-64: line 47: ./_t-rex-64: No such file or directory
[root@rcd-tlg-01 scripts]# ./t-rex-64
./t-rex-64: line 47: ./_t-rex-64: No such file or directory
[root@rcd-tlg-01 scripts]# ./t-rex-64 -i
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
./t-rex-64: line 47: ./_t-rex-64: No such file or directory

I'm using version v2.22 of t-rex. Are there any Linux packages that need to be installed for t-rex?

Cheers

Akai

document project expectations and guidelines for contributing (commits, pull requests)

as per https://groups.google.com/forum/#!topic/trex-tgn/nd6cKqqDq1Q

this project should probably have some guidelines and expectations for how one can contribute.
I am willing to help get this done, but I need to gather complete information first.

I propose that it be added directly to
https://trex-tgn.cisco.com/trex/doc/index.html
or to
https://github.com/cisco-system-traffic-generator/trex-core/wiki

I will append comments with "things to add".

cache_size field question

Hi all,

I am using TRex STL traffic profiles to test some VNFs' performance. In the following piece of code (part of scripts/stl/udp_1pkt_repeat_random.py), I don't really understand what cache_size actually is. Does it cache 255 packets or 255 bytes? What is the maximum value it accepts?

 vm = STLScVmRaw( [ STLVmFlowVar ( "ip_src",
                                   min_value="10.0.0.1",
                                   max_value="10.0.0.255",
                                   size=4,
                                   step=1,
                                   op="inc"),
                    STLVmWrFlowVar (fv_name="ip_src",
                                    pkt_offset= "IP.src" ),  # write ip to packet IP.src
                    STLVmFixIpv4(offset = "IP")              # fix checksum
                  ],
                  split_by_field = "ip_src",
                  cache_size = 255  # cache the packets, much better performance
                )

Thank you in advance,
Dimitra
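For context, here is a minimal, self-contained sketch of how a VM like the one above is typically attached to a stream (assuming the standard TRex STL Python API; the base packet and rate are illustrative, not taken from the original profile):

# Minimal sketch, assuming the standard TRex STL Python API (trex_stl_lib).
# The UDP base packet and the pps rate are illustrative only.
from trex_stl_lib.api import *

def build_stream():
    base_pkt = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(dport=12, sport=1025)

    vm = STLScVmRaw([STLVmFlowVar("ip_src",
                                  min_value="10.0.0.1",
                                  max_value="10.0.0.255",
                                  size=4, step=1, op="inc"),
                     STLVmWrFlowVar(fv_name="ip_src", pkt_offset="IP.src"),  # write var into IP.src
                     STLVmFixIpv4(offset="IP")],                             # fix IPv4 checksum
                    split_by_field="ip_src",
                    cache_size=255)  # same parameter as in the profile snippet above

    return STLStream(packet=STLPktBuilder(pkt=base_pkt, vm=vm),
                     mode=STLTXCont(pps=1000))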

Starting TRex fail due to EAL error

Hi,
I'm trying to use TRex, but when I start the binary with "t-rex-64 -i", I get the following error:

EAL: Can only reserve 3850 pages from 4096 requested
Current CONFIG_RTE_MAX_MEMSEG=256 is not enough
Please either increase it or request less amount of memory.
EAL: FATAL: Cannot init memory

I attach the procedure used to build TRex, the "trex_cfg.yaml" generated by the "dpdk_setup_ports.py -i" script, and the NIC used by TRex.
Also, I would like to know whether the number of reserved pages can be decreased so that TRex can start, and how.

Thanks,

trex-procedure.txt

Create vagrant box

Currently there is an OVA with trex that is great for exploring the tool. It would also be awesome to have a Vagrant box, so you can use the image to set up different scenarios and test the scripts before using them in production.

I can help with automating the image creation and publishing it in Atlas.

Unable to make multiple consecutive start/stop traffic requests

Hi @hhaim @imarom

I'm finishing the apps (https://github.com/zverevalexei/killer-app-console, https://github.com/zverevalexei/trex-http-proxy), and I've just run into an issue with multiple consecutive start/stop requests: when I make them too quickly, I keep getting a lot of errors in my console, like these:

Traceback (most recent call last):
File "/home/trex/v2.03/http-proxy/trex-http-proxy/app.py", line 37, in start_traffic
src_n=traffic_config['src_n']
File "/home/trex/v2.03/http-proxy/trex-http-proxy/trex_api.py", line 116, in start_traffic
client.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1017, in wrap2
ret = f(_args, *_kwargs)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1380, in connect
rc = self.__connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 785, in __connect
rc = self._transmit("api_sync", params = {'api_vers': self.api_vers}, api_class = None)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 932, in _transmit
return self.comm_link.transmit(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 428, in transmit
return self.rpc_link.invoke_rpc_method(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 120, in invoke_rpc_method
id, msg = self.create_jsonrpc_v2(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 101, in create_jsonrpc_v2
msg["id"] = next(self.id_gen)
ValueError: generator already executing


Unhandled exception in thread started by <function start_traffic at 0x7f5dc7b368c0>


Traceback (most recent call last):
File "/home/trex/v2.03/http-proxy/trex-http-proxy/app.py", line 37, in start_traffic
Unhandled exception in thread started by <function start_traffic at 0x7fe3da0228c0>
Traceback (most recent call last):
File "/home/trex/v2.03/http-proxy/trex-http-proxy/app.py", line 37, in start_traffic
src_n=traffic_config['src_n']
File "/home/trex/v2.03/http-proxy/trex-http-proxy/trex_api.py", line 116, in start_traffic
client.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1017, in wrap2
ret = f(_args, *_kwargs)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1380, in connect
src_n=traffic_config['src_n']
rc = self.__connect()
File "/home/trex/v2.03/http-proxy/trex-http-proxy/trex_api.py", line 116, in start_traffic
client.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1017, in wrap2
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 795, in __connect
ret = f(_args, *_kwargs)
rc = self._transmit("get_version")
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 932, in _transmit
return self.comm_link.transmit(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 428, in transmit
return self.rpc_link.invoke_rpc_method(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 122, in invoke_rpc_method
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1380, in connect
return self.send_msg(msg)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 159, in send_msg
response = self.send_raw_msg(buffer)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 202, in send_raw_msg
rc = self.__connect()
response = self.socket.recv()
File "zmq/backend/cython/socket.pyx", line 631, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5772)
File "zmq/backend/cython/socket.pyx", line 662, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5548)
File "zmq/backend/cython/socket.pyx", line 96, in zmq.backend.cython.socket._check_closed (zmq/backend/cython/socket.c:1297)
zmq.error. File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 785, in __connect
ZMQError: Socket operation on non-socket
rc = self._transmit("api_sync", params = {'api_vers': self.api_vers}, api_class = None)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 932, in _transmit
return self.comm_link.transmit(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 428, in transmit
return self.rpc_link.invoke_rpc_method(method_name, params, api_class)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 122, in invoke_rpc_method
return self.send_msg(msg)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 159, in send_msg
response = self.send_raw_msg(buffer)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 190, in send_raw_msg
self.socket.send(msg)
File "zmq/backend/cython/socket.pyx", line 574, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5434)
File "zmq/backend/cython/socket.pyx", line 611, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5118)
File "zmq/backend/cython/socket.pyx", line 96, in zmq.backend.cython.socket._check_closed (zmq/backend/cython/socket.c:1297)
zmq.error.ZMQError: Socket operation on non-socket


File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 428, in transmit
return self.rpc_link.invoke_rpc_method(method_name, params, api_class)
Unhandled exception in thread started by <function start_traffic at 0x7f2d157308c0>
Traceback (most recent call last):
File "/home/trex/v2.03/http-proxy/trex-http-proxy/app.py", line 37, in start_traffic
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 122, in invoke_rpc_method
src_n=traffic_config['src_n']
File "/home/trex/v2.03/http-proxy/trex-http-proxy/trex_api.py", line 116, in start_traffic
client.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1017, in wrap2
ret = f(_args, *_kwargs)
return self.send_msg(msg)
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 1380, in connect
rc = self.__connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 777, in __connect
rc = self.comm_link.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_client.py", line 415, in connect
return self.rpc_link.connect()
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 281, in connect
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 159, in send_msg
rc = self.invoke_rpc_method('ping', api_class = None)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 122, in invoke_rpc_method
return self.send_msg(msg)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 159, in send_msg
response = self.send_raw_msg(buffer)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 202, in send_raw_msg
response = self.socket.recv()
response = self.send_raw_msg(buffer)
File "zmq/backend/cython/socket.pyx", line 631, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5772)
File "trex_client/stl/trex_stl_lib/trex_stl_jsonrpc_client.py", line 190, in send_raw_msg
self.socket.send(msg)
File "zmq/backend/cython/socket.pyx", line 665, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5572)
File "zmq/backend/cython/socket.pyx", line 139, in zmq.backend.cython.socket._recv_copy (zmq/backend/cython/socket.c:1725)
File "zmq/backend/cython/checkrc.pxd", line 18, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/socket.c:6184)
zmq.error.ContextTerminated: Context was terminated
File "zmq/backend/cython/socket.pyx", line 574, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5434)
False
File "zmq/backend/cython/socket.pyx", line 611, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5118)
File "zmq/backend/cython/socket.pyx", line 96, in zmq.backend.cython.socket._check_closed (zmq/backend/cython/socket.c:1297)


many more...
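One way to avoid these overlapping calls on the caller side is to serialize all access to a single client instance; a minimal sketch, assuming the standard trex_stl_lib STL client API (server address, rate, and function names here are illustrative):

# Minimal sketch: serialize start/stop requests against one STL client so
# concurrent HTTP handlers never drive the client (and its ZMQ socket) in
# parallel. Assumes the standard trex_stl_lib API; names are illustrative.
import threading
from trex_stl_lib.api import STLClient

client = STLClient(server="127.0.0.1")
client.connect()

client_lock = threading.Lock()

def start_traffic(rate="10000pps", duration=30, ports=(0, 1)):
    with client_lock:                      # one request at a time touches the client
        client.reset(ports=list(ports))
        client.start(ports=list(ports), mult=rate, duration=duration)

def stop_traffic(ports=(0, 1)):
    with client_lock:
        client.stop(ports=list(ports))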

Unicode is not accepted

Hi @hhaim

I sent the number of PPS to TRex over the HTTP API, and it happened to be encoded in UTF-8. The Unicode value turned out to confuse TRex: when the UTF-8-encoded pps string was sent to the traffic generator, it threw the error below:

Error at t_rex_stateless.py:296 - 'client.start(ports=[0, 1], mult=rate, duration=duration)'

specific error:

Argument: 'mult' invalid value: '10000pps'

I believe a try/except block catching ValueError should be enough to avoid this.
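For example, a minimal sketch of the kind of guard I have in mind on the proxy side (function and variable names here are illustrative, not taken from the actual proxy code):

# Minimal sketch: coerce the HTTP-supplied pps value to a plain native string
# before handing it to client.start(), and raise a clean error otherwise.
def start_with_rate(client, raw_pps, ports=(0, 1), duration=60):
    try:
        pps = int(raw_pps)          # works for both str and unicode input
    except (TypeError, ValueError):
        raise ValueError("invalid pps value: %r" % (raw_pps,))
    mult = "%dpps" % pps            # plain str, e.g. "10000pps"
    client.start(ports=list(ports), mult=mult, duration=duration)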

what is the purpose of defining ipg and rtt per cap when they're all the same

As per: https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_global_traffic_yaml_section

  cap_ipg    : true                            3
  cap_ipg_min    : 30                          4
  cap_override_ipg    : 200                    5

  | true (default) indicates that the IPG is taken from the cap file (also  taking into account cap_ipg_min and cap_override_ipg if they exist).  false indicates that IPG is taken from per template section.
  | The following two options can set the min ipg in microseconds: (if (pkt_ipg<cap_ipg_min) { pkt_ipg=cap_override_ipg} )

and,
https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_per_template_section

       ipg : 10000         3
       rtt : 10000         4

  | If the global section of the YAML file includes cap_ipg    : false, this line sets the inter-packet gap in microseconds.
  | Should be set to the same value as ipg (microseconds).
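Putting the two quotes together, my reading of the semantics is roughly the following (a sketch based only on the quoted documentation, not on the code):

# Sketch of the documented IPG selection, as I read it (all values in microseconds).
def effective_ipg(pkt_ipg_from_cap, template_ipg,
                  cap_ipg=True, cap_ipg_min=None, cap_override_ipg=None):
    if not cap_ipg:
        return template_ipg                       # per-template 'ipg' is used
    ipg = pkt_ipg_from_cap                        # taken from the pcap itself
    if cap_ipg_min is not None and ipg < cap_ipg_min:
        ipg = cap_override_ipg                    # clamp small gaps to the override
    return ipg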

--
My confusion lies in the traffic profiles shipped by default. Why do many of them explicitly say cap_ipg true, and then proceed to have the SAME values in all template definitions?
example:

$ egrep "name|ipg|rtt" avl/sfr_branch_profile_delay_10.yaml 
  cap_ipg    : true
  #cap_ipg_min    : 30
  #cap_override_ipg    : 200
     - name: avl/delay_10_http_get_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_http_post_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_https_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_http_browsing_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_exchange_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_mail_pop_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_mail_pop_1.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_mail_pop_2.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_oracle_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_rtp_160k_full.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_rtp_250k_full.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_smtp_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_smtp_1.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_smtp_2.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_video_call_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_sip_video_call_full.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_citrix_0.pcap
       ipg : 10000
       rtt : 10000
     - name: avl/delay_10_dns_0.pcap
       ipg : 10000
       rtt : 10000

In summary:

  1. Why do the shipped profiles say "use ipg from cap files", but then include the same values in all template sections?
  2. Why does rtt need to be set to the same value as ipg in the template section? (What does it actually do?)

Race condition in NAT learning mode

I am trying to use t-rex v2.28 to test my NAT implementation (it is an example in the YANFF project). When I run t-rex like this, I almost always get a failed assertion with the release build, and quite often with the debug build:

./t-rex-64-debug -f cap2/myimix1.yaml -c 8 -m 50 -d 30 --learn-mode 1

The assertion stack trace looks like this:

assert: ../../src/bp_sim.h:4159 void CFlowGenListPerThread::associate(uint32_t, CGenNode*) Assertion 'm_flow_id_to_node_lookup.lookup(fid)==0' failed.

*** traceback follows ***

1             0x609083 __assert_fail + 768
2             0x5c06c3 CFlowGenListPerThread::associate(unsigned int, CGenNode*) + 73
3             0x5ab54c CCapFileFlowInfo::generate_flow(CTupleTemplateGeneratorSmart*, CNodeGenerator*, double, unsigned long, CFlowYamlInfo*, CGenNode*) + 702
4             0x5c0b5b CFlowGeneratorRecPerThread::generate_flow(CNodeGenerator*, double, unsigned long, CGenNode*) + 75
5             0x5b334d CFlowGenListPerThread::generate_flows_roundrobin(bool*) + 319
6             0x5c5f91 int CNodeGenerator::flush_file_realtime<24, false>(double, double, CFlowGenListPerThread*, double&) + 843
7             0x5b23bf CNodeGenerator::flush_file(double, double, bool, CFlowGenListPerThread*, double&) + 215
8             0x5b42fb CFlowGenListPerThread::start_generate_stateful(std::string, CPreviewMode&) + 1179
9             0x5ee597 CGlobalTRex::run_in_core(unsigned char) + 613
10            0x5f048c ./_t-rex-64-debug() [0x5f048c]
11            0x93e0ff eal_thread_loop + 738
12      0x7f6f7505736d /lib64/libpthread.so.0(+0x736d) [0x7f6f7505736d]
13      0x7f6f740bfb8f clone + 63

I attached the YAML config file and the pcap templates that I use.
imix.zip

bpf filter for IPv6 is not enabled

Hi,
The IPv6 BPF filter did not work when capturing packets.
I checked the current TRex libbpf: "INET6" is not defined and it is not built by default.
I added it to the build flags (-DINET6) and it worked.
I'm not sure why it's not enabled by default; is there any special reason for it?
If there is no special reason, shall I enable this build flag?

TRex/linux_dpdk/ws_main.py
...
    # build the BPF as a shared library
    bld.shlib(features = 'c',
              includes = bpf_includes_path,
              cflags   = cflags + ['-DSLJIT_CONFIG_AUTO=1', '-DINET6'],
              source   = bpf.file_list(top),
              target   = build_obj.get_bpf_target())

stateful control over rtt+ipg degrades <=10ms (results in pkts within a flow sent too quickly with the same timestamp)

== Context ==

I have been evaluating the accuracy of TRex stateful mode (for the primary use case of traffic profiles), and this report focuses on rtt/ipg (a continuation of #142 and #137). EDIT/CLARITY: this was not for Advanced Stateful (ASTF), just normal stateful.

The goal was to assess how accurately TRex controls rtt/ipg. For these results we kept rtt==ipg for simplicity (despite #137/#142) and executed a series of tests with controlled inputs, varying rtt/ipg.

== Method ==

  • Config: stateful, single client, 1 connection per second
  • Traffic: IPv4, simple (read: standard) TCP flow HTTP_GET
  • ipg/rtt: varied: 100ms, 10ms, 1ms, 100us, 10us

Using TRex stateful mode, YAML config, and .pcap, send a single TCP flow, 1cps, for 30s, capture all packets at machine-in-the-middle between TRex client/server ports. Using tshark, extract the tcp.time_delta for every packet (excluding first SYN packet), and observe the accuracy of time_delta compared to the requested ipg/rtt configuration. Calculate standard deviation and coefficient of variation to compare across all sample sets.

== Summary ==

*Note: "accuracy tolerance" is undefined as far as I can tell from TRex documentation, so this report is predicated on my expectations of tolerance and evaluated intuitively whether or not to be acceptable within margins. (none-the-less, raw data is supplied so that the core team can make their own assessments and conclusions)

High level takeaway for accuracy:

  1. 100ms is within tolerance (<1% CV)
  2. 10ms starts to show issues, though possibly "acceptable at aggregate scale" (~5-7% CV)
  3. 1ms unacceptable (~55% CV)
  4. <1ms is unusable (CV explodes; see the raw data)

The "issues" observed here is that as rtt/ipg is lowered, the probability that TRex sends >1 pkt within a flow at the same time increases. So much so, that below 1ms rtt/ipg, MOST packets within a flow are sent at the same time, rendering the tool's output useless.

== Raw Data ==

TRex configuration:

~/trex/v2.43$ cat ./one_flow.yaml
- duration : 9999
  generator :
          distribution : "seq"
          clients_start : "4.0.0.1"
          clients_end   : "4.0.0.1"
          servers_start : "5.0.0.1"
          servers_end   : "5.0.20.255"
  cap_ipg    : false
  cap_info :
     - name: trex-temp/v4_TCP_http_get_foo.cap
       w   : 1
       cps : 1
       ipg : X
       rtt : X

(the duration is overridden at the CLI; the client is always the same, the client source port changes, and the server IP changes)

TRex invocation:

sudo ./t-rex-64 -f ./one_flow.yaml -c 1 -m 1 -d 30

(single flow, single core, 1x multiplier, send for 30 seconds so ~30x instances of the flow)

Source Flow (pcap):
The flow used for this test is a standard (read: typical) IPv4 TCP HTTP_GET flow, specifically for YouTube (the flow used is attached). Note that the timestamps in that flow are all 0.000000; this is due to how we generated (forged) the flow. This detail should be irrelevant, however, since we are having TRex control rtt/ipg per the template configuration (which is proven to work fine in the 100ms scenario), and it ignores the packet timestamps in the file anyway.

(had trouble attaching it, simplistic view:)

tshark Data Gather:

From the capture of each test iteration, I used the following tshark invocation to parse it, and the output was dumped into a LibreOffice spreadsheet for mathematical processing.

tshark -r ./trex_tcp_sample_10us_rtt_C.pcap -Y 'ip.addr == 4.0.0.1' -T fields -e tcp.time_delta -Y "!(tcp.flags == 0x002)"

This filters on the client IP (though that's all we sent from anyway), displays tcp.time_delta (our primary target for analysis), and excludes the first SYN packet of each flow, because its time delta is always zero and is not of interest to our analysis.
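For reproducibility, the same statistics can be computed without a spreadsheet; a minimal sketch, assuming the tshark output above is redirected to a text file ("deltas.txt" is a hypothetical name):

# Minimal sketch: mean, standard deviation and CV (relative standard deviation)
# of the tcp.time_delta values dumped by the tshark command above.
# "deltas.txt" is a hypothetical filename for the redirected tshark output.
import statistics

with open("deltas.txt") as f:
    deltas = [float(line) for line in f if line.strip()]

mean = statistics.mean(deltas)
stdev = statistics.stdev(deltas)
cv = (stdev / mean * 100.0) if mean else float("inf")   # avoid divide-by-zero

print("samples=%d mean=%.9f stdev=%.9f CV=%.2f%%" % (len(deltas), mean, stdev, cv))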

The Math

See attached mathematical_comparisons_of_accuracy_tcp.time_delta.ods. For each series of tests, I conducted THREE iterations of it (A, B, and C). C was ultimately the "most clean and consistent", but A and B show similar results even with smaller datasets.

  • note: no "clean" way to show the summary data in github issue, so just open the ODS file
  • note: the CV (aka relative standard deviation) was so bad for <1ms, that I had to avoid divide-by-zero because mean was 0.0 :( -- so forged it to 0.000000001).

SAMPLESET A (FIRST CAPTURES, RANDOM DURATIONS)

|          | 100ms | 10ms | 1ms | 100us | 10us |
| -- | -- | -- | -- | -- | -- |
| ABSOLUTE | 0.100000000 | 0.010000000 | 0.001000000 | 0.000100000 | 0.000010000 |
| MIN | 0.097110000 | 0.007603000 | 0.000095000 | 0.000000000 | 0.000000000 |
| MAX | 0.102918000 | 0.011907000 | 0.003504000 | 0.001002000 | 0.001001000 |
| AVG | 0.099930085 | 0.010001136 | 0.001016898 | 0.000135169 | 0.000084000 |
| MEAN | 0.100013000 | 0.010001000 | 0.001000000 | 0.00000000 | 0.00000000 |
| STD_DV | 0.000800865 | 0.000691828 | 0.000543267 | 0.000298314 | 0.000203923 |
| RSD | 0.80% | 6.92% | 54.33% | 29831354.44% | 20392332.91% |

SAMPLESET B (SECOND SET, MOSTLY 10s durs, one 20s)

|          | 100ms | 10ms | 1ms | 100us | 10us |
| -- | -- | -- | -- | -- | -- |
| ABSOLUTE | 0.10000000 | 0.01000000 | 0.00100000 | 0.00010000 | 0.00001000 |
| MIN | 0.09841500 | 0.00789800 | 0.00009600 | 0.00000000 | 0.00000000 |
| MAX | 0.10141100 | 0.01290300 | 0.00360200 | 0.00248700 | 0.00100000 |
| AVG | 0.09989641 | 0.00995364 | 0.00101673 | 0.00017895 | 0.00007022 |
| MEAN | 0.10001300 | 0.01000100 | 0.00100000 | 0.00000000 | 0.00000000 |
| STD_DV | 0.00050270 | 0.00052320 | 0.00055397 | 0.00042885 | 0.00017322 |
| RSD | 0.50% | 5.23% | 55.40% | 42884843.49% | 17322230.76% |

SAMPLESET C

|          | 100ms | 10ms | 1ms | 100us | 10us |
| -- | -- | -- | -- | -- | -- |
| ABSOLUTE | 0.10000000 | 0.01000000 | 0.00100000 | 0.00010000 | 0.00001000 |
| MIN | 0.09641300 | 0.00730300 | 0.00009700 | 0.00000000 | 0.00000000 |
| MAX | 0.10271500 | 0.01189900 | 0.00389800 | 0.00100000 | 0.00100100 |
| AVG | 0.09989308 | 0.00993159 | 0.00103373 | 0.00010114 | 0.00005339 |
| MEAN | 0.10001300 | 0.01000100 | 0.00100000 | 0.00000000 | 0.00000000 |
| STD_DV | 0.00085254 | 0.00052742 | 0.00065473 | 0.00025019 | 0.00013419 |
| RSD | 0.85% | 5.27% | 65.47% | 25018892.57% | 13419196.45% |

== Appendix ==

Initially I conducted the analysis on tcp.analysis.ack_rtt; however, this was a big failure, since MANY of the TCP control packets are sent at the same time, so the analysis capabilities are ruined :( Nonetheless, I included the data anyway (attached as mathematical_comparisons_of_accuracy_tcp.analysis.ack_rtt.ods). It was after this that I refocused on time delta, which was more direct to my goal anyway.

Using the TRex DPDK build to test 10G flows: anomalies

I installed two X520-D2 NICs in a Dell R730 server and compiled the linux_dpdk code successfully.
1. Using dpdk_setup_ports.py, I can view the NIC information.

Network devices using DPDK-compatible driver

0000:05:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
0000:05:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=

2. Create a configuration file: /etc/trex_cfg.yaml.
[root@bogon etc]# vim trex_cfg.yaml

- port_limit : 4
  version : 2
  interfaces : ["05:00.0","05:00.1","06:00.0","06:00.1"]
  c : 4
  port_bandwidth_gb : 10
  port_info :
    - dest_mac : [0x00,0x1b,0x21,0x98,0x0c,0xe1]
      src_mac  : [0x00,0x1b,0x21,0x96,0x7a,0x50]
    - dest_mac : [0x00,0x1b,0x21,0x96,0x7a,0x50]
      src_mac  : [0x00,0x1b,0x21,0x98,0x0c,0xe1]
    - dest_mac : [0x00,0x1b,0x21,0x98,0x0c,0xe0]
      src_mac  : [0x00,0x1b,0x21,0x96,0x7a,0x51]
    - dest_mac : [0x00,0x1b,0x21,0x96,0x7a,0x51]
      src_mac  : [0x00,0x1b,0x21,0x98,0x0c,0xe0]

3. Run TRex
[root@bogon scripts]# ./t-rex-64 -f cap2/dns.yaml -c 4 -m 1 -d 100 -l 1000
-Per port stats table
ports | 0 | 1 | 2 | 3


opackets | 8564 | 8564 | 8564 | 8564
obytes | 565268 | 565332 | 565268 | 565332
ipackets | 0 | 0 | 0 | 8564
ibytes | 0 | 0 | 0 | 565268
ierrors | 0 | 0 | 0 | 0
oerrors | 0 | 0 | 0 | 0
Tx Bw | 524.56 Kbps | 524.56 Kbps | 524.56 Kbps | 524.56 Kbps

-Global stats enabled
Cpu Utilization : 0.0 % 5.3 Gb/core
Platform_factor : 1.0
Total-Tx : 2.10 Mbps
Total-Rx : 524.56 Kbps
Total-PPS : 3.97 Kpps
Total-CPS : 0.00 cps
Expected-PPS : 2.00 pps
Expected-CPS : 1.00 cps
Expected-BPS : 1.30 Kbps
Active-flows : 0 Clients : 504 Socket-util : 0.0000 %
Open-flows : 8 Servers : 248 Socket : 8 Socket/Clients : 0.0
drop-rate : 1.57 Mbps
current time : 11.7 sec
test duration : 88.3 sec

-Latency stats enabled
Cpu Utilization : 0.2 %
if| tx_ok , rx_ok , rx check ,error, latency (usec) , Jitter max window
| , , , , average , max , (usec)


0 | 8560, 0, 0, 0, 0 , 0, 0 | 0 0 0 0 0 0 0 0 0 0 0 0 0
1 | 8560, 0, 0, 0, 0 , 0, 0 | 0 0 0 0 0 0 0 0 0 0 0 0 0
2 | 8560, 0, 0, 0, 0 , 0, 0 | 0 0 0 0 0 0 0 0 0 0 0 0 0
3 | 8560, 8560, 0, 0, 3 , 16, 0 | 7 5 5 5 4 4 4 4 5 4 4 4 4

I don't know why this phenomenon appears (only port 3 reports received packets); can you help me solve this problem?

Finally, thank you very much for making the trex project available on GitHub. I'm looking forward to it helping me solve some practical application problems, and I look forward to your reply!

What is the file size limit

Hello, Trex-core Team.

I'm new to using trex.

I want to import my own stream from a pcap file. I tried to push one TCP stream with about 8000 packets through the TRex Stateless GUI, and then I got this error.

sudo ./t-rex-64 -i
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
The ports are bound/configured.
Starting TRex v2.41 please wait ...
i40e_enable_extended_tag(): Does not support Extended Tag
i40e_enable_extended_tag(): Does not support Extended Tag
set driver name net_i40e
driver capability : TCP_UDP_OFFLOAD TSO
Number of ports found: 2
zmq publisher at: tcp://*:4500
wait 1 sec .
port : 0
link : link : Link Up - speed 10000 Mbps - full-duplex
promiscuous : 0
port : 1
link : link : Link Up - speed 10000 Mbps - full-duplex
promiscuous : 0
number of ports : 2
max cores for 2 ports : 1
max queue per port : 3
RX core uses TX queue number 1 on all ports
core, c-port, c-queue, s-port, s-queue, lat-queue
1 0 0 1 0 2
Per port stats table
ports | 0 | 1
opackets | 0 | 0
obytes | 0 | 0
ipackets | 0 | 0
ibytes | 0 | 0
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 0.00 bps | 0.00 bps
Global stats enabled
Cpu Utilization : 0.0 % 0.0 Gb/core
Platform_factor : 1.0
Total-Tx : 0.00 bps
Total-Rx : 0.00 bps
Total-PPS : 0.00 pps
Total-CPS : 0.00 cps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-BPS : 0.00 bps

Active-flows : 0 Clients : 0 Socket-util : 0.0000 %
Open-flows : 0 Servers : 0 Socket : 0 Socket/Clients : -nan
drop-rate : 0.00 bps
current time : 1.7 sec
test duration : 0.0 sec
-Per port stats table
ports | 0 | 1
opackets | 0 | 0
obytes | 0 | 0
ipackets | 0 | 0
ibytes | 0 | 0
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 0.00 bps | 0.00 bps

-Global stats enabled
Cpu Utilization : 0.0 % 0.0 Gb/core
Platform_factor : 1.0
Total-Tx : 0.00 bps
Total-Rx : 0.00 bps
Total-PPS : 0.00 pps
Total-CPS : 0.00 cps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-BPS : 0.00 bps

Active-flows : 0 Clients : 0 Socket-util : 0.0000 %
Open-flows : 0 Servers : 0 Socket : 0 Socket/Clients : -nan
drop-rate : 0.00 bps
current time : 2.2 sec
test duration : 0.0 sec

WATCHDOG: task 'ZMQ sync request-response' has not responded for more than 1.00405 seconds - timeout is 1 seconds

*** traceback follows ***

1 0x55e31305b61a ./_t-rex-64(+0x12761a) [0x55e31305b61a]
2 0x7ff4a8dd86d0 /lib64/libpthread.so.0(+0xf6d0) [0x7ff4a8dd86d0]
3 0x55e31317bb55 TrexStreamsGraph::add_rate_events_for_stream_single_burst(double&, TrexStream*) + 245
4 0x55e31317dfa6 TrexStreamsGraph::add_rate_events_for_stream(double&, TrexStream*) + 1302
5 0x55e3131845d5 TrexStreamsGraph::generate_graph_for_one_root(unsigned int) + 821
6 0x55e313184bfa TrexStreamsGraph::generate(std::vector<TrexStream*, std::allocator<TrexStream*> > const&) + 426
7 0x55e31317729d TrexStatelessPort::generate_streams_graph() + 205
8 0x55e3131777cc TrexStatelessPort::validate() + 876
9 0x55e31319ef2a TrexRpcCmdValidate::_run(Json::Value const&, Json::Value&) + 58
10 0x55e3131428ae TrexRpcCommand::run(Json::Value const&, Json::Value&) + 62
11 0x55e31313ee80 JsonRpcMethod::_execute(Json::Value&) + 48
12 0x55e31313c9f3 TrexJsonRpcV2ParsedObject::execute(Json::Value&) + 131
13 0x55e31313ae0b TrexRpcServerReqRes::process_request_raw(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator >&) + 267
14 0x55e31313b61e TrexRpcServerReqRes::process_zipped_request(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator >&) + 462
15 0x55e31313b9f5 TrexRpcServerReqRes::handle_request(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 469
16 0x55e31313c17a TrexRpcServerReqRes::_rpc_thread_cb_int() + 1306
17 0x55e31313c84b TrexRpcServerReqRes::_rpc_thread_cb() + 11
18 0x7ff4a86e727f so/libstdc++.so.6(+0xba27f) [0x7ff4a86e727f]
19 0x7ff4a8dd0e25 /lib64/libpthread.so.0(+0x7e25) [0x7ff4a8dd0e25]
20 0x7ff4a7e46bad clone + 109
Did I input this correctly? What is the file size limit?

Thank you for the great job,
Anna.

TRex single interface (dummy port) support clarity

Documentation shows support for "dummy ports" as of version 2.38 (great!)
https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_dummy_ports

However, it isn't clear to me whether this is only for stateless mode. Dummy ports can probably be used in any mode, but SINGLE PORT support is probably stateless-only, right? (I cannot understand how stateful or ASTF would actually work properly with only a single port.)

This has been dangling in https://groups.google.com/forum/#!searchin/trex-tgn/dummy$20port%7Csort:date/trex-tgn/jLM5lENichQ/afUUnaMvBgAJ for a while, so I'm creating an issue for it.

Once we have clarity, I am happy to correct the documentation.

TRex manual listings of "traffic profiles include" is stale

check this
https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_traffic_profiles_provided_with_the_trex_package

Was this meant to be a "quick list" of the traffic profiles included?
(It is certainly not complete.)

$ pwd
~/projects/trex-core/scripts
$ find . | egrep ".*yaml$" | egrep -v "automation|lib"
./stl/gui_example.yaml
./avl/test_mac.yaml
./avl/sfr_delay_10_1g_asa_nat.yaml
./avl/sfr_delay_10_1g.yaml
./avl/sfr_delay_10_1g_no_bundeling.yaml
./avl/sfr_branch_profile_delay_10.yaml
./avl/sfr_delay_10_no_bundeling.yaml
./avl/sfr_delay_50_tunnel_no_bundeling.yaml
./avl/mac_uit.yaml
./avl/sfr_delay_10.yaml
./cfg/cfg_example1.yaml
./cfg/trex_07_cfg.yaml
./cfg/simple_cfg.yaml
./cfg/ins2.yaml
./cfg/kiwi02_more_flows.yaml
./cfg/ins3.yaml
./cfg/trex_08_5mflows.yaml
./cfg/trex_advanced_cfg-10g.yaml
./cfg/ucs_h1.yaml
./cfg/xl710.yaml
./cfg/trex_advanced_dont_use_x710-card1.yaml
./cfg/ins1.yaml
./cfg/x710_advance_more_flows.yaml
./cfg/ucs_h0.yaml
./cfg/cfg_example2.yaml
./cap2/cur_flow.yaml
./cap2/cur_flow_single_tw_8.yaml
./cap2/http_very_long.yaml
./cap2/lb_ex1.yaml
./cap2/per_template_gen4.yaml
./cap2/per_template_gen5.yaml
./cap2/rtsp.yaml
./cap2/per_template_gen3.yaml
./cap2/per_template_gen1.yaml
./cap2/http.yaml
./cap2/test_pcap_mode2.yaml
./cap2/ipv6_load_balance.yaml
./cap2/dns_single_server.yaml
./cap2/dns_wlen.yaml
./cap2/sip_short2.yaml
./cap2/short_tcp.yaml
./cap2/rtsp_full2.yaml
./cap2/imix_9k.yaml
./cap2/rtsp_short3.yaml
./cap2/cur_flow_single.yaml
./cap2/imix_64.yaml
./cap2/imix_64_100k.yaml
./cap2/http_simple_ipv6.yaml
./cap2/asa_explot1.yaml
./cap2/imix_64_fast.yaml
./cap2/many_client_example.yaml
./cap2/sfr2.yaml
./cap2/http_simple.yaml
./cap2/dns_wlen1.yaml
./cap2/wrong_ip.yaml
./cap2/dns_one_server.yaml
./cap2/sfr_agg_tcp14_udp11_http200msec_new_high_new_nir_profile.yaml
./cap2/sfr3.yaml
./cap2/test_pcap_mode1.yaml
./cap2/dyn_pyld1.yaml
./cap2/imix_1518.yaml
./cap2/per_template_gen2.yaml
./cap2/http_short.yaml
./cap2/dns_wlen2.yaml
./cap2/jumbo.yaml
./cap2/limit_multi_pkt.yaml
./cap2/ipv4_load_balance.yaml
./cap2/nat_test.yaml
./cap2/sfr.yaml
./cap2/dns_tw.yaml
./cap2/sfr4.yaml
./cap2/dns_no_delay.yaml
./cap2/rtsp_short1.yaml
./cap2/rtsp_short1_slow.yaml
./cap2/sip_short1.yaml
./cap2/dns.yaml
./cap2/imix_9k_burst_10.yaml
./cap2/ipv6.yaml
./cap2/http_plugin.yaml
./cap2/sfr_agg_tcp14_udp11_http200msec_new_high_new_nir_profile_ipg_mix.yaml
./cap2/dns_wlength.yaml
./cap2/cluster_example.yaml
./cap2/limit_single_pkt.yaml
./cap2/rtsp_short2.yaml
./cap2/tuple_gen.yaml
./cap2/rtsp_full1.yaml
./astf/cc_http_simple_src_mac.yaml
./astf/cc_http_simple.yaml
./astf/cc_http_simple2.yaml
./astf/cc_http_simple4.yaml
./astf/cc_http_simple3.yaml
./astf/cc_http_simple5.yaml

Perhaps worse, it also contains some traffic profiles that are stale (they no longer exist):

  • cap2/imix_fast_1g.yaml | imix profile with 1600 flows normalized to 1Gb/sec.
  • cap2/imix_fast_1g_100k_flows.yaml | imix profile with 100k flows normalized to 1Gb/sec.
$ find . | egrep ".*yaml$" | grep imix_fast_1g
./automation/regression/cfg/imix_fast_1g.yaml

--

I can clean this up, but I need some guidance on the intended purpose of the list.

Hi, can trex just support one port?

I use one port to send, do not care the recv ,but warning:

Configuration file /etc/trex_cfg.yaml should include even number of interfaces, got: 1
ERROR encountered while configuring trex system

compilation error due to failed static assertion

When compiling version v2.05 on Ubuntu 16.04, the build fails with the following output:

cd linux_dpdk/
./b configure
...
./b

...

[136/878] cxx: ../src/flow_stat.cpp -> build_dpdk/src/flow_stat.cpp.3.o
In file included from ../../src/trex_watchdog.cpp:22:0:
../../src/trex_watchdog.h:206:1: error: static assertion failed: sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE
 static_assert(sizeof(TrexMonitor) == RTE_CACHE_LINE_SIZE, "sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE" );
 ^
In file included from ../../src/bp_sim.h:61:0,
                 from ../../src/main_dpdk.h:21,
                 from ../../src/debug.cpp:30:
../../src/trex_watchdog.h:206:1: error: static assertion failed: sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE
 static_assert(sizeof(TrexMonitor) == RTE_CACHE_LINE_SIZE, "sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE" );
 ^
In file included from ../../src/bp_sim.h:61:0,
                 from ../../src/main_dpdk.cpp:54:
../../src/trex_watchdog.h:206:1: error: static assertion failed: sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE
 static_assert(sizeof(TrexMonitor) == RTE_CACHE_LINE_SIZE, "sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE" );
 ^
In file included from /mnt/hgfs/shared/trex-core/src/rpc-server/trex_rpc_server_api.h:33:0,
                 from /mnt/hgfs/shared/trex-core/src/stateless/cp/trex_stateless.h:32,
                 from ../../src/flow_stat.cpp:53:
/mnt/hgfs/shared/trex-core/src/trex_watchdog.h:206:1: error: static assertion failed: sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE
 static_assert(sizeof(TrexMonitor) == RTE_CACHE_LINE_SIZE, "sizeof(TrexMonitor) != RTE_CACHE_LINE_SIZE" );
 ^
Waf: Leaving directory `/mnt/hgfs/shared/trex-core/linux_dpdk/build_dpdk'
Build failed
...

The compiler used is 'cpp (Ubuntu 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609' and the system is an 'Intel(R) Core(TM) i5-4288U CPU @ 2.60GHz'.
It seems that RTE_CACHE_LINE_SIZE is 64 but sizeof(TrexMonitor) is 128.
I could also reproduce this on an 'Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz' running an up-to-date Ubuntu 16.04.

Running TREX from Ubuntu VM to Cisco VM

Hello, I have two VMs, Ubuntu on one and Cisco IOS on the other. I installed TRex on the Ubuntu VM. My goal is to generate load from Ubuntu to Cisco. Currently there is only one interface, enp0s3, on Ubuntu, and it is a 1 Gb/s interface.

My config file looks as follows:

- port_limit : 1
  version : 2
  # List of interfaces. Change according to your setup. Use ./dpdk_setup_ports.py -s to see available options.
  interfaces : ["03:00.0"]
  port_info :
    - ip : 192.168.56.2       # IP of the destination Cisco router
      default_gw : 10.0.2.2   # default gateway IP of the Linux VM

When I run the trex command, I receive this error:

Configuration file /etc/trex_cfg.yaml should include even number of interfaces, got: 1
ERROR encountered while configuring trex system

Please let me know if the process and config file look correct and how I can proceed past this error. Thanks.

ASTF Multiple CPU sending packets on wrong port

I found a problem with the way TRex ASTF sends packets when using 2 CPUs instead of 1. I configured 2 sets of ports, 1,2 and 3,4, put the ports in loopback so that I could capture the packets being received via the -v 7 server option, and ran the debug version of the server.
I suspected that TRex was sending data on the wrong port, because in non-loopback mode I would only receive half the packets. To prove this, I captured data with ports 1,2 in loopback and 3,4 disconnected, so that I could capture the data received on ports 1,2.

Based on the config files attached I expected trex to do the following:

Ports 1,2
40.125.1.1 00:00:0a:25:00:01 <-> 11.140.1.1 00:11:01:40:01:01
40.125.1.2 00:00:0a:25:00:02 <-> 11.140.1.2 00:11:01:40:01:02

Ports 3,4
40.125.1.3 00:00:0a:26:00:01 <-> 11.140.1.3 00:11:01:40:01:03
40.125.1.4 00:00:0a:26:00:02 <-> 11.140.1.4 00:11:01:40:01:04

What trex actually sent on ports 1,2
Ports 1,2
40.125.1.1 00:00:0a:25:00:01 <-> 11.140.1.1 00:11:01:40:01:01
40.125.1.3 00:00:0a:26:00:01 <-> 11.140.1.3 00:11:01:40:01:03 #### This should have been sent on Ports 3,4 not Ports 1,2

trex_mcore.zip

TRex HTML documentation is not rendering LaTeX formulas

Consider:
https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_how_to_determine_the_packet_per_second_pps_and_bit_per_second_bps

There are a few formulas in there written in LaTeX using the latexmath macro.

For example:

latexmath:[$Total PPS = \sum_{k=0}^{n}(CPS_{k}\times {flow\_pkts}_{k})$]

This syntax is correct, as it renders correctly in the PDF output:
https://trex-tgn.cisco.com/trex/doc/trex_book.pdf

Perhaps (just wondering) the asciidoc HTML build process currently being used is not building with proper LaTeX support? Maybe add -a math?

--
(note that live editing in GitHub will seemingly NEVER work :( - due to asciidoctor/asciidoctor#492)

CTRexClient.is_running() errors out

Hi
Thanks for creating the best open-source traffic generator. I am just trying to connect to a TRex server and get its status. It worked fine with v2.30 but is failing with v2.34. Any ideas?

pharidos@uks2:~/trex/trex_client_v2.34/$ PYTHONPATH=stl/:stf/ python
Python 2.7.14 (default, Sep 17 2017, 18:50:44)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from trex_stf_lib.trex_client import CTRexClient
>>> trex_server=CTRexClient('10.156.34.208')
>>> trex_server.is_running()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pharidos/trex/trex_client_v2.34/stf/trex_stf_lib/trex_client.py", line 426, in is_running
    res = self.get_running_info()
  File "/home/pharidos/trex/trex_client_v2.34/stf/trex_stf_lib/trex_client.py", line 548, in get_running_info
    self.result_obj.update_result_data(latest_dump)
  File "/home/pharidos/trex/trex_client_v2.34/stf/trex_stf_lib/trex_client.py", line 1518, in update_result_data
    self._avg_latency        = CTRexResult.__avg_all_and_rename_keys(avg_latency)
  File "/home/pharidos/trex/trex_client_v2.34/stf/trex_stf_lib/trex_client.py", line 1601, in __avg_all_and_rename_keys
    all_list  = src_dict.values()
AttributeError: 'NoneType' object has no attribute 'values'
>>> print trex_server.result_obj.get_last_value("trex-latency.data", "avg-")
None
>>>
>>> print trex_server.result_obj
Is valid history?       True
Done warmup?            False
Expected tx rate:       {u'm_tx_expected_pps': 0.0, u'm_tx_expected_bps': 0.0, u'm_tx_expected_cps': 0.0}
Current tx rate:        {u'm_tx_bps': 0.0, u'm_tx_cps': 0.0, u'm_tx_pps': 0.0}
Minimum latency:        {}
Maximum latency:        {}
Average latency:        None
Average window latency: None
Total drops:            None
Drop rate:              None
History size so far:    1
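Until this is handled in the library, here is a minimal client-side workaround sketch that I'm using to avoid the crash (it uses only the methods that appear in the traceback above):

# Minimal sketch: tolerate the missing latency data instead of crashing.
from trex_stf_lib.trex_client import CTRexClient

trex_server = CTRexClient('10.156.34.208')

def is_running_safe(client):
    try:
        return client.is_running()
    except AttributeError:
        # v2.34 raises inside update_result_data() when latency data is absent;
        # report "unknown" instead of propagating the crash.
        return None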

default value "w" (=1) for template, yet it is required

https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_per_template_section

     - name: cap2/dns.pcap 1
       cps : 10.0          2
       ipg : 10000         3
       rtt : 10000         4
       w   : 1             5           <--- this

says:

 Default value: w=1. This indicates to the IP generator how to generate the flows. If w=2, two flows from the same template will be generated in a burst (more for HTTP that has burst of flows). 

Yet without it in the template definition, I get:

yaml-cpp: error at line 11, column 8: key not found: w

:(

So is it mandatory or not?

Could not be compiled successfully on aarch64 platform

[104/957] Compiling ../src/44bsd/tcp_output.cpp
../../external_libs/bpf/bpfjit/sljitLir.c:1743:11: fatal error: sljitNativeARM_64.c: No such file or directory
# include "sljitNativeARM_64.c"
^~~~~~~~~~~~~~~~~~~~~
compilation terminated.

In file "sljitLir.c"
#elif (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64)
# include "sljitNativeARM_64.c"

There is no such file ported for ARM in the folder "external_libs/bpf/bpfjit"; only the x86 variants are present:

trex-core-2.37/external_libs/bpf/bpfjit/sljitNativeX86_32.c
trex-core-2.37/external_libs/bpf/bpfjit/sljitNativeX86_64.c
trex-core-2.37/external_libs/bpf/bpfjit/sljitNativeX86_common.c

'list' type support in flow_var

Hi,
Many other commercial tools support a 'list' type in flow variables.
http://trex-tgn.cisco.com/trex/doc/trex_rpc_server_spec.html#_flow_var
On the TRex side, the flow variable value is selected with inc/dec/random from the {init_value, min_value, max_value} set.

Can I add a 'list' type to this RPC command?
If I add a 'list' type, which option would be the natural and preferred approach in the current TRex design? (A purely hypothetical sketch of both options follows below.)

Option #1: add 'list' as an equivalent of the {init_value, min_value, max_value} set.
The user can use the existing 'op' values inc/dec/random over this 'list' set.

Option #2: add a 'list' op together with the list set.
The user can use this 'list' in the fixed order of the specified 'list' set.
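To make the two options concrete, here is a purely hypothetical sketch of what the RPC-level flow_var object could look like (the 'value_list' field and the 'list' op are invented names for illustration; nothing like this exists in TRex today):

# Purely hypothetical sketch of the two proposed shapes for a 'list' flow_var
# at the JSON-RPC level; field names are invented for illustration only.

# Option #1: 'value_list' replaces {init_value, min_value, max_value};
# the existing 'op' values (inc/dec/random) walk over the list.
flow_var_option1 = {
    "type": "flow_var",
    "name": "ip_src",
    "size": 4,
    "op": "inc",
    "value_list": ["10.0.0.1", "10.0.0.5", "10.0.0.9"],
}

# Option #2: a new op 'list' that emits the values in exactly the order given.
flow_var_option2 = {
    "type": "flow_var",
    "name": "ip_src",
    "size": 4,
    "op": "list",
    "value_list": ["10.0.0.1", "10.0.0.5", "10.0.0.9"],
}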

Client - Server on different VMs on same ESX server

Hi,

I'm trying to set up two instances of trex, with one running as client and the other as server, on separate VMs on the same ESX node.

What I'm finding is that when I run a test, it removes the interface route, so traffic to its local gateway goes down.

I can then manually add it back, which restores connectivity, but it gets removed again when I run trex.

Is this use case supported? Or is it failing because it's trying to ARP out of the physical host interface instead of the virtual adapter I want it to use?

Thanks

OSError installing TRex on CentOS

Hello, Trex-core Team!

I get an error when trying to start trex for the first time.

 sudo ./dpdk_setup_ports.py -s
Traceback (most recent call last):
  File "./dpdk_setup_ports.py", line 1332, in main
    dpdk_nic_bind.show_status()
  File "/opt/trex/v2.41/dpdk_nic_bind.py", line 603, in show_status
    get_nic_details()
  File "/opt/trex/v2.41/dpdk_nic_bind.py", line 281, in get_nic_details
    dev_lines = check_output(["lspci", "-Dvmmn"], universal_newlines = True).splitlines()
  File "/opt/trex/v2.41/dpdk_nic_bind.py", line 158, in check_output
    stderr=stderr, **kwargs).communicate()[0]
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

# hostnamectl
   Static hostname: hw-trex
         Icon name: computer-desktop
           Chassis: desktop
        Machine ID: ...
           Boot ID: ...
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-862.3.2.el7.x86_64
      Architecture: x86-64

What can I do to fix it?

Thanks,
Anna

The counters returned by RPC command "get_port_stats" are always zero

Hi,

I'm trying to fetch the statistics for each port in my application by sending the RPC command "get_port_stats" to the TRex server and then parsing the JSON response to get the counters.

The command is sent and the response is received from the TRex server successfully, but all counter values are 0, while valid counter values are shown in the console log.
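As a cross-check while debugging the raw RPC path, the same counters can be fetched through the official STL Python client; a minimal sketch, assuming the standard trex_stl_lib API (the server address and port list are illustrative):

# Minimal sketch: fetch per-port statistics via the standard STL Python client
# and compare them against the raw get_port_stats JSON response.
from trex_stl_lib.api import STLClient

c = STLClient(server="127.0.0.1")
c.connect()
try:
    stats = c.get_stats(ports=[0, 1])   # per-port dicts plus 'total'/'global' sections
    for port in (0, 1):
        print(port, stats[port].get("opackets"), stats[port].get("ipackets"))
finally:
    c.disconnect()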

TRex documentation and stateful traffic profiles are riddled with "SFR"

As per: https://groups.google.com/forum/#!topic/trex-tgn/_Owo86jrieU

I've noticed that the stateful TRex manual and many of the supplied traffic profiles use the acronym "SFR" throughout.

doc example: https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_traffic_profiles_provided_with_the_trex_package
traffic profile mix example: https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/cap2/sfr.yaml

We do not explain what SFR is. One user on Google Groups noted that SFR is a telecom company in France. (Does this mean these traffic profiles were supplied by that SFR company?)

[New feature suggestion] Capture custom ethernet packets

  1. Motivation
    Some commercial tool users wanted the same feature in TRex, but capturing custom ethernet packets is not possible in current TRex.
    I checked it, and it can be worked around with the "--software" option at TRex start-up, but that caused too high a CPU load under heavy traffic.
    We temporarily added a fixed value for it and it worked, but I think it's better to make this a general TRex feature.
    Therefore, we want to add a new "custom_packet_type" feature on the TRex side.

Can you review whether this is an acceptable design for TRex?

If it's acceptable, we want to push it to TRex.

  2. Syntax
    In trex_cfg.yaml:
    custom_packet_types : { ethernet : "0xABCD" }

  3. Description
    Define custom packet types so that user-defined ethernet packets can be captured.

  4. trex_cfg.yaml vs json_rpc commands?
  • Prefer trex_cfg.yaml because it's simpler and more robust than json_rpc, with no side effects.
    The user does not need to care about the sequence of operations when capturing packets, and TRex does not need to manage the types (add/delete) at run-time.

  5. custom_packet_types vs custom_ethernet_types?
  • Prefer custom_packet_types, so it can be extended to other types like custom IP in the future.

  6. Number of custom packet types: multiple vs single
  • Prefer multiple, because a user can have multiple custom packet types.

  7. Implementation
    1. If custom ethernet types are defined, check the custom ethernet type when loading the profile instead of returning an error with FSTAT_PARSER_E_UNKNOWN_HDR.
       src/flow_stat_parser.cpp

    2. If custom ethernet types are defined, add a custom ethernet type filter in set_rcv_all for all NIC types (1G, 10G, 40G).
       src/main_dpdk.cpp
