mos-networking-stack's Introduction

mOS-networking-stack

mOS networking stack provides a high-performance user-space programming library specifically targeted at building software-based stateful monitoring solutions. Our API offers powerful event-based abstractions that can potentially simplify the codebase of future middleboxes developed on our stack.

We suggest browsing through our example programs (samples/) to see how stateful middleboxes can be built using the mOS networking API.
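
The core abstraction is a passive monitoring socket plus event callbacks. Below is a minimal sketch of that style (an illustration, not one of the bundled samples; it assumes the monitor-socket type, callback signature, and mtcp_peek() semantics described in the programmer's guide at mos.kaist.edu):

#include <mos_api.h>

/* Fired whenever new in-order TCP payload arrives on a monitored flow. */
static void
on_new_data(mctx_t mctx, int sock, int side, uint64_t events, filter_arg_t *arg)
{
        char buf[1024];
        int r;

        /* Drain whatever reassembled payload is currently buffered. */
        while ((r = mtcp_peek(mctx, sock, side, buf, sizeof(buf))) > 0)
                ; /* inspect r bytes of payload here */
}

static void
init_monitor(mctx_t mctx)
{
        /* Passive monitoring socket that tracks TCP connections. */
        int sock = mtcp_socket(mctx, AF_INET, MOS_SOCK_MONITOR_STREAM, 0);

        mtcp_register_callback(mctx, sock, MOS_ON_CONN_NEW_DATA,
                               MOS_NULL, on_new_data);
}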

To download our source, type:

# git clone https://github.com/ndsl-kaist/mOS-networking-stack.git

Pull requests and bug fixes are welcome!

Documentation

Please visit http://mos.kaist.edu/ for more instructions.

Acknowledgment

This project is supported by the ICT Research and Development Program of MSIP/IITP, Korea, under Grant B0101-16-1368, [Development of an NFV-inspired networked switch and an operating system for multi-middlebox services].

mos-networking-stack's People

Contributors

ajamshed, tbarbette, ygmoon

mos-networking-stack's Issues

insmod: ERROR: could not insert module drivers/dpdk-16.04/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko: Invalid module format

I am using Ubuntu 14.04 in a Docker container and am able to build successfully using the ./setup.sh --compile-dpdk command, but when I run ./setup.sh --run-dpdk and select option 2, it fails to load the UIO module with an "Invalid module format" error.

Option: 2
Unloading any existing DPDK UIO module
modinfo: ERROR: Module uio not found.
Loading DPDK UIO module
insmod: ERROR: could not insert module drivers/dpdk-16.04/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko: Invalid module format
## ERROR: Could not load kmod/igb_uio.ko.
./setup.sh: line 319: quit: command not found
# Please load the 'igb_uio' kernel module before querying or
# adjusting NIC device bindings

Example compile error

[root@localhost midstat]# make
CC ../../core/lib/libmtcp.a
/usr/bin/ld: edges: TLS definition in ../../core/lib/libmtcp.a(sf_optimize.o) section .tbss mismatches non-TLS definition in /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libpcap.so section .bss
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libpcap.so: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
make: *** [midstat] Error 1

Hi,
when I use the "./setup.sh --compile-pcap" option, I can compile successfully. But with the "./setup.sh --compile-dpdk" option, I can't compile all of the examples. I'm sure mOS itself compiles successfully.

Problems with more than 23 flows.

I tested mOS with the Moongen packet generator. I used a TCP pcap to test the performance of the sample NAT and the sample simple_firewall.
I used a server with the following configuration:
  • CPU: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
  • Number of CPUs: 1
  • Memory: 16 GB
  • Mainboard: X9SCL/X9SCM
  • Mgmt MAC: 00:25:90:75:4c:16
  • IPMI MAC: 00:25:90:75:49:23
  • NICs: 2x Intel X540 (1x X540-T2)

The server runs Debian Jessie. I used mOS in inline mode. When I send a pcap with more than 23 flows, mOS stops processing the incoming traffic: it is still running, but it no longer shows incoming traffic and no longer forwards any traffic.
This also applies to UDP traffic. I have no idea what could cause this.
Can you help me out? I'm a relative newbie at this, so sorry if something important is missing.

Example for mtcp_peek?

Hi all,

I get "[ mtcp_peek: 297] tcprb_ppeek() failed" when trying to use mtcp_peek. What I want is to read all of the TCP payload, in order.

Is there any example beyond http://mos.kaist.edu/guide/programmer/05_api_example.html ?

I used the nat example, from which I removed the NAT part. I register the callback like this:

 mtcp_register_callback(mctx, sock, MOS_ON_CONN_NEW_DATA, MOS_NULL, callback);

The callback is basically only doing this:

char buf[1024];
int r;
/* repeatedly peek the reassembled in-order payload */
while ((r = mtcp_peek(mctx, sock, side, buf, 1024)) > 0 && r < 1024);

Thanks,
Tom

nat sample MAC address not correct

Hi Asim

I may have misconfigured something for the NAT sample, so here again is my test setup; let me know if I missed anything :)

client <---->dpdk0 mOS dpdk1 <---->server

client (IP 10.0.0.7, MAC 00:1b:21:50:bc:38 )
mOS (dpdk0 10.0.0.9 MAC a0:36:9f:a1:4d:6c, dpdk1 10.0.1.9 MAC A0:36:9F:A1:4D:6D)
server (IP 10.0.0.8, MAC 00:15:60:0e:3d:0a )

1, nat mos.conf

mos {
forward = 1

    #######################
    ##### I/O OPTIONS #####
    #######################
    # number of memory channels per socket [mandatory for DPDK]
    nb_mem_channels = 4

    # devices used for MOS applications [mandatory]
    netdev {
            dpdk0 0x00FF
            dpdk1 0x00FF
    }

    #######################
    ### LOGGING OPTIONS ###
    #######################
    # NICs to print network statistics per second
    # if enabled, mTCP will print xx Gbps and xx pps for RX and TX
    stat_print = dpdk0 dpdk1

    # A directory contains MOS system log files
    mos_log = logs/

    ########################
    ## NETWORK PARAMETERS ##
    ########################
    # This is to configure static arp table
    # (Destination IP address) (Destination MAC address)
    arp_table {
    }

    # This is to configure static routing table
    # (Destination address)/(Prefix) (Device name)
    route_table {
    }

    # This is to configure static bump-in-the-wire NIC forwarding table
    # DEVNIC_A DEVNIC_B ## (e.g. dpdk0 dpdk1)
    nic_forward_table {
            dpdk0 dpdk1
    }
}

2, add the server MAC on the client and the client MAC on the server

client: #arp -i p1p1 -s 10.0.0.8 00:15:60:0e:3d:0a

server:

arp -i eth0 -s 10.0.0.7 00:1b:21:50:bc:38

arp -i eth0 -s 10.0.0.9 A0:36:9F:A1:4D:6D (Note: I had to add the dpdk1 MAC for NAT IP 10.0.0.9 on the server)

3, run nat

./nat -i 10.0.0.9

4, run curl on client

curl http://10.0.0.8/

client side capture:

09:42:47.809727 00:1b:21:50:bc:38 > 00:15:60:0e:3d:0a, ethertype IPv4 (0x0800), length 74: 10.0.0.7.37680 > 10.0.0.8.80: Flags [S], seq 1791631497.....

09:42:47.809987 a0:36:9f:a1:4d:6c > a0:36:9f:a1:4d:6d, ethertype IPv4 (0x0800), length 74: 10.0.0.8.80 > 10.0.0.7.37680: Flags [S.], seq 3146701557, ack 1791631498, ...

Note the problem here is:

The SYN+ACK from mOS has the source MAC of dpdk0 and the destination MAC of dpdk1, instead of the source MAC of dpdk0 and the destination MAC of the client; as a result, the SYN+ACK is dropped by the client.

server side capture:

09:45:56.065765 a0:36:9f:a1:4d:6d > 00:15:60:0e:3d:0a, ethertype IPv4 (0x0800), length 74: 10.0.0.9.1027 > 10.0.0.8.80: Flags [S], seq 1791631497...........

09:45:56.065865 00:15:60:0e:3d:0a > a0:36:9f:a1:4d:6d, ethertype IPv4 (0x0800), length 74: 10.0.0.8.80 > 10.0.0.9.1027: Flags [S.], seq 3146701557, ack 1791631498.............

The server-side source MAC and destination MAC look OK for both the SYN and the SYN+ACK.
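
One configuration detail worth double-checking (a guess, not a verified fix): the arp_table above is empty, so mOS may have no way to resolve the client's or the server's MAC when it emits packets. With the addresses from this setup, a static table would look like:

arp_table {
        10.0.0.7 00:1b:21:50:bc:38
        10.0.0.8 00:15:60:0e:3d:0a
}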

Can mOS API run on a Raspberry Pi B+?

Hi,

I am a student from Viet Nam. I have a project to build an IDS/IPS with the mOS API on a Raspberry Pi B+.

I tried to build the mOS API on Raspbian Lite aarch64, and got an error: "unrecognized command line option -m64".

My guess is that the mOS API can't be built on a Raspberry Pi, but I want to confirm that.

Please tell me whether the mOS API can be built on a Raspberry Pi.

Sorry about my bad English.

Thanks.

mOS with mlx5

Hi,

Is it possible to configure mOS to use Mellanox NICs (using the mlx5 driver)? --run-dpdk complains that igb_uio is not loaded (we don't need it with Mellanox NICs).

Thanks,
Tom

[nat] Packets not translated when using 3 cores

Hi,

The midstat application works great with any number of cores (I tried 1 to 4), with the expected more or less linear improvement.
However, the NAT application does not translate some SYN+ACKs when using 3 cores, and only with 3 cores. It is as if those packets went through untouched.
Any idea where that could come from? Did I miss something?

Thanks,
Tom

Can MAC address rewriting be prevented?

I am building and testing an inline environment,
like this image (http://www.ndsl.kaist.edu/mos_guide/_images/midstat_inline.png).
The sample program I'm running is midstat.

mos.conf
#######################
# MOS-RELATED OPTIONS #
#######################
mos {
    forward = 1

    #######################
    ##### I/O OPTIONS #####
    #######################
    # number of memory channels per socket [mandatory for DPDK]
    nb_mem_channels = 2

    # devices used for MOS applications [mandatory]
    netdev {
        dpdk0 0x0001
        dpdk1 0x0001
    }

    #######################
    ### LOGGING OPTIONS ###
    #######################
    # NICs to print network statistics per second
    # if enabled, mTCP will print xx Gbps and xx pps for RX and TX
    stat_print = dpdk0 dpdk1

    # A directory contains MOS system log files
    mos_log = logs/

    ########################
    ## NETWORK PARAMETERS ##
    ########################
    # This is to configure static arp table
    # (Destination IP address) (Destination MAC address)
    arp_table {
    }

    # This is to configure static routing table
    # (Destination address)/(Prefix) (Device name)
    route_table {
    }

    # This is to configure static bump-in-the-wire NIC forwarding table
    # DEVNIC_A DEVNIC_B ## (e.g. dpdk0 dpdk1) 
    nic_forward_table {
      dpdk0 dpdk1
    }
}
Client(00:00:00:00:00:01) - Server(00:00:00:00:00:02)
root@Server:/# tcpdump host 192.168.100.233 -en
00:00:00:00:e0:01 > 00:00:00:00:00:02, ethertype IPv4 (0x0800), length...



Client(00:00:00:00:00:01) - dpdk0(00:90:fb:45:d9:1d) - dpdk1(00:90:fb:45:d9:1e) - Server(00:00:00:00:00:02)
root@Server:/# tcpdump host 192.168.100.233 -en
00:90:fb:45:d9:1e > 00:00:00:00:00:02, ethertype IPv4 (0x0800), length...

Is it possible to preserve the original MAC addresses in an inline configuration environment?

The client's MAC address is replaced with the MAC address of the mOS host.
Can this replacement be prevented?

Problem starting midstat: EAL: Error - exiting with code: 1 Cause: Cannot init mbuf pool

I have a problem starting midstat.
I set up mOS with ./setup.sh --compile-dpdk and ./setup.sh --run-dpdk.
I load the eth devices 04:00.0 and 04:00.1 and give them 2 IP addresses.
I create the config files without an error.
I change into the midstat directory and run make; it works without an error.
When I now try to start midstat with ./midstat, I get:

EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY  
EAL: Probing VFIO support...  
EAL: PCI device 0000:04:00.0 on NUMA socket -1  
EAL:   probe driver: 8086:1526 net_e1000_igb  
EAL: PCI device 0000:04:00.1 on NUMA socket -1  
EAL:   probe driver: 8086:1526 net_e1000_igb  
EAL: PCI device 0000:06:00.0 on NUMA socket -1  
EAL:   probe driver: 8086:1526 net_e1000_igb  
EAL: PCI device 0000:06:00.1 on NUMA socket -1  
EAL:   probe driver: 8086:1526 net_e1000_igb  
EAL: PCI device 0000:09:00.0 on NUMA socket -1  
EAL:   probe driver: 8086:10d3 net_e1000_em  
load_module(): 0x8b6fc0  
EAL: Error - exiting with code: 1  
  Cause: Cannot init mbuf pool  

It worked two weeks ago, so I guess something in the dependencies changed?
I use a fresh debian-jessie on

  • CPU: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
  • Number of CPUs: 1
  • Memory: 16 GB
  • Mainboard: X9SCL/X9SCM
  • Mgmt MAC: 00:25:90:74:77:b8
  • IPMI MAC: 00:25:90:74:72:cd
  • NICs: 4x 82576

I install the dependencies with:

#!/bin/bash
apt-get update -y  
apt-get install linux-headers-$(uname -r) -y  
apt-get install libnuma-dev -y  
apt-get install libc6 -y  
apt-get install libssl-dev -y  
apt-get install libglib2.0-0 -y  
apt-get install libdpdk-dev -y  
apt-get install pciutils -y  

Can you help me out? I'm a relative newbie at this, so sorry if something important is missing.

DBGMSG debug flag causes compile error when TRACE_DBG is used in a source file

Hi Asim,

I turned on the debug flags in core/src/Makefile and ran into the compile error below:

DBG_OPT = -DDBGMSG -DDBGFUNC -DSTREAM -DSTATE -DTSTAT -DAPP -DEPOLL

DBG_OPT = -DDBGMSG -DDBGFUNC -DSTREAM -DSTATE

DBG_OPT += -DPKTDUMP

DBG_OPT += -DDUMP_STREAM

GCC_OPT += -g -DNETSTAT -DINFO -DDBGERR -DDBGCERR

GCC_OPT += -DNDEBUG -O3 -g -DNETSTAT -DINFO -DDBGERR -DDBGCERR
GCC_OPT += $(DBG_OPT)

CC tcp_stream.c
In file included from tcp_stream.c:14:0:
tcp_stream.c: In function ‘DisableBuf’:
./include/debug.h:50:16: error: ‘mtcp’ undeclared (first use in this function)
thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] "
^
tcp_stream.c:267:4: note: in expansion of macro ‘TRACE_DBG’
TRACE_DBG("Invalid side!\n");
^
./include/debug.h:50:16: note: each undeclared identifier is reported only once for each function it appears in
thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] "
^
tcp_stream.c:267:4: note: in expansion of macro ‘TRACE_DBG’
TRACE_DBG("Invalid side!\n");
^
tcp_stream.c: In function ‘GetLastTimestamp’:
./include/debug.h:50:16: error: ‘mtcp’ undeclared (first use in this function)
thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] "
^
tcp_stream.c:291:3: note: in expansion of macro ‘TRACE_DBG’
TRACE_DBG("Size passed is not >= sizeof(uint32_t)!\n");
^
make: *** [tcp_stream.o] Error 1

It seems TRACE_DBG is used in quite a few source files that need to be cleaned up properly?
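
For context, the failure is easy to reproduce in isolation: per the debug.h excerpt above, TRACE_DBG expands to a thread_printf() call that hard-codes an identifier named mtcp, so it only compiles inside functions that have such a variable in scope. A simplified sketch (not the repo's exact code):

/* Simplified reconstruction of the failure mode: the macro body
 * references the identifier `mtcp`, which must exist in the caller. */
#define TRACE_DBG(fmt, ...) \
        thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] " fmt, \
                      __FUNCTION__, __LINE__, ##__VA_ARGS__)

static void
DisableBuf(void)
{
        /* error: 'mtcp' undeclared -- no local variable named mtcp here */
        TRACE_DBG("Invalid side!\n");
}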

Can mOS be used in conjunction with IPS, not IDS?

I am building and testing an inline environment. (http://www.ndsl.kaist.edu/mos_guide/_images/midstat_inline.png)

For example, if the HTTP data is 100 MB, the client's browser must be stalled for however long the IPS needs to scan it. After the IPS check, the data must be either blocked or allowed.

Is this possible?

In testing, when a packet is dropped using MOS_DROP, the client does not receive an ACK and retransmission occurs.

Can we solve this?
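
For reference, a hedged sketch of how a drop policy is typically expressed (assuming the mtcp_setlastpkt() API with the MOS_DROP option, as described in the mOS programmer's guide; should_block() is a hypothetical policy check, not part of mOS):

/* Drop the packet currently being processed, from a packet-level
 * event callback. should_block() is a hypothetical policy function. */
static void
on_packet(mctx_t mctx, int sock, int side, uint64_t events, filter_arg_t *arg)
{
        if (should_block(mctx, sock, side))
                mtcp_setlastpkt(mctx, sock, side, 0, NULL, 0, MOS_DROP);
}

Note that the retransmission you observe is ordinary TCP behavior once a packet is silently dropped; avoiding it would require terminating the connection rather than dropping packets.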

i40e/XL710 not receiving packets

Hi all,

I cannot receive any packets using XL710 cards.
I modified dpdk_module.c to print something whenever a packet is received, and I got nothing. The MACs are set correctly though...
Has anyone actually managed to use these cards with mOS?
Thanks,
Tom

midstat inline mode MAC address reversed and not translated to server MAC address

Hi Asim

I think mOS is a very cool project, so I followed http://mos.kaist.edu/guide/config/01_inline.html#configuration-steps to test the midstat sample app, but I ran into the issue below.

I have a client, an mOS middlebox, and a server connected with direct cables as below:

client (p1p1 10.0.0.6)<--->(dpdk0 10.0.0.7 midstat dpdk1 10.0.1.7) <---> eth0 (10.0.1.8) server

1, I ran curl on the client to send an HTTP request to the server, to see whether midstat can forward the request.

2, I configured the client to use dpdk0 (10.0.0.7) as the gateway to server 10.0.1.8, and configured the server to use dpdk1 (10.0.1.7) as the gateway to client 10.0.0.6.

3, I manually added an ARP entry for 10.0.0.7 on the client and an entry for 10.0.1.7 on the server

4, I did not manually add any ARP entries in mOS

Here is my detailed configuration information:

---client:

ip addr show p1p1

14: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1b:21:50:bc:38 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.6/24 scope global p1p1
valid_lft forever preferred_lft forever
inet6 fe80::21b:21ff:fe50:bc38/64 scope link
valid_lft forever preferred_lft forever

cat /proc/net/arp

IP address HW type Flags HW address Mask Device

10.0.0.7 0x1 0x6 a0:36:9f:a1:4d:6c * p1p1

ip route show

10.0.1.8 via 10.0.0.7 dev p1p1

curl http://10.0.1.8/

----mOS midstat

ip addr show dpdk0

27: dpdk0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether a0:36:9f:a1:4d:6c brd ff:ff:ff:ff:ff:ff
inet 10.0.0.7/24 brd 10.0.0.255 scope global dpdk0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fea1:4d6c/64 scope link
valid_lft forever preferred_lft forever

ip addr show dpdk1

28: dpdk1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether a0:36:9f:a1:4d:6d brd ff:ff:ff:ff:ff:ff
inet 10.0.1.7/24 brd 10.0.1.255 scope global dpdk1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fea1:4d6d/64 scope link
valid_lft forever preferred_lft forever

ip route show

10.0.0.0/24 dev dpdk0 proto kernel scope link src 10.0.0.7
10.0.1.0/24 dev dpdk1 proto kernel scope link src 10.0.1.7

cat config/mos.conf

######### MOS configuration file

#######################
# MOS-RELATED OPTIONS #
#######################

mos {
forward = 1

    #######################
    ##### I/O OPTIONS #####
    #######################
    # number of memory channels per socket [mandatory for DPDK]
    nb_mem_channels = 4

    # devices used for MOS applications [mandatory]
    netdev {
            dpdk0 0x00FF
            dpdk1 0x00FF
    }

.....................CUT..................

    ########################
    ## NETWORK PARAMETERS ##
    ########################
    # This is to configure static arp table
    # (Destination IP address) (Destination MAC address)
    arp_table {
    }

    # This is to configure static routing table
    # (Destination address)/(Prefix) (Device name)
    route_table {
    }

    # This is to configure static bump-in-the-wire NIC forwarding table
    # DEVNIC_A DEVNIC_B ## (e.g. dpdk0 dpdk1)
    nic_forward_table {
            dpdk0 dpdk1
    }

   ..............CUT...........

}

EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7f5532c00000
EAL: PCI memory mapped at 0x7f5532d00000
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
...................

load_module(): 0x86f500
Initializing port 0... done:
Initializing port 1... done:

Checking link status.....................................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
Port 1 Link Up - speed 1000 Mbps - full-duplex
===== MOS configuration =====
| num_cores: 8
| nb_mem_channels: 4
| max_concurrency: 100000
| rmem_size: 8192
| wmem_size: 8192
| tcp_tw_interval: 0
| tcp_timeout: 30000
| multiprocess: false
| mos_log: logs/
| stat_print: dpdk0 dpdk1
| forward: forward
|
+===== Netdev configuration (2 entries) =====
| dpdk0(idx: 0, HADDR: A0:36:9F:A1:4D:6C) maps to CPU 0x00000000000000FF
| dpdk1(idx: 1, HADDR: A0:36:9F:A1:4D:6D) maps to CPU 0x00000000000000FF
|
+===== Static ARP table configuration (0 entries) =====
|
+===== Routing table configuration (4 entries) =====
| IP: 0x00000000, NETMASK: 0x00000000, INTERFACE: br0(idx: 0)
| IP: 0x0A000000, NETMASK: 0xFFFFFF00, INTERFACE: dpdk0(idx: 0)
| IP: 0x0A000100, NETMASK: 0xFFFFFF00, INTERFACE: dpdk1(idx: 1)
| IP: 0xC0A80100, NETMASK: 0xFFFFFF00, INTERFACE: br0(idx: 0)
|
+===== NIC Forwarding table configuration (1 entries) =====
| NIC Forwarding Entry: dpdk0 <---> dpdk1 |
| NIC Forwarding Index Table: |
| 0 --> 1 |
| 1 --> 0 |
| 2 --> -1 |
| 3 --> -1 |
| 4 --> -1 |
| 5 --> -1 |
| 6 --> -1 |
| 7 --> -1 |
| 8 --> -1 |
| 9 --> -1 |
| 10 --> -1 |
| 11 --> -1 |
| 12 --> -1 |
| 13 --> -1 |
| 14 --> -1 |
| 15 --> -1 |

Proto CPU Client Address Client State Server Address Server State
tcp 7 10.0.0.6:54783 SYN_SENT 10.0.1.8:80 SYN_RCVD

----server

ip addr show eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:60:0e:3d:0a brd ff:ff:ff:ff:ff:ff
inet 10.0.1.8/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:60ff:fe0e:3d0a/64 scope link
valid_lft forever preferred_lft forever

ip route show

10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.1.8
10.0.0.6 via 10.0.1.7 dev eth0
10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.8

cat /proc/net/arp

IP address HW type Flags HW address Mask Device

10.0.1.7 0x1 0x6 a0:36:9f:a1:4d:6d * eth0

tcpdump -nn -e -i eth0

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

10:06:24.367009 a0:36:9f:a1:4d:6d > a0:36:9f:a1:4d:6c, ethertype IPv4 (0x0800), length 74: 10.0.0.6.54783 > 10.0.1.8.80: Flags [S], seq 539663012, win 29200, options [mss 1460,sackOK,TS val 19265926 ecr 0,nop,wscale 7], length 0
10:06:25.366176 a0:36:9f:a1:4d:6d > a0:36:9f:a1:4d:6c, ethertype IPv4 (0x0800), length 74: 10.0.0.6.54783 > 10.0.1.8.80: Flags [S], seq 539663012, win 29200, options [mss 1460,sackOK,TS val 19266176 ecr 0,nop,wscale 7], length 0

As you can see from the tcpdump on the server above, two things seem to be wrong:

1, the SYN packet from client 10.0.0.6 to server 10.0.1.8 is supposed to be forwarded from dpdk0 to dpdk1, but the source MAC and destination MAC are reversed:

dpdk1 a0:36:9f:a1:4d:6d > dpdk0 a0:36:9f:a1:4d:6c

2, the SYN packet's destination MAC should be translated to the server MAC 00:15:60:0e:3d:0a, but it is the dpdk0 MAC a0:36:9f:a1:4d:6c. Thus when the server receives the SYN packet, the destination MAC does not match any of its interfaces, so the server silently drops it and never responds with a SYN+ACK.

I think I may have missed some configuration, so any clue would be helpful.

DBG_OPT flag causes compile error

Hi,

After running ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET, I uncommented DBG_OPT in mtcp/src/Makefile and ran make to compile mtcp and the examples, but I get the compile error below:

In file included from io_module.c:26:0:
io_module.c: In function ‘probe_all_rte_devices’:
./include/debug.h:50:16: error: ‘mtcp’ undeclared (first use in this function)
 thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] " \
                ^
io_module.c:130:4: note: in expansion of macro ‘TRACE_DBG’
  TRACE_DBG("Could not find pci info on dpdk "
  ^~~~~~~~~
./include/debug.h:50:16: note: each undeclared identifier is reported only once for each function it appears in
 thread_printf(mtcp, mtcp->log_fp, "[%10s:%4d] " \
                ^
io_module.c:130:4: note: in expansion of macro ‘TRACE_DBG’
  TRACE_DBG("Could not find pci info on dpdk "
  ^~~~~~~~~

Is there anything wrong?

By the way, I run mtcp in a VMware virtual machine.

Thanks in advance.

Error compiling DPDK

system: centos 6.8
kernel: 3.10.105-1.el6.elrepo.x86_64

These problems occurred when I compiled DPDK.

make[5]: Nothing to be done for `depdirs'.
Configuration done
== Build lib
== Build lib/librte_compat
== Build lib/librte_eal
== Build lib/librte_eal/common
== Build lib/librte_eal/linuxapp
== Build lib/librte_eal/linuxapp/eal
CC eal.o
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:58:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_debug.h:82:5: warning: "RTE_LOG_LEVEL" is not defined
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:59:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_memory.h:81:5: warning: "RTE_CACHE_LINE_SIZE" is not defined
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_memory.h:83:7: warning: "RTE_CACHE_LINE_SIZE" is not defined
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_memory.h:86:2: error: #error "Unsupported cache line size"
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:62:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal.h:83: error: ‘RTE_MAX_LCORE’ undeclared here (not in a function)
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:40,
from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:63:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_malloc_heap.h:53: error: requested alignment is not a constant
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/generic/rte_rwlock.h:54,
from /data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_rwlock.h:41,
from /data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:41,
from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:63:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_atomic.h:46:5: warning: "RTE_MAX_LCORE" is not defined
In file included from /data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:63:
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:73: error: ‘RTE_MAX_MEMSEG’ undeclared here (not in a function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:74: error: ‘RTE_MAX_MEMZONE’ undeclared here (not in a function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:76: error: ‘RTE_MAX_TAILQ’ undeclared here (not in a function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_eal_memconfig.h:79: error: ‘RTE_MAX_NUMA_NODES’ undeclared here (not in a function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘eal_parse_sysfs_value’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:140: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:140: error: (Each undeclared identifier is reported only once
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:140: error: for each function it appears in.)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘eal_proc_type_detect’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:298: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘eal_parse_socket_mem’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:388: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:381: warning: unused variable ‘arg’
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘eal_parse_args’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:566: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘eal_check_mem_on_local_socket’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:683: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘rte_eal_init’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:794: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c: In function ‘rte_eal_check_module’:
/data/disk1/software/mosapp/drivers/dpdk-16.11/lib/librte_eal/linuxapp/eal/eal.c:926: error: ‘RTE_LOG_LEVEL’ undeclared (first use in this function)
make[7]: *** [eal.o] Error 1
make[6]: *** [eal] Error 2
make[5]: *** [linuxapp] Error 2
make[4]: *** [librte_eal] Error 2
make[3]: *** [lib] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2

mOS Configuration

Hi all,

I am a bit confused by the interface configuration. I've got two XL710 interfaces, ens6f0 and ens6f1. I bound them to igb_uio, and I got:

sudo ./nat -f config/mos.conf -c 1 -i 10.221.0.1                                                                                              
EAL: Detected 16 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: PCI device 0000:00:19.0 on NUMA socket 0
EAL:   probe driver: 8086:15a1 net_e1000_em
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
[LoadConfigurationLowerHalf:1213] Interface 'dpdk0' not found

I tried renaming dpdk0/1 to ens6f0/1 in the mOS config, but it does not work. I also tried without binding the interfaces first and I get the same message.

I'm not sure I understand what dpdk0 and dpdk1 refer to. Are they real interface names, or DPDK virtual names (dpdk, port 0)? Should we rename our interfaces to dpdk0/dpdk1 instead of the default names ens6f0/ens6f1?

############### MOS configuration file ###############

#######################
# MOS-RELATED OPTIONS #
#######################
mos {
	forward = 1

	#######################
	##### I/O OPTIONS #####
	#######################
	# number of memory channels per socket [mandatory for DPDK]
	nb_mem_channels = 4

	# devices used for MOS applications [mandatory]
	netdev {
		dpdk0 0x00FF
		dpdk1 0x00FF
	}

	#######################
	### LOGGING OPTIONS ###
	#######################
	# NICs to print network statistics per second
	# if enabled, mTCP will print xx Gbps and xx pps for RX and TX
	stat_print = dpdk0 dpdk1

	# A directory contains MOS system log files
	mos_log = logs/

	########################
	## NETWORK PARAMETERS ##
	########################
	# This is to configure static arp table
	# (Destination IP address) (Destination MAC address)
	arp_table {
        10.220.0.1 3c:fd:fe:9e:5c:40
        10.221.0.1 3c:fd:fe:9e:5c:41
        10.220.0.5 3c:fd:fe:9e:5b:60
        10.221.0.5 3c:fd:fe:9f:57:18 
	}

	# This is to configure static routing table
	# (Destination address)/(Prefix) (Device name)
	route_table {
        10.220.0.0/16 dpdk0
        10.221.0.0/16 dpdk1
	}

	# This is to configure static bump-in-the-wire NIC forwarding table
	# DEVNIC_A DEVNIC_B ## (e.g. dpdk0 dpdk1) 
	nic_forward_table {	
        dpdk0 dpdk1
	}

	########################
	### ADVANCED OPTIONS ###
	########################
	# if required, uncomment the following options and change them

	# maximum concurrency per core [optional / default : 100000]
	# (MOS-specific parameter for preallocation)
	# max_concurrency = 100000

	# disable the ring buffer [optional / default : 0]
	# use disabled buffer management only for standalone monitors.
	# end host applications always need recv buffers for TCP!
	# no_ring_buffers = 1

	# receive buffer size of sockets [optional / default : 8192]
	# rmem_size = 8192

	# send buffer size of sockets [optional / default : 8192]
	# wmem_size = 8192

	# tcp timewait seconds [optional / default : 0]
	tcp_tw_interval = 30

	# tcp timeout seconds [optional / default : 30]
	# (set tcp_timeout = -1 to disable timeout checking)
	# tcp_timeout = 30
}

Thanks,
Tom

QOS implementation support with mOS

Hi,

I would like to know what support is available for developing QoS (per-IP shaping) with mOS. Please let me know of any example or reference. I'm still in the evaluation phase.

Thank you,
Manoj

Performance problem

I’m using the latest version of the mOS-networking-stack but have run into a problem; I would be grateful if you could help me. I found that if I run midstat in inline mode and add the line “g_max_cores = 6;” here: https://github.com/mos-stack/mOS-networking-stack/blob/master/samples/midstat/midstat.c#L298 , midstat has very low performance: the throughput decreases from 1.1 Gbps to ~140 Mbps or even 0, as in the picture below. I suspect there may be a coordination problem among the multiple mtcp threads.

If I don't edit the code and just run ./midstat -c 6, the performance problem also exists.

[screenshot of throughput statistics omitted]
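
As an aside, mtcp-derived stacks normally expose the core count through the configuration API rather than a hard-coded global. A hedged sketch, assuming mOS retains mtcp's struct mtcp_conf and mtcp_getconf()/mtcp_setconf() with a num_cores field:

/* Limit the stack to 6 cores via the configuration API (assumption:
 * mOS keeps mtcp's struct mtcp_conf / mtcp_setconf() interface). */
struct mtcp_conf mcfg;

mtcp_getconf(&mcfg);
mcfg.num_cores = 6;
mtcp_setconf(&mcfg);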

Issue with running sample application using > 2 cores.

Hi, I'm trying to run the sample epserver with 8 cores, but it stops working without an error message. At times it prints the first stat dump but stops after that. When given only 2 cores, the sample epserver works perfectly fine. I'm also successful at running epwget with 8 cores.

Moreover, I'm able to run the mtcp epserver (https://github.com/eunyoung14/mtcp) on the same machine with any number of cores without an issue. But when using it I received a large number of HANDLE_TCP_ST_CLOSING:1133 NOT ACK errors when testing it with a large number of total_flows in epwget. The mOS sample epserver running with 2 cores doesn't give me that error in the same test run.

[screenshot of error output omitted]

Running it with gdb, I can see the new-thread creation messages, and then all of the threads simply exit.

[screenshot of gdb session omitted]

I was hoping you could help me figure this out. What information do you need from me? I was using the generated mos config and following the standard DPDK setup instructions from http://mos.kaist.edu/guide/walkthrough/03_setup.html#compile-and-build-mos-net-library-with-dpdk

Sincerely, Nick

insmod: ERROR: could not insert module drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko: Unknown symbol in module

I can compile and build the mOS net library with DPDK using the command ./setup.sh --compile-dpdk.
But when I run mOS with ./setup.sh --run-dpdk and select option 2, I get an "Unknown symbol in module" error.

Option: 2
Unloading any existing DPDK UIO module
Loading uio module
Loading DPDK UIO module
insmod: ERROR: could not insert module drivers/dpdk-16.11/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko: Unknown symbol in module
## ERROR: Could not load kmod/igb_uio.ko.
./setup.sh: line 333: quit: command not found
# Please load the 'igb_uio' kernel module before querying or 
# adjusting NIC device bindings

simple_firewall segmentation fault

Hi Asim

I tried to run simple_firewall; it triggers a segmentation fault. Backtrace below:

Program received signal SIGSEGV, Segmentation fault.
0x000000000043c496 in GetMTCPManager (mctx=mctx@entry=0x7fffffffce20) at api.c:64
64              if (g_mtcp[mctx->cpu]->ctx->done || g_mtcp[mctx->cpu]->ctx->exit) {
(gdb) bt
#0  0x000000000043c496 in GetMTCPManager (mctx=mctx@entry=0x7fffffffce20) at api.c:64
#1  0x00000000004724ae in mtcp_define_event (event=event@entry=1, filter=filter@entry=0x433f10 <CatchInitSYN>, arg=arg@entry=0x0) at scalable_event.c:718
#2  0x00000000004327f3 in main (argc=<optimized out>, argv=<optimized out>) at simple_firewall.c:475

Does simple_firewall need an array like g_mctx, as in the midstat sample?
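
For what it's worth, the crash is inside GetMTCPManager(), which dereferences g_mtcp[mctx->cpu]; that is consistent with mtcp_define_event() being reached before an mtcp context has been created for that core. A hedged sketch of the midstat-style per-core setup (MAX_CORES and num_cores are illustrative names):

/* Create each per-core mtcp context before calling context-dependent
 * APIs such as mtcp_define_event(). */
static mctx_t g_mctx[MAX_CORES];

static void
init_contexts(int num_cores)
{
        int i;
        for (i = 0; i < num_cores; i++) {
                g_mctx[i] = mtcp_create_context(i);
                if (!g_mctx[i])
                        exit(EXIT_FAILURE); /* context creation failed */
        }
}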

segfault when using only some interfaces

Using only some of the interfaces (e.g. only dpdk1 when both dpdk0 and dpdk1 exist on the machine) leads to a segfault, at least with epwget.

GetOutputInterface will return the "name" index instead of the interface index; therefore this line segfaults:
api.c:936: saddr_base = g_config.mos->netdev_table->ent[nif_out]->ip_addr;
since nif_out should be 0, not 1.
