
vFlow


High-performance, scalable and reliable IPFIX, sFlow and NetFlow collector, written in pure Go.

Features

  • IPFIX (RFC 7011) collector
  • sFlow v5 raw header / counters collector
  • NetFlow v5 collector
  • NetFlow v9 collector
  • Decodes sFlow raw headers (L2/L3/L4)
  • Produces to Apache Kafka, NSQ and NATS
  • Replicates IPFIX and sFlow to third-party collectors
  • Supports IPv4 and IPv6
  • Monitoring via Prometheus and a RESTful API


Documentation

Decoded IPFIX data

IPFIX data is decoded to JSON; the IDs are IANA IPFIX information element IDs:

{"AgentID":"192.168.21.15","Header":{"Version":10,"Length":420,"ExportTime":1483484642,"SequenceNo":1434533677,"DomainID":32771},"DataSets":[[{"I":8,"V":"192.16.28.217"},{"I":12,"V":"180.10.210.240"},{"I":5,"V":2},{"I":4,"V":6},{"I":7,"V":443},{"I":11,"V":64381},{"I":32,"V":0},{"I":10,"V":811},{"I":58,"V":0},{"I":9,"V":24},{"I":13,"V":20},{"I":16,"V":4200000000},{"I":17,"V":27747},{"I":15,"V":"180.105.10.210"},{"I":6,"V":"0x10"},{"I":14,"V":1113},{"I":1,"V":22500},{"I":2,"V":15},{"I":52,"V":63},{"I":53,"V":63},{"I":152,"V":1483484581770},{"I":153,"V":1483484622384},{"I":136,"V":2},{"I":243,"V":0},{"I":245,"V":0}]]}

Decoded sFlow data

{"Version":5,"IPVersion":1,"AgentSubID":5,"SequenceNo":37591,"SysUpTime":3287084017,"SamplesNo":1,"Samples":[{"SequenceNo":1530345639,"SourceID":0,"SamplingRate":4096,"SamplePool":1938456576,"Drops":0,"Input":536,"Output":728,"RecordsNo":3,"Records":{"ExtRouter":{"NextHop":"115.131.251.90","SrcMask":24,"DstMask":14},"ExtSwitch":{"SrcVlan":0,"SrcPriority":0,"DstVlan":0,"DstPriority":0},"RawHeader":{"L2":{"SrcMAC":"58:00:bb:e7:57:6f","DstMAC":"f4:a7:39:44:a8:27","Vlan":0,"EtherType":2048},"L3":{"Version":4,"TOS":0,"TotalLen":1452,"ID":13515,"Flags":0,"FragOff":0,"TTL":62,"Protocol":6,"Checksum":8564,"Src":"10.1.8.5","Dst":"161.140.24.181"},"L4":{"SrcPort":443,"DstPort":56521,"DataOffset":5,"Reserved":0,"Flags":16}}}}],"IPAddress":"192.168.10.0","ColTime": 1646157296}

Decoded Netflow v5 data

{"AgentID":"114.23.3.231","Header":{"Version":5,"Count":3,"SysUpTimeMSecs":51469784,"UNIXSecs":1544476581,"UNIXNSecs":0,"SeqNum":873873830,"EngType":0,"EngID":0,"SmpInt":1000},"Flows":[{"SrcAddr":"125.238.46.48","DstAddr":"114.23.236.96","NextHop":"114.23.3.231","Input":791,"Output":817,"PktCount":4,"L3Octets":1708,"StartTime":51402145,"EndTime":51433264,"SrcPort":49233,"DstPort":443,"Padding1":0,"TCPFlags":16,"ProtType":6,"Tos":0,"SrcAsNum":4771,"DstAsNum":56030,"SrcMask":20,"DstMask":22,"Padding2":0},{"SrcAddr":"125.238.46.48","DstAddr":"114.23.236.96","NextHop":"114.23.3.231","Input":791,"Output":817,"PktCount":1,"L3Octets":441,"StartTime":51425137,"EndTime":51425137,"SrcPort":49233,"DstPort":443,"Padding1":0,"TCPFlags":24,"ProtType":6,"Tos":0,"SrcAsNum":4771,"DstAsNum":56030,"SrcMask":20,"DstMask":22,"Padding2":0},{"SrcAddr":"210.5.53.48","DstAddr":"103.22.200.210","NextHop":"122.56.118.157","Input":564,"Output":802,"PktCount":1,"L3Octets":1500,"StartTime":51420072,"EndTime":51420072,"SrcPort":80,"DstPort":56108,"Padding1":0,"TCPFlags":16,"ProtType":6,"Tos":0,"SrcAsNum":56030,"DstAsNum":13335,"SrcMask":24,"DstMask":23,"Padding2":0}]}

Decoded Netflow v9 data

{"AgentID":"10.81.70.56","Header":{"Version":9,"Count":1,"SysUpTime":357280,"UNIXSecs":1493918653,"SeqNum":14,"SrcID":87},"DataSets":[[{"I":1,"V":"0x00000050"},{"I":2,"V":"0x00000002"},{"I":4,"V":2},{"I":5,"V":192},{"I":6,"V":"0x00"},{"I":7,"V":0},{"I":8,"V":"10.81.70.56"},{"I":9,"V":0},{"I":10,"V":0},{"I":11,"V":0},{"I":12,"V":"224.0.0.22"},{"I":13,"V":0},{"I":14,"V":0},{"I":15,"V":"0.0.0.0"},{"I":16,"V":0},{"I":17,"V":0},{"I":21,"V":300044},{"I":22,"V":299144}]]}

Supported platforms

  • Linux
  • Windows

Build

With the Go compiler installed (version 1.14.x preferred), you can build vFlow with:

go get github.com/EdgeCast/vflow/vflow
cd $GOPATH/src/github.com/EdgeCast/vflow
make build

or:

cd vflow; go build

Installation

You can download and install the pre-built Debian package as shown below (RPM and Linux binary packages are also available):

dpkg -i vflow-0.9.0-x86_64.deb

Once installed, configure the files below; for more information, see the configuration guide:

/etc/vflow/vflow.conf
/etc/vflow/mq.conf

Start the service with:

service vflow start

Kubernetes

kubectl apply -f https://github.com/EdgeCast/vflow/blob/master/kubernetes/deploy.yaml

Docker

docker run -d -p 2181:2181 -p 9092:9092 spotify/kafka
docker run -d -p 4739:4739 -p 4729:4729 -p 6343:6343 -p 8081:8081 -e VFLOW_KAFKA_BROKERS="172.17.0.1:9092" mehrdadrad/vflow

License

Licensed under the Apache License, Version 2.0 (the "License")

Contribute

Contributions of any kind are welcome; please follow these steps:

  • Fork the project on github.com.
  • Create a new branch.
  • Commit changes to the new branch.
  • Send a pull request.


Contributors

akshah, alexeystolyarov, antongulenko, awillis, besdollma, changkyu-kim, dbardbar, glowa001, jpercivall, jrossi, kkxue, leoluk, lwsbox, mehrdadrad, mmilgram, mmlb, newlandk, rcarrillocruz, satheeshravir, shift, testsgmr, testwill, thekvs, tinselcity


Issues

IPFIX: Interpret Type Conversions Unsafe?

While debugging an interpret error related to #30, I noticed that the Interpret function behaves differently than I would have expected:

  1. Why create another value after converting the byte array to Uint64? I.e., why not return binary.BigEndian.Uint64(*b) directly? I'm new to Go, so this may make complete sense and I'm just unaware.
  2. Won't converting from a uint to an int put you at risk of going out of range?

godoc is treating license header as docs

The current situation is:

// Apache license blurb
// ...
//
package store
...

but all the Go tools are set up to parse the comment immediately before the package statement as the package docs, which is not very useful here. I've verified that the following does the right thing:

// Apache license blurb
// ...
//

// Package store stores stuff
// more docs
package store

I can submit a PR if this is wanted.

Netflow v9 not forwarding to Kafka when v9 element not recognized.

I am using a Cisco ASA running the latest software, with the flow exporter in v9 format, to test this out. It looks to me like the IPFIX decoder uses a non-fatal error on an unidentified element:

ipfix/decoder.go, lines 490-494:

if !ok {
	return nil, nonfatalError(fmt.Errorf("IPFIX element key (%d) not exist",
		tr.FieldSpecifiers[i].ElementID))
}

In NetFlow (below) it is not a non-fatal error, causing the data to be ignored:

netflow/v9/decoder.go, lines 337-341:

if !ok {
	return nil, fmt.Errorf("Netflow element key (%d) not exist",
		tr.FieldSpecifiers[i].ElementID)
}

Can you please update this and roll out a new deb package so I can test? I have other suggestions for the software, but they are features rather than bugs, so I will put them in a different issue.

The setup I have is quite simple: a Cisco ASA 5512X.

[(outside) Cisco ASA (inside)]=>[Linux server vflow daemon]=>[Kafka]

Relevant cisco ASA configuration:
policy-map global_policy
class flow_export_class
flow-export event-type all destination 172.22.0.1
class-map flow_export_class
match access-list flow_export_vpn
access-list flow_export_vpn extended permit ip any4 xx.xx.5.0 255.255.255.0

Relevant Linux setup:
vijay@linux: more /etc/vflow/vflow.conf
netflow9-workers: 50
ipfix-tpl-cache-file: /usr/local/vflow/vflow.templates
netflow9-tpl-cache-file: /usr/local/vflow/netflow.templates
netflow9-topic: kafka.vflow.netflow
vijay@linux: more /etc/vflow/mq.conf
brokers:
- 172.22.0.1:9092
retry-max: 2
retry-backoff: 10
verify-ssl: false

When I test with vflow_stress, everything works just right; the Kafka streamer sees the data and I can subscribe to it without problems.
...snip..{"I":152,"V":1485886990569},{"I":153,"V":1485886990569},{"I":136,"V":1},{"I":243,"V":0},{"I":245,"V":0}],[{"I":8,"V":"72.21.81.253"},{"I":12,"V":"167.21.142.42"},{"I":5,"V":0},{"I":4,"V":6},{"I":7,"V":80},{"I":11,"V":4814},{"I":32,"V":0},{"I":10,"V":939},{"I":58,"V":0},{"I":9,"V":24},{"I":13,"V":17},{"I":16,"V":4200000000},{"I":17,"V":30641},{"I":15,"V":"4.68.71.197"},{"I":6,"V":"0x10"},{"I":14,"V":1630},{"I":1,"V":7500},{"I":2,"V":5},{"I":52,"V":63},{"I":53,"V":63},{"I":152,"V":1485886571990},{"I":153,"V":1485887041099},{"I":136,"V":2},{"I":243,"V":0},{"I":245,"V":0}]]}

Add HDFS Support and RFC 5655

What do you think about adding these new features? Do you think these features are within the context and scope of this repo? I may be able to contribute on this front.

Debian and RPM package feature request.

First of all, thanks for this good framework and nice software for getting IPFIX data into big-data-like systems in a native and simple way. These are feature requests to help get this running in a production system at an ISP. My desire is to use it to build tools for use cases like DDoS early warning and malware detection. Here are the feature requests:

  1. vflow_stress links to a file that is local or system specific.
  2. vflow running as a non-root user.

1: The vflow_stress problem

vflow_stress from the RPM or Debian package seems to be linked to a GOPATH or package and cannot execute without a specific link to this file; see the error below.

I can fake it to run, but this would be a good one to fix.

vijay@linux: vflow_stress
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x78 pc=0x4a490e]

goroutine 4 [running]:
main.main.func1(0xc420010590, 0xc4200105a0, 0x10, 0x10)
/root/go/src/github.com/VerizonDigital/vflow/stress/stress.go:73 +0x7e
created by main.main
/root/go/src/github.com/VerizonDigital/vflow/stress/stress.go:80 +0xcc

2: vflow as a non-root user

The directories /usr/local/vflow and /var/log/vflow can all be owned by a user called "vflow". On most systems that I manage we try to minimize what runs as root, so it would be good to create a "vflow" user and chown the files and PID directories to that user.

Completing Netflow v9 implementation

Hello!

I need a NetFlow v9 parser to extract data from nfcapd files, namely some information about bytes and destination and source addresses. I have used nfdump, running it from a Go application; unfortunately, it is not very fast when files are larger than 1 GB. The only way to fix this is to read nfcapd files directly in Go using goroutines and semaphores.

All I want is to proceed with the current implementation of the NetFlow v9 packet. I would be glad if someone could help me. If you have a chat on Slack/IRC/Gitter, please tell me where to find you. If someone has already started on an implementation, please tell me; maybe we could collaborate and reach our goal quickly.

A few words about the planned implementation: I want to base it on the nfdump tool, parsing the fields relating to bytes and IP addresses. I checked out a post from Cisco and want to use that too.

Any suggestions and tips will be highly appreciated.

BR,
Mehti

I get an error

C:\Users\59401>go get github.com/VerizonDigital/vflow/vflow

github.com/VerizonDigital/vflow/mirror

d:\users\go\src\github.com\VerizonDigital\vflow\mirror\mirror.go:65: undefined: syscall.IPPROTO_RAW
d:\users\go\src\github.com\VerizonDigital\vflow\mirror\mirror.go:88: cannot assign syscall.Handle to conn.fd (type int) in multiple assignment
d:\users\go\src\github.com\VerizonDigital\vflow\mirror\mirror.go:95: cannot use c.fd (type int) as type syscall.Handle in argument to syscall.Sendto
d:\users\go\src\github.com\VerizonDigital\vflow\mirror\mirror.go:100: cannot use c.fd (type int) as type syscall.Handle in argument to syscall.Close

github.com/VerizonDigital/vflow/producer

d:\users\go\src\github.com\VerizonDigital\vflow\producer\kafka.go:78: undefined: sarama.CompressionLZ4

Hard coded topic prefix

The existing producer code forces topic names via https://github.com/VerizonDigital/vflow/blob/master/producer/producer.go#L75, https://github.com/VerizonDigital/vflow/blob/master/vflow/sflow.go#L120 and https://github.com/VerizonDigital/vflow/blob/master/vflow/ipfix.go#L131.

This is not preferred for us, for the reasons below:

  • Due to the way Kafka normalises topics, both '.' and underscore can be interpreted the same way (https://issues.apache.org/jira/browse/KAFKA-2337). To avoid any issues with this, we have an internal standard of not using '.' or '_' in topic names, which unfortunately this breaks.
  • We also have an internal naming standard to associate topics with teams/owners.
  • It is not possible to run discrete instances of vflow on the same Kafka cluster (for testing/dev environments etc.).

Do you have any objections to abstracting this into the config, so the topic names can be specified at runtime?

I can't see a valid reason for this being hard-coded; the default could just be set to the existing names to maintain compatibility.

vflow with JunOS and netflow

Hello,

We are facing a problem parsing NetFlow flows from a Juniper MX960 running JunOS 13.3R9.13.
Error:

[vflow] 2018/01/31 15:41:52 181.41.211.9 unknown netflow template id# 257

vFlow runs on CentOS 7.4 3.10.0-693.17.1.el7.x86_64.
vFlow 0.4.1 was installed from package by https://github.com/VerizonDigital/vflow/blob/master/docs/quick_start_kafka.md

The vFlow configuration was left at the defaults:

ipfix-workers: 100
sflow-workers: 100
netflow9-workers: 50
log-file: /var/log/vflow.log
ipfix-tpl-cache-file: /usr/local/vflow/vflow.templates

We can't use any other flow format because of other systems.
Can you point me in the right direction?

We have a dump if needed.

ipfix multicast join - setsockopt: invalid argument

This one has been plaguing me for a while. On both Solaris (joyent) and MacOSX (high sierra), vflow fails to start unless you disable ipfix. It works fine on Linux. The error message is just "setsockopt: invalid argument".

The particular setsockopt is an MCAST_JOIN_GROUP, called in ipfix/memcache_rpc.go in Discovery.mConn() at the JoinGroup() call.

After a bit of fiddling, I reckon it's related to the blank address in
addr := net.JoinHostPort("", strconv.Itoa(d.port))

If you replace:

-       addr := net.JoinHostPort("", strconv.Itoa(d.port))
+       addr := net.JoinHostPort(d.group.String(), strconv.Itoa(d.port))

it seems to work, but given that I've never seen a Go program before, I wouldn't trust my suggestion too far.

Any thoughts on what's really wrong here?

RAW IPFIX/Netflow to Kafka

I think sending raw IPFIX/NetFlow data to Kafka, instead of decoding it to JSON, would be extremely beneficial. Is this possible?

NetFlow V9 support

Hi, I was wondering if there is any plan for NetFlow v9 support.

Thanks!

Choosing right back end

Hello,

My company and I have been testing vFlow for a few weeks. We currently see about 45K inserts per second; we are using a MySQL back end and struggling with performance.

  • Have you done any back-end storage performance comparisons?
  • Can you share best practices for choosing back-end storage?
  • What back-end storage are you using?

dependency management

Running go get ./... strikes me as not the greatest thing to be doing; any thoughts on using a package manager to handle the deps?

I ran dep init . and found that many of the dependencies have sane versions released.

vFlow failing to publish topic in Kafka: name unresolved

Hello,

Using the containers indicated in the documentation to test vFlow, my colleague and I have encountered the following error:

[vflow] 2018/05/25 13:15:42 kafka: Failed to produce message to topic vflow.sflow: dial tcp: lookup <containerID> on <IPGateway>:53: no such host

This is occurring even if the IP address of the target Kafka is given in the flow.conf file and in the VFLOW_KAFKA_BROKERS environment variable.

This can be solved by adding the target container in the etc/hosts file, but this can be an issue on cases that go beyond testing.

Netflow: index out of range [Huawei]

We are receiving NetFlow v9 flows from a Huawei device; vflow immediately errors out:

panic: runtime error: index out of range

goroutine 66 [running]:
github.com/VerizonDigital/vflow/ipfix.Interpret(0xc4202ceb20, 0xf, 0xc4202ceac0, 0xc420126918)
        /root/go/src/github.com/VerizonDigital/vflow/ipfix/interpret.go:67 +0x63c
github.com/VerizonDigital/vflow/netflow/v9.(*Decoder).decodeData(0xc4202ceeb0, 0x160523, 0xc4203b6000, 0x16, 0x20, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /root/go/src/github.com/VerizonDigital/vflow/netflow/v9/decoder.go:345 +0x1f3
github.com/VerizonDigital/vflow/netflow/v9.(*Decoder).decodeSet(0xc4202ceeb0, 0xc4203702c0, 0x20, 0x2a, 0xc4202d6140, 0x0, 0x0)
        /root/go/src/github.com/VerizonDigital/vflow/netflow/v9/decoder.go:463 +0x422
github.com/VerizonDigital/vflow/netflow/v9.(*Decoder).Decode(0xc4202ceeb0, 0xc4203702c0, 0x20, 0x2a, 0x1, 0xb7c620, 0xc4202de000)
        /root/go/src/github.com/VerizonDigital/vflow/netflow/v9/decoder.go:399 +0x156
main.(*NetflowV9).netflowV9Worker(0xc420121b60, 0xc4203400c0)
        /root/go/src/github.com/VerizonDigital/vflow/vflow/netflow_v9.go:204 +0x394
main.(*NetflowV9).run.func1(0xc420121b60)
        /root/go/src/github.com/VerizonDigital/vflow/vflow/netflow_v9.go:107 +0x89
created by main.(*NetflowV9).run
        /root/go/src/github.com/VerizonDigital/vflow/vflow/netflow_v9.go:108 +0x180

I can send a pcap file by private mail if you prefer.

We are running the binary downloaded from the release page on Red Hat 7.2.

CRC error while pushing data to kafka

Hello

I have used vFlow in production and noticed that logs like

kafka: Failed to produce message to topic chipmunk.vflow: kafka server: Message contents does not match its CRC.

have appeared in the vFlow logs. How can I determine whether the issue is on the Kafka side or the vFlow side?

I'm running:

vFlow version: 0.5.0
go version go1.10.1 linux/amd64
kafka_2.11-1.1.0
ZooKeeper 3.4.11

why do octetDeltaCount and packetDeltaCount show up as hex strings in Netflow v9 and integers in IPFIX?

More of a question than an issue. In the JSON output for NetFlow v9, the fields for octetDeltaCount (1) and packetDeltaCount (2) show up as hex strings, whereas in IPFIX they're integers. Is this supposed to be the case?

The marshal code for v9 uses the same element definitions from ipfix/rfc5102_model.go. Or is it related to how the template defines these values? (The RFC says their nominal length is 4 bytes, which differs from the firm unsigned64 definition in IPFIX.) In the end, these fields have a type of []uint8.

cannot compile clickhouse/main.go

When I try

go build main.go

I get:

[master] 
# github.com/kshvakov/clickhouse/lib/data
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:74: invalid variable name column in type switch
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:89: undefined: column.ReadArray
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:96: undefined: column.Read
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:115: invalid variable name column in type switch
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:117: undefined: column.WriteArray
../../../src/github.com/kshvakov/clickhouse/lib/data/block.go:126: undefined: column.Write


The solution I found:

cd <src/kshvakov/clickhouse> && git checkout 3d7bd11

(this rolls the codebase back to the date of the last commit in your repo)

Ipfix from Vmware

I have an ESX server configured to send IPFIX flows to a vFlow server (installed on CentOS using the latest RPM version, 0.6.5), but no flows are sent to the MQ.

I can see the following errors in Vflow log:

  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 Multiple errors:
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 Multiple errors:
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 IPFIX element key (890) not exist
    [vflow] 2018/09/17 11:47:17 Multiple errors:
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist
  • IPFIX element key (890) not exist

Looking at the code, these appear to be non-fatal errors, so I am not sure why the other fields are not being parsed and sent to the MQ.

attached is the pcap file from the Vmware server
ipfix.zip

Multiple message broker destinations

Currently, only a single publisher can be specified.

In certain setups, it would be nice to have multiple publishers, including those of the same type.

An example of this would be where there are distinct kafka clusters for redundancy purposes (dual write, dual read), or purpose specific clusters (general vs security for example).

Would it be possible to have multiple instances of publishers, with distinct configs under a single instance? I'm not sure how complex this would be in the current code structure.

An alternate solution is running multiple instances with a proxy in-front (nfacctd etc) or forwarding enabled. Both of these have issues due to #5 and potentially run into local buffer issues.

Config flags don't work

The -config flag is documented at https://github.com/VerizonDigital/vflow/blob/master/docs/config.md, as well as in --help.

Using it to specify an alternate config file doesn't appear to work as expected:

# vflow -config test.conf
[vflow] 2017/04/24 08:37:27 open /usr/local/vflow/etc/vflow.conf: no such file or directory

Strace confirms the binary isn't attempting to open the test file;

openat(AT_FDCWD, "/proc/sys/net/core/somaxconn", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/vflow/etc/vflow.conf", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/zoneinfo//:/etc/localtime", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/lib/zoneinfo//:/etc/localtime", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/locale/TZ//:/etc/localtime", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib64/go/lib/time/zoneinfo.zip", O_RDONLY) = -1 ENOENT (No such file or directory)
[vflow] 2017/04/24 08:38:17 open /usr/local/vflow/etc/vflow.conf: no such file or directory
openat(AT_FDCWD, "/var/run/vflow.pid", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 3
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=2017, si_status=1, si_utime=0, si_stime=0} ---
openat(AT_FDCWD, "/var/run/vflow.pid", O_WRONLY|O_CREAT|O_CLOEXEC, 0666) = 3
+++ exited with 1 +++

The message broker config appears to exhibit the same behaviour (/usr/local/vflow/etc/vflow.conf has been created in this example):

openat(AT_FDCWD, "/proc/sys/net/core/somaxconn", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/vflow/etc/vflow.conf", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/var/run/vflow.pid", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 3
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=11094, si_status=1, si_utime=0, si_stime=0} ---
openat(AT_FDCWD, "/var/run/vflow.pid", O_WRONLY|O_CREAT|O_CLOEXEC, 0666) = 3
[vflow] 2017/04/24 08:40:08 sflow has been disabled
[vflow] 2017/04/24 08:40:08 starting stats web server ...
[vflow] 2017/04/24 08:40:08 ipfix is running (workers#: 200)
[vflow] 2017/04/24 08:40:08 ipfix RPC enabled
[vflow] 2017/04/24 08:40:09 kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
+++ exited with 1 +++

This has been tested against the 0.3.0 tag and the current master.

Ipfix data samples with custom PENS

What will the data look like when decoding information elements (IEs) that belong to a custom PEN?

A simple tuple of I, V wouldn't suffice, as it's quite likely that an IE belongs to a custom PEN while sharing an IE id with a standard element.

For example, when decoding custom IEs the tuple could be I, V, E, where I is the IE id, V is the value and E is the enterprise number that I belongs to.

Reference: https://tools.ietf.org/html/rfc7011#section-3.2

Unified configuration and flags

Currently vflow uses a few different files for configuration, and some of it is not super easy to keep track of; nor does it allow some features to be excluded (mirror on Windows, #16).

Would this project be willing to accept something a little more complete and unified? Some examples of ideas:

I happen to like the go-flags library, as it's easy and works with files, flags, and environment variables.

Supervisord Stdout and Stderr are lost when running in a docker container

Currently, there are two places where logs end up when running in a Docker container: the Docker logs (seen by running docker logs <container name>) and the vFlow logs (set via the config value log-file). Neither captures the supervisord stdout and stderr.

I ran into this when I set the wrong volume path for the docker container. I continually kept seeing the following:

2017-12-19 00:37:23,626 INFO spawned: 'vflow' with pid 9
2017-12-19 00:37:24,415 INFO exited: vflow (exit status 1; not expected)
2017-12-19 00:37:25,417 INFO spawned: 'vflow' with pid 17
2017-12-19 00:37:26,211 INFO exited: vflow (exit status 1; not expected)
2017-12-19 00:37:28,218 INFO spawned: 'vflow' with pid 27
2017-12-19 00:37:29,005 INFO exited: vflow (exit status 1; not expected)
2017-12-19 00:37:32,010 INFO spawned: 'vflow' with pid 36
2017-12-19 00:37:32,816 INFO exited: vflow (exit status 1; not expected)
2017-12-19 00:37:33,818 INFO gave up: vflow entered FATAL state, too many start retries too quickly

And since my volume path was messed up, vFlow never properly started and never read my config file, so I got no logs in my vFlow log-file.

It wasn't until I set the supervisord stdout and stderr log files in vflow.supervisor, using the following, that I saw my error:

stdout_logfile=/etc/vflow/stdout.log
stderr_logfile=/etc/vflow/stderr.log

I'm not sure what the ramifications of hardcoding those values would be, but that worked for my use case.

sFlow GATEWAY extension

Hi,
I did not see any reference to extensions, specifically the gateway extension for BGP attributes.
Are these supported, and if not, could they be added?

vflow doesn't work with sFlow from Cisco Nexus 3000

Hello!

I'm trying to collect sFlow from a Cisco Nexus 3000 and it doesn't work.
vflow receives packets, as can be seen in the verbose log, but the queue is empty.

>$ ./vflow -verbose
...
[vflow] 2018/02/22 11:30:35 rcvd sflow data from: 10.0.51.150:62255, size: 1312 bytes
[vflow] 2018/02/22 11:30:35 rcvd sflow data from: 10.0.51.150:62255, size: 572 bytes
[vflow] 2018/02/22 11:30:35 rcvd sflow data from: 10.0.51.150:62255, size: 1368 bytes

I added debug logging and found that packet processing stops here:
https://github.com/VerizonDigital/vflow/blob/master/sflow/decoder.go#L137-L139
In my case datagram.Version = 1, but my Cisco Nexus 3000 definitely sends sFlow v5.

I tried this Go library, https://github.com/Cistern/sflow, and it doesn't work with my sFlow traffic either.

But https://github.com/sflow/sflowtool works perfectly, which is why I guess the bug is in vflow.

I saved my sFlow traffic to a pcap with tcpdump, in case it is useful.

Error: invalid netflow version (10)

When trying to run the stress test for vflow, I get errors like:

invalid netflow version (10)

No traffic was parsed and sent to the Kafka cluster.
vflow and the stress tool were built from the v0.5.0 version.

OS: ubuntu 18.04
Java: openjdk version "10.0.1" 2018-04-17
go version: go version go1.10.3 linux/amd64

NetflowV9: Kafka server message too large

I'm running the 0.4.1 vFlow Docker image and pushing to a Kafka instance within the same Docker network, created using docker-compose. It runs fine under very low throughput, but when I attempt to increase it (using the exact same NetFlow message), vFlow fails to push to Kafka because the message is too large.

TLDR: I believe I'm running into this issue: IBM/sarama#959 and can't use the proposed solution.

Setup:

My docker compose yaml file.

version: '2'

services:
  zookeeper:
    image: zookeeper:3.4.11
    restart: always
    container_name: 'zookeeper'
    hostname: zookeeper
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka:0.11.0.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "vflow.netflow9:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper

  vflow:
    image: 'mehrdadrad/vflow:v0.4.1'
    container_name: 'vflow'
    hostname: 'vflow'
    volumes:
      - ./vflow/:/etc/vflow/
    ports:
      - "4729:4729/udp" # netflowV9
      # - "4739:4739/udp" # ipfix
      # - "6343:6343/udp" # sflow
      - "9081:8081" # stats
    depends_on:
      - kafka

Within the "./vflow" directory I have two input config files:

vflow.conf:

ipfix-enabled: false
ipfix-workers: 0

sflow-enabled: false
sflow-workers: 0

netflow-enabled: false
netflow9-workers: 50

log-file: /etc/vflow/vflow.log
mq-name: kafka
mq-config-file: /etc/vflow/mq.conf
ipfix-tpl-cache-file: /usr/local/vflow/vflow.templates

mq.conf:

brokers:
    - kafka:9092
retry-max: 1
retry-backoff: 30

Starting up:

I am able to run all three services and produce/consume events from Kafka outside of vFlow without issue. vFlow starts up and lists the following in the logs:

[vflow] 2017/11/30 21:01:09 ipfix disabled
[vflow] 2017/11/30 21:01:10 vFlow has been shutdown
[vflow] 2017/11/30 21:01:10 netflow v9 has been shutdown
[vflow] 2017/11/30 21:01:20 Welcome to vFlow v.unknown Apache License 2.0
[vflow] 2017/11/30 21:01:20 Copyright (C) 2017 Verizon. github.com/VerizonDigital/vflow
[vflow] 2017/11/30 21:01:20 starting stats web server ...
[vflow] 2017/11/30 21:01:20 sflow has been disabled
[vflow] 2017/11/30 21:01:20 ipfix has been disabled
[vflow] 2017/11/30 21:01:20 netflow v9 is running (UDP: listening on [::]:4729 workers#: 50)
[vflow] 2017/11/30 21:01:20 start producer: Kafka, brokers: [kafka:9092], topic: vflow.netflow9

The issue:

Sending Netflow v9 traffic works as expected with just a few messages, but when I start increasing the number of messages I send, I see the following error repeated over and over:

[vflow] 2017/11/30 22:30:37 kafka: Failed to produce message to topic vflow.netflow9: kafka server: Message was too large, server rejected it to avoid allocation error.

As stated before, I believe this is because of an issue identified in this thread: IBM/sarama#959

Essentially, Kafka changed what "max.message.bytes" means in version 0.11, which conflicts with the defaults Sarama sets. Since I can't (as far as I know) configure "sarama.MaxRequestSize" within the vFlow Kafka producer, it will always hit this error with Kafka v0.11 at any normal throughput.

Proposed resolution:

Offer more configuration options around the Kafka producer, particularly in relation to Sarama, so that "sarama.MaxRequestSize" can be reduced to 1000000.
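As a sketch of what such a knob could look like, mq.conf might grow a key along these lines (max-request-size is a hypothetical name, not an existing vflow option); internally vflow would assign the value to Sarama's package-level MaxRequestSize variable before creating the producer:

```yaml
brokers:
    - kafka:9092
retry-max: 1
retry-backoff: 30
# hypothetical option: cap Sarama produce requests at 1 MB so Kafka 0.11+
# brokers with the default max.message.bytes do not reject them
max-request-size: 1000000
```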

Align the dockerfile with best practices

There are a number of improvements that could be made to the Dockerfile to better align it with best practices.

  • Don't use supervisord; instead let the user rely on Docker's restart policy
  • The container should not stay alive after the vflow process exits
  • Logs should be written to stdout so they show up in docker logs
  • Use a multistage build to reduce the size of the final image
  • Use the official Golang and Alpine base images instead of Ubuntu
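A minimal multistage Dockerfile sketch along those lines, assuming the binary builds from the vflow subdirectory (paths and versions are illustrative, not the repo's actual layout):

```dockerfile
# build stage: official Golang image
FROM golang:1.9-alpine AS build
WORKDIR /go/src/github.com/VerizonDigital/vflow
COPY . .
RUN go build -o /vflow ./vflow

# runtime stage: small Alpine image, no supervisord
FROM alpine:3.7
COPY --from=build /vflow /usr/local/bin/vflow
# run in the foreground; logging to stdout makes output visible via `docker logs`,
# and the container exits when the process does, so `restart: always` applies
ENTRYPOINT ["/usr/local/bin/vflow"]
```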

unknown ipfix template

Hello,

I am trying to run vflow to collect IPFIX data. When vflow receives the IPFIX data I get the following error:
"unknown ipfix template id# 15599", and nothing is put onto Kafka.

regards,

Monitoring with influxdb

How can I use Grafana and InfluxDB to monitor vflow? I could not find any documentation on enabling monitoring.
I am running vflow and InfluxDB in separate Docker containers.

netflow v9: can not read the data

Hi mehrdadrad!

Cisco Catalyst 6509-E streams netflow v9 to vflow.
For some flows it writes "cannot read the data" to the log. After some time (anywhere from 1 hour to 1 day), vflow crashes.
Config, log and pcap dump are attached.

Thanks!

WBR
Sultan
vflow.zip

Question: Netflow v5 traffic and verifying vflow operation

Does the collector listening for v9 traffic on 4729 also listen for v5? That is our main need: we'd like to send any NetFlow traffic through the same socket. For IPFIX, we'll use your other socket. We're using Kafka within Docker, which I have verified works with other traffic. I used the VFLOW_KAFKA_BROKERS env var you referenced in your docs.

I tried a POC using fprobe on Mac OS X Sierra (10.12.6), installed via Homebrew. I didn't see anything hit the Kafka topic "vflow.netflow9". I verified that fprobe was working by sending its output to netcat as a first test. Are there any logs I can check in vflow to verify that it sees traffic hitting it?

We're using your vflow docker image, v0.4.1.

Thank you!

Can not decode sflow data

  • I built from the git master branch

run command

./vflow -config vflow.conf -mqueue-conf mq.conf  \
                 -sflow-max-udp-size 100000 -sflow-port 6343

vflow.conf

cat vflow.conf
sflow-workers: 1
log-file: /var/log/vflow.log
verbose: true
mq-name: kafka
ipfix-enabled: false
netflow9-enabled: false
sflow-topic: vflow

mq.conf

brokers:
  - xxxxx.cn:9092

output log

[vflow] 2018/04/26 00:50:49 rcvd sflow data from: xxxx:6343, size: 1396 bytes
[vflow] 2018/04/26 00:50:49 rcvd sflow data from: xxxx:6343, size: 1268 bytes
[vflow] 2018/04/26 00:50:49 rcvd sflow data from: xxxx:6343, size: 1220 bytes
[vflow] 2018/04/26 00:50:49 rcvd sflow data from: xxxx:6343, size: 1312 bytes

monitor

{
    "UDPQueue": 0,
    "MessageQueue": 0,
    "UDPCount": 750,
    "DecodedCount": 0,
    "MQErrorCount": 0,
    "Workers": 1
}

tcpdump of sflow sending

01:09:45.347017 IP xxxxx.sflow > xxxxx.sflow: sFlowv5, IPv4 agent xxxxx.com, agent-id 8, length 1216
01:09:45.482522 IP xxxxx.com.sflow > 1xxxxw: sFlowv5, IPv4 agent xxxx, agent-id 8, length 1372

Problem

  • Not a single sFlow message was decoded, and I can't tell why.

Crash when started with a lot netflow/ipfix traffic being received

panic: runtime error: index out of range

goroutine 143 [running]:
github.com/VerizonDigital/vflow/ipfix.MemCache.getShard(0x0, 0x0, 0x0, 0x100, 0xc42095600c, 0x10, 0x10, 0x0, 0x0)
/home/xxx/go/src/github.com/VerizonDigital/vflow/ipfix/memcache.go:91 +0x186
github.com/VerizonDigital/vflow/ipfix.MemCache.retrieve(0x0, 0x0, 0x0, 0x100, 0xc42095600c, 0x10, 0x10, 0x0, 0x0, 0x0, ...)
/home/xxx/go/src/github.com/VerizonDigital/vflow/ipfix/memcache.go:102 +0xb4
github.com/VerizonDigital/vflow/ipfix.(*Decoder).decodeSet(0xc42084ce90, 0x0, 0x0, 0x0, 0xc420922040, 0x7f03e0054e58, 0x0)
/home/xxx/go/src/github.com/VerizonDigital/vflow/ipfix/decoder.go:164 +0x5e6
github.com/VerizonDigital/vflow/ipfix.(*Decoder).Decode(0xc42084ce90, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/xxx/go/src/github.com/VerizonDigital/vflow/ipfix/decoder.go:122 +0x176
main.(*IPFIX).ipfixWorker(0xc4200b0150, 0xc420090480)
/home/xxx/go/src/github.com/VerizonDigital/vflow/vflow/ipfix.go:244 +0x395
main.(*IPFIX).run.func1(0xc4200b0150)
/home/xxx/go/src/github.com/VerizonDigital/vflow/vflow/ipfix.go:129 +0x79
created by main.(*IPFIX).run
/home/xxx/go/src/github.com/VerizonDigital/vflow/vflow/ipfix.go:126 +0x188
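The panic in MemCache.getShard is consistent with the shard slice being empty; a bounds-checked lookup would surface that as an error instead of a crash. A minimal sketch, with illustrative types and hashing rather than vflow's actual ones:

```go
package main

import (
	"errors"
	"fmt"
)

// shard is a stand-in for a template-cache shard; the real vflow type differs.
type shard struct{ id int }

// getShard selects a shard by hash, returning an error instead of
// panicking when the cache holds zero shards.
func getShard(shards []*shard, hash uint32) (*shard, error) {
	if len(shards) == 0 {
		return nil, errors.New("template memcache is empty or uninitialized")
	}
	return shards[hash%uint32(len(shards))], nil
}

func main() {
	_, err := getShard(nil, 42)
	fmt.Println(err) // prints: template memcache is empty or uninitialized
}
```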

mq config examples

Hello all,

Only kafka publisher examples available, but nothing for NATS and NSQ.

ipfix/netflow on on different ports

Before I go and code things up: IPFIX and NetFlow run on different ports, and this is not strictly necessary. A switch/if on the first 2 bytes of the packet could distinguish them.

Would code to do this be accepted?
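For reference, these protocols announce themselves in the first bytes of the datagram: NetFlow v5/v9 and IPFIX start with a big-endian 16-bit version field (5, 9, 10), while sFlow v5's 32-bit version means its first two bytes are zero. The dispatch could be sketched like this (a standalone illustration, not vflow's code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// flowProto inspects the first two bytes of a received UDP payload and
// returns the flow protocol it appears to carry.
func flowProto(payload []byte) string {
	if len(payload) < 2 {
		return "unknown"
	}
	switch binary.BigEndian.Uint16(payload) {
	case 5:
		return "netflow.v5"
	case 9:
		return "netflow.v9"
	case 10:
		return "ipfix"
	case 0:
		return "sflow" // sFlow v5 encodes its version as uint32 0x00000005
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(flowProto([]byte{0x00, 0x09, 0x00, 0x07})) // prints: netflow.v9
	fmt.Println(flowProto([]byte{0x00, 0x0a, 0x01, 0xa4})) // prints: ipfix
}
```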

latest version from git "operation not supported" when running

Hi
Just trying to install and run
configs:
cat /etc/vflow/vflow.conf
ipfix-workers: 600
sflow-workers: 300
sflow-port: 9999
stats-http-port: 10080
log-file: /var/log/vflow.log

cat /etc/vflow/mq.conf
brokers:
- 127.0.0.1:9092
retry-max: 1
retry-backoff: 30

netstat -na | grep 9092
netstat -na | grep 9092
tcp6 0 0 :::9092 :::* LISTEN

logs for vflow with -verbose show no errors - last in log:
[vflow] 2017/11/06 19:51:20 netflow v9 is running (UDP: listening on [::]:4729 workers#: 200)
[vflow] 2017/11/06 19:51:20 ipfix is running (UDP: listening on [::]:4739 workers#: 600)

Visualizing flow information...

Not a real issue, more a question... (I saw vflow mentioned on the NANOG list the other day.)

vflow sounds really impressive for big flow traffic. I am looking at a slightly different application space, and wonder if vflow might work.

I'm writing a series of blog posts about netflow collectors that are useful for home networks. (Even though you're a little guy, you still need good tools...) You can read them at http://richb-hanover.com/netflow-collectors-for-home-networks/)

If I'm reading the documentation correctly, vflow collects the flow data and puts it into a database. Do you have any recommendations about visualizing the data? I'd love to be able to recommend some kind of graphical front end for my readers. Many thanks!

Question about listeners/setup (especially in Docker)

I am looking to run this in Apache Mesos (great setup, by the way). Based on the architecture page, it looks like I can run multiple instances that sync up together for HA, etc. However, because they use multicast, would I have to run my Docker daemons in host mode for that to work? (Please correct me if I am wrong.)

Thanks!

Netflow: fatal error: runtime: out of memory

Hello All,

I get "fatal error: runtime: out of memory" a few minutes after vflow starts.
This is my vflow.conf:

netflow9-workers: 50
log-file: /var/log/vflow.log
ipfix-tpl-cache-file: /usr/local/vflow/vflow.templates
netflow9-topic: anomaly
ipfix-enabled: false
sflow-enabled: false
dynamic-workers: false

Layer-2 Headers of Spanning-Tree

I noticed an edge case in sampled STP layer-2 headers: the source and destination MAC addresses are followed by a length field instead of an EtherType (because STP doesn't have an EtherType), and as a result the length (usually 105) is reported as the EtherType.
One way around it would be to use the destination MAC address, which is 01:80:c2:00:00:00 for STP, to identify this edge case.
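Another option is the IEEE 802.3 rule itself: a value below 1536 (0x0600) in that position is a payload length, not an EtherType, so any 802.3/LLC frame (STP included) can be caught generically. A minimal sketch of that check (standalone illustration, not vflow's code):

```go
package main

import "fmt"

// isEtherType interprets the 16-bit field after the MAC addresses.
// Per IEEE 802.3, values >= 0x0600 (1536) are an EtherType; smaller
// values are an 802.3 payload length (e.g. STP BPDUs over LLC).
func isEtherType(v uint16) bool {
	return v >= 0x0600
}

func main() {
	fmt.Println(isEtherType(0x0800)) // prints: true  (IPv4 EtherType)
	fmt.Println(isEtherType(105))    // prints: false (802.3 length, e.g. an STP BPDU)
}
```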

netflow v9 some templates not being parsed - Multiple errors: can not read the data

I'm having trouble with some v9 templates not being parsed from a Juniper SRX. Some are, and some aren't. As an example below, template id# 261 seems to fail to be defined, even though its definition gets transmitted every 60 seconds by the sending router. First, some background:

  • sending device - SRX210HE JUNOS Software Release [12.1X46-D67]
  • receiving devices (similar results from both)
    • smartos - go version go1.8 solaris/amd64 - joyent_20170928T144204Z
    • debian stretch - go version go1.7.4 linux/amd64 - Linux version 4.9.0-4-amd64 ([email protected]) (gcc version 6.3.0 20170516 (Debian 6.3.0-18) ) #1 SMP Debian 4.9.51-1 (2017-09-28)
  • built from git (today), VERSION=0.4.1 in the Makefile (although interestingly that version doesn't make it into the startup banner: [vflow] 2017/12/03 14:31:45 Welcome to vFlow v.unknown Apache License 2.0)

Router config:

set services flow-monitoring version9 template template_1 ipv4-template
set services flow-monitoring version9 template template_2 ipv6-template
set forwarding-options sampling input rate 1
set forwarding-options sampling family inet output flow-inactive-timeout 30
set forwarding-options sampling family inet output flow-active-timeout 60
set forwarding-options sampling family inet output flow-server 10.232.6.89 port 4729
set forwarding-options sampling family inet output flow-server 10.232.6.89 version9 template template_1
set forwarding-options sampling family inet output inline-jflow source-address 10.232.4.5
set forwarding-options sampling family inet6 output flow-inactive-timeout 30
set forwarding-options sampling family inet6 output flow-active-timeout 60
set forwarding-options sampling family inet6 output flow-server 10.232.6.89 port 4729
set forwarding-options sampling family inet6 output flow-server 10.232.6.89 version9 template template_2
set forwarding-options sampling family inet6 output inline-jflow source-address 10.232.4.5

From the vflow.log file, the "can not read the data" error appears every minute.

[vflow] 2017/11/28 11:10:16 Multiple errors:
- 10.232.4.5 unknown netflow template id# 261
- can not read the data
- can not read the data
- can not read the data
[vflow] 2017/11/28 11:10:16 rcvd netflow v9 data from: 10.232.4.5:63651, size: 144 bytes
[vflow] 2017/11/28 11:10:16 Multiple errors:
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
- can not read the data
[vflow] 2017/11/28 11:10:18 rcvd netflow v9 data from: 10.232.4.5:58938, size: 144 bytes
[vflow] 2017/11/28 11:10:18 10.232.4.5 unknown netflow template id# 261

(After a few days it also crashes with a backtrace that I haven't really had a look at yet)

To me it looks like the definition of template 261 is being sent every minute, along with a few flows using the same template ID. Here's a packet:

Frame 16: 618 bytes on wire (4944 bits), 618 bytes captured (4944 bits) on interface 0
    Interface id: 0 (net0)
    Encapsulation type: Ethernet (1)
    Arrival Time: Dec  3, 2017 15:02:37.876916000 AEDT
    [Time shift for this packet: 0.000000000 seconds]
    Epoch Time: 1512273757.876916000 seconds
    [Time delta from previous captured frame: 2.002214000 seconds]
    [Time delta from previous displayed frame: 2.002214000 seconds]
    [Time since reference or first frame: 19.046828000 seconds]
    Frame Number: 16
    Frame Length: 618 bytes (4944 bits)
    Capture Length: 618 bytes (4944 bits)
    [Frame is marked: False]
    [Frame is ignored: False]
    [Protocols in frame: eth:ethertype:ip:udp:cflow]
Ethernet II, Src: JuniperN_cb:2f:01 (80:71:1f:cb:2f:01), Dst: 82:c5:5d:98:41:4a (82:c5:5d:98:41:4a)
    Destination: 82:c5:5d:98:41:4a (82:c5:5d:98:41:4a)
        Address: 82:c5:5d:98:41:4a (82:c5:5d:98:41:4a)
        .... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Source: JuniperN_cb:2f:01 (80:71:1f:cb:2f:01)
        Address: JuniperN_cb:2f:01 (80:71:1f:cb:2f:01)
        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.232.4.5, Dst: 10.232.6.89
    0100 .... = Version: 4
    .... 0101 = Header Length: 20 bytes (5)
    Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
        0000 00.. = Differentiated Services Codepoint: Default (0)
        .... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
    Total Length: 604
    Identification: 0x7864 (30820)
    Flags: 0x00
        0... .... = Reserved bit: Not set
        .0.. .... = Don't fragment: Not set
        ..0. .... = More fragments: Not set
    Fragment offset: 0
    Time to live: 63
    Protocol: UDP (17)
    Header checksum: 0xe0ff [validation disabled]
    [Header checksum status: Unverified]
    Source: 10.232.4.5
    Destination: 10.232.6.89
User Datagram Protocol, Src Port: 54060, Dst Port: 4729
    Source Port: 54060
    Destination Port: 4729
    Length: 584
    Checksum: 0xfb15 [unverified]
    [Checksum Status: Unverified]
    [Stream index: 0]
Cisco NetFlow/IPFIX
    Version: 9
    Count: 10
    SysUptime: 2613.569000000 seconds
    Timestamp: Dec  3, 2017 15:02:37.000000000 AEDT
        CurrentSecs: 1512273757
    FlowSequence: 372
    SourceId: 142
    FlowSet 1 [id=261]
        FlowSet Id: (Data) (261)
        FlowSet Length: 184
        Data (180 bytes), no template found
            [Expert Info (Warning/Malformed): Data (180 bytes), no template found]
                [Data (180 bytes), no template found]
                [Severity level: Warning]
                [Group: Malformed]
    FlowSet 2 [id=1] (Options Template): 256
        FlowSet Id: Options Template(V9) (1)
        FlowSet Length: 24
        Options Template (Id = 256) (Scope Count = 1; Data Count = 2)
            Template Id: 256
            Option Scope Length: 4
            Option Length: 8
            Field (1/1) [Scope]: System
                Scope Type: System (1)
                Length: 0
            Field (1/2): SAMPLING_ALGORITHM
                Type: SAMPLING_ALGORITHM (35)
                Length: 1
            Field (2/2): SAMPLING_INTERVAL
                Type: SAMPLING_INTERVAL (34)
                Length: 4
        Padding: 0000
    FlowSet 3 [id=256] (1 flows)
        FlowSet Id: (Data) (256)
        FlowSet Length: 12
        [Template Frame: 16]
        Flow 1
            Sampling algorithm: Random sampling (2)
            Sampling interval: 1
        Padding: 000000
    FlowSet 4 [id=0] (Data Template): 261
        FlowSet Id: Data Template (V9) (0)
        FlowSet Length: 92
        Template (Id = 261, Count = 21)
            Template Id: 261
            Field Count: 21
            Field (1/21): IP_SRC_ADDR
                Type: IP_SRC_ADDR (8)
                Length: 4
            Field (2/21): IP_DST_ADDR
                Type: IP_DST_ADDR (12)
                Length: 4
            Field (3/21): IP_TOS
                Type: IP_TOS (5)
                Length: 1
            Field (4/21): PROTOCOL
                Type: PROTOCOL (4)
                Length: 1
            Field (5/21): L4_SRC_PORT
                Type: L4_SRC_PORT (7)
                Length: 2
            Field (6/21): L4_DST_PORT
                Type: L4_DST_PORT (11)
                Length: 2
            Field (7/21): ICMP_TYPE
                Type: ICMP_TYPE (32)
                Length: 2
            Field (8/21): INPUT_SNMP
                Type: INPUT_SNMP (10)
                Length: 4
            Field (9/21): SRC_MASK
                Type: SRC_MASK (9)
                Length: 1
            Field (10/21): DST_MASK
                Type: DST_MASK (13)
                Length: 1
            Field (11/21): SRC_AS
                Type: SRC_AS (16)
                Length: 4
            Field (12/21): DST_AS
                Type: DST_AS (17)
                Length: 4
            Field (13/21): BGP_NEXT_HOP
                Type: BGP_NEXT_HOP (18)
                Length: 4
            Field (14/21): TCP_FLAGS
                Type: TCP_FLAGS (6)
                Length: 1
            Field (15/21): OUTPUT_SNMP
                Type: OUTPUT_SNMP (14)
                Length: 4
            Field (16/21): IP_NEXT_HOP
                Type: IP_NEXT_HOP (15)
                Length: 4
            Field (17/21): BYTES
                Type: BYTES (1)
                Length: 4
            Field (18/21): PKTS
                Type: PKTS (2)
                Length: 4
            Field (19/21): FIRST_SWITCHED
                Type: FIRST_SWITCHED (22)
                Length: 4
            Field (20/21): LAST_SWITCHED
                Type: LAST_SWITCHED (21)
                Length: 4
            Field (21/21): IP_PROTOCOL_VERSION
                Type: IP_PROTOCOL_VERSION (60)
                Length: 1
    FlowSet 5 [id=261] (4 flows)
        FlowSet Id: (Data) (261)
        FlowSet Length: 244
        [Template Frame: 16]
        Flow 1
            SrcAddr: xxx.xxx.xxx.xxx
            DstAddr: xxx.xxx.xxx.xxx
            IP ToS: 0x00
            Protocol: TCP (6)
            SrcPort: 443 (443)
            DstPort: 17776 (17776)
            ICMP Type: 0x0000
            InputInt: 539
            SrcMask: 32
            DstMask: 32
            SrcAS: 0
            DstAS: 0
            BGPNextHop: 0.0.0.0
            TCP Flags: 0x1b, ACK, PSH, SYN, FIN
                00.. .... = Reserved: 0x0
                ..0. .... = URG: Not used
                ...1 .... = ACK: Used
                .... 1... = PSH: Used
                .... .0.. = RST: Not used
                .... ..1. = SYN: Used
                .... ...1 = FIN: Used
            OutputInt: 0
            NextHop: 0.0.0.0
            Octets: 6166
            Packets: 9
            [Duration: 1.320000000 seconds (switched)]
                StartTime: 2609.845000000 seconds
                EndTime: 2611.165000000 seconds
            IPVersion: 4
        Flow 2
            SrcAddr: xxx.xxx.xxx.xxx
            DstAddr: xxx.xxx.xxx.xxx
            IP ToS: 0x00
            Protocol: TCP (6)
            SrcPort: 17776 (17776)
            DstPort: 443 (443)
            ICMP Type: 0x0000
            InputInt: 536
            SrcMask: 32
            DstMask: 32
            SrcAS: 0
            DstAS: 0
            BGPNextHop: 0.0.0.0
            TCP Flags: 0x1b, ACK, PSH, SYN, FIN
                00.. .... = Reserved: 0x0
                ..0. .... = URG: Not used
                ...1 .... = ACK: Used
                .... 1... = PSH: Used
                .... .0.. = RST: Not used
                .... ..1. = SYN: Used
                .... ...1 = FIN: Used
            OutputInt: 539
            NextHop: 0.0.0.0
            Octets: 1441
            Packets: 13
            [Duration: 1.319000000 seconds (switched)]
                StartTime: 2609.816000000 seconds
                EndTime: 2611.135000000 seconds
            IPVersion: 4
        Flow 3
            SrcAddr: xxx.xxx.xxx.xxx
            DstAddr: xxx.xxx.xxx.xxx
            IP ToS: 0x00
            Protocol: UDP (17)
            SrcPort: 53 (53)
            DstPort: 12494 (12494)
            ICMP Type: 0x0000
            InputInt: 539
            SrcMask: 32
            DstMask: 29
            SrcAS: 0
            DstAS: 0
            BGPNextHop: 0.0.0.0
            TCP Flags: 0x00
                00.. .... = Reserved: 0x0
                ..0. .... = URG: Not used
                ...0 .... = ACK: Not used
                .... 0... = PSH: Not used
                .... .0.. = RST: Not used
                .... ..0. = SYN: Not used
                .... ...0 = FIN: Not used
            OutputInt: 536
            NextHop: 10.232.4.3
            Octets: 76
            Packets: 1
            [Duration: 0.000000000 seconds (switched)]
                StartTime: 2552.968000000 seconds
                EndTime: 2552.968000000 seconds
            IPVersion: 4
        Flow 4
            SrcAddr: xxx.xxx.xxx.xxx
            DstAddr: xxx.xxx.xxx.xxx
            IP ToS: 0x00
            Protocol: UDP (17)
            SrcPort: 12494 (12494)
            DstPort: 53 (53)
            ICMP Type: 0x0000
            InputInt: 536
            SrcMask: 29
            DstMask: 32
            SrcAS: 0
            DstAS: 0
            BGPNextHop: 0.0.0.0
            TCP Flags: 0x00
                00.. .... = Reserved: 0x0
                ..0. .... = URG: Not used
                ...0 .... = ACK: Not used
                .... 0... = PSH: Not used
                .... .0.. = RST: Not used
                .... ..0. = SYN: Not used
                .... ...0 = FIN: Not used
            OutputInt: 539
            NextHop: 0.0.0.0
            Octets: 60
            Packets: 1
            [Duration: 0.000000000 seconds (switched)]
                StartTime: 2552.811000000 seconds
                EndTime: 2552.811000000 seconds
            IPVersion: 4

Some other templates are working fine. I haven't worked out what the relationship is between the successful and failing ones yet.

[vflow] 2017/12/03 14:52:29 {"AgentID":"10.232.4.5","Header":{"Version":9,"Count":7,"SysUpTime":1708483,"UNIXSecs":1512272852,"SeqNum":427,"SrcID":142},"DataSets":[[{"I":35,"V":2},{"I":34,"V":1},{"I":1,"V":"0x"}],[{"I":35,"V":0},{"I":34,"V":0},{"I":1,"V":"0x"}],[{"I":35,"V":1},.......
Any thoughts on how to debug this further?
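One lead worth checking, based on the capture above: the data FlowSet for template 261 (FlowSet 1) appears before the FlowSet that defines template 261 (FlowSet 4) within the same packet. A decoder that walks FlowSets strictly in order would miss that first data set on every export, which would match the once-a-minute errors. A two-pass sketch (types and names are illustrative, not vflow's decoder):

```go
package main

import "fmt"

// flowSet is a stand-in for one parsed FlowSet: id 0/1 marks (options)
// template sets, ids >= 256 mark data sets. tmplID is the template a
// template set defines.
type flowSet struct {
	id     uint16
	tmplID uint16
}

// decodePacket registers all template sets first, then decodes data sets,
// so data preceding its own template in the same packet is not lost.
// It returns the number of data sets it could decode.
func decodePacket(sets []flowSet, known map[uint16]bool) int {
	for _, s := range sets {
		if s.id <= 1 {
			known[s.tmplID] = true
		}
	}
	decoded := 0
	for _, s := range sets {
		if s.id >= 256 && known[s.id] {
			decoded++
		}
	}
	return decoded
}

func main() {
	// data for 261 arrives before the template defining 261, as in the capture
	sets := []flowSet{{id: 261}, {id: 0, tmplID: 261}}
	fmt.Println(decodePacket(sets, map[uint16]bool{})) // prints: 1
}
```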

question about configure

Hello,
I installed and ran the Docker image, so I have some questions:
1. Is there any interface or log file where I can see the output?
2. I installed on Ubuntu 16; do I need a Cisco or Juniper router to send data to the Ubuntu host?
3. What configuration should I change to send traffic to vflow?
And lots more questions. Thanks :D for the great project!
