veracode-research / svrwb-fuzz-benchmark-suite

Single-version, Real-World (Dead) Bug Fuzzer Benchmark Suite (work in progress)

8 stars · 5 watchers · 1 fork · 48.67 MB

Languages: C 72.06%, Shell 8.41%, C++ 6.87%, Makefile 4.00%, Roff 1.85%, Tcl 1.77%, HTML 1.50%, Assembly 1.19%, M4 0.74%, Objective-C 0.61%, CMake 0.30%, Python 0.23%, Yacc 0.21%, Awk 0.11%, Batchfile 0.06%, Pascal 0.03%, Lex 0.03%, Perl 0.02%, TSQL 0.01%, PLpgSQL 0.01%

Topics: fuzzing, fuzzer, comparison-benchmarks
svrwb-fuzz-benchmark-suite's People

Contributors: roachspray
Stargazers: 8
Watchers: 5
Forkers: daydayup40

svrwb-fuzz-benchmark-suite's Issues

Ensure proper flags to gcc, clang when using ASan

For Perl and libpcap, link errors will show up due to the combination of -Wl,--no-undefined and -fsanitize=address.

(1) For GCC, use both "-Wl,--no-undefined" and "-fsanitize=address"; if the
linker fails, add "-lasan" to the linker flags.
(2) For Clang on Ubuntu, use "-Wl,--no-undefined" by default. If "-fsanitize=address"
is set, turn off the flag "-Wl,--no-undefined".

(from google/sanitizers issue 380)
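The two rules above can be sketched as a small shell helper. This is a minimal illustration, not part of the suite's build scripts; the function name and calling convention are assumptions:

```shell
#!/bin/sh
# Hypothetical helper sketching the flag logic quoted above.
# Usage: asan_ldflags <gcc|clang> <yes|no>  -> prints linker flags
asan_ldflags() {
  cc_family="$1"   # "gcc" or "clang"
  use_asan="$2"    # "yes" if -fsanitize=address is enabled
  if [ "$use_asan" != "yes" ]; then
    # No ASan: strict linking is safe for both compilers.
    echo "-Wl,--no-undefined"
  elif [ "$cc_family" = "gcc" ]; then
    # GCC keeps --no-undefined; append -lasan only if the link still fails.
    echo "-Wl,--no-undefined -fsanitize=address"
  else
    # Clang leaves the ASan runtime symbols undefined at link time,
    # so --no-undefined must be dropped.
    echo "-fsanitize=address"
  fi
}

asan_ldflags gcc yes    # prints: -Wl,--no-undefined -fsanitize=address
asan_ldflags clang yes  # prints: -fsanitize=address
```

The asymmetry comes from how each compiler links the ASan runtime: GCC can link it as a library (-lasan), while Clang's driver resolves the runtime symbols late, which -Wl,--no-undefined rejects.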

cases/lame -- autoconf patch for SSE compilation issues

Include the following patch to deal with SSE inlining issues (based on FreeBSD PR 206620):

--- configure.in.orig   2012-02-28 19:50:27.000000000 +0100
+++ configure.in        2016-01-25 20:15:46.034842000 +0100
@@ -96,9 +96,19 @@
                 sys/soundcard.h \
                 sys/time.h \
                 unistd.h \
-                xmmintrin.h \
                 linux/soundcard.h)

+dnl Checks for actually working SSE intrinsics
+AC_MSG_CHECKING(working SSE intrinsics)
+AC_COMPILE_IFELSE(
+    [AC_LANG_PROGRAM(
+       [[#include <xmmintrin.h>]],
+       [[_mm_sfence();]])],
+    [AC_DEFINE([HAVE_XMMINTRIN_H], [1], [Define if SSE intrinsics work.])
+     ac_cv_header_xmmintrin_h=yes],
+   [ac_cv_header_xmmintrin_h=no])
+AC_MSG_RESULT(${ac_cv_header_xmmintrin_h})
+
 dnl Checks for typedefs, structures, and compiler characteristics.
 AC_C_CONST
 AC_C_INLINE
--- configure.orig      2012-02-28 19:54:37.000000000 +0100
+++ configure   2016-01-25 20:16:07.429512000 +0100
@@ -11922,7 +11918,6 @@
                 sys/soundcard.h \
                 sys/time.h \
                 unistd.h \
-                xmmintrin.h \
                 linux/soundcard.h
 do :
   as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh`
@@ -11937,6 +11932,31 @@
 done


+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking working SSE intrinsics" >&5
+$as_echo_n "checking working SSE intrinsics... " >&6; }
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <xmmintrin.h>
+int
+main ()
+{
+_mm_sfence();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+
+$as_echo "#define HAVE_XMMINTRIN_H 1" >>confdefs.h
+
+     ac_cv_header_xmmintrin_h=yes
+else
+  ac_cv_header_xmmintrin_h=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ${ac_cv_header_xmmintrin_h}" >&5
+$as_echo "${ac_cv_header_xmmintrin_h}" >&6; }
+
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for an ANSI C-conforming const" >&5
 $as_echo_n "checking for an ANSI C-conforming const... " >&6; }
 if ${ac_cv_c_const+:} false; then :

Investigate and add possible apps from Redqueen fuzzer repo

The fuzzer repo is https://github.com/rub-syssec/redqueen, and it lists a handful of real-world applications (with versions) that the authors fuzzed. Some of these may be useful for bootstrapping their addition to the benchmark. Applications that are added should have associated CVE numbers; try to find them.

Acceptance criteria:

  • Possibly add applications from the redqueen cves.tar.gz file.
  • If none can be added, keep notes as to why they were not sufficient.

Address issue of being a static benchmark suite

A fundamental issue with a static benchmark suite for comparing general-purpose fuzzers is that it does not change, so fuzzers can eventually be tuned to it. Regularly changing targets is a key benefit of the Rode0day "contests". Determine whether there is a good process for refreshing this suite -- e.g., a yearly release of typical code with typical vulnerabilities -- such that each app still passes the criteria for addition but is newer.

Remove duplicate source trees

In the steps leading to what apps are in the suite now, we did the following process:

  • Gather known bugs related to a specific version or a date-time/commit.
  • If more than one version appears to hold the bugs, compile a few likely versions and run the known-bug inputs through each to determine which version reproduces the most known bugs.
  • Use the version with the most known bugs as the starting point.

In my (ridiculous and perhaps unwarranted) haste to push this to GitHub to coincide with the Japan summit, I left duplicate trees here and committed them. Fix this so there is only one.

It may also be worthwhile to document the above known-bugs version vetting.
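The vetting step above can be sketched as a short script. The directory layout (builds/<ver>/app, a directory of known-bug inputs) and the "nonzero exit status means the bug reproduced" convention are assumptions for illustration, not the suite's actual harness:

```shell
#!/bin/sh
# Sketch of the known-bugs version vetting described above.

# count_repros BINARY INPUT_DIR -> prints how many known-bug inputs crash BINARY
count_repros() {
  bin="$1"; dir="$2"; n=0
  for input in "$dir"/*; do
    [ -e "$input" ] || continue            # skip if the directory is empty
    "$bin" "$input" >/dev/null 2>&1 || n=$((n + 1))
  done
  echo "$n"
}

# pick_version BUILDS_DIR INPUT_DIR -> prints the candidate build that
# reproduces the most known bugs, i.e. the starting-point version.
pick_version() {
  best=""; best_n=-1
  for ver in "$1"/*; do
    n=$(count_repros "$ver/app" "$2")
    if [ "$n" -gt "$best_n" ]; then best_n=$n; best=$ver; fi
  done
  echo "$best ($best_n bugs)"
}
```

Invoked as, say, `pick_version builds known-bug-inputs`, it prints the version directory that reproduced the most known bugs.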

Add ChakraCore 1.4.1

ChakraCore 1.4.1 (iirc) has been used by a few papers for fuzzer comparison. Leverage that prior work to include the code here.

Create commentary for recommended use practices

Similar to the question of addressing the reality that this is a static suite of applications (#4), there is the question of what constitutes appropriate use of the suite. Any suite can (obviously?) be abused, so there should be a set of best practices that users of the suite would want to follow. Part of this list should be drawn from documents like [1] and [2], and others (there are some genetic/evolutionary-algorithm comparison papers).

Things to be addressed should include:

  • Selection of a subset of apps
  • Number of trials
  • Duration of each trial
  • Statements on the trial systems' hardware, OS, etc.
  • Acceptable seeds
  • ...I am sure this list is probably incomplete

[1] Klees, et al., "Evaluating Fuzz Testing", https://www.cs.umd.edu/~mwh/papers/fuzzeval.pdf
[2] Berger, et al., "A Checklist Manifesto for Empirical Evaluation: A Preemptive Strike Against a Replication Crisis in Computer Science", https://www.sigarch.org/a-checklist-manifesto-for-empirical-evaluation-a-preemptive-strike-against-a-replication-crisis-in-computer-science/
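"Number of trials" and "duration of each trial" can be made concrete in a trial plan. A minimal sketch, assuming a hypothetical ./fuzz command line and out/ directory layout; only the trial/duration bookkeeping reflects the practices above:

```shell
#!/bin/sh
# Sketch of a repeated-trial plan for one benchmark app. Emitting the
# commands (rather than running them) makes the plan easy to review and
# to farm out to trial machines.
# Usage: plan_trials TRIALS DURATION_SECS TARGET
plan_trials() {
  trials="$1"; duration="$2"; target="$3"
  i=1
  while [ "$i" -le "$trials" ]; do
    # Each trial gets its own output directory and a fixed wall-clock budget.
    echo "timeout $duration ./fuzz --out out/$target-trial-$i $target"
    i=$((i + 1))
  done
}

plan_trials 3 86400 lame   # three 24-hour trials against "lame"
```

Klees et al. [1] argue for many fixed-length repeated runs per configuration precisely so that results can be compared with statistical tests rather than single-run anecdotes.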

Improved details about each bug

This could be ongoing, but for each bug we would want to know:

  • the type of bug
  • the patch that fixes it
  • whether the buggy code is hidden behind any rarely taken branch paths
  • ...other?
