
Framework for Detection Evaluation (F4DE)

Version: 3.5.0

Date: July 6, 2017

Table of Contents

Overview

Setup

Usage

Supported Evaluations

Report a bug

Authors

Copyright

Overview

This directory contains the Framework for Detection Evaluation (F4DE) software package: a set of evaluation tools for detection evaluations in general and for specific NIST-coordinated evaluations.

Setup

F4DE consists of a set of Perl scripts that can be run under a shell terminal.

F4DE's source files are publicly available on GitHub, see: https://github.com/usnistgov/F4DE

It has been confirmed to work under Linux, OS X and Cygwin.

After confirming the prerequisites are met, F4DE does not have to be installed, as the tools can be run directly from their base location. An installation method is available if you intend to add some of the more common tools to your PATH. See the installation subsection for more details.

A Docker release of F4DE is available; see: https://hub.docker.com/r/martialnist/docker-f4de/

Unless you are using the Docker release of F4DE, some prerequisites need to be fulfilled.

Prerequisites

  • a version of Perl, above 5.14.

    • If you do not have module installation privileges on your host for the version of Perl you are running, it is recommended to install a local Perl in your HOME directory using Perlbrew.

    • If using Perl 5.18 (or above), the tools will force PERL_PERTURB_KEYS=0 and PERL_HASH_SEED=0 in order to produce repeatable results, as Perl 5.18 and above use a new hash algorithm that will not always produce results comparable with runs done using Perl 5.16 and below.

    • For Perl before 5.20, an installed and configured version of cpanp. For Perl 5.20 and above, an installed and configured version of cpanm.

  • a recent version of gnuplot (at least 4.4, with libgd's png support) to create plots for DETCurves, among others. Of note, F4DE does not currently support libcairo's pngcairo terminal type.

  • a recent version of xmllint (at least 2.6.30) (part of libxml2) to validate XML files against their corresponding schema files.

  • a recent version of SQLite (at least 3.6.12) to use all the SQLite-based tools (including DEVA)

  • the rsync tool available in your PATH (used in the installation process)

  • Some Perl Modules installed: Text::CSV, Text::CSV_XS, Math::Random::OO::Uniform, Math::Random::OO::Normal, Statistics::Descriptive, Statistics::Descriptive::Discrete, Statistics::Distributions, DBI, DBD::SQLite, File::Monitor, File::Monitor::Object, Digest::SHA, YAML, Data::Dump.

    • Automatic installation (using the cpanp/cpanm tools) can be executed from the main directory with the 'make perl_install' command. Note that you need to be able to install modules within your Perl installation, which might require administrative privileges.

    • Availability of these modules can be tested using make check from the F4DE main directory.
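The hash-ordering workaround and a quick module availability check can be sketched as follows (a minimal sketch for a Bourne-style shell; Text::CSV stands in for the full module list above):

```shell
# Force repeatable hash ordering, as the tools do internally under Perl 5.18+
export PERL_PERTURB_KEYS=0
export PERL_HASH_SEED=0

# Check that a required module loads (exit status 0 when it does);
# 'make perl_install' from the F4DE main directory installs the full list.
perl -MText::CSV -e 1 2>/dev/null && echo "Text::CSV: ok" || echo "Text::CSV: missing"
```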

Installation

All of the F4DE tools are made so that they can be run from the directory they are uncompressed into; installation is therefore optional, but useful if you want to launch some tools from anywhere. Also, a reminder that a Docker release of F4DE is available at: https://hub.docker.com/r/martialnist/docker-f4de/

If installing on a cygwin system, please read the cygwin pre-installation notes first.

Installation is a 4-step process:

  1. Make sure all the steps in Prerequisites are done.

  2. Run make to get a list of the available check options.

     • At minimum, run make mincheck followed by the appropriate check for the tool set you intend to use (i.e. if you intend to use DEVA, run make DEVAcheck) to make sure all required libraries and executables are available on the system. Note that each tool's individual test can take from a few seconds to a few minutes to complete.
     • We recommend running make check for a full check to confirm that all software checks pass.
     • If one of the tool tests fails, please follow the bug report submission instructions detailed in the test case bug report section.

  3. Execute the make install command to create symbolic links from the executables into the F4DE uncompression directory's bin and man directories.

  4. Add the F4DE uncompression directory's bin directory to your PATH environment variable and its man directory to your MANPATH environment variable.
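The PATH and MANPATH setup amounts to something like the following in a Bourne-style shell (the $HOME/F4DE location is an assumption; substitute wherever you uncompressed the release):

```shell
# Assumed location of the uncompressed F4DE release; adjust to yours
F4DE_DIR="$HOME/F4DE"

# After 'make install', the symlinked executables and man pages live here
export PATH="$F4DE_DIR/bin:$PATH"
export MANPATH="$F4DE_DIR/man:$MANPATH"
```

Adding these two lines to your shell startup file (e.g. ~/.profile) makes the tools available in every session.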

Cygwin pre-installation notes

The tools have been confirmed to work under Windows when running Cygwin. After downloading the latest setup.exe from http://www.cygwin.com/, make sure to add the following packages during the Select Packages step: in Archive, select unzip; in Database, sqlite3; in Devel, gcc, gcc4 and make; in Libs, libxml2; in Math, gnuplot; in Net, rsync; in Perl, perl, perl-ExtUtils-Depends and perl-ExtUtils-PkgConfig.

After installation, run cpan from a shell and install first the ExtUtils::CBuilder module, then the modules listed in the Prerequisites section of the setup instructions.

After this, refer to the rest of the installation section.

Usage

A manual page can be printed by each command by executing the command with the --man command line argument. For example:

%  TV08Scorer --man

Some manual pages contain command line examples for the tool in question, but the tools specific to each NIST evaluation are described in more detail in that evaluation's plan.

Command line examples

Some command line examples are provided as part of the test directory relative to the evaluation tool you are trying to test (for example CLEAR/test/<TOOLNAME> or TrecVid08/test/<TOOLNAME>) and use the [[COMMANDLINE]] line listed in the first entry of the res*.txt test case files. For example, in TrecVid08/test/TV08ViperValidator/res_test0.txt, the first line contains:

[[COMMANDLINE]] ../../tools/TV08ViperValidator/TV08ViperValidator.pl -X

the command line to try is:

../../tools/TV08ViperValidator/TV08ViperValidator.pl -X

which will run the TV08ViperValidator.pl tool with its -XMLbase command line option; this prints a ViPER file with an empty section and a populated section, then exits (to a file, if one is provided on the command line). The expected result of the command line can be found in the test file below the [[STDOUT]] section.
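Extracting the command line from a test case file can be scripted. The sketch below recreates a stand-in res_test0.txt containing the line shown above (the real files live under the per-tool test directories) and strips the marker:

```shell
# Stand-in for a res_test*.txt file; only the [[COMMANDLINE]] line matters here
cat > res_test0.txt <<'EOF'
[[COMMANDLINE]] ../../tools/TV08ViperValidator/TV08ViperValidator.pl -X
EOF

# Strip the [[COMMANDLINE]] marker to recover the runnable command
cmd=$(sed -n 's/^\[\[COMMANDLINE\]\] //p' res_test0.txt)
echo "$cmd"   # prints ../../tools/TV08ViperValidator/TV08ViperValidator.pl -X
```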

Supported Evaluations

Advanced Video and Signal Based Surveillance (AVSS): 2009 2010

Classification of Events, Activities, and Relationships (CLEAR): 2007

KeyWord Search (KWS / OpenKWS): KWS 2015

TRECVID Multimedia Event Detection (MED): 2010 2011 2012 2013 2015 2016 2017

TRECVID Multimedia Event Recounting (MER): 2013

TRECVID Surveillance Event Detection (SED): 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017

2007 CLEAR Evaluation:

  • Domains: Broadcast News, Meeting Room, Surveillance and UAV
  • Measures: Area and Point
  • Detection and Tracking (DT) tools:
    • CLEARDTScorer - The main DT evaluation script.
    • CLEARDTViperValidator - A syntactic and semantic validator for both system output ViPER files and reference annotation files.
  • Text Recognition (TR) tools:
    • CLEARTRScorer - The main TR evaluation script.
    • CLEARTRViperValidator - A syntactic and semantic validator for both system output ViPER files and reference annotation files.

2008 TRECVID SED Evaluation:

  • TV08Scorer - The main evaluation script.
  • TV08ViperValidator - A syntactic and semantic validator for both system output ViPER files and reference annotation files.
  • TV08MergeHelper - A TRECVID '08 ViPER-formatted file merging program.
  • TV08_BigXML_ValidatorHelper - A helper program (that relies on TV08ViperValidator and TV08MergeHelper) to perform syntactic and semantic validation on ViPER-formatted files containing a large number of event observations.
  • TV08ED-SubmissionChecker - A tool designed to help confirm submission archives before transmitting them to NIST.

2009 AVSS Evaluation:

  • AVSS09Scorer - The main evaluation script.
  • AVSS09ViperValidator - A syntactic and semantic validator for both system output ViPER files and reference annotation files.
  • AVSS09-SubmissionChecker - A tool designed to help confirm submission archives before transmitting them to NIST.

2009 TRECVID SED Evaluation:

  • Same tools as the 2008 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV09ED-SubmissionChecker - A tool designed to help confirm submission archives before transmitting them to NIST.

2010 AVSS Evaluation:

  • Same tools as the 2009 AVSS Evaluation (AVSS09Scorer, AVSS09ViperValidator, AVSS09-SubmissionChecker)

2010 TRECVID MED Evaluation

  • DEVA_cli - The main evaluation script.

2010 TRECVID SED Evaluation:

  • Same tools as the 2009 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper, TV09ED-SubmissionChecker)
  • TV10SED-SubmissionChecker - A tool designed to help confirm submission archives before transmitting them to NIST.

2011 TRECVID MED Evaluation:

  • Same tool as the 2010 MED Evaluation (DEVA_cli).
  • TV11MED-SubmissionChecker - A tool designed to help confirm MED11 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED11-ScoringPrimer.html

2011 TRECVID SED Evaluation:

  • Same tools as the 2010 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV11SED-SubmissionChecker - A tool designed to help confirm SED11 submission archives before transmitting them to NIST.

KWS Evaluation:

  • KWSEval - The main KeyWord Search evaluation program derived from STDEval.
  • UTF-8 code set support.

2012 TRECVID MED Evaluation:

  • Same tool as the 2011 MED Evaluation (DEVA_cli).
  • TV12MED-SubmissionChecker - A tool designed to help confirm MED12 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED12-ScoringPrimer.html

2012 TRECVID SED Evaluation:

  • Same tools as the 2011 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV12SED-SubmissionChecker - A tool designed to help confirm SED12 submission archives before transmitting them to NIST.

2013 TRECVID MED Evaluation:

  • Same tool as the 2012 MED Evaluation (DEVA_cli).
  • TV13MED-SubmissionChecker - A tool designed to help confirm MED13 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED13-ScoringPrimer.html

2013 TRECVID MER Evaluation:

  • TV13MED-SubmissionChecker - A tool designed to help confirm MED13 and MER13 submission archives before transmitting them to NIST.

2013 TRECVID SED Evaluation:

  • Same tools as the 2012 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV13SED-SubmissionChecker - A tool designed to help confirm SED13 submission archives before transmitting them to NIST.

2014 TRECVID SED Evaluation:

  • Same tools as the 2013 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV14SED-SubmissionChecker - A tool designed to help confirm SED14 submission archives before transmitting them to NIST.

2015 OpenKWS Evaluation:

  • Participant's side of the BABEL Scorer

2015 TRECVID SED Evaluation:

  • Same tools as the 2014 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV15SED-SubmissionChecker - A tool designed to help confirm SED15 submission archives before transmitting them to NIST.

2015 TRECVID MED Evaluation:

  • Same tool as the 2013 MED Evaluation (DEVA_cli).
  • TV15MED-SubmissionChecker - A tool designed to help confirm MED15 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED15-ScoringPrimer.html

2016 TRECVID SED Evaluation:

  • Same tools as the 2014 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV16SED-SubmissionChecker - A tool designed to help confirm SED16 submission archives before transmitting them to NIST.

2016 TRECVID MED Evaluation:

  • Same tool as the 2013 MED Evaluation (DEVA_cli).
  • TV16MED-SubmissionChecker - A tool designed to help confirm MED16 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED16-ScoringPrimer.html

2017 TRECVID SED Evaluation:

  • Same tools as the 2014 TRECVID SED Evaluation (TV08Scorer, TV08ViperValidator, TV08MergeHelper, TV08_BigXML_ValidatorHelper)
  • TV17SED-SubmissionChecker - A tool designed to help confirm SED17 submission archives before transmitting them to NIST.

2017 TRECVID MED Evaluation:

  • Same tool as the 2013 MED Evaluation (DEVA_cli).
  • TV17MED-SubmissionChecker - A tool designed to help confirm MED17 submission archives before transmitting them to NIST.
  • Scoring Primer: DEVA/doc/TRECVid-MED17-ScoringPrimer.html

Misc tools:

  • VidAT (common/tools/VidAT): a suite of tools designed to overlay video with boxes, polygons, etc. on a frame-by-frame basis by using the output logs generated by CLEARDTScorer. Consult the README within the directory for special installation details and usage. VidAT's tools require FFmpeg, Ghostscript and ImageMagick.
  • SQLite_tools (common/tools/SQLite_tools): a suite of tools designed to help interface CSV files and SQLite databases.

Report a bug

Please send bug reports to [email protected]

For the bug report to be useful, please include the command line, files, and text output, including the error message, in your email.

Test case bug report

If the error occurred while doing a make check, go into the directory associated with the tool that failed (for example: CLEAR/test/<TOOLNAME>) and type make makecompcheckfiles. This process will create a file for each test number, named res_test*.txt-comp. These files are (like their .txt equivalents) text files that can be compared to the original res_test*.txt files.
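The comparison itself is a plain text diff. The sketch below uses two stand-in files with hypothetical contents (the real -comp files are produced by make makecompcheckfiles):

```shell
# Stand-ins for a reference output and a locally generated -comp file
printf 'event count: 3\nscore: 0.5\n' > res_test0.txt
printf 'event count: 3\nscore: 0.7\n' > res_test0.txt-comp

# diff pinpoints where your run diverged from the reference output
out=$(diff res_test0.txt res_test0.txt-comp || true)
echo "$out"
```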

For reference, for each of these tests, the command line that was run can be found in the corresponding res*.txt file as the first line, in the [[COMMANDLINE]] section.

When a test fails, please send us the res_test*.txt-comp file of the failed test(s) for us to try to understand what happened, as well as information about your system (OS, architecture, ...) that you think might help us.

Thank you for helping us improve F4DE.

Authors

Martial Michel <[email protected]>

David Joy <[email protected]>

Jonathan Fiscus <[email protected]>

Vladimir Dreyvitser

Vasant Manohar

Jerome Ajot

Bradford N. Barr

Copyright

Full details can be found at: http://nist.gov/data/license.cfm

This software was developed at the National Institute of Standards and Technology by employees of the Federal Government in the course of their official duties. Pursuant to Title 17 Section 105 of the United States Code this software is not subject to copyright protection within the United States and is in the public domain. F4DE is an experimental system. NIST assumes no responsibility whatsoever for its use by any party, and makes no guarantees, expressed or implied, about its quality, reliability, or any other characteristic.

We would appreciate acknowledgement if the software is used. This software can be redistributed and/or modified freely provided that any derivative works bear some notice that they are derived from it, and any modified versions bear some notice that they have been modified.

THIS SOFTWARE IS PROVIDED "AS IS." With regard to this software, NIST MAKES NO EXPRESS OR IMPLIED WARRANTY AS TO ANY MATTER WHATSOEVER, INCLUDING MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
