
FLAME GPU

http://www.flamegpu.com

Current version: 1.5.0

FLAME GPU (Flexible Large-scale Agent Modelling Environment for Graphics Processing Units) is a high performance Graphics Processing Unit (GPU) extension to the FLAME framework.

It provides a mapping between formal agent specifications with C-based scripting and optimised CUDA code. This includes a number of key ABM building blocks, such as multiple agent types, agent communication, and birth and death allocation. The advantages of our contribution are threefold.

  1. Agent Based (AB) modellers are able to focus on specifying agent behaviour and run simulations without explicit understanding of CUDA programming or GPU optimisation strategies.
  2. Simulation performance is significantly increased in comparison with desktop CPU alternatives. This allows simulation of far larger model sizes with high performance at a fraction of the cost of grid based alternatives.
  3. Massive agent populations can be visualised in real time as agent data is already located on the GPU hardware.

Documentation

The FLAME GPU documentation and user guide can be found at http://docs.flamegpu.com, with source hosted on GitHub at FLAMEGPU/docs.

Getting FLAME GPU

Pre-compiled Windows binaries are available for the example projects in the FLAME-GPU-SDK, available as an archive for each release.

Source is available from GitHub, either as a zip download or via git:

git clone https://github.com/FLAMEGPU/FLAMEGPU.git

Or

git clone [email protected]:FLAMEGPU/FLAMEGPU.git

Building FLAME GPU

FLAME GPU can be built for Windows and Linux. MacOS should work, but is unsupported.

Dependencies

  • CUDA 8.0 or later
  • Compute Capability 2.0 or greater GPU (CUDA 8)
    • Compute Capability 3.0 or greater GPU (CUDA 9)
  • Windows
    • Microsoft Visual Studio 2015 or later
    • Visualisation:
      • freeglut and glew are included with FLAME GPU.
    • Optional: make
  • Linux
    • make
    • g++ (a version which supports the CUDA version used)
    • xsltproc
    • Visualisation:
      • GL (deb: libgl1-mesa-dev, yum: mesa-libGL-devel)
      • GLU (deb: libglu1-mesa-dev, yum: mesa-libGLU-devel)
      • GLEW (deb: libglew-dev, yum: glew-devel)
      • GLUT (deb: freeglut3-dev, yum: freeglut-devel)
    • Optional: xmllint

Windows using Visual Studio

Visual Studio 2015 solutions are provided for the example FLAME GPU projects. Release and Debug build configurations are provided, for both console mode and (optionally) visualisation mode. Binary files are placed in bin/x64/<OPT>_<MODE>, where <OPT> is Release or Debug and <MODE> is Console or Visualisation.

An additional solution is provided in the examples directory, enabling batch building of all examples.

make for Linux and Windows

make can be used to build FLAME GPU simulations under Linux and Windows (via a Windows implementation of make).

Makefiles are provided for each example project (examples/project/Makefile), and for batch building all examples (examples/Makefile).

To build a console example in release mode:

cd examples/EmptyExample/
make console

Or for a visualisation example in release mode:

cd examples/EmptyExample/
make visualisation

Debug mode executables can be built by specifying debug=1 to make, i.e. make console debug=1.

Binary files are placed in bin/linux-x64/<OPT>_<MODE>, where <OPT> is Release or Debug and <MODE> is Console or Visualisation.

For more information on building FLAME GPU via make, run make help in an example directory.

Note on Linux Dependencies

If you are using Linux on a managed system (i.e. you do not have root access to install packages), you can provide shared object files (.so) for the missing dependencies, e.g. libglew and libglut.

Download the required shared object files specific to your system configuration, and place in the lib directory. This will be linked at compile time and the dynamic linker will check this directory at runtime.

Alternatively, to package FLAME GPU executables with a different file structure, the .so files can be placed adjacent to the executable file.

Usage

FLAME GPU can be executed as either a console application or as an interactive visualisation. Please see the documentation for further details.

# Console mode
usage: executable [-h] [--help] input_path num_iterations [cuda_device_id] [XML_output_override]

# Interactive visualisation
usage: executable [-h] [--help] input_path [cuda_device_id]

For further details, see the documentation or run executable --help.

Running a Simulation on Windows

Assuming the GameOfLife example has been compiled for visualisation, there are several options for running the example.

  1. Run the included batch script in bin/x64/: GameOfLife_visualisation.bat
  2. Run the executable directly with an initial states file
    1. Navigate to the examples/GameOfLife/ directory in a command prompt
    2. Run ..\..\bin\x64\Release_Visualisation\GameOfLife.exe iterations\0.xml

Running a Simulation on Linux

Assuming the GameOfLife example has been compiled for visualisation, there are several options for running the example.

  1. Run the included bash script in bin/linux-x64/: GameOfLife_visualisation.sh
  2. Run the executable directly with an initial states file
    1. Navigate to the examples/GameOfLife/ directory
    2. Run ../../bin/linux-x64/Release_Visualisation/GameOfLife iterations/0.xml

How to Contribute

To report FLAME GPU bugs or request features, please file an issue directly using Github. If you wish to make any contributions, please issue a Pull Request on Github.

Publications

Please cite FLAME GPU using

@article{richmond2010high,
  doi={10.1093/bib/bbp073},
  title={High performance cellular level agent-based simulation with FLAME for the GPU},
  author={Richmond, Paul and Walker, Dawn and Coakley, Simon and Romano, Daniela},
  journal={Briefings in bioinformatics},
  volume={11},
  number={3},
  pages={334--347},
  year={2010},
  publisher={Oxford Univ Press}
}

For an up-to-date list of publications related to FLAME GPU and its use, visit the flamegpu.com website.

Authors

FLAME GPU is developed as an open-source project by the Visual Computing research group in the Department of Computer Science at the University of Sheffield. The primary author is Dr Paul Richmond.

Copyright and Software Licence

FLAME GPU is copyright the University of Sheffield 2009 - 2018. Version 1.5.X is released under the MIT open source licence. Previous versions were released under a University of Sheffield End User licence agreement.

Release Notes

  • Documentation now hosted on readthedocs, http://docs.flamegpu.com and https://github.com/flamegpu/docs
  • Supports CUDA 8 and CUDA 9
    • Removed SM20 and SM21 support from the default build settings (Deprecated / Removed by CUDA 8.0 / 9.0)
  • Graph communication for agents with new example
  • Updated Visual Studio version to 2015
  • Improved Linux support via upgraded Makefiles
  • Additional example projects
  • Template example has been renamed EmptyExample
  • tools/new_example.py to quickly create a new example project.
  • Various bugfixes
  • Adds step-functions
  • Adds host-based agent creation for init and step functions
  • Adds parallel reductions for use in init, step and exit functions
  • Additional command line options
  • Environment variables can now be loaded from 0.xml
  • Adds the use of colour agent variable to control agent colour in the default visualisation
  • Additional controls for the default visualisation
  • Macro definitions for default visualisation colours
  • Macro definitions for message partitioning strategy
  • Adds instrumentation for simple performance measurement via preprocessor macros
  • Improved functions.xslt output for generating template functions files.
  • Improved state model diagram generator
  • Updated Circles Example
  • Purged binaries from history, reducing repository size
  • Updated Visual Studio Project files to 2013
  • Improved Visual Studio build customisation
  • Fixed double precision support within spatial partitioning
  • Compile-time spatial partition configuration validation
  • Added support for continuous agents reading discrete messages.
  • Minor bug fixes and added missing media folder
  • FLAME GPU 1.4 for CUDA 7 and Visual Studio 2012

Problem Reports

To report a bug in this documentation or in the software or propose an improvement, please use the FLAMEGPU GitHub issue tracker.


flamegpu2's Issues

Boost any usage does not work

What we have for memory vectors is effectively a heterogeneous vector, where each element can be of a different type and carries its own type details. What we actually need is a heterogeneous collection of homogeneous vectors. This will require a special templated vector which stores the type info once. This is currently preventing the implementation of the GPU memory mapping.

Type check for setVariable and getVariable funcs

A type check is required for the following functions:
template<typename T, unsigned int N> __device__ T FLAMEGPU_API::getVariable(const char(&variable_name)[N]);
and
template<typename T, unsigned int N> __device__ void FLAMEGPU_API::setVariable(const char(&variable_name)[N], T value)

Example: EmptyExample

Provide a hello-world esque example which can be used with some script(s) to create a new example.

Example: Circles Model

Implement the Circles model. In FLAME GPU 1 we have the following four versions:

  • CirclesBruteForce_double
  • CirclesBruteForce_float
  • CirclesPartitioning_double
  • CirclesPartitioning_float

Approximately how many agents will FLAMEGPU2 support?

Hi,

How many agents are you expecting to be able to run in the same simulation? In 2010 FLAME was designed for 10^6 agents (http://www.ifaamas.org/Proceedings/aamas2010/pdf/04%20Demos/D-10.pdf). Are you expecting to run more?

Regards,
Ezequiel

Handling multiple functions added to the same simulation layer

In FLAMEGPU2 we are allowed to add multiple functions to the same simulation layer. We need to check if these functions belong to the same agent or not. Are these functions independent of each other?

    SimulationLayer moveStay_layer(simulation, "move_layer");
    moveStay_layer.addAgentFunction("move");
    moveStay_layer.addAgentFunction("stay");
    simulation.addSimulationLayer(moveStay_layer);

Example: Boids

Implement the Boids example model.

FLAME GPU 1 provides 3 versions:

  • Boids_BruteForce
  • Boids_Partitioning
  • Boids_PartitioningVecTypes

We also need to address behaviour at the edge of the simulated region.

Traditionally F1 used wrapping, but agents did not interact with agents at the other end of continuous space.

Options could be:

  • Implement message wrapping
    • as a spatial message specialisation?
    • Add an easy way for distance checks, which ideally could be applied to wrapped brute force comms too?
  • Bouncing?
    • Agents currently clamp within the bounds, but no changes are made to the force (so they continue to attempt to leave). Implementing bouncing (or nullifying their outwards movement) is a possible solution. Unsure what the general consensus is for boids implementations / in the literature.

Code Style / Linting

It may be worth using a linter such as cpplint to adhere to some code style rules, which can be factored in as a CI phase to enforce usage.

cpplint seems sensible as it supports CUDA and is readily installable via pip. It can also be configured to suppress rules we do not wish to adhere to if required.

Feature: Agent State Reading

  1. Abstract State Reader Class
  2. Concrete XML Reader
  3. Concrete Binary Reader

Readers need to support reading of

  • Agent populations
  • Environment

Feature: Analytics Functions

Implement common analytics functions to be called from the host using cub, including at least the following:

  • min
  • max
  • reduce
  • count

Feature: Layer - Host functions

Host functions as a part of layers.

Ideally these should be able to execute concurrently with agent functions in the same layer, though this would have to be clearly documented.

Not an urgent feature; it complicates dependency analysis.

Example: GameOfLife

Implement the GameOfLife example model. This may require discrete agents, or at least be a demonstration of implementing a discrete model with FLAME GPU 2.

Feature: Instrumentation / Profiling tools

Add instrumentation tools for simple performance analysis, along with NVTX-based markers for more advanced profiling. Introduce a profile build configuration which enables this.

  • Instrumentation
    • --timing flag for simple timing
    • Make timing values programmatically accessible, rather than just output to stdout? This would need to be recorded regardless of the presence of --timing
    • Simulation timing
    • Pre/Post Simulation timing (for transparency).
    • Timing per Iteration - there will be variance in some iterations due to buffer growth etc.
    • Timing per layer (per agent function not viable if concurrency is enabled)
  • NVTX
    • NVTX linked
    • NVTX utility class (include/util/nvtx.h)
    • Tests (not sure how) - not programmatically testable?
    • Doxygen docs (guarded by macros)
    • Better NVTX ranges (probably during #379).
      • Include a range for the call to simulate
      • Ideally capture Agent population generation - may have to go in the individual model, or as part of #246
  • Output simulation/ensemble information to disk, (potentially/optionally) including timing info (total, per step etc?), population data, GPU executed on, driver version, CUDA version, FLAMEGPU version, user-provided model versioning, etc.

For CMake, enabling NVTX could be handled along the following lines (but perhaps just for the profile target, arranged so that if NVTX is not found the build proceeds without it):

# Switch to enable NVTX ranges for profiling
option(USE_NVTX "Build with NVTX" ON)
if(USE_NVTX)
  message("-- Using NVTX")
  find_library(NVTX_LIBRARY nvToolsExt HINTS /usr/local/cuda/lib64)
  target_link_libraries(flamegpu2 ${NVTX_LIBRARY})
  set(CUDA_NVCC_FLAGS  ${CUDA_NVCC_FLAGS}; -DUSE_NVTX)
endif(USE_NVTX)

Automatically Initializing Agents

Hi,

According to my understanding the agents of a simulation run must be specified individually in the XML file under the "iterations" directory. Is there a way to automatically generate agents within the code itself instead of manually specifying the agents, individually?

Thank you!
Chathika

Unit Test: Pop folder

  • Check agent population name
  • Check setting and getting variable functions
    • the variable type is correct?
    • accessing a wrong variable
  • maximum population size
  • accessing agents more than population size
  • access population data with multiple states for the same agent
  • accessing agent population that is not set

Unit Test: GPU folder

  • Data copied back from device without simulating any function
  • Data copied back from device after simulating a function
  • Data copied back from device after simulating functions concurrently (this is to test CUDA streams)
  • check function pointer signature

Feature: Agent State Writing

  1. Abstract State Writer Class
  2. Concrete XML Writer
  3. Concrete Binary Writer

Writers need to support writing of

  • Agent populations
  • Environment

runtime mapping of variables

Currently, the mapRuntimeVariables function does the mapping for all the variables. It has been agreed to modify this later, so that we only map the variables that are used within a function during the simulation.

Includes and circular dependencies

We should observe the following rules when using includes

  1. Only include what is absolutely required within a header! Includes in source files are fine.
  2. For each folder there should be a hierarchy of dependencies for includes, i.e. ModelDescription should include AgentDescription, and AgentDescription should include StateDescription and FunctionDescription. For dependencies in the other direction, forward declarations should be used to avoid circular dependencies. This ensures that, to use all the model classes, only the top-level class ModelDescription needs to be imported by a user.
  3. Each class "should" be able to be built in a new project by including its header in a main source file. If it does not build, then the dependencies are wrong.

Documenting code (doxygen template)

Doxygen template - header file:

#ifndef CLASSNAME_H
#define CLASSNAME_H

// TODO_add_include_files


/**
 * @file
 * @author  TODO_name, TODO_organization
 * @date    TODO_dd_mmm_yyyy  
 * @brief TODO_one_line_description_of_class.
 *
 * TODO_longer_description_of_class_meant_for_users._Developer_details_should_
 * be_put_in_the_.cpp_implementation
 * 
 * \todo
 */

class ClassName { 
    public:
        /** TODO_describe_ctor. */
        ClassName();
        /** TODO_describe_dtor. */
        virtual ~ClassName();

        /** TODO_describe_accessor.
         * @return TODO_describe_return value. */
        int getSomeMember() const { return m_someMember; }
        /** TODO_describe_mutator.
         * @param someMember TODO_describe_input_param. */
        void setSomeMember(int someMember) { m_someMember = someMember; }

    protected:
        /**
         * TODO_description_of_function.
         * @param someParam TODO_describe_parameter.
         * @return TODO_describe_return_value.
         */
        virtual int someFunction(int someParam);

    private:
       int m_someMember;   // TODO_purpose_and_units.
}; 

#endif // CLASSNAME_H

Doxygen template - cpp file:

/******************************************************************************
 * 
 ****************************************************************************** 
 * Author  TODO_name, TODO_organization
 * Date    TODO_dd_mmm_yyyy
 *
 * See <CLASS_NAME>.h for a description of this code 
 *****************************************************************************/


// TODO_add_include_files
#include "CppTemplate.h"


ClassName::ClassName() {
    // TODO_implement
}

ClassName::~ClassName() {
    // TODO_implement
}

// TODO_describe_implementation_details_for_developers
int ClassName::someFunction(int someParam) {
    // TODO_implement
    return 0;
}

Taken from https://modelingguru.nasa.gov/docs/DOC-1684

Unit Test: Model folder

  • Check the agent name and size
  • Check agent variable type, number
  • Check function and message names
  • Check if a specific function exists

Unit Test: Sim folder

  • Verify the correctness of simulation functions by printing a 'hello' only
  • Check if the function name exists by throwing an exception
    • checking the number of layers and functions per layers
  • access to the same variable by 2 functions from the same agent should not be possible
