
FLAME GPU

http://www.flamegpu.com

Current version: 1.5.0

FLAME GPU (Flexible Large-scale Agent Modelling Environment for Graphics Processing Units) is a high performance Graphics Processing Unit (GPU) extension to the FLAME framework.

It provides a mapping between formal agent specifications (with C-based scripting) and optimised CUDA code. This includes a number of key ABM building blocks, such as multiple agent types, agent communication, and agent birth and death. The advantages of our contribution are threefold.

  1. Agent Based (AB) modellers are able to focus on specifying agent behaviour and run simulations without explicit understanding of CUDA programming or GPU optimisation strategies.
  2. Simulation performance is significantly increased in comparison with desktop CPU alternatives. This allows simulation of far larger model sizes with high performance at a fraction of the cost of grid based alternatives.
  3. Massive agent populations can be visualised in real time as agent data is already located on the GPU hardware.

Documentation

The FLAME GPU documentation and user guide can be found at http://docs.flamegpu.com, with source hosted on GitHub at FLAMEGPU/docs.

Getting FLAME GPU

Pre-compiled Windows binaries are available for the example projects in the FLAME-GPU-SDK, available as an archive for each release.

Source is available from GitHub, either as a zip download or via git:

git clone https://github.com/FLAMEGPU/FLAMEGPU.git

Or

git clone [email protected]:FLAMEGPU/FLAMEGPU.git

Building FLAME GPU

FLAME GPU can be built for Windows and Linux. MacOS should work, but is unsupported.

Dependencies

  • CUDA 8.0 or later
  • Compute Capability 2.0 or greater GPU (CUDA 8)
    • Compute Capability 3.0 or greater GPU (CUDA 9)
  • Windows
    • Microsoft Visual Studio 2015 or later
    • Visualisation:
      • freeglut and glew are included with FLAME GPU.
    • Optional: make
  • Linux
    • make
    • g++ (a version compatible with the CUDA version in use)
    • xsltproc
    • Visualisation:
      • GL (deb: libgl1-mesa-dev, yum: mesa-libGL-devel)
      • GLU (deb: libglu1-mesa-dev, yum: mesa-libGLU-devel)
      • GLEW (deb: libglew-dev, yum: glew-devel)
      • GLUT (deb: freeglut3-dev, yum: freeglut-devel)
    • Optional: xmllint

Windows using Visual Studio

Visual Studio 2015 solutions are provided for the example FLAME GPU projects. Release and Debug build configurations are provided, for both console mode and (optionally) visualisation mode. Binary files are placed in bin/x64/<OPT>_<MODE>, where <OPT> is Release or Debug and <MODE> is Console or Visualisation.

An additional solution is provided in the examples directory, enabling batch building of all examples.

make for Linux and Windows

make can be used to build FLAME GPU simulations under Linux and Windows (via a Windows implementation of make).

Makefiles are provided for each example project (examples/project/Makefile), and for batch building all examples (examples/Makefile).

To build a console example in release mode:

cd examples/EmptyExample/
make console

Or for a visualisation example in release mode:

cd examples/EmptyExample/
make visualisation

Debug mode executables can be built by specifying debug=1 to make, i.e. make console debug=1.

Binary files are placed in bin/linux-x64/<OPT>_<MODE>, where <OPT> is Release or Debug and <MODE> is Console or Visualisation.

For more information on building FLAME GPU via make, run make help in an example directory.

Note on Linux Dependencies

If you are using Linux on a managed system (i.e. you do not have root access to install packages), you can provide shared object (.so) files for the missing dependencies, i.e. libglew and libglut.

Download the required shared object files specific to your system configuration and place them in the lib directory. These will be linked at compile time, and the dynamic linker will check this directory at runtime.

Alternatively, to package FLAME GPU executables with a different file structure, the .so files can be placed adjacent to the executable file.

Usage

FLAME GPU can be executed as either a console application or as an interactive visualisation. Please see the documentation for further details.

# Console mode
usage: executable [-h] [--help] input_path num_iterations [cuda_device_id] [XML_output_override]

# Interactive visualisation
usage: executable [-h] [--help] input_path [cuda_device_id]

For further details, see the documentation or see executable --help.

Running a Simulation on Windows

Assuming the GameOfLife example has been compiled for visualisation, there are several options for running the example.

  1. Run the included batch script in bin/x64/: GameOfLife_visualisation.bat
  2. Run the executable directly with an initial states file
    1. Navigate to the examples/GameOfLife/ directory in a command prompt
    2. Run ..\..\bin\x64\Release_Visualisation\GameOfLife.exe iterations\0.xml

Running a Simulation on Linux

Assuming the GameOfLife example has been compiled for visualisation, there are several options for running the example.

  1. Run the included bash script in bin/linux-x64/: GameOfLife_visualisation.sh
  2. Run the executable directly with an initial states file
    1. Navigate to the examples/GameOfLife/ directory
    2. Run ../../bin/linux-x64/Release_Visualisation/GameOfLife iterations/0.xml

How to Contribute

To report FLAME GPU bugs or request features, please file an issue directly on GitHub. If you wish to make any contributions, please open a Pull Request on GitHub.

Publications

Please cite FLAME GPU using

@article{richmond2010high,
  doi={10.1093/bib/bbp073},
  title={High performance cellular level agent-based simulation with FLAME for the GPU},
  author={Richmond, Paul and Walker, Dawn and Coakley, Simon and Romano, Daniela},
  journal={Briefings in Bioinformatics},
  volume={11},
  number={3},
  pages={334--347},
  year={2010},
  publisher={Oxford Univ Press}
}

For an up-to-date list of publications related to FLAME GPU and its use, visit the flamegpu.com website.

Authors

FLAME GPU is developed as an open-source project by the Visual Computing research group in the Department of Computer Science at the University of Sheffield. The primary author is Dr Paul Richmond.

Copyright and Software Licence

FLAME GPU is copyright the University of Sheffield 2009 - 2018. Version 1.5.X is released under the MIT open source licence. Previous versions were released under a University of Sheffield End User licence agreement.

Release Notes

  • Documentation now hosted on readthedocs, http://docs.flamegpu.com and https://github.com/flamegpu/docs
  • Supports CUDA 8 and CUDA 9
    • Removed SM20 and SM21 support from the default build settings (Deprecated / Removed by CUDA 8.0 / 9.0)
  • Graph communication for agents with new example
  • Updated Visual Studio version to 2015
  • Improved Linux support via upgraded Makefiles
  • Additional example projects
  • Template example has been renamed EmptyExample
  • tools/new_example.py to quickly create a new example project.
  • Various bugfixes
  • Adds step-functions
  • Adds host-based agent creation for init and step functions
  • Adds parallel reductions for use in init, step and exit functions
  • Additional command line options
  • Environmental variables can now be loaded from 0.xml
  • Adds the use of a colour agent variable to control agent colour in the default visualisation
  • Additional controls for the default visualisation
  • Macro definitions for default visualisation colours
  • Macro definitions for message partitioning strategy
  • Adds instrumentation for simple performance measurement via preprocessor macros
  • Improved functions.xslt output for generating template functions files.
  • Improved state model diagram generator
  • Updated Circles Example
  • Purged binaries from history, reducing repository size
  • Updated Visual Studio Project files to 2013
  • Improved Visual Studio build customisation
  • Fixed double precision support within spatial partitioning
  • Compile-time spatial partition configuration validation
  • Added support for continuous agents reading discrete messages.
  • Minor bug fixes and added missing media folder
  • FLAME GPU 1.4 for CUDA 7 and Visual Studio 2012

Problem Reports

To report a bug in this documentation or in the software or propose an improvement, please use the FLAMEGPU GitHub issue tracker.

Contributors

dentarthur, langlevoi, mileach, mondus, ptheywood, robadob, wcastello, zeyus


flamegpu2-docs's Issues

Running simulations via `step()`

Most users are expected to run simulations via simulate().

Alternatively, users can call step() manually to run simulations step by step; however, they must then also handle other parts of a typical simulation themselves:

  • Init functions will not be executed for them. They must call CUDASimulation::initFunctions() themselves.
  • Any logging from a previous simulate() will persist unless they call CUDASimulation::resetLog().
  • step() returns a boolean, set to true if the simulation loop is expected to continue, and false if any exit condition returned flamegpu::EXIT. It is up to the user to deal with this.
  • The user will/may need to call processStepLog() after the last step they intend to run.
  • Exit functions will not be executed for them; they must call CUDASimulation::exitFunctions().
    • Exit function logging will also need to be explicitly processed, via processExitLog().
  • If they wish to export logs to disk, they need to call exportLog() as appropriate.

I.e. they need to handle anything that simulate() normally does for them (a rough sketch follows at the end of this issue).

There are also several things that will not work.

  • The simulation time will not be recorded, and timing from any previous calls to simulate() on the same instance may be present / accessible.
  • There is potential for bugs if anything too strongly assumes that simulate() is the only entry point (per-step allocations, perhaps).

As this is not the intended / typical way users are expected to run simulations, it should be marked as such in some way.
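
For illustration, a rough sketch of such a manual loop (assuming the method names listed above are exposed identically in pyflamegpu, which is not guaranteed; `model` is an already-defined ModelDescription):

  import sys
  import pyflamegpu

  sim = pyflamegpu.CUDASimulation(model)
  sim.initialise(sys.argv)   # parse the command line / load the initial state as usual
  sim.resetLog()             # discard any logging left over from a previous simulate()
  sim.initFunctions()        # init functions are not run automatically when stepping manually
  while sim.step():          # returns False once an exit condition returns EXIT
      pass
  sim.processStepLog()       # step logs must be processed manually after the last step
  sim.exitFunctions()        # exit functions are not run automatically either
  sim.processExitLog()
  # sim.exportLog(...)       # export to disk if desired; the exact arguments are not covered here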

Userguide index page

This is currently just the ToC, but should probably have some introductory text.

CI: Start with a fresh gh-pages branch periodically

The API docs generate a number of binary files which appear to change significantly between releases, adding bloat to the repository.

It might be worth tweaking the deploy CI to start from a fresh gh-pages branch and force pushing, to avoid the repository growing unnecessarily large.

Alternatively, adjusting the Doxygen setup might be worthwhile.

Agent ID

This PR introduces agent IDs, which will require documenting. Here's a brief summary of the relevant info.

I had a brief look at docs master, but it wasn't clear how best to fit the relevant info from this in.

Intro

Agents all contain an internal variable representing a unique identifier for the agent. This is automatically managed by FLAMEGPU2. IDs are assigned to agents at the start of a simulation, or at birth when new agents are created during the simulation. There is no method to change an agent's ID during a simulation's execution.

How IDs are handled at import/export

If agents with IDs are exported from a CUDASimulation to file or an AgentVector, their IDs will be retained when re-imported. If an ID conflict is detected when importing agents, an AgentIDCollision exception will be thrown. This can be resolved by using AgentVector::resetAllIDs() or AgentVector::Agent::resetID(), which will return IDs to an uninitialised state.

It is not currently possible to programmatically reset IDs when importing agents from file (as they cannot be loaded directly to an AgentVector). Therefore, it is necessary to manually modify the XML or JSON file to replace any conflicting IDs.
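
As a minimal sketch (assuming `population` is a pyflamegpu.AgentVector exported from a previous CUDASimulation, and `sim` is the new CUDASimulation being seeded):

  population.resetAllIDs()            # return every agent's ID to the uninitialised state
  sim.setPopulationData(population)   # re-import without risking an AgentIDCollision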

Accessing agent IDs

Agent IDs can be accessed with the getID() method in all places where accessing an agent variable is possible (AgentVector::Agent, DeviceAPI, DeviceAPI::AgentOut, DeviceAgentVector, HostNewAgentAPI).
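
A minimal host-side sketch (assuming `model` defines an agent named "Boid" and `sim` is a pyflamegpu.CUDASimulation that has already been stepped; the names are placeholders):

  population = pyflamegpu.AgentVector(model.Agent("Boid"))
  sim.getPopulationData(population)
  for agent in population:
      print(agent.getID())   # the unique id_t value assigned by FLAME GPU 2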

Type of ID

Within the C++ and CUDA interfaces the type of IDs is id_t, which likely maps to unsigned int. However, this may be mapped to other types in special builds which require a greater ID range. Within the Python interface it is possible to add/use variables of id_t on messages, the environment, etc. with newVariableID(). This will then update if someone rebuilds pyflamegpu with a differently sized unsigned type for IDs.
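
For example, a hedged sketch of adding an id_t-sized variable to a message (the message and variable names are placeholders, and the exact pyflamegpu method names may differ):

  message = model.newMessageBruteForce("location")
  message.newVariableID("sender_id")   # id_t-sized; the width follows the build's ID type
  message.newVariableFloat("x")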

Checking for init ID

Agents which have not had their ID initialised will have an ID value of ID_NOT_SET (this is currently for internal usage, so may not be accessible via Python, but it evaluates to 0 and changing it would break a lot of things).

Submodels

Agents bound to a submodel will always have their ID bound to the submodel.

Agents within a submodel that are unbound, and therefore recreated each time the submodel is executed, may have the same IDs between runs of the submodel; however, this cannot be guaranteed, so it should not be relied upon.

Tutorial Syntax highlighting

The tutorial pages include partial code blocks to show only the relevant snippets of code.

This means that in some cases the code blocks are not valid Python (or C++/CUDA?) on their own.

I.e. the following highlights the wrong segment as a string, due to the missing opening """

[screenshot omitted]

Only a single code block fails to parse via Pygments entirely, generating a warning; it is not highlighted.

Additionally (in my opinion), the ellipses indicating there is more code above/below are unclear, and combined with snippets with mismatched braces they aren't the easiest to read/follow in isolation.

[screenshot omitted]

Installation Instructions

Clear installation instructions are needed and should be readily available.

  • C++/CUDA
    • Binary from github release
    • From source
    • Docker?
  • Python
    • Binary wheel (whl) from GitHub release
    • From source
    • Docker?
    • Recommend the use of virtual environments (venvs) when installing from whl/source.

Pre-requisite dependencies will need listing for each platform for each install method?

If visualisation is the only option for binaries, then additional steps may be required.

`.. code-tab::` indentation

Use of indentation across different files varies between 2 and 4 spaces, and is sometimes a mix.

We should try to standardise this (I'm not sure what the "correct" value is for this RST extension without looking).

The same also applies to whether there should be a blank line prior to any code.

API rst file location

Files generated during building are currently placed into the src directory, not the current binary directory.

This is bad for out of tree builds, but sphinx/breathe/exhale does not make this easy to fix, as the api directory must be a child of the sphinx source directory.

In practice, to resolve this, the Sphinx files will need copying into the build directory at build time (configure-time would not be good enough), but that feels like it will have overheads / be painful.

It might also be good to use CMake to generate index.rst, so the inclusion of api/library_root can be set according to the CMake variable / location of the api directory.

Python arrays are wrong.

[screenshot of the incorrect code block omitted]
https://docs.flamegpu.com/guide/2-model-definition/2-environment.html

This block should be:

# Define environmental properties and their initial values
env.newPropertyFloat("f_prop", 12.0)              # Create float property 'f_prop', with value of 12
env.newPropertyArrayInt("ia_prop", 3, [1, 2, 3])  # Create int array property 'ia_prop', with value of [1, 2, 3]
env.newPropertyChar("c_prop", 'g', True)          # Create constant char property 'c_prop', with value 'g'

There are many issues in this code block (I'm not even sure about the Python type conversion for chars; cc @MILeach).

Noticed by Carlos.

URIs / folder structure

The current URL structure https://docs.flamegpu.com/guide/1-meta/2-creating-project.html is probably not ideal from an SEO perspective, and changing URLs will break any bookmarks people have made, so sooner is probably better than later to change them.

I.e. the 1-meta directory should probably just be first-steps or similar. This will make the order of directories on disk not match the order in the navigation, but in the grand scheme of things that's not an issue.

CUDA Syntax overview.

A simple page/section aimed at Python users explaining the CUDA C equivalents of common Python syntax, which they may need for defining agent functions.

E.g. defining variables, if/else, for loops, and function definition/calls.

It could link to external guide(s) at the end for greater detail.
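
Illustrative only, the kind of snippet such a page might pair with its Python equivalents (the agent variable "count" is a placeholder):

  agent_fn_src = r"""
  FLAMEGPU_AGENT_FUNCTION(example_fn, MsgNone, MsgNone) {
      int count = 0;                   // variable definition  (count = 0)
      for (int i = 0; i < 10; ++i) {   // for loop             (for i in range(10):)
          if (i % 2 == 0) {            // if statement         (if i % 2 == 0:)
              count += 1;
          }
      }
      FLAMEGPU->setVariable<int>("count", count);
      return ALIVE;
  }
  """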

Theme

Need to adjust the theme to be clearly FLAME GPU 2, and match the website.

Hosted pyFLAMEGPU docs via CI

Currently the C++ API docs are generated and hosted by CI in this repo, but this does not include any docs for pyflamegpu.

This is dependent upon the pyflamegpu docs being possible on the main repo in the first place (FLAMEGPU/FLAMEGPU2#455)

It will also probably require the pyflamegpu target (or possibly a sub-target) to be built, and therefore require CUDA to be installed on the CI (without very significant changes to CMake).

API Docs via CI

Once issues with breathe have been resolved, we should be able to include the API docs in the hosted user guide (via breathe).

The main issue will be that the published docs are only updated when this repo is updated, rather than when the FLAMEGPU/FLAMEGPU2 repo is.

Restructure Messaging Docs

As the number of messaging docs has grown, the splitting of output / input examples for each type seems less ideal / it is less clear how to use both input and output for a given type.

Instead I propose:

  1. A general description of messaging
    1. Include general output and general input instructions, with bruteforce as the examples
  2. A table of the available types, with links to subsequent sections
  3. For each messaging type:
    • Describe the scheme and when it is appropriate for use / why it is a different type.
    • Demonstrate defining the message description in C++ and Python (see the sketch below)
    • Demonstrate output (agent functions are all CUDA C++)
    • Demonstrate input (agent functions are all CUDA C++). Include variants such as .wrap() where appropriate
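
A rough sketch of the proposed general output/input step with a brute-force message (the names "location", "output_location", etc. are placeholders, and output_src/input_src are assumed to hold the CUDA C++ agent function sources):

  msg = model.newMessageBruteForce("location")
  msg.newVariableFloat("x")
  msg.newVariableFloat("y")

  out_fn = agent.newRTCFunction("output_location", output_src)
  out_fn.setMessageOutput("location")
  in_fn = agent.newRTCFunction("read_location", input_src)
  in_fn.setMessageInput("location")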

Timing

There should probably be some coverage of timing.

  • Performance troubleshooting should reference how timing can be collected within FGPU2.
  • Timing/performance metrics should be detailed in the logging pages (emphasise that perf metrics are always logged in ensembles??).
  • Maybe some advanced discussion regarding accuracy/WDDM, the impact of CUDA event timers, etc.

Releases

We should document the releases available, where to access them and what the changelog(s) are if any. Possibly only if the root FLAMEGPU2 repo has been found / made available.

TODO

  1. pip install:
     sphinx>=2.0
     breathe>=4.13.0
     exhale
     sphinx_rtd_theme
  2. sphinx-quickstart
  3. Tweak the quick start files to add in the exhale config
  4. make html
  5. Push to RTD?

Step 1: make auto install/check available.
Steps 2 & 3: run once locally and commit to this repo.
Step 4: update CMake in the main fgpu2 dev repo to target this (quickstart generates make.bat and a Makefile, so a simple CMake command invoking make html should suffice).

conf.py will need to be updated to point to the Doxygen output:

  breathe_projects = {
    "FLAME GPU 2": "./xml"
  }
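
For reference, a hypothetical set of conf.py additions for exhale alongside the breathe config above (the values are illustrative, not the project's actual configuration):

  extensions = ["breathe", "exhale"]
  html_theme = "sphinx_rtd_theme"

  breathe_default_project = "FLAME GPU 2"

  exhale_args = {
      "containmentFolder": "./api",          # must be inside the sphinx source directory
      "rootFileName": "library_root.rst",
      "rootFileTitle": "API Reference",
      "doxygenStripFromPath": "..",
  }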

Protected branch settings

We should protect master so that CI must pass before things can be merged / committed to it, as the CI then publishes the contents of this branch to gh-pages.

This will mean that we have to use PRs to get content into master, which will slow down rapid fixes, so it's probably better to do this once the docs are in a more complete state.

It would be nice if we could also prevent the gh-pages branch being deleted or PRs being made against it (not sure if that's possible, although we could make CI fail, and therefore block merging, via a branch protection rule).

Enable always HTTPS

We can enforce HTTPS via github settings.

This should be done for this repo and for the main repo.

Python agent functions should use raw strings.

e.g. https://github.com/FLAMEGPU/FLAMEGPU2-docs/blob/master/src/guide/3-behaviour-definition/1-defining-functions.rst

  .. code-tab:: python

    # Define an agent function called agent_fn1
    agent_fn1_source = """
    FLAMEGPU_AGENT_FUNCTION(agent_fn1, MsgNone, MsgNone) {
        # Behaviour goes here
    }
    """

Should become

  .. code-tab:: python

    # Define an agent function called agent_fn1
    agent_fn1_source = r"""
    FLAMEGPU_AGENT_FUNCTION(agent_fn1, MsgNone, MsgNone) {
        # Behaviour goes here
    }
    """

API documentation missing links

In a number of places (at least 6) there are links intended to link to the API docs, which do not do so.

The ones I've found are of the pattern:

Full API documentation for the ``X``: link

Where X is the name of a class, such aas EnvironmentDescription.

We could hardcode links to the hosted API docs, but this feels brittle. There should be a breathe/exhale way of finding the correct link, although I'm not sure exactly how.

Additionally, as the API docs are an optional part of the docs project, these links will only exist if the API docs are being built, which is (surprisingly) expensive in terms of time.

  • src/guide/6-agent-birth-death/1-agent-birth-device.rst:149
  • src/guide/6-agent-birth-death/3-agent-death.rst:53
  • src/guide/2-model-definition/3-agent.rst:152
  • src/guide/2-model-definition/2-environment.rst:137
  • src/guide/2-model-definition/1-model.rst:31
  • src/guide/3-behaviour-definition/1-defining-agent-functions.rst:170

Document Versioning

  • Semver, release versioning
  • FLAMEGPU_VERSION macro, such as 2001004 which encodes 2.1.4. Does not include pre-release or build info (as these are alphanumeric).
  • flamegpu::VERSION_MAJOR and pyflamegpu.VERSION_MAJOR
  • flamegpu::VERSION_MINOR and pyflamegpu.VERSION_MINOR
  • flamegpu::VERSION_PATCH and pyflamegpu.VERSION_PATCH
  • flamegpu::VERSION_PRERELEASE and pyflamegpu.VERSION_PRERELEASE
  • flamegpu::VERSION_BUILDMETADATA and pyflamegpu.VERSION_BUILDMETADATA
  • flamegpu::VERSION_STRING and pyflamegpu.VERSION_STRING
  • flamegpu::VERSION_FULL and pyflamegpu.VERSION_FULL

Depends on FLAMEGPU/FLAMEGPU2#596
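
A minimal sketch of reading the constants listed above from the Python interface:

  import pyflamegpu

  print(pyflamegpu.VERSION_FULL)    # full version string, including any pre-release/build metadata
  print(pyflamegpu.VERSION_MAJOR, pyflamegpu.VERSION_MINOR, pyflamegpu.VERSION_PATCH)
  # The FLAMEGPU_VERSION macro packs the same numbers as
  # major * 1000000 + minor * 1000 + patch, e.g. 2 * 1000000 + 1 * 1000 + 4 == 2001004.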

Messaging Diagram(s)

TODO: Diagram demonstrating messaging structure

From src/guide/3-behaviour-definition/4-agent-communication.rst

Wrapped Array Messaging

Array messaging now (soon) makes 2 iterators available: a non-wrapped iterator by default, and a specialised wrapping iterator (in all dimensions) via .wrap(). This change needs to be documented.

See FLAMEGPU/FLAMEGPU2#549

Getting Started

A simple getting started page should be provided, walking users through creating a new model based on the template repository.

We should probably (eventually?) provide getting started guides for C++/RTC, and Python.

Python Examples

Several sections require additional python examples.

Including (but possibly not limited to):

  • Ensembles

Model Debugging/Validation/Etc

Identified this as missing from this discussion.

My answer there is a very rough draft of the structure. It would obviously require clearer examples.

Proposed Structure

  • Help! How do I debug/validate my model?
    • Runtime Error Checking: What is Seatbelts? (This page already exists in isolation, might need a little tweaking to fit the new flow)
    • printf() (This is a fairly straightforward page, with a few examples; see the sketch after this list)
    • Logging, this will mostly link back to the logging docs pages, but can provide some additional usage examples e.g. env macro property counters.
    • CUDA debugging. This would involve more images, and require some O/S specific stuff. Also need to do some testing/experimentation regarding Python debugging.
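
As a rough sketch of the printf() step, a device-side print inside an RTC agent function might look like the following (the agent/variable names are placeholders):

  debug_src = r"""
  FLAMEGPU_AGENT_FUNCTION(debug_position, MsgNone, MsgNone) {
      // Print each agent's ID and x position. Device printf is slow and its output
      // ordering is not guaranteed, so only use it while debugging.
      printf("agent %u: x=%f\n", FLAMEGPU->getID(), FLAMEGPU->getVariable<float>("x"));
      return ALIVE;
  }
  """
  agent.newRTCFunction("debug_position", debug_src)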

FLAMEGPU_DEVICE_FUNCTION

Usage of FLAMEGPU_DEVICE_FUNCTION and other utility function macros is not yet documented. Carlos had to clarify their usage via the Python API (as he found in James's PredPrey Python version).

These are not particularly clear in the RTC version, where they have to be included with every agent function that will use them, rather than declared at global scope.

They're defined at the end of this file: https://github.com/FLAMEGPU/FLAMEGPU2/blob/master/include/flamegpu/runtime/AgentFunction_shim.h
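
A rough sketch of the RTC case described above, where the helper must live in the same source string as each agent function that uses it (the names are placeholders):

  move_src = r"""
  FLAMEGPU_DEVICE_FUNCTION float clampf(float v, float lo, float hi) {
      return v < lo ? lo : (v > hi ? hi : v);
  }

  FLAMEGPU_AGENT_FUNCTION(move, MsgNone, MsgNone) {
      const float x = FLAMEGPU->getVariable<float>("x");
      FLAMEGPU->setVariable<float>("x", clampf(x + 0.1f, 0.0f, 1.0f));
      return ALIVE;
  }
  """
  agent.newRTCFunction("move", move_src)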

RandomAPI

There doesn't appear to be any detail for the Random APIs in the documentation.

Required

  • HostAPI
  • DeviceAPI

CI warnings

Building on CI currently does not error on warnings. Some of these warnings should ideally be errors; due to the auto-generated nature of parts of the docs, this is impractical to fix / a low priority.

At some point, it would be good to make the build CI fail on (at least some of) the warnings being output, and resolve as many as possible.

Environment variable array access syntax differs.

Accessing an environment variable array doesn't require the array length as a template argument?

Agent variable arrays do.

(I think Paul was also concerned that environment access uses get() whilst agent access uses getVariable().)

Determinism.

Everywhere FLAME GPU 2 uses atomics (and possibly some CUB/Thrust), there is the potential for non-determinism.

This should be clearly documented in the user guide, with any suggested workaround (e.g. if we eventually add a DETERMINISTIC CMake flag).
