
Copyright (c) 2020 Amazon.com, Inc. or its affiliates. All Rights reserved.

$COPYRIGHT$

Additional copyrights may follow

$HEADER$

===========================================================================

Collectives Tuning

===========================================================================

Prerequisites:

    Python 3.6
    SGE or Slurm scheduler
    OSU Micro-Benchmarks
    Intel MPI Benchmarks (IMB)

===========================================================================

Installing the OSU Micro-Benchmarks:

Run these commands to install the OSU Micro-Benchmarks. Change $INSTALL_PATH to your desired install path and $MPI_INSTALL_PATH to your Open MPI install path.

wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.6.2.tar.gz
tar -xvf osu-micro-benchmarks-5.6.2.tar.gz
cd osu-micro-benchmarks-5.6.2
./configure --prefix=$INSTALL_PATH CC=$MPI_INSTALL_PATH/bin/mpicc CXX=$MPI_INSTALL_PATH/bin/mpicxx
make
make install
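
As a quick sanity check (assuming the default OSU 5.6.2 install layout, which places the collective binaries under libexec), you can run one of the benchmarks on two ranks:

$MPI_INSTALL_PATH/bin/mpirun -np 2 \
    $INSTALL_PATH/libexec/osu-micro-benchmarks/mpi/collective/osu_allreduce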

===========================================================================

Installing the Intel MPI Benchmarks:

Run these commands to build IMB-MPI1. Change $MPI_INSTALL_PATH to your Open MPI install path.

git clone https://github.com/intel/mpi-benchmarks.git
cd mpi-benchmarks
make IMB-MPI1 CC=$MPI_INSTALL_PATH/bin/mpicc CXX=$MPI_INSTALL_PATH/bin/mpicxx
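
The IMB-MPI1 binary is placed at the top of the mpi-benchmarks tree. As a quick sanity check, run a single collective on two ranks, e.g.:

$MPI_INSTALL_PATH/bin/mpirun -np 2 ./IMB-MPI1 Allreduce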

===========================================================================

This repository provides scripts to run collective benchmarks, analyze their results, and create a tuning decision file for Open MPI.

Currently, the only supported benchmark binaries are the OSU Micro-Benchmarks and the Intel MPI Benchmarks (IMB-MPI1), for the following collectives:

    allgather
    allgatherv
    allreduce
    alltoall
    alltoallv
    barrier
    bcast
    gather
    reduce
    reduce_scatter_block
    reduce_scatter
    scatter

Currently, you need to create a config file (see ./examples/config) that selects the collectives to tune and specifies the OMB collectives directory, the IMB-MPI1 binary path, cluster sizes, number of ranks, number of nodes, number of ranks per node, and number of runs. A sketch follows.
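
For illustration only, such a config might look like the sketch below, written in the same "key : value" style as the job files. The key names and values here are assumptions, not the real schema; consult ./examples/config for the authoritative format.

collectives : allreduce, bcast
number_of_nodes : 2, 4
number_of_ranks_per_node : 8
number_of_runs : 3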

If you need to adjust the number of algorithms or exclude certain algorithms, edit the corresponding job file, ./collective_jobs/<collective>.job.

IMPORTANT NOTE: The number of algorithms differs between Open MPI versions. Please make sure the algorithm count is correct. The algorithm counts in this repository were derived from the Open MPI 4.x.x branch; they can be found in ompi/ompi/mca/coll/tuned/coll_tuned_<collective>_decision.c under <collective>_algorithms[].
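
If Open MPI is already installed, one way to double-check the algorithm count without reading the source is to query the coll/tuned MCA parameters with ompi_info (assuming $MPI_INSTALL_PATH/bin is on your PATH); for example, for allreduce:

ompi_info --param coll tuned --level 9 | grep allreduce_algorithm

The enumerated valid values of coll_tuned_allreduce_algorithm list the algorithms available in your installed version.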

Use this scatter job for master:

number_of_algorithms : 3
exclude_algorithms :
two_proc_alg :

To run the scripts, execute the following from inside this directory:

./run_and_analyze.sh -c <your config file>

If you wish to run with Slurm instead of SGE, you must pass the --with-slurm flag. Because the Slurm -W (wait) flag is utilized, it is recommended to run this command inside tmux, screen, or similar software, as shown below.

./run_and_analyze.sh -c <your config file> --with-slurm
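
For example, to launch it detached inside tmux (the session name "tuning" is arbitrary):

tmux new-session -d -s tuning './run_and_analyze.sh -c <your config file> --with-slurm'
tmux attach -t tuning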

This script will run and analyze all collectives specified. The output will be saved under the ./output directory.

A decision file will be written under ./output/decision.file

Each collective will have a detailed output file and a best output file under ./output/<collective>/detail.out and ./output/<collective>/best.out, respectively.
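
To apply the generated tuning, point Open MPI's coll/tuned component at the decision file through its dynamic-rules MCA parameters; for example (./your_app and the rank count are placeholders):

mpirun -np 16 \
    --mca coll_tuned_use_dynamic_rules 1 \
    --mca coll_tuned_dynamic_rules_filename ./output/decision.file \
    ./your_app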

===========================================================================

Known Issues

===========================================================================

Discrepancy between message size in OSU and tuning files for allgatherv

I found that there is a discrepancy between the message size that the OSU benchmarks report and the size that coll/tuned uses to make tuning decisions: the OSU benchmarks report the size of the message each rank sends, while coll/tuned bases its allgatherv decision on the total amount of data to be received. For example, with 16 ranks each contributing 4 KiB, OSU reports a 4 KiB message size while coll/tuned keys its decision off the 64 KiB total. This leads to nonsensical rules and likely suboptimal decisions. It should be fixed in the python scripts (when generating the decision file, and ideally also when writing the best.out file).

OMPI version configuration

Need to add an easier way to run collectives on different OMPI versions (they have different numbers of algorithms, for example).

This will need a config file change specifying the OMPI version, or at least a parameter in run_and_analyze, and separate collective_jobs directories for each OMPI version we care about. I'll make one each for 3.x.x, 4.x.x, 5.x.x, and master.

I toyed around with the idea of creating separate branches for each version, but the amount of backporting required would be prohibitive, so I'm not going to do that.

Add basic CI checks

Need to hook up Travis to catch basic problems. Maybe also tox and shellcheck, given the mix of Python and bash.

Report errors earlier

There is probably a better way to report that data is invalid due to errors when running mpirun, rather than resorting to the analysis stage to detect it. An error output file that lists every single file containing an error would be useful. Needs more thought.

Need some process binding input

In order to get more targeted data, we need to add some sort of mapping option, e.g., binding one process per node so we can tune based on internode communication only (see the sketch below).
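
For reference, a minimal sketch of such a mapping using Open MPI's existing mpirun options (the benchmark path is a placeholder), launching one process per node:

mpirun --map-by ppr:1:node ./osu_allreduce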

Select the proper python version

In newer distributions, invoking bare "python" has been deprecated; "python3" must be called explicitly. In addition, "python" may still be linked to Python 2. Change all calls from "python" to "python3" (illustrated below).
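
A minimal illustration of the intended change (parse_results.py is a hypothetical script name):

# Before: "python" may resolve to Python 2, or be missing entirely
python parse_results.py
# After: explicit, unambiguous interpreter
python3 parse_results.py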

Reduce Scatter Block issues at large scale

Reduce Scatter Block hits lots of issues, such as running out of memory, at 128+ ranks. A way to handle this is needed; one possibility is to just ignore the failures and revamp the parsing code a little. Needs more thought.

Non-commutative dynamic collectives

The dynamic code does not currently have any safety features or fallbacks when an algorithm that cannot support non-commutative operations is used during a non-commutative op. Decision file creation needs to be adjusted to account for this.

Handle two-proc algorithms better

Two-proc algorithms can hit issues with the generated decision file (e.g., if you have 2-proc and 4-proc tuning, 3-proc runs will have problems because they fall back to the 2-proc tuning).
