
Comments (6)

SimeonEhrig commented on September 17, 2024

Just for documentation purposes: we have discussed offline that it is possible to store different SM versions of the kernel code in one executable file (a CUDA "fat binary"), so only one CUDA package is needed.
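For reference, a minimal sketch of how such a fat binary is typically produced: passing several `-gencode` flags to nvcc embeds one machine-code image per SM version, plus optionally PTX for forward compatibility, in a single object file. The file names and SM versions below are assumptions for illustration.

```python
# Sketch (assumed file names and SM versions): build a CUDA "fat binary"
# that embeds kernel code for several architectures in one object file.
import subprocess

subprocess.run([
    "nvcc", "-c", "kernels.cu", "-o", "kernels.o",
    # one embedded machine-code image per target architecture
    "-gencode", "arch=compute_60,code=sm_60",
    "-gencode", "arch=compute_70,code=sm_70",
    # PTX for the newest architecture, so future GPUs can JIT-compile it
    "-gencode", "arch=compute_70,code=compute_70",
], check=True)
```

At runtime the CUDA driver picks the best matching image automatically, which is why a single package can serve several GPU generations.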


sk1p commented on September 17, 2024

What are the concrete build parameters we actually need to care about? Just GPU architecture × accelerator backend? How large would a "build matrix" be?

I think we have the following options:

  1. Complete build with pip → customized binary wheel
  2. Build with a combination of CMake and pip/setuptools → customized binary wheel
  3. Build and publish conda packages → "generic" conda packages
  4. Use conda just for installing the build and run-time requirements, using 2) for the actual build+install

1 and 2 can also be used to build a bunch of "generic" packages and publish them to PyPI.

1) Complete build with pip

There is a way to parametrize a pip build, for example via environment variables (see the sketch below), but building the complete native module with Python tools will be a headache, and IMHO it is also quite a hack. It would depend on system installations for dependencies (Boost etc.).
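As a rough illustration of what such a parametrized pip build could look like, here is a minimal setup.py that reads an environment variable at build time. The variable name, module name and compile flag are assumptions for illustration, not an existing interface.

```python
# Sketch of a setup.py parametrized via an environment variable.
# PTYCHO_CUDA_ARCH, the module name and the define are illustrative only.
import os
from setuptools import setup, Extension

cuda_arch = os.environ.get("PTYCHO_CUDA_ARCH", "60")

native = Extension(
    "ptycho._native",
    sources=["src/native.cpp"],
    # the chosen architecture ends up in the compile flags
    extra_compile_args=[f"-DPTYCHO_SM_VERSION={cuda_arch}"],
)

setup(name="ptycho", version="0.1", ext_modules=[native])
```

A user would then run e.g. `PTYCHO_CUDA_ARCH=70 pip install .`, which works, but it pushes a lot of build logic into Python tooling, as noted above.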

2) Build with a combination of CMake and pip/setuptools

One option that would work: get the source code, build a native library with CMake for a specific environment (CUDA version etc.), and package it into a pip-installable wheel (see the sketch below). Like 1), this would depend on system libraries for dependencies.
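One common way to implement this is a custom setuptools `build_ext` command that shells out to CMake; pybind11's cmake_example follows the same pattern. The names and CMake options below are assumptions, not this project's actual build setup.

```python
# Sketch: wrap a CMake build in setuptools so the result is pip-installable.
# Project/module names and CMake options are illustrative assumptions.
import subprocess
from pathlib import Path
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, source_dir="."):
        super().__init__(name, sources=[])  # no sources; CMake builds them
        self.source_dir = str(Path(source_dir).resolve())

class CMakeBuild(build_ext):
    def build_extension(self, ext):
        out_dir = Path(self.get_ext_fullpath(ext.name)).parent.resolve()
        build_dir = Path(self.build_temp)
        build_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run([
            "cmake", ext.source_dir,
            f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={out_dir}",
            "-DCMAKE_BUILD_TYPE=Release",  # environment-specific flags go here
        ], cwd=build_dir, check=True)
        subprocess.run(["cmake", "--build", "."], cwd=build_dir, check=True)

setup(
    name="ptycho",
    version="0.1",
    ext_modules=[CMakeExtension("ptycho._native")],
    cmdclass={"build_ext": CMakeBuild},
)
```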

3) Build and publish conda packages

I think with conda the problem would also be parametrizing the build: I don't think conda supports the "compile at install time" workflow that pip supports, so we would be limited to a small number of supported configurations for the published packages.

4) Use conda just for installing the build and run-time dependencies

This would mean we use conda packages for the dependencies, but locally build and install a specialized, optimized version, for example with CMake and pip. Users then wouldn't have to compile Boost themselves...

Use cases?

Can we maybe support two different use cases: one being the easy workstation/laptop/"casual" installation for trying things out, the other being the thoroughly optimized HPC installation? Then we can provide binary wheels for some common configurations and still allow an optimized installation for the HPC case.


sk1p commented on September 17, 2024

/cc @ReimarBauer, who is a conda expert


SimeonEhrig commented on September 17, 2024

Theoretically, the build matrix can become really big. In practice I'm not sure, because we want to support consumer, workstation and server GPUs. CPU support would also be possible, and maybe we have to add support for AMD GPUs. I don't want to restrict the matrix too early, because that could limit us on future systems.

In general, if we use a package manager, we should try to ship every dependency we can through the package manager, which includes Boost.

I like your idea of providing two different ways to get the application. But first we should try to realize a single parametrized installation with conda or pip. If that doesn't work, or is too complicated to use, we can follow your idea and provide two different ways to install the application: an easy way for common configurations via pip, and a more involved way for an optimized version.

Besides, for alpaka-based applications we use CMake arguments to enable backends and set compiler optimizations (see the sketch below), so we need a package manager that supports CMake builds.
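For context, a sketch of the kind of configure step this implies; the alpaka backend option name is an assumption here and differs between alpaka versions.

```python
# Sketch: configure an alpaka-based project with backend and optimization
# flags. The ALPAKA_* option name is an assumption and varies by version.
import subprocess

subprocess.run([
    "cmake", "-S", ".", "-B", "build",
    "-DALPAKA_ACC_GPU_CUDA_ENABLE=ON",  # select the CUDA backend
    "-DCMAKE_BUILD_TYPE=Release",       # compiler optimizations
    "-DCMAKE_CUDA_ARCHITECTURES=70",    # target SM version
], check=True)
subprocess.run(["cmake", "--build", "build"], check=True)
```

Whichever packaging route we pick needs a way to pass such flags through to CMake.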

I will also ask my colleagues whether we have experience with shipping alpaka applications beyond ugly CMake builds.


uellue commented on September 17, 2024

@SimeonEhrig would it have to be compiled for each individual GPU model, or does it work like CPUs, where there are certain instruction sets that work on many different models?


SimeonEhrig commented on September 17, 2024

The instruction sets of NVIDIA GPUs are forward compatible: application code that was compiled for SM60 also runs on GPUs that support SM70, but you can lose optimization potential.
For example, the Tesla K80 (SM37) has double the number of registers of the Tesla K20 (SM35).
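For reference, a short sketch of how to query which SM version a given GPU supports, using pycuda as an assumed dependency:

```python
# Sketch: query the compute capability (SM version) of the installed GPU,
# e.g. to log which embedded kernel image the driver will pick.
# pycuda is an assumed dependency here.
import pycuda.driver as cuda

cuda.init()
major, minor = cuda.Device(0).compute_capability()
print(f"GPU 0 supports SM{major}{minor}")
```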

