rocALUTION

rocALUTION is a sparse linear algebra library that can be used to explore fine-grained parallelism on top of the ROCm platform runtime and toolchains. Based on C++ and HIP, rocALUTION provides a portable, generic, and flexible design that allows seamless integration with other scientific software packages.

rocALUTION offers various backends for different (parallel) hardware:

  • Host
  • OpenMP: Designed for multi-core CPUs
  • HIP: Designed for ROCm-compatible devices
  • MPI: Designed for multi-node clusters and multi-GPU setups
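
A minimal initialization sketch follows; all three calls appear in the examples later on this page, though the include path may differ between releases:

#include <rocalution.hpp>

using namespace rocalution;

int main()
{
    // Bring up the rocALUTION platform; the accelerator backend
    // (HIP, OpenMP) is chosen from the build configuration.
    init_rocalution();

    // Print the selected backend and device configuration.
    info_rocalution();

    // ... create matrices/vectors and solve here ...

    // Shut the platform down before exiting.
    stop_rocalution();

    return 0;
}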

Requirements

To use rocALUTION on GPU devices, you must first install the rocBLAS, rocSPARSE, and rocRAND libraries. You can install them from the ROCm repository or the GitHub 'releases' tab, or compile them manually.

Documentation

Documentation for rocALUTION is available at https://rocm.docs.amd.com/projects/rocALUTION/en/latest/.

To build our documentation locally, use the following code:

cd docs

pip3 install -r sphinx/requirements.txt

python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html

Build

You can compile rocALUTION using CMake 3.5 or later. Note that all compiler specifications are determined automatically.

# Clone rocALUTION using git
git clone https://github.com/ROCm/rocALUTION.git

# Go to rocALUTION directory, create and change to build directory
cd rocALUTION; mkdir build; cd build

# Configure rocALUTION
# Build options:
#   SUPPORT_HIP         - build rocALUTION with HIP support (ON)
#   SUPPORT_OMP         - build rocALUTION with OpenMP support (ON)
#   SUPPORT_MPI         - build rocALUTION with MPI (multi-node) support (OFF)
#   BUILD_SHARED_LIBS   - build rocALUTION as shared library (ON, recommended)
#   BUILD_EXAMPLES      - build rocALUTION examples (ON)
cmake .. -DSUPPORT_HIP=ON -DROCM_PATH=/opt/rocm/

# Build
make

To test your installation, run a CG solver on a Laplacian matrix:

cd rocALUTION; cd build
wget ftp://math.nist.gov/pub/MatrixMarket2/Harwell-Boeing/laplace/gr_30_30.mtx.gz
gzip -d gr_30_30.mtx.gz
./clients/staging/cg gr_30_30.mtx

General information

rocALUTION is based on a generic and robust design that allows expansion toward new solvers and preconditioners, with support for various hardware types. The library's design allows any solver to be used as a preconditioner inside another solver. For example, you can define a CG solver with a multi-elimination preconditioner, in which the last block is preconditioned with a Chebyshev iteration method that is itself preconditioned with a multi-colored symmetric Gauss-Seidel scheme.
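
The following is a minimal sketch of this nesting, using a simpler combination than the one above (CG preconditioned with a multi-colored symmetric Gauss-Seidel sweep); the class names (CG, MultiColoredSGS) and include path are assumed from the public API and may differ between releases:

#include <rocalution.hpp>

using namespace rocalution;

int main(int argc, char* argv[])
{
    init_rocalution();

    // System matrix and vectors (host backend in this sketch)
    LocalMatrix<double> mat;
    LocalVector<double> x;
    LocalVector<double> rhs;

    mat.ReadFileMTX("gr_30_30.mtx");
    x.Allocate("x", mat.GetN());
    rhs.Allocate("rhs", mat.GetM());
    x.Zeros();
    rhs.Ones();

    // Outer Krylov solver ...
    CG<LocalMatrix<double>, LocalVector<double>, double> cg;

    // ... preconditioned with a multi-colored symmetric Gauss-Seidel scheme.
    // Any other solver object could be plugged in here instead.
    MultiColoredSGS<LocalMatrix<double>, LocalVector<double>, double> p;

    cg.SetOperator(mat);
    cg.SetPreconditioner(p);
    cg.Build();

    cg.Solve(rhs, &x);

    cg.Clear();
    stop_rocalution();

    return 0;
}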

Iterative solvers

  • Fixed-point iteration schemes: Jacobi, (Symmetric) Gauss-Seidel, SOR, SSOR
  • Krylov subspace methods: CR, CG, BiCGStab, BiCGStab(l), GMRES, IDR, QMRCGSTAB, Flexible CG/GMRES
  • Mixed-precision defect correction scheme
  • Chebyshev iteration scheme
  • Multigrid: Geometric and algebraic

Preconditioners

  • Matrix splitting schemes: Jacobi, (multi-colored) (symmetric) Gauss-Seidel, SOR, SSOR
  • Factorization schemes: ILU(0), ILU(p) (based on levels), ILU(p,q) (power(q)-pattern method), multi-elimination ILU (nested/recursive), ILUT (based on threshold), IC(0)
  • Approximate Inverses: Chebyshev matrix-valued polynomial, SPAI, FSAI, TNS
  • Diagonal-based preconditioner for Saddle-point problems
  • Block-type sub-preconditioners/solvers
  • Additive Schwarz (restricted)
  • Variable-type preconditioners

Sparse matrix formats

  • Compressed Sparse Row (CSR)
  • Modified Compressed Sparse Row (MCSR)
  • Dense (DENSE)
  • Coordinate (COO)
  • ELL
  • Diagonal (DIA)
  • Hybrid ELL+COO (HYB)
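
A sketch of switching between these formats at runtime, assuming the LocalMatrix ConvertTo* routines of the public API (matrices load in CSR by default):

#include <rocalution.hpp>

using namespace rocalution;

int main()
{
    init_rocalution();

    LocalMatrix<double> mat;
    mat.ReadFileMTX("gr_30_30.mtx"); // held in CSR after reading

    mat.ConvertToHYB(); // hybrid ELL+COO, often a good SpMV format on GPUs
    mat.ConvertToELL(); // fixed number of entries per row
    mat.ConvertToCSR(); // back to the default format

    mat.Info();

    stop_rocalution();

    return 0;
}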

Portability

All code that uses rocALUTION is portable and hardware-independent: it compiles and runs on any supported platform. All solvers and preconditioners are based on a single source-code implementation that delivers portable results across all backends (note that small variations are possible due to different hardware rounding modes). The only visible difference between hardware platforms is performance.
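
In practice, portability is exposed through the MoveToAccelerator()/MoveToHost() pair, as in this minimal sketch (the same calls appear in the BCSR example further down this page):

#include <rocalution.hpp>

using namespace rocalution;

int main()
{
    init_rocalution();

    LocalMatrix<double> mat;
    LocalVector<double> x;
    LocalVector<double> rhs;

    mat.ReadFileMTX("gr_30_30.mtx");
    x.Allocate("x", mat.GetN());
    rhs.Allocate("rhs", mat.GetM());
    x.Zeros();
    rhs.Ones();

    // No-ops on a host-only build; otherwise the data migrates to the
    // device and all subsequent operations execute there.
    mat.MoveToAccelerator();
    x.MoveToAccelerator();
    rhs.MoveToAccelerator();

    // ... build and run a solver here; results match the host run up to
    //     rounding-mode differences ...

    mat.MoveToHost();
    x.MoveToHost();
    rhs.MoveToHost();

    stop_rocalution();

    return 0;
}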

rocalution's People

Contributors

amdkila, angeloo01, arvindcheru, cgmb, dependabot[bot], dgaliffiamd, doctorcolinsmith, dorianrudolph, eidenyoshida, evetsso, jlgreathouse, jsandham, lawruble13, lisadelaney, ntrost57, nunnikri, pruthvistony, raramakr, rocmamd, saadrahim, samjwu, sergiykostrov, urmbista, yoichiyoshida, yvanmokwinski

rocalution's Issues

PairwiseAMG crash in parallel

Hello, I am experiencing a crash with PairwiseAMG when used as a CG preconditioner in parallel (I am using rocALUTION version f72a391 on the CPU backend).

I reproduced the crash with the cg-amg_mpi sample and the gr_30_30.mtx matrix (it works with fewer MPI ranks):

$ mpirun -np 31 /export/home/catA/pl254994/trust/amgx_openmp/lib/src/LIBROCALUTION/clients/staging/cg-amg_mpi gr_30_30.mtx
No OpenMP support
rocALUTION ver 3.0.3-59debfadc-dirty
rocALUTION platform is initialized
Accelerator backend: None
No OpenMP support
MPI rank: 0
MPI size: 31
ReadFileMTX: filename=gr_30_30.mtx; reading...
ReadFileMTX: filename=gr_30_30.mtx; done
double free or corruption (out)
double free or corruption (out)
double free or corruption (out)
double free or corruption (out)
double free or corruption (out)

On my matrix (2,592,000 rows) in my code, it crashes above 7 MPI ranks (C-AMG and SA-AMG work fine):

....
[rocALUTION] Time to convert TRUST matrix: 0.509774
[rocALUTION] Build a matrix with local N= 324001 and local nnz=1598762
[rocALUTION] Time to build matrix: 0.046605
GlobalMatrix name=mat; rows=2592000; cols=2592000; nnz=12867840; prec=64bit; format=CSR(32,32)/COO; subdomains=8; host backend={CPU}; accelerator backend={None}; current=CPU
[rocALUTION] Time to copy matrix on device: 1.4e-05
munmap_chunk(): invalid pointer

Thanks for your help.

ghost part of global matrix

Hi,
I am trying to initialize a global matrix for multi-GPU computation with:
void SetDataPtrCSR(int **local_row_offset, int **local_col, ValueType **local_val, int **ghost_row_offset, int **ghost_col, ValueType **ghost_val, std::string name, int local_nnz, int ghost_nnz).

At the moment, I have a matrix in distributed CSR format (which includes the ghost information, as used in PETSc). The equation indexing is pretty simple:
node0: 1->N
node1: N+1->2N
node2: 2N+1->3N
......

I am confused about the ghost part of the inputs.

I wonder where I can get more information about it.

Thank you very much.
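
For what it's worth, the usual convention in this kind of interface (a hedged reading of the SetDataPtrCSR signature above, not an authoritative statement of rocALUTION's layout) is: the local CSR block holds couplings between rows and columns owned by the calling rank, while the ghost CSR block holds couplings from owned rows to columns owned by other ranks. A small host-side sketch of that split, in plain C++ and independent of rocALUTION:

#include <vector>

// Split the owned rows [first, last) of a global CSR matrix into a "local"
// part (columns inside [first, last), renumbered to 0-based local indices)
// and a "ghost" part (columns owned by other ranks; kept as global indices
// here, whereas a parallel manager would map them to ghost indices).
void split_local_ghost(int first, int last,
                       const std::vector<int>&    row_ptr, // global CSR
                       const std::vector<int>&    col,
                       const std::vector<double>& val,
                       std::vector<int>& l_ptr, std::vector<int>& l_col,
                       std::vector<double>& l_val,
                       std::vector<int>& g_ptr, std::vector<int>& g_col,
                       std::vector<double>& g_val)
{
    l_ptr.assign(1, 0);
    g_ptr.assign(1, 0);

    for(int i = first; i < last; ++i)
    {
        for(int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
        {
            if(col[j] >= first && col[j] < last)
            {
                l_col.push_back(col[j] - first); // interior coupling
                l_val.push_back(val[j]);
            }
            else
            {
                g_col.push_back(col[j]); // coupling to another rank
                g_val.push_back(val[j]);
            }
        }

        l_ptr.push_back(static_cast<int>(l_col.size()));
        g_ptr.push_back(static_cast<int>(g_col.size()));
    }
}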

Does "set_device_rocalution(1);" works?

Clearly I do not know how to apply it: I want to have some control over the default GPU.

simple-spmv: /home/paolo/FastMM/Epyc/rocALUTION/src/base/backend_manager.cpp:265: void rocalution::set_device_rocalution(int): Assertion `_get_backend_descriptor()->init == false' failed.
Aborted (core dumped)
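
Judging from the failed assertion (init == false), the backend descriptor had already been initialized, which suggests that set_device_rocalution() must be called before init_rocalution(). A minimal sketch of that ordering (an assumption drawn from the assertion, not a confirmed fix):

#include <rocalution.hpp>

using namespace rocalution;

int main()
{
    // Select the device *before* the platform is initialized; calling
    // this after init_rocalution() trips the assertion shown above.
    set_device_rocalution(1);

    init_rocalution();
    info_rocalution();

    // ... run the SpMV here ...

    stop_rocalution();

    return 0;
}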

Is this project still active?

overlap in Additive Schwarz

Currently, the extended matrix of a subblock includes overlap rows above the subblock and overlap rows below the subblock. An alternative (and also typical) approach is to include the neighboring rows and columns that correspond to the ghost/border indices in the corresponding graph. If the distributed version of additive Schwarz is implemented in the future, the suggested approach will share the same communication pattern as the SpMV.

CPU-GPU copies instead of GPU-GPU MPI communications in cg_mpi?

Hello,

I am using the cg_mpi.cpp sample from rocALUTION to test GPU-GPU MPI communications.

When running cg_mpi with 2 MPI tasks and 1 GPU per task on an AMD MI250X node (with a Trento CPU), I can see 2 arrays transferred from GPU to CPU (green) and back (purple) for each solver iteration step (see the rocprof profiling below). Note that I modified the source code with ls.InitMinIter(10);.

[rocprof screenshot: rocprof_cg_mpi]

By exporting ROCALUTION_LAYER=1, I can see that the data sizes correspond to the ghost cells. I assume that an MPI communication is then done to update ghost cell values.

I was expecting to avoid this GPU-CPU ping-pong, since the MI250X node has everything needed (hardware + software stack) to perform GPU-to-GPU MPI communications. Do rocALUTION's MPI calls support GPU device addresses? Or am I missing something here?

The same pattern is observed in our application, and we would like to understand (and if possible avoid) these costly data exchanges.

Feel free to ask me for any details if needed.

Thank you in advance.

CMake Error at CMakeLists.txt:79 (string):

cmake .. -DSUPPORT_HIP=ON
CMake Error at CMakeLists.txt:79 (string):
string sub-command REGEX, mode MATCH needs at least 5 arguments total to
command.

-- Configuring incomplete, errors occurred!
See also "/home/paolo/FastMM/Epyc/rocALUTION/build/CMakeFiles/CMakeOutput.log".
See also "/home/paolo/FastMM/Epyc/rocALUTION/build/CMakeFiles/CMakeError.log".

Sample cg-rsamg_mpi crashes in parallel (RugeStuebenAMG)

Hello, I want to use RugeStuebenAMG as a global preconditioner in my code, but the sample in the /clients/staging directory does not seem to work:

mpirun -np 2 ./cg-rsamg_mpi gr_30_30.mtx
No OpenMP support
rocALUTION ver 3.0.3-59debfadc-dirty
rocALUTION platform is initialized
Accelerator backend: None
No OpenMP support
MPI rank: 0
MPI size: 2
ReadFileMTX: filename=gr_30_30.mtx; reading...
ReadFileMTX: filename=gr_30_30.mtx; done
Fatal error - the program will be terminated 
File: /export/home/catA/pl254994/trust/amgx_openmp/ThirdPart/src/LIBROCALUTION/rocALUTION/src/base/global_matrix.cpp; line: 132

It is OK on 1 core:

mpirun -np 1 ./cg-rsamg_mpi gr_30_30.mtx
No OpenMP support
rocALUTION ver 3.0.3-59debfadc-dirty
rocALUTION platform is initialized
Accelerator backend: None
No OpenMP support
MPI rank: 0
MPI size: 1
ReadFileMTX: filename=gr_30_30.mtx; reading...
ReadFileMTX: filename=gr_30_30.mtx; done
Building took: 0.002036 sec
GlobalMatrix name=mat; rows=900; cols=900; nnz=7744; prec=64bit; format=CSR(32,32)/CSR; subdomains=1; host backend={CPU}; accelerator backend={None}; current=CPU
PCG solver starts, with preconditioner:
AMG solver
AMG number of levels 4
AMG Ruge-Stuben using PMIS coarsening with Ext+i interpolation
AMG coarsest operator size = 6
AMG coarsest level nnz = 36
AMG with smoother:
Fixed Point Iteration solver, with preconditioner:
Jacobi preconditioner
IterationControl criteria: abs tol=1e-08; rel tol=1e-08; div tol=1e+08; max iter=10000
IterationControl initial residual = 33.2866
IterationControl iter=1; residual=4.00468
IterationControl iter=2; residual=0.398823
IterationControl iter=3; residual=0.0550094
IterationControl iter=4; residual=0.00882675
IterationControl iter=5; residual=0.00128995
IterationControl iter=6; residual=0.000164629
IterationControl iter=7; residual=2.62883e-05
IterationControl iter=8; residual=4.17153e-06
IterationControl iter=9; residual=6.8288e-07
IterationControl iter=10; residual=1.03234e-07
IterationControl RELATIVE criteria has been reached: res norm=1.03234e-07; rel val=3.10135e-09; iter=10
PCG ends
Solver took: 0.000405 sec
||e - x||_2 = 2.67775e-08

Thanks for any advice,

Pierre

Global

Hello,

I have a question about global matrices/vectors. Is it possible to use these objects even in sequential mode (1 MPI rank)? All the samples warn about 2 MPI processes. I would like to avoid using a local matrix for sequential computation and a global matrix for parallel computation... I see no tips in the documentation on using a global matrix/vector with fewer than 2 MPI ranks.

Thanks!

HIP error: invalid device function

I am integrating rocALUTION into an application.

It works fine using the OpenMP backend, but when I set the device number for the GPU using export HIP_VISIBLE_DEVICES=0; export ROCR_VISIBLE_DEVICES=0, I get the following HIP error message:

Number of CPU cores: 16
Host thread affinity policy - thread mapping on every core
Number of HIP devices in the system: 1
Matrix+Vectors loading:5.7e-05 sec
HIP error: invalid device function
File: /long_pathname_so_that_rpms_can_package_the_debug_info/src/extlibs/rocALUTION/src/base/hip/hip_vector.cpp; line: 845

My small helloGPU device query program works fine:

#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void helloGpu(int gpuId) {
    printf("device %d Gpu thread %d\n", gpuId, (int)threadIdx.x);
}

int main(int argc, const char * argv[]){
    int device = 0, gpuId = device;
    
    hipSetDevice(device);
    
    hipLaunchKernelGGL((helloGpu), dim3(1), dim3(64), 0, 0, gpuId);
    
    hipDeviceSynchronize();
    
    return 0;
}

What could be the problem?

EDIT:
export HIP_VISIBLE_DEVICES=0; export ROCR_VISIBLE_DEVICES=0; HSA_OVERRIDE_GFX_VERSION=10.3.0

Adding HSA_OVERRIDE_GFX_VERSION=10.3.0 fixes the problem.

Maybe rocALUTION (I am using the 5.7.1 RHEL version) is not compiled to support my gfx1031 card?

Performance issue with cg_mpi running on 2 GPUs or more

Hello,

I am sending you a modified test (cg_mpi.cpp) and data to investigate a performance issue we see on a node with an AMD EPYC 7452 and 8 MI100s.

On 1 GPU, the performance is good, with a strong speedup compared to 1 CPU (30x):
HIP_VISIBLE_DEVICES=1 srun --gres=gpu:2 --threads-per-core=1 -n 1 ./cg_mpi cg_mpi.mtx
Number of HIP devices in the system: 1
No OpenMP support
rocALUTION ver 2.1.2-985838b
rocALUTION platform is initialized
Accelerator backend: HIP
No OpenMP support
rocBLAS ver 2.41.0.
rocSPARSE ver 1.22.2-
Selected HIP device: 0

Device number: 0
Device name:
totalGlobalMem: 32752 MByte
clockRate: 1502000
compute capability: 9.0

MPI rank:0
MPI size:1
ReadFileMTX: filename=cg_mpi.mtx; reading...
ReadFileMTX: filename=cg_mpi.mtx; done
GlobalMatrix name=mat; rows=3068917; cols=3068917; nnz=35209033; prec=64bit; format=CSR/CSR; subdomains=1; host backend={CPU}; accelerator backend={HIP}; current=HIP
PCG solver starts, with preconditioner:
Jacobi preconditioner
IterationControl criteria: abs tol=1e-15; rel tol=1e-06; div tol=1e+08; min iter=100; max iter=1000000
IterationControl initial residual = 6.69104e+13
IterationControl iter=1; residual=0.0168304
IterationControl iter=2; residual=0.00471349
...
IterationControl iter=99; residual=0.000182203
IterationControl iter=100; residual=0.000192498
IterationControl RELATIVE criteria has been reached: res norm=0.000192498; rel val=2.87696e-18; iter=100
PCG ends
Solving: 0.158445 sec
||e - x||_2 = 1593.63

On 2 GPUs, the performance is not there (whereas a C++ code using OpenMP offload achieves a ~2x speedup):
HIP_VISIBLE_DEVICES=1,2 srun --gres=gpu:3 --threads-per-core=1 -n 2 ./cg_mpi cg_mpi.mtx
Number of HIP devices in the system: 2
No OpenMP support
rocALUTION ver 2.1.2-985838b
rocALUTION platform is initialized
Accelerator backend: HIP
No OpenMP support
rocBLAS ver 2.41.0.
rocSPARSE ver 1.22.2-
Selected HIP device: 0

Device number: 0
Device name:
totalGlobalMem: 32752 MByte
clockRate: 1502000
compute capability: 9.0

Device number: 1
Device name:
totalGlobalMem: 32752 MByte
clockRate: 1502000
compute capability: 9.0

MPI rank:0
MPI size:2
ReadFileMTX: filename=cg_mpi.mtx; reading...
ReadFileMTX: filename=cg_mpi.mtx; done
GlobalMatrix name=mat; rows=3068917; cols=3068917; nnz=35209033; prec=64bit; format=CSR/COO; subdomains=2; host backend={CPU}; accelerator backend={HIP}; current=HIP
PCG solver starts, with preconditioner:
Jacobi preconditioner
IterationControl criteria: abs tol=1e-15; rel tol=1e-06; div tol=1e+08; min iter=100; max iter=1000000
IterationControl initial residual = 6.69104e+13
IterationControl iter=1; residual=0.0193062
IterationControl iter=2; residual=0.00469472
...
IterationControl iter=99; residual=0.000188159
IterationControl iter=100; residual=0.000191187
IterationControl RELATIVE criteria has been reached: res norm=0.000191187; rel val=2.85736e-18; iter=100
PCG ends
Solving: 0.562395 sec
||e - x||_2 = 1593.63

Is there something I am missing to run/configure cg_mpi on several GPUs?

I can't provide the cg_mpi.mtx file here (the file size is too big). Could I send it to you another way?

Thanks,

issues being closed without fixing: why?

I identified the problem in the documentation and expressed the need to fix the doc. Why are you closing this bug? The bug is not fixed, and the doc still gives misleading build information?! Please do not close this bug (137) without addressing the last comment in 133.
#133

ifdef SUPPORT_MULTINODE

Hello,

Just pointing out, for your interest, a typo on line 117 of this file:

#ifdef SUPPORT_MULITNODE
Instead of:
#ifdef SUPPORT_MULTINODE

[Clarification Needed] Accessing linear systems data already stored on GPU(s) in rocThrust data structures

Hi, for clarification:

Is it possible to use rocALUTION to solve a linear system that's already on the GPU, stored in rocThrust data structures? The application is based on the concept of uploading the data for a simulation in timestep one, keeping it there throughout the simulation (hence there are no intermediate device-host-device data transfers), and manipulating the data using rocThrust; only the final result of the simulation is moved back to the host after hundreds or thousands of timesteps. At some point in the on-GPU manipulations using rocThrust, linear systems need to be solved, which could probably be faster using rocALUTION due to its potentially better preconditioners, if the on-GPU data could be accessed by rocALUTION.

Would this be feasible?

How to use a BCSR matrix?

I'm trying to use LocalMatrix::SetDataPtrBCSR(), but it's not working as expected.
The #if 0 macro is used to switch between using ReadFileMTX() and the defined vectors.
I expect both implementations to produce the same output.
What is the problem with the row offset pointers?
The difference in output here is very small, but for a bigger system with 3x3 blocks, ReadFileMTX() converges normally, whereas SetDataPtrBCSR() doesn't converge at all.
I have a linear system:

4.5  0.10  0.5  0.12   x1   10
0.11 3     0.13 0.51 * x2 = 20
           5    0.14   x3   30
           0.15 7      x4   40
$ cat small2.mtx
%%MatrixMarket matrix coordinate real general
% ISTL_STRUCT blocked 2 2
4 4 12
1 1 4.5
1 2 0.10
2 1 0.11
2 2 3
1 3 0.50
1 4 0.12
2 3 0.13
2 4 0.51
3 3 5
3 4 0.14
4 3 0.15
4 4 7
$ cat small2.ltx
10
20
30
40
/* ************************************************************************
 * Copyright (c) 2018-2020 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 *
 * ************************************************************************ */

#include <cstdlib>
#include <iostream>
#include <rocalution.hpp>

using namespace rocalution;

int main(int argc, char* argv[])
{
    // Initialize rocALUTION
    init_rocalution();

    // Print rocALUTION info
    info_rocalution();

    // rocALUTION objects
    LocalVector<double> x;
    LocalVector<double> rhs;
    LocalVector<double> e;
    LocalMatrix<double> mat;

    std::string mtx("small2.mtx");
    std::string ltx("small2.ltx");

#if 0
    // Read matrix from MTX file
    mat.ReadFileMTX(mtx);
#else
    const int block_size = 2;
    int Nb = 2;
    int nnzb = 3;

    std::vector<int> h_rows{0, 2, 3};
    std::vector<int> h_cols{0, 1, 1};
    std::vector<double> h_vals{4.5, 0.1, 0.11, 3, 0.50, 0.12, 0.13, 0.51, 5, 0.14, 0.15, 7};

    int *t1 = new int[Nb+1];
    int *t2 = new int[nnzb];
    double *t3 = new double[nnzb*block_size*block_size];

    for(int i = 0; i < Nb+1; ++i){
        t1[i] = h_rows[i];
    }
    for(int i = 0; i < nnzb; ++i){
        t2[i] = h_cols[i];
    }
    for(int i = 0; i < nnzb*block_size*block_size; ++i){
        t3[i] = h_vals[i];
    }

    mat.SetDataPtrBCSR(
        &t1,
        &t2,
        &t3,
        "matrix A", nnzb, Nb, Nb, block_size);
#endif

    std::cout << "matrix check: " << mat.Check() << "\n";

    // Move objects to accelerator
    mat.MoveToAccelerator();
    x.MoveToAccelerator();
    rhs.MoveToAccelerator();
    e.MoveToAccelerator();

    // Allocate vectors
    x.Allocate("x", mat.GetN());
    rhs.Allocate("rhs", mat.GetM());
    e.Allocate("e", mat.GetN());

    rhs.ReadFileASCII(ltx);

    // Linear Solver
    BiCGStab<LocalMatrix<double>, LocalVector<double>, double> roc_solver;

    // Preconditioner
    ILU<LocalMatrix<double>, LocalVector<double>, double> roc_prec;

    roc_solver.Init(1e-15, 1e-3, 1e8, 50);

    // Initial zero guess
    x.Zeros();

    // Set solver operator
    roc_solver.SetOperator(mat);
    // Set solver preconditioner
    roc_solver.SetPreconditioner(roc_prec);

    // Build solver
    roc_solver.Build();

    // Verbosity output
    roc_solver.Verbose(2);

    // Print matrix info
    mat.Info();

    // Start time measurement
    double tick, tack;
    tick = rocalution_time();

    // Solve A x = rhs
    roc_solver.Solve(rhs, &x);

    // Stop time measurement
    tack = rocalution_time();
    std::cout << "Solver execution:" << (tack - tick) / 1e6 << " sec" << std::endl;

    // Clear solver
    roc_solver.Clear();

    // Compute error L2 norm
    e.ScaleAdd(-1.0, x);
    double error = e.Norm();
    std::cout << "||e - x||_2 = " << error << std::endl;

    // Stop rocALUTION platform
    stop_rocalution();

    return 0;
}

Using ReadFileMTX():

ReadFileMTX: filename=small2.mtx; reading...
ReadFileMTX: filename=small2.mtx; done
matrix check: 1
ReadFileASCII: filename=small2.ltx; reading...
ReadFileASCII: filename=small2.ltx; done
LocalMatrix name=small2.mtx; rows=4; cols=4; nnz=12; prec=64bit; format=CSR; host backend={CPU(OpenMP)}; accelerator backend={HIP}; current=HIP
PBiCGStab solver starts, with preconditioner:
ILU(0) preconditioner
ILU nnz = 12
IterationControl criteria: abs tol=1e-15; rel tol=0.001; div tol=1e+08; max iter=50
IterationControl initial residual = 54.7723
IterationControl iter=1; residual=1.2326e-32
IterationControl ABSOLUTE criteria has been reached: res norm=1.2326e-32; rel val=2.2504e-34; iter=1
PBiCGStab ends
Solver execution:0.062163 sec
||e - x||_2 = 9.81892

Using defined vectors:

matrix check: *** error: Matrix CSR:Check - problems with matrix row offset pointers
*** warning: LocalMatrix::Check() is performed in CSR format
0
ReadFileASCII: filename=small2.ltx; reading...
ReadFileASCII: filename=small2.ltx; done
LocalMatrix name=matrix A; rows=4; cols=4; nnz=12; prec=64bit; format=BCSR; host backend={CPU(OpenMP)}; accelerator backend={HIP}; current=HIP
PBiCGStab solver starts, with preconditioner:
ILU(0) preconditioner
ILU nnz = 12
IterationControl criteria: abs tol=1e-15; rel tol=0.001; div tol=1e+08; max iter=50
IterationControl initial residual = 54.7723
IterationControl iter=1; residual=9.15304e-33
IterationControl ABSOLUTE criteria has been reached: res norm=9.15304e-33; rel val=1.67111e-34; iter=1
PBiCGStab ends
Solver execution:0.028771 sec
||e - x||_2 = 9.82691

installation problem

Hi, I am trying to install rocALUTION on our system using:
HIP version: 2.8.19361-cbe6b65e
HCC clang version 10.0.0
I got the following error message:

[ 1%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_hyb.cpp.o
In file included from /public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_matrix_hyb.cpp:40:
/public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_sparse.hpp:141:38: error: unknown type name 'rocsparse_direction'; did you mean 'rocsparse_action'?
rocsparse_direction dir,
^~~~~~~~~~~~~~~~~~~
rocsparse_action
/opt/rocm/include/rocsparse-types.h:181:3: note: 'rocsparse_action' declared here
} rocsparse_action;
^
In file included from /public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_matrix_hyb.cpp:40:
/public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_sparse.hpp:282:40: error: unknown type name 'rocsparse_direction'; did you mean 'rocsparse_action'?
rocsparse_direction dir,
^~~~~~~~~~~~~~~~~~~
rocsparse_action
/opt/rocm/include/rocsparse-types.h:181:3: note: 'rocsparse_action' declared here
} rocsparse_action;
^
2 errors generated.
In file included from /public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_matrix_hyb.cpp:40:
/public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_sparse.hpp:141:38: error: unknown type name 'rocsparse_direction'; did you mean 'rocsparse_action'?
rocsparse_direction dir,
^~~~~~~~~~~~~~~~~~~
rocsparse_action
/opt/rocm/include/rocsparse-types.h:181:3: note: 'rocsparse_action' declared here
} rocsparse_action;
^
In file included from /public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_matrix_hyb.cpp:40:
/public/home/keniz/smflow/rocALUTION-develop/src/base/hip/hip_sparse.hpp:282:40: error: unknown type name 'rocsparse_direction'; did you mean 'rocsparse_action'?
rocsparse_direction dir,
^~~~~~~~~~~~~~~~~~~
rocsparse_action
/opt/rocm/include/rocsparse-types.h:181:3: note: 'rocsparse_action' declared here
} rocsparse_action;
^
2 errors generated.
CMake Error at rocalution_hip_generated_hip_matrix_hyb.cpp.o.cmake:174 (message):
Error generating file
/public/home/keniz/smflow/rocALUTION-develop/build/src/CMakeFiles/rocalution_hip.dir/base/hip/./rocalution_hip_generated_hip_matrix_hyb.cpp.o

make[2]: *** [src/CMakeFiles/rocalution_hip.dir/build.make:156: src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_hyb.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:124: src/CMakeFiles/rocalution_hip.dir/all] Error 2

ruge_stueben_amg.hpp does not compile with hipcc (clang-14)

The following was reported by @dipietrantonio in #144. Breaking it out to a separate issue here.

rocALUTION/src/solvers/multigrid/ruge_stueben_amg.hpp:

#if defined(WIN32) || defined(_WIN32) || defined(__WIN32)
#else
        [[deprecated("This function will be removed in a future release. Use "
                     "SetStrengthThreshold() instead")]]
#endif

I had to remove the code in the else block because it wouldn't compile using hipcc (clang-14).

Integration and usage documentation with OpenFOAM as a ROCm alternative to PARALUTION

As of now, I'm unable to integrate rocALUTION, rocSPARSE, and the other required ROCm 4.3 libraries into any version of OpenFOAM, be it v2012, v2106, v9.3, etc.

No specific documentation is provided on the ROCm side for integration into OpenFOAM. I've RTFM'ed:

  1. OpenFOAM manual for adding external libs: https://cfd.direct/openfoam/user-guide/v9-compiling-applications/
  2. Paralution guide for integrating into OpenFOAM: https://usermanual.wiki/Document/paralutionusermanual.1846194347/html#pf50

Any guidance on integration would be greatly appreciated.

build fails in 4.5.2

I followed the instructions, but cmake .. fails (see below):

root@nonroot-SYS-7049GP-TRT:~/ROCm-4.5/rocALUTION/build# cmake .. -DSUPPORT_HIP=ON
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
CMake Error at src/CMakeLists.txt:138 (hip_add_library):
  Unknown CMake command "hip_add_library".


-- Configuring incomplete, errors occurred!
See also "/root/ROCm-4.5/rocALUTION/build/CMakeFiles/CMakeOutput.log".
See also "/root/ROCm-4.5/rocALUTION/build/CMakeFiles/CMakeError.log".
root@nonroot-SYS-7049GP-TRT:~/ROCm-4.5/rocALUTION/build# make
make: *** No targets specified and no makefile found.  Stop.
root@nonroot-SYS-7049GP-TRT:~/ROCm-4.5/rocALUTION/build# git branch
* (no branch)
root@nonroot-SYS-7049GP-TRT:~/ROCm-4.5/rocALUTION/build# git remote -v
rocm-swplat	https://github.com/ROCmSoftwarePlatform/rocALUTION (fetch)
rocm-swplat	https://github.com/ROCmSoftwarePlatform/rocALUTION (push)

Problem with distribute_matrix in common.hpp

Hello,

I have a problem with the distribute_matrix function (in common.hpp) that I integrated into my code (JADIM - issue "rocALUTION for fortran" #163) to be able to use rocALUTION on several MPI processes with 1 graphics card for the moment.
The function works for 1 MPI process, but with 2 MPI processes I get a segfault as soon as the first MPI process reaches gmat->SetParallelManager(*pm);
I have added many printouts, and I find some things a bit strange.
Can you advise me on how to solve this bug? Thanks for your help. Pierre

CMake Error at src/CMakeLists.txt:104 (rocm_set_soversion)

installed rocBLAS, rocSPARSE, rocRAND
....

paolo@fastmmw:~/FastMM/Epyc/rocALUTION$ ./install.sh -idc
Creating project build directory in: ./build
[sudo] password for paolo:
Hit:1 http://repo.radeon.com/rocm/apt/debian xenial InRelease
Hit:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
~/FastMM/Epyc/rocALUTION ~/FastMM/Epyc/rocALUTION
Building googletest from source; installing into /usr/local
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:

BUILD_BOOST

-- Build files have been written to: /home/paolo/FastMM/Epyc/rocALUTION/build/deps
Scanning dependencies of target install
Built target install
~/FastMM/Epyc/rocALUTION
~/FastMM/Epyc/rocALUTION ~/FastMM/Epyc/rocALUTION
-- The CXX compiler identification is GNU 7.5.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.17.1")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- Found MPI_CXX: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Found HIP: /opt/rocm-3.8.0/hip (found version "3.8.20371-d1886b0b")
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at src/CMakeLists.txt:104 (rocm_set_soversion):
Unknown CMake command "rocm_set_soversion".

-- Configuring incomplete, errors occurred!
See also "/home/paolo/FastMM/Epyc/rocALUTION/build/release/CMakeFiles/CMakeOutput.log".
See also "/home/paolo/FastMM/Epyc/rocALUTION/build/release/CMakeFiles/CMakeError.log".

hip_add_library

rocm-user@5398fc181728:~/rocALUTION/build$ cmake --debug-find ../ -DSUPPORT_HIP=ON -DSUPPORT_OMP=ON -DBUILD_EXAMPLES=ON 
-- Setting build type to 'Release' as none was specified.
-- The CXX compiler identification is GNU 7.5.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.17.1") 
-- Found OpenMP_CXX: -fopenmp (found version "4.5") 
-- Found OpenMP: TRUE (found version "4.5")  
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
-- Could NOT find MPI (missing: MPI_CXX_FOUND) 
-- MPI not found. Compiling WITHOUT MPI support.
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- ROCclr at /opt/rocm/lib/cmake/rocclr
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
CMake Error at src/CMakeLists.txt:129 (hip_add_library):
  Unknown CMake command "hip_add_library".


-- Configuring incomplete, errors occurred!
See also "/home/rocm-user/rocALUTION/build/CMakeFiles/CMakeOutput.log".
See also "/home/rocm-user/rocALUTION/build/CMakeFiles/CMakeError.log".
rocm-user@5398fc181728:~/rocALUTION/build$ sudo apt install amdhip64 
Reading package lists... Done
Building dependency tree       
Reading state information... Done

E: Unable to locate package amdhip64

Please enable two-factor authentication in your GitHub account

@RaoKarter

We are going to enforce two-factor authentication in the (https://github.com/ROCmSoftwarePlatform/) organization on 29th April, 2022.
Since we identified you as an outside collaborator for the ROCmSoftwarePlatform organization, you need to enable two-factor authentication in your GitHub account, else you will be removed from the organization after the enforcement.
Please skip if already done.

To set up two-factor authentication, please go through the steps in the link below:

https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/configuring-two-factor-authentication

Please email "[email protected]" for queries

Error while building rocALUTION for ROCm 5.1.x

Hi,
I am trying to build ROCm 5.1 from source, and this time I am blocked by a build error in rocALUTION.
Below is the build error. Here is the version I am using:

ubuntu@cdp-rocmbuild:~/rocm-from-source/build/rocALUTION$ git log
commit 7c8611fd4e86fcfb7a73f2cb14c4a5f43db87765 (HEAD, tag: rocm-5.1.1, tag: rocm-5.1.0, rocm-swplat/release/rocm-rel-5.1, m/roc-5.1.x)
Installing rocALUTION ..
mkdir: cannot create directory ‘build’: File exists
Running command cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/rocm-dev2  -DCMAKE_CXX_COMPILER=hipcc -DAMDGPU_TARGETS=gfx908 -DBUILD_CLIENTS_SAMPLES=OFF -DCMAKE_MODULE_PATH=/opt/rocm-dev2/hip/cmake;/opt/rocm-dev2 ..
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES) 
-- Could NOT find OpenMP (missing: OpenMP_CXX_FOUND) 
-- OpenMP not found. Compiling WITHOUT OpenMP support.
-- Checking for module 'mpi-cxx'
--   No package 'mpi-cxx' found
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
-- Could NOT find MPI (missing: MPI_CXX_FOUND) 
-- MPI not found. Compiling WITHOUT MPI support.
-- Found HIP: /opt/rocm-dev2/hip (found version "5.1.20531-") 
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
-- hip::amdhip64 is SHARED_LIBRARY
CMake Warning at /opt/rocm-dev2/share/rocm/cmake/ROCMUtilities.cmake:50 (message):
  Could not determine the version of program rpmbuild.
Call Stack (most recent call first):
  /opt/rocm-dev2/share/rocm/cmake/ROCMCreatePackage.cmake:284 (rocm_find_program_version)
  src/CMakeLists.txt:221 (rocm_create_package)


INFOrocm_set_cpack_gen didn't find ROCM_PKGTYPE in environment
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/rocm-from-source/build/rocALUTION/build
Running command make -j 8 install
[  1%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_vector.cpp.o
[  2%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_backend_hip.cpp.o
[  3%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_allocate_free.cpp.o
[  5%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_conversion.cpp.o
[  6%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_blas.cpp.o
[  7%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_bcsr.cpp.o
[  8%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_csr.cpp.o
[ 10%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_coo.cpp.o
[ 11%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_dense.cpp.o
[ 12%] Building HIPCC object src/CMakeFiles/rocalution_hip.dir/base/hip/rocalution_hip_generated_hip_matrix_dia.cpp.o
In file included from /home/ubuntu/rocm-from-source/build/rocALUTION/src/base/hip/hip_matrix_csr.cpp:53:
/home/ubuntu/rocm-from-source/build/rocALUTION/src/base/hip/hip_kernels_csr.hpp:53:33: error: invalid operands to binary expression ('std::complex<double>' and 'std::complex<double>')
                val[aj] = alpha * val[aj];
                          ~~~~~ ^ ~~~~~~~
/home/ubuntu/rocm-from-source/build/rocALUTION/src/base/hip/hip_matrix_csr.cpp:2880:33: note: in instantiation of function template specialization 'rocalution::kernel_csr_scale_diagonal<std::complex<double>, int>' requested here
            hipLaunchKernelGGL((kernel_csr_scale_diagonal<ValueType, int>),
                                ^
/opt/rocm-dev2/hip/include/hip/amd_detail/amd_hip_runtime.h:349:15: note: candidate function not viable: no known conversion from 'std::complex<double>' to '__HIP_Coordinates<__HIP_GridDim>::__X' for 1st argument
std::uint32_t operator*(__HIP_Coordinates<__HIP_GridDim>::__X,
              ^
/opt/rocm-dev2/hip/include/hip/amd_detail/amd_hip_runtime.h:355:15: note: candidate function not viable: no known conversion from 'std::complex<double>' to '__HIP_Coordinates<__HIP_BlockDim>::__X' for 1st argument
std::uint32_t operator*(__HIP_Coordinates<__HIP_BlockDim>::__X,
              ^
/opt/rocm-dev2/hip/include/hip/amd_detail/amd_hip_runtime.h:361:15: note: candidate function not viable: no known conversion from 'std::complex<double>' to '__HIP_Coordinates<__HIP_GridDim>::__Y' for 1st argument
std::uint32_t operator*(__HIP_Coordinates<__HIP_GridDim>::__Y,
              ^
/opt/rocm-dev2/hip/include/hip/amd_detail/amd_hip_runtime.h:367:15: note: candidat

rocALUTION for fortran

Hello, I would like to use rocALUTION in a CFD code on the new CINES (France) machine, Adastra (AMD processors and graphics cards). I would like to solve a pseudo-3D Poisson equation with the CG solver and an algebraic multigrid preconditioner (SA-AMG). Would you have a Fortran interface for rocALUTION? Thanks for your answer.

Files tagged with HIP_SOURCE_PROPERTY_FORMAT generate wrong HIP_CLANG_PATH

Hi! I'm packaging ROCm for Arch Linux. With HIP 5.5.1, I encountered an issue where HIP_CLANG_PATH was pointing to /opt/rocm/llvm (instead of the correct /opt/rocm/llvm/bin) because the env var with the same name is overwritten by the CMake macro HIP_PREPARE_TARGET_COMMANDS declared in FindHIP.cmake. If HIP_SOURCE_PROPERTY_FORMAT is set to TRUE, CMake modifies the env vars for hipcc in an inconsistent way. Removing the tags in src/CMakeLists.txt resolves this issue for me.

--- rocALUTION-rocm-5.5.1/src/CMakeLists.txt.bak	2023-05-27 18:12:40.853740486 +0200
+++ rocALUTION-rocm-5.5.1/src/CMakeLists.txt	2023-05-27 18:12:49.834025607 +0200
@@ -115,11 +115,6 @@

 # Create rocALUTION hip library
 if(SUPPORT_HIP)
-  # Flag source file as a hip source file
-  foreach(i ${HIP_SOURCES})
-    set_source_files_properties(${i} PROPERTIES HIP_SOURCE_PROPERTY_FORMAT TRUE)
-  endforeach()
-
   # HIP flags workaround while target_compile_options do not work
   # list(APPEND HIP_HIPCC_FLAGS "-O3 -march=native -Wno-unused-command-line-argument -fPIC -std=c++14")
   list(APPEND HIP_HIPCC_FLAGS "-O3 -Wno-unused-command-line-argument -fPIC -std=c++14")

I couldn't find any documentation for HIP_SOURCE_PROPERTY_FORMAT, and it isn't used in similar libraries like rocBLAS or rocSOLVER. It's only mentioned in a comment on building rocBLAS with CUDA:

# ########################################################################
# NOTE:  CUDA compiling path
# ########################################################################
# I have tried compiling rocBLAS library source with multiple methods,
# and ended up using the approach where we set the CXX compiler to hipcc.
# I didn't like using the HIP_ADD_LIBRARY or CUDA_ADD_LIBRARY approaches,
# for the reasons I list here.
# 1.  Adding header include directories is through HIP_INCLUDE_DIRECTORIES(), which
# is global to a directory and affects all targets
# 2.  You must add HIP_SOURCE_PROPERTY_FORMAT OBJ properties to .cpp files
# to get HIP_ADD_LIBRARY to recognize the file
# 3.  HIP_ADD_LIBRARY invokes a call to add_custom_command() to compile files,
# and rocBLAS does the same.  The order in which custom commands execute is
# undefined, and sometimes a file is attempted to be compiled before it has
# been generated.  The fix for this is to create 'PHONY' targets, which I
# don't desire.

rocALUTION build following ROCm versions

Thanks for all the updates to the rocALUTION library. We would be glad to use the new features and take advantage of the bug fixes. But the fact that rocALUTION releases closely follow ROCm versions is really annoying. Our HPC cluster is still on ROCm 5.2.x... Any hope that the rocALUTION main branch could be rebased on an older ROCm version? Thanks

RPATH is missing from ROCm 5.0.0 release

Please correct me if I'm wrong, since I'm not familiar with C++ build processes and systems. I'm using the Ubuntu ROCm 5.0.0 release. After investigating why some software was not running with ROCm, I noticed that RPATH or DT_RUNPATH (I'm not familiar with their differences) is not set for rocALUTION in the Ubuntu release.

I'm also not sure if this is the correct repository to report this issue. Thanks
