
rapidcfd-dev's People

Contributors

aokomoriuta, daniel-jasinski, didzis-berenis, misieksajkowski, tonkomollc


rapidcfd-dev's Issues

problem with multi GPU execution

Hi,

I am trying to run the multi-GPU case provided in the RapidCFD test case repository on GitHub.
I am getting the following error:

terminate called after throwing an instance of 'thrust::system::system_error'
what(): __copy:: H->D: failed: invalid argument
Aborted (core dumped)

I did not do anything special for the multi-GPU execution. I ran the interFoam solver the same way I ran icoFoam for a single GPU. Are there any special instructions for running with multiple GPUs?

Thanks and Regards,
Manasi

Instructions to implement another solver?

Hello,

are there any instructions or examples about how to add another solver, let's say an experimental version of the simpleFoam or pisoFoam solver?

Or at least, can someone direct me to the relevant code sections?

Klaus
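For orientation, a generic sketch of the wmake layout a new solver would follow (RapidCFD keeps OpenFOAM's build conventions; the solver name, file name and library list below are placeholders, not taken from this repository):

applications/solvers/myPisoFoam/Make/files:

    myPisoFoam.C

    EXE = $(FOAM_APPBIN)/myPisoFoam

applications/solvers/myPisoFoam/Make/options:

    EXE_INC = \
        -I$(LIB_SRC)/finiteVolume/lnInclude

    EXE_LIBS = \
        -lfiniteVolume

Copying an existing solver directory, renaming the .C file and these two Make entries, and then running wmake from the solver directory is the usual starting point.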

PreConditioners

I tested RapidCFD on some test cases, where the preconditioner is always AINV instead of the DILU and DIC preconditioners from OpenFOAM. Do you have any plan to add DILU or DIC preconditioners to RapidCFD? Some of the OpenFOAM tutorial cases do not produce exactly the same results when we compare the solutions generated by OpenFOAM and RapidCFD.

Your web page shows the test cases "LES". But what kind of problem is it? In OpenFOAM tutorials, LES has three subdirectories.

codedFixedValue // dynamicCode

Hello all,
I have successfully used the codedFixedValue boundary condition for the pitzDailyExptInlet tutorial in OF17 with the following code (http://www.wolfdynamics.com/wiki/programming_OF_BCs.pdf):

inlet
{
    type            codedFixedValue;
    value           uniform (0 0 0);
    redirectType    mappedValue;

    codeOptions
    #{
        -I$(LIB_SRC)/finiteVolume/lnInclude \
        -I$(LIB_SRC)/meshTools/lnInclude
    #};
    codeInclude
    #{
        #include "fvCFD.H"
        #include
        #include
    #};
    code
    #{
        const fvPatch& boundaryPatch = patch();
        const vectorField& Cf = boundaryPatch.Cf();
        vectorField& field = *this;
        scalar U_0 = 2, p_ctr = 8, p_r = 8;
        forAll(Cf, faceI)
        {
            field[faceI] = vector(U_0*(1 - (pow(Cf[faceI].y() - p_ctr, 2))/(p_r*p_r)), 0, 0);
        }
    #};
}

However when I tried to use a variation in RCFD:

"const vectorgpuField& Cf = boundaryPatch.Cf()" instead "const vectorField& Cf = boundaryPatch.Cf()"
"vectorgpuField& field = *this" instead "vectorField& field = *this;"

I have got the following message:

Using dynamicCode for patch inlet on field U at line 25 in "/home/segal/RapidCFD/segal-dev/run/pitzDailyExptInlet/0/U.boundaryField.inlet"
Creating new library in "dynamicCode/mappedValue/platforms/linux64NvccDPOpt/lib/libmappedValue_f4ffae99d5120b42d5ae02dd5cbf45b2d4ff17e9.so"
Invoking "wmake -s libso /home/segal/RapidCFD/segal-dev/run/pitzDailyExptInlet/dynamicCode/mappedValue"
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file fixedValueFvPatchFieldTemplate.C
/home/segal/RapidCFD/segal-dev/run/pitzDailyExptInlet/0/U.boundaryField.inlet(50): error: no operator "[]" matches these operands
operand types are: Foam::vectorgpuField [ Foam::label ]

/home/segal/RapidCFD/segal-dev/run/pitzDailyExptInlet/0/U.boundaryField.inlet(50): error: no operator "[]" matches these operands
operand types are: const Foam::vectorgpuField [ Foam::label ]

2 errors detected in the compilation of "/tmp/tmpxft_000015e1_00000000-6_fixedValueFvPatchFieldTemplate.cpp1.ii".
fixedValueFvPatchFieldTemplate.dep:656: recipe for target 'Make/linux64NvccDPOpt/fixedValueFvPatchFieldTemplate.o' failed [translated from Spanish]
make: *** [Make/linux64NvccDPOpt/fixedValueFvPatchFieldTemplate.o] Error 2

--> FOAM FATAL IO ERROR:
Failed wmake "dynamicCode/mappedValue/platforms/linux64NvccDPOpt/lib/libmappedValue_f4ffae99d5120b42d5ae02dd5cbf45b2d4ff17e9.so"

file: /home/segal/RapidCFD/segal-dev/run/pitzDailyExptInlet/0/U.boundaryField.inlet from line 25 to line 41.
From function codedBase::createLibrary(..)
in file db/dynamicLibrary/codedBase/codedBase.C at line 213.

FOAM exiting

I hope someone can help me solve this, since at this point it is the only way I can figure out to impose a velocity field which varies in space at the inlet. The timeVaryingMappedFixedValue boundary condition is not an option; even when it is included in the source code at compile time, it crashes.

Best Regards
Daniel
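Since vectorgpuField apparently does not expose operator[] for per-element access, one generic workaround pattern is to build the values in host memory and transfer them to the device in a single copy. A minimal, self-contained Thrust sketch of that pattern follows; it is illustrative only, the struct and names are placeholders, and the actual RapidCFD gpuField API may offer its own host-to-device assignment:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <cmath>

struct Vec3 { double x, y, z; };               // stand-in for Foam::vector

int main()
{
    const int nFaces = 64;                     // hypothetical patch size
    thrust::host_vector<Vec3> hostField(nFaces);
    const double U_0 = 2, p_ctr = 8, p_r = 8;

    // Fill on the host, where plain operator[] is available
    for (int faceI = 0; faceI < nFaces; ++faceI)
    {
        const double y = 0.1*faceI;            // stand-in for Cf[faceI].y()
        hostField[faceI] = Vec3{U_0*(1 - std::pow(y - p_ctr, 2)/(p_r*p_r)), 0, 0};
    }

    // One host-to-device copy instead of per-element device access
    thrust::device_vector<Vec3> deviceField = hostField;
    return 0;
}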

Problems when installing RC

I downloaded RapidCFD-dev and followed the same steps described in the issue "how to install RapidCFD".

I tried to install it on CentOS, and the following problem occurs.

So what is the key issue and should I download a ThirdParty package?

/************************************************************************************
make: Nothing to be done for `all'.
no ThirdParty sources found - skipping

  • wmakePrintBuild -check
    no git description found
  • /bin/rm -f OpenFOAM/Make/linux64NvccDPOpt/global.C OpenFOAM/Make/linux64NvccDPOpt/global.o
  • wmakeLnInclude OpenFOAM
  • wmakeLnInclude OSspecific/POSIX
  • Pstream/Allwmake
  • wmake libso dummy
    '/home/liu-t11/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/dummy/libPstream.so' is up to date.
  • case "$WM_MPLIB" in
  • set +x

Note: ignore spurious warnings about missing mpicxx.h headers

wmake libso mpi
SOURCE=UOPwrite.C ; nvcc -Xptxas -dlcm=cg -m64 -arch=sm_30 -Dlinux64 -DWM_DP -Xcompiler -Wall -Xcompiler -Wextra -Xcompiler -Wno-unused-parameter -Xcompiler -Wno-vla -Xcudafe "--diag_suppress=null_reference" -Xcudafe "--diag_suppress=subscript_out_of_range" -Xcudafe "--diag_suppress=extra_semicolon" -Xcudafe "--diag_suppress=partial_override" -Xcudafe "--diag_suppress=implicit_return_from_non_void_function" -Xcudafe "--diag_suppress=virtual_function_decl_hidden" -O3 -DNoRepository -D__RESTRICT__='restrict' -DOMPI_SKIP_MPICXX -I/home/liu-t11/RapidCFD/ThirdParty-dev/platforms/linux64Nvcc/openmpi-1.8.4/include -IlnInclude -I. -I/home/liu-t11/RapidCFD/RapidCFD-dev/src/OpenFOAM/lnInclude -I/home/liu-t11/RapidCFD/RapidCFD-dev/src/OSspecific/POSIX/lnInclude -Xcompiler -fPIC -x cu -D__HOST____DEVICE__='host device' -o Make/linux64NvccDPOptOPENMPI/UOPwrite.o -c $SOURCE
UOPwrite.C:29:17: error: mpi.h: No such file or directory [translated from Chinese]
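A quick, hedged shell check of the assumption wmake is making here (the include path is copied from the log above; MPI_ARCH_PATH is the environment variable OpenFOAM-style builds normally use for the MPI root):

# Does the OpenMPI include directory passed to nvcc actually contain mpi.h?
ls /home/liu-t11/RapidCFD/ThirdParty-dev/platforms/linux64Nvcc/openmpi-1.8.4/include/mpi.h
# Which MPI root does the environment currently point at?
echo $MPI_ARCH_PATH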

How large a test case is required to see speedup?

Hi,

I am running various cases with RapidCFD and comparing them to OpenFOAM executions on CPUs.
I have two Tesla K20s, CentOS 6.10 and CUDA 9.2. On the CPU side, I have a processor with 16 cores.

I have run the following cases (both with single GPU)-

  1. Cavity case (400 cells), icoFoam solver -
    - OpenFOAM sequential : less than a second
    - RapidCFD : 5 seconds
  2. damBreak case (2200 cells), interFoam solver -
    - OpenFOAM sequential : 5 seconds
    - RapidCFD : 115 seconds
    I am currently running a large test case so that I can see speedup on multiple GPUs. It is a damBreak test case with 5.7M cells. The GitHub repository for that case mentions that I will definitely be able to see a speedup for it. But are there no smaller test cases which can show me the improvement of using GPUs over CPUs for OpenFOAM solvers? If there are such cases, please direct me towards them.

Moreover, am I doing something wrong, given that I am not able to see a speedup with the 2.2k-cell test case either?

Thanks and regards,
Manasi

__copy:: H->D: failed: invalid argument

Hello,

I installed RapidCFD without problems on Ubuntu 18.04 with CUDA 9.2.
The GPU is a Quadro K4200. On a simpleFoam case of a simple cylinder (inlet with fixed velocity, outlet with fixed pressure) with 1.7M cells, RapidCFD works fine. The memory usage in this case is about 50% (2 GB out of 4). For the identical case where I increase the mesh to 2M cells I get the following error:

Create time

Create mesh for time = 0

Reading field p

Reading field U

Reading/calculating face flux field phi

terminate called after throwing an instance of 'thrust::system::system_error'
what(): __copy:: H->D: failed: invalid argument

It seems like a memory issue to me, although I can hardly believe that the addition of 0.5M cells takes the memory usage from 2 GB to more than the limit of the GPU.

(I also tried installing another GPU dedicated to the video output and left the QK4200 as the only CUDA device. RapidCFD works again for the 1.5M case, but I still get the same error at 2M.)

Do you have any idea?

Thank you very much for your help,
Nikos
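As a side note for anyone debugging this class of error, here is a minimal CUDA runtime sketch (independent of RapidCFD) that reports free and total device memory; cudaMemGetInfo is a standard CUDA runtime call. Compiled with nvcc and run just before the solver, it at least rules the raw memory limit in or out:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;
    // Query how much device memory is currently available
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess)
    {
        std::printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("GPU memory: %.1f MB free of %.1f MB total\n",
                freeBytes/1048576.0, totalBytes/1048576.0);
    return 0;
}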

Compile error

Hello,

I am trying to build RapidCFD on my Nvidia Quadro M2000 and I get this error message:

SOURCE=derivedFvPatchFields/wallFunctions/epsilonWallFunctions/epsilonWallFunction/epsilonWallFunctionFvPatchScalarField.C ; nvcc -Xptxas -dlcm=cg -m64 -arch=sm_30 -Dlinux64 -DWM_DP -Xcompiler -Wall -Xcompiler -Wextra -Xcompiler -Wno-unused-parameter -Xcompiler -Wno-vla -Xcudafe "--diag_suppress=null_reference" -Xcudafe "--diag_suppress=subscript_out_of_range" -Xcudafe "--diag_suppress=extra_semicolon" -Xcudafe "--diag_suppress=partial_override" -Xcudafe "--diag_suppress=implicit_return_from_non_void_function" -Xcudafe "--diag_suppress=virtual_function_decl_hidden" -O3 -DNoRepository -D__RESTRICT__='restrict' -I/opt/RapidCFD-dev/src/turbulenceModels -I/opt/RapidCFD-dev/src/transportModels -I/opt/RapidCFD-dev/src/finiteVolume/lnInclude -I/opt/RapidCFD-dev/src/meshTools/lnInclude -IlnInclude -I. -I/opt/RapidCFD-dev/src/OpenFOAM/lnInclude -I/opt/RapidCFD-dev/src/OSspecific/POSIX/lnInclude -Xcompiler -fPIC -x cu -D__HOST____DEVICE__='host device' -o Make/linux64NvccDPOpt/epsilonWallFunctionFvPatchScalarField.o -c $SOURCE
/usr/local/cuda-9.0/bin/../targets/x86_64-linux/include/thrust/system/cuda/detail/copy_if.h(352): error: calling a host function("Foam::incompressible::epsilonWallFunctionGraterThanToleranceFunctor::operator ()") from a device function("thrust::cuda_cub::__copy_if::CopyIfAgent< ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::Foam::incompressible::epsilonWallFunctionGraterThanToleranceFunctor, int, int *> ::impl::compute_selection_flags<(bool)0, ( ::thrust::cuda_cub::__copy_if::CopyIfAgent< ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::thrust::detail::normal_iterator< ::thrust::device_ptr > , ::Foam::incompressible::epsilonWallFunctionGraterThanToleranceFunctor, int, int *> ::impl::ItemStencil)1, double> ") is not allowed

I assume there have been successful attempts at compiling the code.
Am I doing something wrong to be having this issue?

NVIDIA Quadro M2000
gcc (SUSE Linux) 4.8.5
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

Thank you
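For context, the error complains about a host-only functor being invoked from device code inside thrust::copy_if. Below is a minimal, self-contained Thrust sketch (not RapidCFD source) of the pattern CUDA 9's Thrust expects, namely a predicate whose operator() is marked __host__ __device__; whether this is the actual cause in RapidCFD's epsilonWallFunction code would need checking in that source:

#include <thrust/device_vector.h>
#include <thrust/copy.h>

// Predicate usable on both host and device; leaving the markers off
// reproduces the "calling a host function from a device function" error.
struct greaterThanTolerance
{
    const double tol;
    greaterThanTolerance(double t) : tol(t) {}

    __host__ __device__
    bool operator()(const double& x) const { return x > tol; }
};

int main()
{
    thrust::device_vector<double> in(100, 1.0), out(100);
    in[3] = 5.0;
    // copy_if evaluates the predicate on the device
    thrust::copy_if(in.begin(), in.end(), out.begin(), greaterThanTolerance(2.0));
    return 0;
}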

Time step Continuity error and wrong result

Hi FOAMers,
I am new to RapidCFD and was trying to run a case which I had previously run on the CPU, to compare the performance with a view to adopting RapidCFD in my office.
The time step continuity error suddenly increases after the first iteration, and the force results I get are very large and unusual.

Time = 1

smoothSolver: Solving for Ux, Initial residual = 1, Final residual = 159418, No Iterations 1000
smoothSolver: Solving for Uy, Initial residual = 1, Final residual = 0.0959547, No Iterations 9
smoothSolver: Solving for Uz, Initial residual = 1, Final residual = 0.164759, No Iterations 1000
GAMG: Solving for p, Initial residual = 1, Final residual = 6.27047e-07, No Iterations 28
GAMG: Solving for p, Initial residual = 0.00855865, Final residual = 6.45405e-07, No Iterations 13
GAMG: Solving for p, Initial residual = 0.000511022, Final residual = 7.92775e-07, No Iterations 8
time step continuity errors : sum local = 6.69426e-07, global = 2.53067e-09, cumulative = 2.53067e-09
smoothSolver: Solving for omega, Initial residual = 0.00559497, Final residual = 0.000797445, No Iterations 1000
bounding omega, min: -38.6975 max: 8345.56 average: 66.5402
smoothSolver: Solving for k, Initial residual = 1, Final residual = 381.871, No Iterations 1000
ExecutionTime = 498.55 s ClockTime = 509 s

--> FOAM Warning :
From function void Foam::forces::initialise()
in file forces/forces.C at line 190
Could not find U, p or rho in database.
De-activating forces.
Time = 2

smoothSolver: Solving for Ux, Initial residual = 1, Final residual = 0.090419, No Iterations 8
smoothSolver: Solving for Uy, Initial residual = 0.680948, Final residual = 0.160071, No Iterations 1000
smoothSolver: Solving for Uz, Initial residual = 0.140566, Final residual = 0.028275, No Iterations 1000
GAMG: Solving for p, Initial residual = 1, Final residual = 8.03951e-07, No Iterations 23
GAMG: Solving for p, Initial residual = 0.00500815, Final residual = 9.5685e-07, No Iterations 9
GAMG: Solving for p, Initial residual = 0.000165038, Final residual = 9.3511e-07, No Iterations 5
time step continuity errors : sum local = 1.14212e+63, global = 2.51796e+60, cumulative = 2.51796e+60
smoothSolver: Solving for omega, Initial residual = 0.00342166, Final residual = 0.000331776, No Iterations 11
smoothSolver: Solving for k, Initial residual = 0.00257118, Final residual = 1.62176, No Iterations 1000
ExecutionTime = 517.65 s ClockTime = 528 s

Time = 3

smoothSolver: Solving for Ux, Initial residual = 0.50522, Final residual = 0.0385634, No Iterations 5
smoothSolver: Solving for Uy, Initial residual = 0.473826, Final residual = 0.0283562, No Iterations 4
smoothSolver: Solving for Uz, Initial residual = 0.4935, Final residual = 0.0316168, No Iterations 5
GAMG: Solving for p, Initial residual = 1, Final residual = 8.14834e-07, No Iterations 28
GAMG: Solving for p, Initial residual = 0.00020453, Final residual = 8.63193e-07, No Iterations 5
GAMG: Solving for p, Initial residual = 5.92842e-06, Final residual = 5.38652e-07, No Iterations 2
time step continuity errors : sum local = 1.54934e+65, global = -8.29114e+62, cumulative = -8.26596e+62
smoothSolver: Solving for omega, Initial residual = 0.00432139, Final residual = 0.00066469, No Iterations 1000
smoothSolver: Solving for k, Initial residual = 0.00252876, Final residual = 2.81754, No Iterations 1000
ExecutionTime = 531.09 s ClockTime = 541 s
Any clues on why this is happening?
And is there any way to decrease the time taken to start a simulation? (It takes more than 30 minutes to start the first time step calculation for me :( )
Thanks for your valuable time.
[I have attached the fvSchemes, fvSolution, checkMesh log and controlDict]

fv_files.zip

Build errors

I've seen these errors repeated over and over while building:

List.C: error: ‘class Foam::List<int>’ has no member named ‘v_’
VectorSpace.C: error: ‘const class Foam::Vector<double>’ has no member named ‘operator[]’

Problem with fields near zeroGradient boundary condition

The tutorial example wedge15 (compressible/rhoCentralFoam) has some problems with the fields of p, T, rho and U in the boundary cells with the zeroGradient boundary condition. Other cases have the problem too, but with OF 3.0 these cases work all right. I have used the rhoCentralFoam solver of RapidCFD. What do you think about it?

Accuracy RapidCFD vs. OpenFOAM2.3.1

Dear RapidCFD Community,

I recently started working with RapidCFD. My first test case is a straight pipe (domain: D = 0.1 m, length 10 m; mesh: hexahedral O-grid, y+ ~ 1, ~10 million cells; incompressible, kOmega turbulence model). My aim was to test the performance of RapidCFD and compare the results to OpenFOAM (version 2.3.1), analytical data and experimental data.

As expected, I gained a great speed-up using the GPU, but unfortunately my results are less accurate compared to OpenFOAM. Using the same set-up (exception: linear solvers, because AINV is not available in OpenFOAM), the calculated pressure drop is underestimated by RapidCFD compared to OpenFOAM, and the velocity profile is less steep (both underestimate the analytical data, a known ‘problem’ of simulation results).

Has anyone else noticed a difference in simulation results (RapidCFD vs. OpenFOAM 2.3.1)? Could it be the influence of the linear solver, or am I missing something?

Thanks for help!

Best way to manipulate a symmTensorgpuField

Hello,
just coming to RapidCFD, I'm trying to update one of my libs...
With OpenFOAM, given a symmTensorField R(x.size()), I can fill it in a forAll loop with R[i] = symmTensor(Rxx, Rxy, Rxz, Ryy, Ryz, Rzz);

I suppose it's trivial but what's the easy and clean way to fill it using thrust tuples having already computed the R components?

Kind regards

Nicolas
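For reference, here is a minimal, self-contained Thrust sketch of the zip_iterator pattern this question is about. It is illustrative only: a plain struct stands in for Foam::symmTensor, and the actual symmTensorgpuField interface may differ:

#include <thrust/device_vector.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/transform.h>
#include <thrust/tuple.h>

struct SymmTensor { double xx, xy, xz, yy, yz, zz; };   // stand-in for Foam::symmTensor

// Build one symmetric tensor from a tuple of its six pre-computed components
struct makeSymmTensor
{
    __host__ __device__
    SymmTensor operator()(const thrust::tuple<double,double,double,double,double,double>& c) const
    {
        SymmTensor t;
        t.xx = thrust::get<0>(c); t.xy = thrust::get<1>(c); t.xz = thrust::get<2>(c);
        t.yy = thrust::get<3>(c); t.yz = thrust::get<4>(c); t.zz = thrust::get<5>(c);
        return t;
    }
};

int main()
{
    const int n = 1000;
    // Six device arrays holding the already-computed components
    thrust::device_vector<double> Rxx(n,1), Rxy(n,0), Rxz(n,0), Ryy(n,1), Ryz(n,0), Rzz(n,1);
    thrust::device_vector<SymmTensor> R(n);

    thrust::transform
    (
        thrust::make_zip_iterator(thrust::make_tuple(
            Rxx.begin(), Rxy.begin(), Rxz.begin(), Ryy.begin(), Ryz.begin(), Rzz.begin())),
        thrust::make_zip_iterator(thrust::make_tuple(
            Rxx.end(), Rxy.end(), Rxz.end(), Ryy.end(), Ryz.end(), Rzz.end())),
        R.begin(),
        makeSymmTensor()
    );
    return 0;
}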

Compilation error Ubuntu 16.04

Hello everyone,

I would like to compile RCFD on Ubuntu 16.04 with CUDA 8.0 and gcc 5.4.0, but I'm facing the following error when running ./Allwmake:

/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;

Has anyone faced this error before?

Regards.

Anthony.

What is the optimal GAMG setting on the GPU?

Dear Daniel,
On the sim-flow website it says that GAMG is faster than the DIC/DILU preconditioners, but in my testing it was slower than the AINV preconditioner in RapidCFD by 1.27x over 100 time steps. This is my GAMG setting:
p
{
    solver                GAMG;
    tolerance             1e-06;
    relTol                0;
    smoother              GaussSeidel;
    nPreSweeps            0;
    nPostSweeps           2;
    nFinestSweeps         2;
    cacheAgglomeration    true;
    nCellsInCoarsestLevel 10;
    agglomerator          faceAreaPair;
    mergeLevels           1;
}
Would it be convenient for you to post the GAMG settings used in this thread https://sim-flow.com/rapid-cfd-gpu/ (LES case)? Is there a different optimal setting in RapidCFD compared with OpenFOAM?

Benjamin,
Best Regards.

Assigning jobs to different GPUs

I managed to compile my custom application and transport and turbulence models. However, I have not yet managed to compile swak4Foam with RapidCFD, which would be nice too. But for now, this is fully sufficient and I am very happy with the performance on one GPU! =)

But as I have two Tesla K80s at hand, I wanted to ask if there is a possibility from within RapidCFD to assign a job to a certain GPU. I do not want simultaneous jobs running on the same GPU, but rather to distribute them... Similar to running a decomposed case with the normal OpenFOAM distribution and assigning certain machines on which to run the processes...
Is there any such thing implemented in RapidCFD, or do I need to assign this before submitting a job, outside of RapidCFD?

I hope my question is clear enough?!

Thanks a lot for any hints!
Johannes
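One common GPU-pinning approach, sketched here as a plain shell example: it relies on the standard NVIDIA CUDA_VISIBLE_DEVICES environment variable rather than on any RapidCFD-specific option, so each process only ever sees the device it was given.

# Run one job pinned to GPU 0 and another pinned to GPU 1
CUDA_VISIBLE_DEVICES=0 simpleFoam -case caseA > logA 2>&1 &
CUDA_VISIBLE_DEVICES=1 simpleFoam -case caseB > logB 2>&1 &

Note that each Tesla K80 board contains two GPUs, so two K80 boards appear to CUDA as four devices (0 to 3).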

DynamicMesh

My colleague and I noticed that the lnInclude directory is missing in dynamicFvMesh. Unfortunately, due to this, dynamicMesh is not compiled properly. Could you please add this directory?
/opt/RapidCFD-dev/src/dynamicFvMesh has only 8 directories, whereas OpenFOAM has 9.

Thrust system error at the end

Hi,
I was able to run the cavity case with icoFoam solver. However, after convergence is reached, the following error occurs:
terminate called after throwing an instance of 'thrust::system::system_error'
what(): device free failed: driver shutting down

This happens after the simulation has completed. Any idea why this is happening? Any help would be appreciated.

Regards,
Manasi

How to use the icoFoam command

Hello,

I know that this is trivial; however, I've managed to build RapidCFD and I'm now trying to run the icoFoam command. Could someone please advise me how to use it? I assume that I can start a parallel process on each GPU card -- is that correct? With standard OF I usually use 16 parallel processes for the benchmark. I've tried a few variations with no success so far, for example...

mpirun -n 2 icoFoam -devices 0x2000 0x8b00

Furthermore, am I right to say that I should initially use OF to run "blockMesh" and "decomposePar"? Not being an OF user, I'm a bit at sea.

Best regards,
David.
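For reference, the usual OpenFOAM-style workflow for a decomposed run, written out as a hedged shell sketch. It assumes the case contains a system/decomposeParDict requesting 2 subdomains; the -devices syntax tried above is left aside here:

blockMesh                        # build the mesh
decomposePar                     # split the case into 2 processor directories
mpirun -np 2 icoFoam -parallel   # one MPI rank per GPU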

const cache instead of texture cache

Hello,

In commit #41 we eliminated the read-only data cache capability for CC >= 3.5 in order to allow execution of applications built with an -arch=sm_xx flag greater than xx=30. This speeds up application startup on GPUs with CC > 3.x.

In reviewing this PR, Daniel wrote the following idea:

I was thinking about this way of resolving the issue of CC >= 3.5 and I think that we could try to keep the const cache access (__ldg instruction) instead of using the texture cache. This would require checking CC version at run-time and skipping creation and destruction of texture object for CC >= 3.5.

I may have implemented part of this idea, but I have not thoroughly tested to warrant a pull request. But I am interested in discussing with anyone interested. The changes are at https://github.com/TonkomoLLC/RapidCFD-dev/commit/d48e6509891f84fee18f66cdf1ca505014bc8ea3

Implemented:

  • Use of the texture cache is disabled in the lduMatrix math (i.e., creation and destruction of the texture object is disabled)
  • Pointers qualified with __restrict__ to indicate they should be read through the read-only cache

Not implemented:

  • Use of the texture object for CC <= 3.5. I am first testing this without the conditional on texture vs. const cache depending on the __CUDA_ARCH__ value
  • I am a little uncertain where to apply the intrinsic __ldg() function in place of a normal pointer dereference to force a load to go through the read-only data cache. Is the use of __restrict__-qualified pointers sufficient?

So far the proposed edits compile and the resulting application executes.

I welcome advice and feedback.
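As a reference point for the discussion, here is a minimal CUDA kernel sketch (not taken from RapidCFD) showing the two mechanisms being compared: a const __restrict__ pointer, which lets the compiler route loads through the read-only cache on CC >= 3.5, and an explicit __ldg() load of the same data guarded by __CUDA_ARCH__:

#include <cuda_runtime.h>

__global__ void axpy(const double* __restrict__ x,
                     double* __restrict__ y,
                     double a,
                     int n)
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n)
    {
#if __CUDA_ARCH__ >= 350
        // Explicit read-only-cache load (available from CC 3.5 on)
        y[i] = a*__ldg(&x[i]) + y[i];
#else
        // On older parts the const __restrict__ qualification alone is the hint
        y[i] = a*x[i] + y[i];
#endif
    }
}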

Compilation Error Ubuntu 18.04 and CUDA 10.1

Hello everyone,

I'm facing a compilation issue with RapidCFD on Ubuntu 18.04 using CUDA 10.1; maybe someone can help me.
Some files compile well, others with warnings, and some do not compile at all due to errors with an "unsigned int" class (see attached file Error1).

Also, the application libraries do not compile at all due to a lot of "undefined reference" errors (see attached file Error2; sorry for the French error lines).

It looks a little bit tricky to me; maybe I should use another version of CUDA? The problem is that on the Nvidia website only CUDA 10 and CUDA 10.1 are referenced for Ubuntu 18.04.

Thanks for your help.

Files:
Error1.txt
Error2.txt

Compiling custom application

Dear all,

I realise this project is no longer actively developed, but I am strongly interested in it.
If I manage to solve these issues, this project could be of tremendous help for our research...

I managed to compile RapidCFD and can use the included solvers. But I need to compile a custom application and have not managed to do this yet.
Here is the log with the errors I get when compiling the solver with wmake:
log_wmake.txt

These are the first few lines of the code:
newResistanceFoam.C_firstLines.txt
I think the errors arise because DataEntry.H (included in line 37, from /opt/RapidCFD/RapidCFD-dev/src/OpenFOAM/primitives/functions/DataEntry/DataEntry) and sstream (in line 50) are not included properly.
Compiling the identical solver code with OpenFOAM 2.3.1 works fine, so I am quite stuck here. To put RapidCFD to good use, I am strongly dependent on this.

Can anybody tell me how to fix this?
Thanks a lot!!

simFlow crash

I have installed OpenFOAM v1612+ on CentOS 7.4 successfully, and my server has four Tesla K80 GPUs, so I installed NVIDIA driver version 384.66 and CUDA 8.0. But I get the following error after installing simFlow 3.1. How can I fix this?

[main] INFO simflow.core.properties.BasicProperties - Properties file '/root/.simflow/system.properties' not found.
[main] INFO simflow.core.properties.BasicProperties - Properties file '/root/.simflow/user.properties' not found.
[main] DEBUG simflow.core.updates.Updates - No updates were found.
extensionStr == null

DefaultRenderingErrorListener.errorOccurred:
CONTEXT_CREATION_ERROR: Renderer: Error creating Canvas3D graphics context
graphicsDevice = X11GraphicsDevice[screen=0]
canvas = simflow.java3D.engine3D.view.LightCanvas3D$InternalCanvas3D[canvas0,0,0,600x600,invalid]

Provide ThirdParty-*

Please provide a ThirdParty-* package to build RapidCFD.

I've built RapidCFD successfully with OpenFOAM's ThirdParty-2.3.1, because RapidCFD is a fork of OpenFOAM-2.3.1. But I also needed to download OpenMPI 1.8.4 (ThirdParty-2.3.1's OpenMPI is 1.6.5).

We need a list of which versions of the ThirdParty libraries we should use to build and use RapidCFD.

Thanks.

running RCFD in parallel

Dear all,

finally I got around to actually trying to run RCFD in parallel on 2 GPUs!
I loaded OpenMPI (version 1.8.4), which is located in /opt/RapidCFD/ThirdParty-dev/openmpi-1.8.4, where it should be installed according to here.
I ran

mpirun -np 2 pisoFoam -parallel

and got the following error message:

--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting

--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting


Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[6656,1],0]
Exit code: 1

Can you tell me, what I have done wrong?

I fear it might be that I have not considered the WM_NCOMPROCS variable in the build process of RapidCFD...?
Can this be fixed afterwards somehow, without recompiling everything?

Thanks a lot for all hints!!

Best regards
Johannes

PS: sorry for the changes in font size; I don't know where this came from...
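The "dummy Pstream" message means the solvers were linked against the serial placeholder library rather than an MPI-enabled Pstream. A hedged shell sketch of the usual OpenFOAM-style remedy, assuming RapidCFD keeps the standard OpenFOAM-2.3.1 layout (src/Pstream/Allwmake, etc/bashrc, WM_MPLIB); verify the variable names in your own etc/bashrc before running this:

# 1. In $WM_PROJECT_DIR/etc/bashrc, make sure WM_MPLIB is set to OPENMPI (not DUMMY)
# 2. Re-source the environment so the change takes effect
source $WM_PROJECT_DIR/etc/bashrc
echo $WM_MPLIB                          # should print OPENMPI
# 3. Rebuild the Pstream library (and then relink the solvers that use it)
cd $WM_PROJECT_DIR/src/Pstream && ./Allwmake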

With which version of OF is it compatible?

I'm using OF6 right now, but I need to speed up my simulations, and I can't find any info about the base version of OF for RapidCFD.
Right now I'm using pimpleDyMFoam or pimpleFoam (they combined the two solvers in OF6), with the 6-DoF solid body motion solver.
Is this supported in RapidCFD?

Cannot build RapidCFD-dev

Hello,

I'm trying to build RapidCFD on an Intel cluster, and I'm seeing a large number of errors, that is, errors relating to ambiguous functions. For example:

error: call of overloaded 'sin(double&)' is ambiguous

I'm building the package as advised in the web documentation, that is, using OpenMPI 1.8.4, GNU compilers 4.8.1 and CUDA 6.5.14. Could someone please advise me how to eliminate these errors?

Best regards -- David.

Case crashed when mesh is refined locally

Dear All,

I have an impinging jet case with refined grids in the core region and coarse ones in the far field (generated with the OpenFOAM refineMesh utility). The mesh has no warnings or errors and everything is OK.

No version of OpenFOAM has a problem running the case (with all the same parameters and settings). Could anyone explain why the case gets core dumped or blows up (the number of iterations reaches as high as 10^4 and the residuals become NaN) when using RapidCFD?

Thank you in advance.

Kind regards,
Henry

How to install RapidCFD

I apologize for my level of ignorance beforehand.

I tried running the Allwmake script and it gave me errors saying I wasn't in the $WM_PROJECT_DIR folder, so I copied all the files there and ran it again:

xxx@x-ubuntu:/opt/openfoam231$ sudo ./Allwmake ./Allwmake: 4: ./Allwmake: wmakeCheckPwd: not found Error: Current directory is not $WM_PROJECT_DIR The environment variables are inconsistent with the installation. Check the OpenFOAM entries in your dot-files and source them.

Any idea where I go from here?

Am I supposed to compile this some other way beforehand?

Thanks for any help you might provide.
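For anyone in the same spot, a hedged shell sketch of the usual way to set up the environment before running Allwmake. It assumes RapidCFD keeps OpenFOAM's etc/bashrc convention and that the RapidCFD sources live in their own directory rather than in /opt/openfoam231, which belongs to the stock OpenFOAM install; mixing the two trees causes exactly the inconsistency the error describes:

# Source the RapidCFD environment (path is an example), then build from its own tree
source ~/RapidCFD/RapidCFD-dev/etc/bashrc
cd $WM_PROJECT_DIR        # should now point at the RapidCFD-dev directory
./Allwmake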

some error of implement temperature coupled solver into rapidCFD

Hi Daniel,
I want to implement MRconjugateHeatFoam in RapidCFD to solve coupled fluid and solid temperature fields (the MRconjugateHeatFoam source code is here: http://www.personal.psu.edu/dab143/OFW6/Training/OFW6_ConjugateHeatTransferTraining.tgz).
MRconjugateHeatFoam uses the Dirichlet–Neumann method to solve the temperature coupling.

the interpolation code:
// Calculate interpolated patch field
const polyPatch& ownPatch = patch().patch();
const polyPatch& nbrPatch = coupleManager_.neighbourPatch().patch();
const Foam::fvPatchField<Foam::scalar>& Tnbr = coupleManager_.neighbourPatchField();
patchToPatchInterpolation interpolator(nbrPatch, ownPatch);
scalargpuField Tnbr2own = interpolator.faceInterpolate(Tnbr); //(line 118)

The error:
coupledFvPatchFields/regionCoupleTemperature/regionCoupleTemperatureFvPatchScalarField.C(118): error: no instance of overloaded function "Foam::PatchToPatchInterpolation<FromPatch, ToPatch>::faceInterpolate [with FromPatch=Foam::PrimitivePatch<Foam::face, Foam::SubList, const Foam::pointField &, Foam::point>, ToPatch=Foam::PrimitivePatch<Foam::face, Foam::SubList, const Foam::pointField &, Foam::point>]" matches the argument list
argument types are: (const Foam::fvPatchField<Foam::scalar>)
object type is: Foam::patchToPatchInterpolation

I compiled MRconjugateHeatFoam successfully in OpenFOAM, and I don't think the OpenFOAM faceInterpolate function has this Foam::patchToPatchInterpolation object type problem.
Is the RapidCFD patchToPatchInterpolation::faceInterpolate function wrong, or is the way I implemented MRconjugateHeatFoam in RapidCFD wrong?

Run-time error occurred.

I downloaded the code and finished the "Allwmake" task without error. I then chose a case named "cavity" and finished the pre-processing on my own computer, which has OpenFOAM installed [OpenFOAM-5.x (see www.OpenFOAM.org)], before uploading the case to the workstation. Then I started "icoFoam" in the case directory. When checking the log file, I found this message:

/*---------------------------------------------------------------------------*\
| RapidCFD by simFlow (sim-flow.com)                                          |
\*---------------------------------------------------------------------------*/
Build : dev
Exec : icoFoam
Date : Jul 05 2018
Time : 10:57:03
Host : "ubunte1604-System-Product-Name"
PID : 12503
Case : /home/ubunte1604/cavity/cavity
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading transportProperties

--> FOAM FATAL IO ERROR:
wrong token type - expected word, found on line 18 the punctuation token '['

file: /home/ubunte1604/cavity/cavity/constant/transportProperties.nu at line 18.

From function operator>>(Istream&, word&)
in file primitives/strings/word/wordIO.C at line 74.

FOAM exiting

I have no idea how to handle this. Do you have any advice?
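One detail worth checking: RapidCFD is based on OpenFOAM-2.3.1, whose dictionary reader expects a dimensioned scalar entry to carry its name before the dimension brackets, whereas OpenFOAM-5.x case files drop that name, which matches a "expected word, found '['" complaint exactly. A sketch of the 2.3.1-style entry (the value shown is the stock cavity viscosity; adjust to your case):

// constant/transportProperties, OpenFOAM-2.3.1 style
nu              nu [ 0 2 -1 0 0 0 0 ] 0.01;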

Compiling a linear solver for RapidCFD

Dear all,

I am able to successfully run an impinging jet with either sonicFoam or rhoCentralFoam on every version of OpenFOAM with a second-order backward ddt scheme. However, with the same settings it blows up with RapidCFD, while simulations with first-order Euler don't blow up. Could anyone tell me the reason for that? The following is the log file when it blows up with sonicFoam. I am very curious about the reason.

Time = 0.0296009206
Courant Number mean: 9.08691529e-05 max: 0.847753188
deltaT = 1.52720907e-09
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
PIMPLE: iteration 1
AINVPBiCG:  Solving for Ux, Initial residual = 1.80102291e-05, Final residual = 2.61356717e-09, No Iterations 1
AINVPBiCG:  Solving for Uy, Initial residual = 1.57130062e-05, Final residual = 1.985263e-09, No Iterations 1
AINVPBiCG:  Solving for Uz, Initial residual = 1.14092275e-06, Final residual = 8.6870831e-11, No Iterations 1
AINVPBiCG:  Solving for e, Initial residual = 2.14036921e-06, Final residual = 9.08431006e-09, No Iterations 2
AINVPBiCG:  Solving for p, Initial residual = 1.67301795e-06, Final residual = 2.60343881e-13, No Iterations 3
AINVPBiCG:  Solving for p, Initial residual = 2.10451776e-09, Final residual = 1.97094049e-13, No Iterations 2
AINVPBiCG:  Solving for p, Initial residual = 5.47285299e-11, Final residual = 6.61428924e-13, No Iterations 1
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 5.72853346e-13, global = 4.27003357e-15, cumulative = 8.38809636e-09
AINVPBiCG:  Solving for p, Initial residual = 1.06067463e-11, Final residual = 1.97521459e-13, No Iterations 1
AINVPBiCG:  Solving for p, Initial residual = 6.85430174e-13, Final residual = 6.85430174e-13, No Iterations 0
AINVPBiCG:  Solving for p, Initial residual = 6.85430174e-13, Final residual = 6.85430174e-13, No Iterations 0
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 5.93592627e-13, global = 6.81699179e-16, cumulative = 8.38809704e-09
PIMPLE: iteration 2
AINVPBiCG:  Solving for Ux, Initial residual = 2.70595393e-06, Final residual = 2.60834174e-10, No Iterations 1
AINVPBiCG:  Solving for Uy, Initial residual = 2.36108118e-06, Final residual = 2.29289954e-10, No Iterations 1
AINVPBiCG:  Solving for Uz, Initial residual = 1.71578376e-07, Final residual = 1.12537603e-11, No Iterations 1
AINVPBiCG:  Solving for e, Initial residual = 3.68813664e-07, Final residual = 7.83420425e-10, No Iterations 1
AINVPBiCG:  Solving for p, Initial residual = 1.01589758e-07, Final residual = 2.76297758e-13, No Iterations 4
AINVPBiCG:  Solving for p, Initial residual = 1.52272642e-10, Final residual = 7.77339879e-15, No Iterations 2
AINVPBiCG:  Solving for p, Initial residual = 4.53278794e-12, Final residual = 2.67769345e-13, No Iterations 1
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 2.32013374e-13, global = 1.45609034e-16, cumulative = 8.38809719e-09
AINVPBiCG:  Solving for p, Initial residual = 9.61071719e-13, Final residual = 9.61071719e-13, No Iterations 0
AINVPBiCG:  Solving for p, Initial residual = 9.61071757e-13, Final residual = 9.61071757e-13, No Iterations 0
AINVPBiCG:  Solving for p, Initial residual = 9.61071757e-13, Final residual = 9.61071757e-13, No Iterations 0
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 8.32165988e-13, global = 1.45548797e-16, cumulative = 8.38809733e-09
PIMPLE: iteration 3
AINVPBiCG:  Solving for Ux, Initial residual = 4.77860563e-07, Final residual = 5.88858595e-11, No Iterations 1
AINVPBiCG:  Solving for Uy, Initial residual = 4.16943119e-07, Final residual = 5.20094158e-11, No Iterations 1
AINVPBiCG:  Solving for Uz, Initial residual = 3.02972907e-08, Final residual = 2.45823239e-12, No Iterations 1
AINVPBiCG:  Solving for e, Initial residual = 8.49724449e-08, Final residual = 3.81324598e-10, No Iterations 1
AINVPBiCG:  Solving for p, Initial residual = 7.08564117e-08, Final residual = 2.22933184e-13, No Iterations 2
AINVPBiCG:  Solving for p, Initial residual = 2.24909466e-10, Final residual = 4.29480385e-13, No Iterations 1
AINVPBiCG:  Solving for p, Initial residual = 6.14030689e-12, Final residual = 4.5203987e-14, No Iterations 1
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 3.93184204e-14, global = 1.76842788e-16, cumulative = 8.38809751e-09
AINVPBiCG:  Solving for p, Initial residual = 1.02784387e-12, Final residual = 2.00518485e-14, No Iterations 1
AINVPBiCG:  Solving for p, Initial residual = 1.27177467e-13, Final residual = 1.27177467e-13, No Iterations 0
AINVPBiCG:  Solving for p, Initial residual = 1.27177467e-13, Final residual = 1.27177467e-13, No Iterations 0
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 1.10260591e-13, global = -9.73590278e-17, cumulative = 8.38809741e-09
ExecutionTime = 8597.66 s  ClockTime = 9915 s

fieldAverage fieldAverage1 output:
    Calculating averages

Time = 0.0296009221
Courant Number mean: 9.08692565e-05 max: 0.847419612
deltaT = 1.52720907e-09
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
PIMPLE: iteration 1
AINVPBiCG:  Solving for Ux, Initial residual = 1.8010536e-05, Final residual = 2.63100383e-09, No Iterations 1
AINVPBiCG:  Solving for Uy, Initial residual = 1.5714143e-05, Final residual = 1.99442975e-09, No Iterations 1
AINVPBiCG:  Solving for Uz, Initial residual = 1.14100493e-06, Final residual = 8.66547158e-11, No Iterations 1
AINVPBiCG:  Solving for e, Initial residual = 2.14041355e-06, Final residual = 0.000193868013, No Iterations 1001
AINVPBiCG:  Solving for p, Initial residual = 7.461919e-05, Final residual = 7.5454373e-13, No Iterations 27
AINVPBiCG:  Solving for p, Initial residual = 2.08624256e-07, Final residual = 5.47671791e-13, No Iterations 17
AINVPBiCG:  Solving for p, Initial residual = 6.36598324e-09, Final residual = 8.12733975e-13, No Iterations 13
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 7.03596747e-13, global = -6.92814011e-14, cumulative = 8.38802813e-09
AINVPBiCG:  Solving for p, Initial residual = 2.1415655e-07, Final residual = 9.52304769e-13, No Iterations 797
AINVPBiCG:  Solving for p, Initial residual = 4.81949086e-09, Final residual = 9.64192016e-13, No Iterations 621
AINVPBiCG:  Solving for p, Initial residual = 3.77386935e-09, Final residual = 7.94685804e-13, No Iterations 542
diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
time step continuity errors : sum local = 6.87890841e-13, global = -1.01847101e-14, cumulative = 8.38801794e-09
PIMPLE: iteration 2
AINVPBiCG:  Solving for Ux, Initial residual = 2.61143693e-05, Final residual = 2.27350075e-09, No Iterations 4
AINVPBiCG:  Solving for Uy, Initial residual = 2.31913083e-05, Final residual = 1.96075628e-09, No Iterations 4
AINVPBiCG:  Solving for Uz, Initial residual = 3.87304971e-07, Final residual = 4.36549524e-09, No Iterations 3
AINVPBiCG:  Solving for e, Initial residual = 0.000189027569, Final residual = 6.87705471, No Iterations 1001
AINVPBiCG:  Solving for p, Initial residual = -nan, Final residual = -nan, No Iterations 1001
AINVPBiCG:  Solving for p, Initial residual = -nan, Final residual = -nan, No Iterations 1001

I was wondering if this is a problem with the linear solver, so I tried to convert PBiCGStab to RapidCFD, but I am facing a problem with that.

matrices/lduMatrix/solvers/PBiCGStabgpu/PBiCGStabgpu.C(212): error: no instance of overloaded function "thrust::transform" matches the argument list
            argument types are: (const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, Foam::rAPlusBetapAMinusOmegaAyAFunctor)

matrices/lduMatrix/solvers/PBiCGStabgpu/PBiCGStabgpu.C(295): error: no instance of overloaded function "thrust::transform" matches the argument list
            argument types are: (const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, const thrust::detail::normal_iterator<thrust::device_ptr<Foam::scalar>>, Foam::psiPlusAlphayAPlusOmegazAFunctor)

/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/thrust/detail/type_traits.h(413): error: incomplete type is not allowed
          detected during:
            instantiation of class "thrust::detail::eval_if<true, Then, Else> [with Then=thrust::detail::result_of_adaptable_function<Foam::outerProductFunctor<Foam::scalar, Foam::scalar> (Foam::scalar), void>, Else=thrust::detail::identity_<thrust::use_default>]" 
/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/thrust/iterator/detail/iterator_adaptor_base.h(48): here
            instantiation of class "thrust::detail::ia_dflt_help<T, DefaultNullaryFn> [with T=thrust::use_default, DefaultNullaryFn=thrust::detail::result_of_adaptable_function<Foam::outerProductFunctor<Foam::scalar, Foam::scalar> (Foam::scalar), void>]" 
/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/thrust/iterator/detail/transform_iterator.inl(41): here
            instantiation of class "thrust::detail::transform_iterator_base<UnaryFunc, Iterator, Reference, Value> [with UnaryFunc=Foam::outerProductFunctor<Foam::scalar, Foam::scalar>, Iterator=thrust::detail::normal_iterator<thrust::device_ptr<const Foam::scalar>>, Reference=thrust::use_default, Value=thrust::use_default]" 
/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/thrust/iterator/transform_iterator.h(191): here
            instantiation of class "thrust::transform_iterator<AdaptableUnaryFunction, Iterator, Reference, Value> [with AdaptableUnaryFunction=Foam::outerProductFunctor<Foam::scalar, Foam::scalar>, Iterator=thrust::detail::normal_iterator<thrust::device_ptr<const Foam::scalar>>, Reference=thrust::use_default, Value=thrust::use_default]" 
/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/thrust/iterator/transform_iterator.h(333): here
            instantiation of "thrust::transform_iterator<AdaptableUnaryFunction, Iterator, thrust::use_default, thrust::use_default> thrust::make_transform_iterator(Iterator, AdaptableUnaryFunction) [with AdaptableUnaryFunction=Foam::outerProductFunctor<Foam::scalar, Foam::scalar>, Iterator=thrust::detail::normal_iterator<thrust::device_ptr<const Foam::scalar>>]" 
lnInclude/gpuFieldFunctions.C(541): here
            instantiation of "Foam::scalar Foam::sumSqr(const Foam::gpuList<Type> &) [with Type=Foam::scalar]" 
lnInclude/gpuFieldFunctions.C(655): here
            instantiation of "Foam::scalar Foam::gSumSqr(const Foam::gpuList<Type> &, int) [with Type=Foam::scalar]" 
matrices/lduMatrix/solvers/PBiCGStabgpu/PBiCGStabgpu.C(281): here

3 errors detected in the compilation of "/tmp/tmpxft_00003f76_00000000-7_PBiCGStabgpu.cpp1.ii".
make: *** [Make/linux64NvccDPOpt/PBiCGStabgpu.o] Error 2

This is the code around line 212 where it goes wrong:

/* Openfoam code
                for (label cell=0; cell<nCells; cell++)
                {
                    pAPtr[cell] =
                        rAPtr[cell] + beta*(pAPtr[cell] - omega*AyAPtr[cell]);
                }
*/ My conversion to RapidCFD

scalargpuField AyA(PCGCache::AyA(matrix_.level(),nCells),nCells);

                thrust::transform
                (
                    rA.begin(),
                    rA.end(),
                    pA.begin(),
                    AyA.begin(),
                    pA.begin(),
                    rAPlusBetapAMinusOmegaAyAFunctor(beta,omega) //1 
                );

This is in PCGCache.C
PtrList<scalargpuField> PCGCache::AyACache(1);

This is in lduMatrixSolverFunctors.H

struct rAPlusBetapAMinusOmegaAyAFunctor
{
    const scalar beta;
    const scalar omega;

    rAPlusBetapAMinusOmegaAyAFunctor(scalar _beta,scalar _omega): beta(_beta),omega(_omega) {}

    __HOST____DEVICE__
    scalar operator()(const scalar& rA, const scalar& pA, const scalar& AyA)
    {
            return rA + beta*pA - beta*omega*AyA;
    }
};

Could anyone help me out? Thank you very much in advance!
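For reference, thrust::transform only has unary and binary overloads, so passing three input ranges plus an output does not match any of them. A minimal, self-contained Thrust sketch (not RapidCFD's field classes; names and values are illustrative) of one way to express the three-input update pA = rA + beta*(pA - omega*AyA): zip the inputs into one iterator and use a unary functor on the tuple.

#include <thrust/device_vector.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/transform.h>
#include <thrust/tuple.h>

struct rAPlusBetapAMinusOmegaAyA
{
    const double beta, omega;
    rAPlusBetapAMinusOmegaAyA(double b, double w) : beta(b), omega(w) {}

    __host__ __device__
    double operator()(const thrust::tuple<double,double,double>& t) const
    {
        // t = (rA, pA, AyA)
        return thrust::get<0>(t)
             + beta*(thrust::get<1>(t) - omega*thrust::get<2>(t));
    }
};

int main()
{
    const int n = 1000;
    thrust::device_vector<double> rA(n, 1.0), pA(n, 2.0), AyA(n, 3.0);

    thrust::transform
    (
        thrust::make_zip_iterator(thrust::make_tuple(rA.begin(), pA.begin(), AyA.begin())),
        thrust::make_zip_iterator(thrust::make_tuple(rA.end(),   pA.end(),   AyA.end())),
        pA.begin(),                                  // output overwrites pA
        rAPlusBetapAMinusOmegaAyA(0.5, 0.1)
    );
    return 0;
}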

How to compile compressibleInterFoam?

Hello!

I would like to use compressibleInterFoam, but I get the following error when running Allwmake:
"
twoPhaseMixtureThermo.H(246): error: return type is not identical to nor covariant with return type "Foam::tmp<Foam::gpuField<Foam::scalar>>" of overridden virtual function "Foam::basicThermo::kappa"
"
Is this a general compilation error during the installation?
Best regards,
Max

discretization Schemes: gradScheme

Dear all,
I am glad my test case with a custom solver now produces consistent results between OF 2.3.1 and RapidCFD!
However, I have the problem that my mesh requires a discretization scheme which seems to be missing in RapidCFD.
OF 2.3.1 works fine if I set, in fvSchemes:
gradSchemes
{
    default    cellLimited leastSquares 1; // for example
}
With RapidCFD I get an error message saying that only one scheme (Gauss) can be applied!
However, with Gauss linear (and here it doesn't make any difference whether I choose linear, limited or anything else) my case does not converge...
Am I correct that no discretization schemes other than Gauss are implemented in RapidCFD? Or is there another way to call these?
Thanks.

icoFoam Solver error

Hello,
I have just finished installing RapidCFD on my system. The RapidCFD build completed without any errors (there were a lot of warnings, though).
Now I am trying to use the icoFoam solver for the cavity application, but I am getting the following error:
/*---------------------------------------------------------------------------*\
| RapidCFD by simFlow (sim-flow.com)                                          |
\*---------------------------------------------------------------------------*/
Build : dev-f3775ac96129
Exec : icoFoam
Date : Oct 31 2018
Time : 17:40:28
Host : "marskepler"
PID : 12325
Error at lnInclude/gpuConfig.H:51
unknown error

Does anyone know how to fix this? Any help would be appreciated.

Regards,
Manasi
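One thing worth checking here, sketched as a small standalone CUDA program: the compile lines elsewhere in these issues show the build using -arch=sm_30, so printing the card's actual compute capability with cudaGetDeviceProperties (a standard CUDA runtime call) tells you whether the binary matches the hardware at all:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
    {
        std::printf("No usable CUDA device found\n");
        return 1;
    }
    for (int d = 0; d < count; ++d)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // Compare major.minor against the sm_xx value the code was built for
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    d, prop.name, prop.major, prop.minor);
    }
    return 0;
}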

Multi-GPU for different nodes?

Hi, I am currently working on an HPC cluster. Is it possible in RapidCFD to run a simulation on multiple GPUs that are located on different nodes?

Slow startup / floating point error

Hello,

I've compiled RapidCFD on Ubuntu 18.04 using CUDA 8.0 and GCC 5.

When I try to launch any solver, it hangs for about 10 minutes, then continues to read the case and starts the computation, then produces no output for another 10 or so minutes and crashes with a floating point exception like so:

/*---------------------------------------------------------------------------*\
| RapidCFD by simFlow (sim-flow.com)                                          |
\*---------------------------------------------------------------------------*/
Build  : dev-43f7a961ae97
Exec   : pisoFoam
Date   : May 22 2018
Time   : 16:57:50
Host   : "barelinux"
PID    : 21094
Case   : /home/artifth/CFD/nos
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

10 minutes later...

Create time

Create mesh for time = 0

Reading field p

Reading field U

Reading/calculating face flux field phi

Selecting incompressible transport model Newtonian
Selecting turbulence model type LESModel
Selecting LES turbulence model Smagorinsky
Selecting LES delta type cubeRootVol
SmagorinskyCoeffs
{
    ce              1.048;
    ck              0.094;
}


Starting time loop

Time = 1e-05

Courant Number mean: 0.0152485 max: 0.19152

Another wait time...

GAMG:  Solving for Ux, Initial residual = 1, Final residual = -nan, No Iterations 1000
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2   in "/lib/x86_64-linux-gnu/libc.so.6"
#3   in "/home/artifth/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/pisoFoam"
#4   in "/home/artifth/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/pisoFoam"
#5   in "/home/artifth/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/pisoFoam"
#6   in "/home/artifth/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/pisoFoam"
#7  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#8   in "/home/artifth/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/pisoFoam"
Floating point exception (core dumped) [translated from Russian]

The hardware I'm currently testing on is a GTX 1050 Ti; the driver version is 390.48. Any hints on what could be wrong?

Compilation error

Hi all,

I'm getting an error when reaching the solver compilation phase:

make[1]: Entering directory '/home/jony/RapidCFD/RapidCFD-dev/applications/solvers/basic/laplacianFoam' nvcc -Xptxas -dlcm=cg -m64 -arch=sm_30 -Dlinux64 -DWM_DP -Xcompiler -Wall -Xcompiler -Wextra -Xcompiler -Wno-unused-parameter -Xcompiler -Wno-vla -Xcudafe "--diag_suppress=null_reference" -Xcudafe "--diag_suppress=subscript_out_of_range" -Xcudafe "--diag_suppress=extra_semicolon" -Xcudafe "--diag_suppress=partial_override" -Xcudafe "--diag_suppress=implicit_return_from_non_void_function" -Xcudafe "--diag_suppress=virtual_function_decl_hidden" -O3 -DNoRepository -D__RESTRICT__='__restrict__' -I/home/jony/RapidCFD/RapidCFD-dev/src/finiteVolume/lnInclude -IlnInclude -I. -I/home/jony/RapidCFD/RapidCFD-dev/src/OpenFOAM/lnInclude -I/home/jony/RapidCFD/RapidCFD-dev/src/OSspecific/POSIX/lnInclude -Xcompiler -fPIC -cudart shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64NvccDPOpt/laplacianFoam.o -L/home/jony/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib \ -lfiniteVolume -lOpenFOAM -ldl -lm -o /home/jony/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/laplacianFoam /home/jony/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/libtriSurface.so: undefined reference to yyFlexLexer::yywrap()'
collect2: error: ld returned 1 exit status
/home/jony/RapidCFD/RapidCFD-dev/wmake/Makefile:149: recipe for target '/home/jony/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/laplacianFoam' failed
And the same error then occurs for all solvers.

my gcc version is gcc 5.4.0, my CUDA version is 8.0

Thank you for your help !

Porting to OpenCL

I was wondering if there is any chance that this might be ported to OpenCL or another vendor-neutral API. Not everybody can afford an NVIDIA GPU, and requiring one seems against the spirit of Free and Open Source software.

libfieldFunctionObjects.so: cellSource

Dear all,

I tried using a functionObject from the libfieldFunctionObjects library, which can be included from controlDict like this:

functions
{
    crossSection1_volAverage
    {
        type                cellSource; // this is what I am looking for!
        functionObjectLibs  ("libfieldFunctionObjects.so");
        enabled             true; // false;
        outputControl       timeStep;
        outputInterval      1;
        log                 false;
        valueOutput         true;
        source              cellZone;
        sourceName          crossSection1;
        operation           volAverage;
        fields
        (
            c
        );
    }
}

I am looking for the type cellSource. When I try to run the command I get the following error:

--> FOAM FATAL ERROR:
Unknown function type cellSource

Valid functions are :

7
(
fieldAverage
fieldCoordinateSystemTransform
fieldValueDelta
patchProbes
probes
sets
surfaces
)

I looked for libfieldFunctionObjects, realized it was compiled, and tried recompiling it. However, the type cellSource (together with others, e.g. faceSource) is not compiled when I run wclean libso; wmake libso in the directory /opt/RapidCFD/RapidCFD-dev/src/postProcessing/functionObjects/field.
I had a look at the file /opt/RapidCFD/RapidCFD-dev/src/postProcessing/functionObjects/field/Make/files
and realized that the lines including cellSource.C and faceSource.C were commented out (together with several others):

/*
fieldValues/faceSource/faceSource.C
fieldValues/faceSource/faceSourceFunctionObject.C
fieldValues/cellSource/cellSource.C
fieldValues/cellSource/cellSourceFunctionObject.C

...

*/
LIB = $(FOAM_LIBBIN)/libfieldFunctionObjects

I simply uncommented them and ran wclean libso; wmake libso, but got an error message as soon as these files were to be compiled (error message attached:
error_wmake_libso_cellSource.txt)

EDIT:
with the first error occurring

fieldValues/cellSource/cellSource.C(140): error: no instance of function template
"Foam::fieldValues::cellSource::filterField" matches the argument list
argument types are: (const Foam::DimensionedField<Foam::scalar, Foam::volMesh>)

fieldValues/cellSource/cellSource.C(140): error: no instance of overloaded function "Foam::gSum" matches the argument list
argument types are: (< error-type>)

Can you tell me how to fix this?

Thanks a lot for all hints!!

Best regards
Johannes

Can you provide the file "Adapting OpenFOAM for massively parallel GPU architecture"?

Hello Daniel Jasinski,

When I searched for GPU implementations of OpenFOAM, I found your lecture "Adapting OpenFOAM for massively parallel GPU architecture" on this website: https://www.esi-group.com/resources/abstract-openfoam2015-jasinski-atizar-adapting-openfoam-massively-parallel-gpu-architecture

However, I cannot find the whole file; only the abstract is available on the internet.
I cannot yet understand RapidCFD clearly. The file might help me understand it, so I would appreciate it if you could provide the whole lecture.
Thank you!

Error when starting applications

Hello everyone,

First of all I would like to thank you for all this great work; I think RapidCFD is going to be very useful for a lot of researchers.

I recently managed to compile RapidCFD on my laptop and want to test it on my forward-facing step flow case. But when I run simpleFoam, I get the following error:

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

terminate called after throwing an instance of 'thrust::system::system_error'
what(): function_attributes(): after cudaFuncGetAttributes: invalid device function
Aborted (core dumped)

Do you have any idea what I did wrong?
I found on the Internet that this can be due to my graphics card, but I wanted to ask the question here.
Also, I'm not an expert in OpenFOAM development or CUDA programming; I'm just an OF user.

By the way, here's my laptop configuration :
OS : Ubuntu 15.10
Graphic Card : Geforce GTX 670M
nvcc version : 6.5
gcc version : 5.2.1
CUDA version : 7.5

Thank you for your time.
Regards

Anthony

OpenCL porting?

I was wondering if there is any chance that we might get an OpenCL port? It is cross-platform and vendor-neutral, so it respects the values of the Free and Open Source world.
