Comments (20)
from elmer-elmag.
I believe that in the plain circuit case the only issue was
Linear System Complex = Logical False
which should be
Linear System Complex = Logical True
The harmonic cases in the CircuitBuilder examples tend to have this setting because it applies to macOS (which is what I used to build the examples). If you're working on Linux, you should set Linear System Complex = Logical True. Somehow on macOS you still get the results for the imaginary part; I don't remember the reason, it's been a long time since I created those examples. However, I just tested the harmonic 3D open coil example, and I found that the switch needs to be kept as False on macOS, and the results are correct.
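As a sketch, the switch lives in the linear-system part of the solver block of the .sif file; the solver number, equation name, and surrounding lines below are illustrative, not taken from the original case files:

```
Solver 1
  Equation = "MGDynamics"
  ! On Linux, solve the harmonic problem as a complex system:
  Linear System Complex = Logical True
  ! The macOS-built examples were shipped with the switch off:
  ! Linear System Complex = Logical False
End
```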
BR,
Jon
from elmer-elmag.
Linear System ILU Order = 0
Linear System Convergence Tolerance = 1.e-6
I'd recommend still adding the former of the two lines above (it doesn't help much otherwise); you can then tighten the convergence tolerance, as in the latter line, and still get results in a reasonable number of iterations (~100 or so).
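A minimal sketch of the resulting linear-system block; only the two keyword lines quoted above come from this thread, the surrounding iterative-solver lines are assumptions for context:

```
  ! Assumed iterative setup; adjust to your case:
  Linear System Solver = Iterative
  ! Keep the natural ordering in the ILU factorization:
  Linear System ILU Order = 0
  ! Stricter tolerance, still ~100 iterations:
  Linear System Convergence Tolerance = 1.e-6
```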
from elmer-elmag.
Regarding the case_layer-conductivity.sif:
Output without mpi:
(...)
MapBodiesAndBCs: Maximum initial boundary index: 8
CreateIntersectionBCs: Number of intersection BCs to determine: 1
CreateIntersectionBCs: Number of candidate intersection parents: 12324
CreateIntersectionBCs: Allocated for 12 new 1D boundary elements!
ReleaseMeshFaceTables: Releasing number of faces: 459653
(...)
Output with mpi:
(...)
MapBodiesAndBCs: Maximum initial boundary index: 8
CreateIntersectionBCs: Number of intersection BCs to determine: 1
CreateIntersectionBCs: Number of candidate intersection parents: 3153
CreateIntersectionBCs: Could not find any additional interface elements!
ReleaseMeshFaceTables: Releasing number of faces: 115241
(...)
from elmer-elmag.
Hi, this might not have been properly parallelized yet, as it is a very new feature. So this is what you get from partition 0. If you set Simulation :: Max Output Partition = 32, for example, you should be able to see whether something is created in the other partitions.
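As a sketch, the keyword goes in the Simulation section of the .sif file (the rest of the section is illustrative):

```
Simulation
  ! Echo solver messages from up to 32 MPI partitions,
  ! instead of only partition 0:
  Max Output Partition = 32
End
```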
from elmer-elmag.
Hi Peter, indeed, the IntersectionBCs are allocated correctly
(...)
CreateIntersectionBCs: Part2: Number of intersection BCs to determine: 1
CreateIntersectionBCs: Part2: Number of candidate intersection parents: 3079
CreateIntersectionBCs: Part0: Number of intersection BCs to determine: 1
CreateIntersectionBCs: Part0: Number of candidate intersection parents: 3153
CreateIntersectionBCs: Part1: Number of intersection BCs to determine: 1
CreateIntersectionBCs: Part1: Number of candidate intersection parents: 3051
CreateIntersectionBCs: Part3: Could not find any additional interface elements!
ReleaseMeshFaceTables: Part3: Releasing number of faces: 116869
CreateIntersectionBCs: Part1: Allocated for 12 new 1D boundary elements!
(...)
Nevertheless, I think there is a problem with this feature, as I don't see any current in the result. I attach the complete log; maybe you can read something from it: layercond_mpi_fulloutput.log
Have there been any recent changes to that function? I used a solver from October 7th to run the case.
from elmer-elmag.
Hi @raback, I tried to run the case again with a newer solver. The problem persists but I get a more elaborate error message:
(...)
OptimizeBandwidth: Part1: ---------------------------------------------------------
OptimizeBandwidth: Part1: Computing matrix structure for: heat equation...Part1: done.
OptimizeBandwidth: Part1: Half bandwidth without optimization: 10655
OptimizeBandwidth: Part1:
OptimizeBandwidth: Part1: Bandwidth Optimization ...Part1: done.
OptimizeBandwidth: Part1: Half bandwidth after optimization: 1085
OptimizeBandwidth: Part1: ---------------------------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_Init]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_bulk]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver]
ElmerSolver: Part1: Number of timesteps to be saved: 1
ListTagCount: Part1: Found number of normalized keywords: 1
CalculateEntityWeights: Part1: Computing weights for the mesh entities
ListSetParameters: Part1: Altered number of parameters: 1
BC weight: 1 4.6640245897726904E-002
BC weight: 2 0.41960528948285047
BC weight: 3 7.5012508973303668E-005
BC weight: 4 7.5001405342127765E-005
BC weight: 5 3.1059753213902308E-002
BF weight: 1 5.6520471742070003E-004
Body weight: 1 1.1257033199533879E-004
ListSetParameters: ListSetParameters: Body weight: 2 5.6520471742070003E-004
Body weight: 3 1.9945457442145341E-002
Mat weight: 1 1.1257033199533879E-004
Part3: Altered number of parameters: 1
Part2: Altered number of parameters: 1
Mat weight: 2 5.6520471742070003E-004
Mat weight: 3 1.9945457442145341E-002
ListSetParameters: Part0: Altered number of parameters: 1
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7fab4119ad21 in ???
#1 0x7fab41199ef5 in ???
#2 0x7fab40fcb08f in ???
#3 0x7fab41862ecf in __parallelutils_MOD_parallelinitmatrix
at /home/elmer/elmerfem/fem/src/ParallelUtils.F90:145
#4 0x7fab416f72d3 in __mainutils_MOD_singlesolver
at /home/elmer/elmerfem/fem/src/MainUtils.F90:5246
#5 0x7fab4170c538 in __mainutils_MOD_solveractivate
at /home/elmer/elmerfem/fem/src/MainUtils.F90:5568
#6 0x7fab4170df7e in solvecoupled
at /home/elmer/elmerfem/fem/src/MainUtils.F90:3199
#7 0x7fab4170f35e in __mainutils_MOD_solveequations
at /home/elmer/elmerfem/fem/src/MainUtils.F90:2899
#8 0x7fab41985a13 in execsimulation
at /home/elmer/elmerfem/fem/src/ElmerSolver.F90:3079
#9 0x7fab4198ce4a in elmersolver_
at /home/elmer/elmerfem/fem/src/ElmerSolver.F90:607
#10 0x5609c50d93f5 in solver
at /home/elmer/elmerfem/fem/src/Solver.F90:57
#11 0x5609c50d915e in main
at /home/elmer/elmerfem/fem/src/Solver.F90:34
Here are the complete logs: layercond_mpi_allranks.log
Do you have any idea how I could fix that problem?
EDIT: I use a solver built from this commit: ElmerCSC/elmerfem@d3b6930
from elmer-elmag.
I posted this issue here: http://www.elmerfem.org/forum/viewtopic.php?t=7916
from elmer-elmag.
I seem to have issues with this even in serial. Does it work for you? I'm trying to narrow down where it broke. We should add this feature set to the tests.
from elmer-elmag.
You're right, I didn't try that again. With an older version (ElmerCSC/elmerfem@0ff29c7, https://github.com/nemocrys/opencgs/blob/main/Docker/Dockerfile) it works well in serial but fails with mpi (as reported above: #10 (comment)).
from elmer-elmag.
Update: I tried again with ElmerCSC/elmerfem@893863e (13.03.2023)
- case_coil-solver.sif: works well
- case_scaled-conductivity.sif: works well
- case_circuit.sif: does not converge (neither with nor without MPI)
- case_layer-conductivity.sif: gives trivial result with mpi (output here: layer-cond.log) but works without mpi
- case_circuit_impedance-bc.sif #13 works well
from elmer-elmag.
Amazing, good job! You are the man Juha!
Have a great weekend!
from elmer-elmag.
Thanks a lot for your comments! I changed the settings for case_circuits.sif, see #14. We've now got
- case_coil-solver.sif: works well
- case_scaled-conductivity.sif: works well
- case_circuit.sif: works well
- case_layer-conductivity.sif: gives trivial result with mpi (output here: layer-cond.log) but works without mpi
- case_circuit_impedance-bc.sif #13 works well
@juharu I don't know why case_layer-conductivity fails with MPI. I tried the test cases for the impedance BC with MPI; they work well. Would it make sense to try different solver settings?
from elmer-elmag.
I just committed a few small fixes to the GitHub "devel" branch, which seemingly allow "case_layer_conductivity.sif" to pass also in parallel.
In addition replacing
"Linear System Preconditioning = none"
with
"Linear System Preconditioning = ILU"
in the "MGDynamics" solver will speed things up considerably both serially and in parallel.
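A sketch of the change in the MGDynamics solver block; the surrounding lines are illustrative, only the preconditioner keyword comes from the comment above:

```
Solver 1
  Equation = "MGDynamics"
  ! was: Linear System Preconditioning = none
  Linear System Preconditioning = ILU
End
```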
from elmer-elmag.
"case_coil-solver.sif" also speeds up considerably with the above change, as probably does "case_scaled-conductivity.sif", though I didn't try it.
from elmer-elmag.
Thanks a lot, Juha! I just compiled & tried again, works well also on my computer :)
from elmer-elmag.
Related Issues (4)