princetonuniversity / spec

The Stepped-Pressure Equilibrium Code, an advanced MRxMHD equilibrium solver.

Home Page: https://princetonuniversity.github.io/SPEC/

License: GNU General Public License v3.0

Makefile 0.16% TeX 0.05% Fortran 25.46% SourcePawn 58.69% MATLAB 4.04% Jupyter Notebook 7.97% Python 2.65% Java 0.63% Shell 0.02% C++ 0.02% JavaScript 0.01% CMake 0.31%
mrxmhd plasma-physics stellarator tokamak nuclear-fusion

spec's Introduction

The Stepped Pressure Equilibrium Code

All relevant publications and presentations are given on the MRxMHD website.

A BibTeX file is available: spec_refs.bib.

spec's People

Contributors

abaillod, arunav2123, ejpaul, garrettwrong, jbreslau, jloizu, jonathanschilling, kseniaaleynikova, landreman, lazersos, liuk2020, mbkumar, salomon73, smiet, srhudson, zhisong, zhucaoxiang


spec's Issues

Bug in SPEC (introduced by Stuart)

The routine jo00aa.h calculates the error in the Beltrami fields (if Lcheck=1 in the input). In the SPEC1.0 version, this was used to verify the code up to machine precision (see Phys. Plasmas 23, 112505, 2016). However, I realized that this does not work properly in the new versions of SPEC, and has not since the initial import (the jo00aa.h routine was never modified within GitHub).

We shall fix this problem by comparing the routine with that of SPEC1.0 - note that this is a post-processing diagnostic routine and does not affect the equilibrium calculation per se.

Profile SPEC performance

SPEC profiling

I ran ARM Forge, a tracing/profiling tool for parallel computing.
I ran it on both xspec and dspec. The test case is an LHD high-beta case with 5 volumes; the resolutions are M=10, N=8, Lrad=6.
The test is based on master at commit 007be72, the newest as of 2018-06-12.
The input and output files and the overall results are attached to this thread as .txt and .pdf files.

Below is a breakdown of the time spent in each subroutine. Green is computation, blue is MPI (mostly waiting for other CPUs to finish). The most time-consuming subroutine is ma00aa (as expected), contributing 77%.

Going into ma00aa, the most time-consuming parts are the radial integrals.

And this is the breakdown for each clause.

It seems a lot of time is spent on accessing memory.
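For what it is worth, Fortran stores arrays in column-major order, so the leftmost index should vary fastest in the innermost loop; a strided (rather than stride-one) access pattern alone can produce a memory-bound profile like this. A minimal, self-contained sketch of the issue (illustrative only, not SPEC's actual loops):

program stride_demo
  implicit none
  integer, parameter :: n = 2048
  real(8), allocatable :: a(:,:)
  real(8) :: s
  integer :: i, j
  allocate(a(n,n))
  call random_number(a)
  s = 0d0
  do j = 1, n            ! outer loop over the second index ...
    do i = 1, n          ! ... inner loop over the first: stride-one, cache-friendly
      s = s + a(i,j)
    end do
  end do
  print *, s
end program stride_demo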

xspec - Performance Report.pdf
dspec - Performance Report.pdf
finite_beta_V5_vmec1.sp.txt
ddt_output.txt

Execution error with dspec

@SRHudson has reported an execution error while running dspec on G3V02L1Fi.001.sp.

The error message is

forrtl: severe (408): fort: (2): Subscript #1 of the array DIAG has value
3 which is greater than the upper bound of 2

I cannot reproduce the error on either the IPP workstation or the IPP cluster.

Reimplement helicity constraint

I made a naive attempt to enable the helicity constraint. Without modifying the existing subroutines, I just reversed the changes that disabled the helicity constraint. This is in branch "helicity-fix". One needs to set the option LBeltrami=2.

I have also uploaded an example that calculates the Taylor state in a large-aspect-ratio torus, in Inputfiles/Verification/helicity. There are two input files with the same helicity but different initial guesses for mu and different enforced symmetry, and thus different configurations.

Frankly speaking, I currently have very little control over whether the solver picks the global minimum-energy state or converges to a local one. Even setting the initial guess of mu to exactly the result of a previous computation does not guarantee convergence to the same mu. I think we need to work further toward a rigorous benchmark.

Bug in SPEC (in the de-NAGged tfft)

Stuart has realized that the subroutine tfft, and maybe also invfft, defined in numrec.h, is broken, most probably since these were modified (by Josh, July 25-26, 2017) to get rid of the NAG calls.

The problem is apparent when running SPEC, e.g., for these two fixed-boundary inputs

M=4.N=0.Lcon=0.Lfree=0.sp.txt

M=4.N=1.Lcon=0.Lfree=0.sp.txt

and realizing that their outputs do not agree. In fact, since the equilibrium is axisymmetric, the output should be the same regardless of whether Ntor=0 or Ntor=1.

It seems that the tfft subroutine is not performing correctly unless Ntor=0 or Ntor=Mpol.

Bug in SPEC (for Lfindzero=1)

The option Lfindzero=1 (instead of the common Lfindzero=2) does not seem to work (not even in fixed-boundary). The error is

fcn1 : fatal : myid= 0 ; .true. ; illegal irevcm : C05P*F error ;

and is displayed while executing the fcn1 subroutine defined in newton.h. Somehow irevcm (which should be 0 or 1 according to fcn1) takes an illegal value and execution is aborted.

The subroutine fcn1 is used when calling hybrd, the NAG replacement introduced by Caoxiang on Jul 20:

commit 560c6b2
Author: Caoxiang Zhu [email protected]
Date: Thu Jul 20 22:01:40 2017 -0400

replace C05NDF & C05PDF with hybrd & hybrj in MINPACK

Reverting some files in the master and hotfix_Makefile branches

Somehow, the commit that @zhucaoxiang made some days ago touching the minpack.h and newton.h files was merged into master and is also present in the hotfix_Makefile branch.

We should not do this again. First work on a branch, then test, and only then merge with master after consensus.

Since these changes are causing problems (because of the FFT, perhaps), we need to revert these files to the original commit version, and keep them only in the NAG_replace branch.

Who's in charge?

In another thread, Caoxiang suggested that I be responsible for keeping the master version consistent, clean and so on; however, I think that this would slow things down. I don't have much time. I can participate, but I cannot be the central person.

We do need to agree on how to coordinate activities.

I suggest the following, initial, partition of responsibilities.

Joaquim should be in charge of the master version and of identifying flaws and problems that need to be addressed. Joaquim will also regularly test new versions of the code on standard input files to ensure that no bugs are introduced. Joaquim can also, with advice from others, map out the general research direction (e.g., should we concentrate on the free-boundary application, compute a fixed-boundary W7-X with many interfaces, or work on computing the eigenvalues/eigenvectors of the Hessian to get a linear stability code for MRxMHD?).

Sam can primarily be responsible for the Makefile, porting to different architectures, streamlining input and output (one goal of this work is to get SPEC into STELLOPT), and so on. Sam can also identify and pursue research applications, such as ITER-relevant perturbed-tokamak cases.

I can work on detailed aspects of the source, such as exploiting symmetries in the TTsscc(l,p,i,j) = TTsscc(p,l,j,i) matrix elements (see http://w3.pppl.gov/~shudson/Spec/ma00aa.pdf). Joaquim and Sam can inform me of subroutines which are not performing well or need to be completed so that their research agenda can move forward, and I will either fix the issues or explain what needs to be done. I will also maintain the LaTeX documentation so that we all will get a better understanding of the internal structure of the code.

Caoxiang, and Josh Breslau, can participate whenever and however they choose, such as helping to eliminate the NAG dependence etc.

Note that the above is just a suggestion for getting things started and is based on my estimate of how much time we each have to spend on this project. As we get the code up and running with Git etc., we can all use the code to perform whatever calculations we like.

Free-boundary verification

I open this issue to discuss and report progress in free-boundary verification.

The "Freidberg-test" was successful (and cases are uploaded and reported on the verification folder) and we were trying to get VMEC and SPEC to agree for a tokamak case given same coils.

@zhucaoxiang and @zhisong, can you please summarize the progress and upload a test case here that we can all work on together?

Is free-boundary SPEC assuming Right-Handedness of the surface geometry?

As you know, a corkscrew is either left-handed or right-handed, and that is a property independent of coordinates and orientation.

Some stellarators are left-handed (e.g. W7-X) and some are right-handed (e.g. LHD). Since I am having trouble running SPEC free-boundary (in vacuum) for a left-handed classical stellarator, I am starting to wonder if that is the source of the problem.

Merging short-lived branch "profile" into "master"

I created the branch "profile" in order to include profiling options in the Makefile.

So far, these are only available for the IPP compilers, but we can add them to the PPPL compiler options as well. I tested this new version; it compiles and runs well, with proper profiling of subroutines.

Please check out this branch, compile, and run the test file once on the PPPL cluster. This should have absolutely no effect on your results, but just to be 100% sure.

Then I can either (1) merge this branch into master, or (2) add the profiling options for PPPL, ask for one more control test at both IPP and PPPL, and then do (1).

Igeometry=2 disabled in current version

I tried to run a fixed-boundary calculation in cylindrical coordinates (Igeometry=2). I set Lfreebound=0, meaning this is not a free-boundary run. Currently the code returns an error stating that "free-boundary calculations not yet implemented in cylindrical geometry", even though I am not using the free boundary at all.

I suspect an if condition should be added at "preset.h" line 1146 to skip the check in the fixed-boundary case. I am just not sure whether the variables set below line 1146 are also used in fixed-boundary runs (very unlikely).

New branch created for the toroidal current constraint implementation

Hello,

This issue has been opened to keep you informed about a recently uploaded branch called CurrentConstraint. I have been working on it for a few months now, and it runs as expected in fixed-boundary mode.

What remains to be done is:

  • Finish testing it with fixed boundary cases,
  • Optimize it
  • Implement the toroidal current constraint for free boundary (for now, it crashes on purpose), then test and optimize it as well.

In the next weeks, I will run the new current constraint on deneb1 (the EPFL cluster) to perform weak and strong scaling studies of its performance and scalability. As soon as I am confident that the new constraint does not strongly impact the code's performance, I will start looking into the free-boundary implementation. I don't expect many changes - this should be fast...

Hessian called on single volumes

Just flagging that running the single-volume TestCases (e.g. G2V01L0Fi.001.sp) fails because the .sp files have LHMatrix = T; setting LHMatrix = F fixes the issue.

The error message is:

** On entry to DGETRF parameter number  4 had an illegal value
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG IEEE_DENORMAL

I don't know enough about the workings of SPEC, but I'm guessing it shouldn't be trying to compute a Hessian matrix for a single volume. (For what it's worth, parameter 4 of LAPACK's DGETRF is the leading dimension LDA, so an "illegal value" there suggests a zero-sized or wrongly-sized matrix is being passed.)
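If that guess is right, a guard along the following lines might be all that is needed; this is a hypothetical sketch reusing input names that appear elsewhere in this thread, not a tested patch:

! hypothetical guard: skip Hessian construction when there are no inner interfaces
if( Nvol.eq.1 .and. Lfreebound.eq.0 ) LHMatrix = .false.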

Visualization

Dear Spectators,

It is extremely useful to construct Poincare plots, profile plots, iterations-towards-convergence plots, et cetera, and so I hope that it would not be impolite of me to raise at this juncture the issue of visualization.

As some of you may already know, I have written a Graphical User Interface (GUI) using the Interactive Data Language (IDL) that I call the Equilibrium Code Helpful Interactive Diagnostic Numerical Applications (Echidna). I use this to prepare input, manage execution, and visualize output. I also use Echidna to perform parameter scans, convergence studies and so forth.

However, I don't know if building upon Echidna in IDL will be the best way to proceed.

From my vantage point, I see that there are three choices before us.

Choice A) Do nothing. Each user must create his or her own plotting.
Choice B) Provide a very basic, easy-to-use, bare-bones plotting capability.
Choice C) Provide a comprehensive package that automates parameter scans for convergence studies, plots all the output, quantifies the difference between two different equilibria, creates pretty three-dimensional plots of the nested interfaces, and so on.

Let us begin a discussion on this topic, so that at some point in the hopefully near future we can achieve consensus on what should be provided and on what platform it should be built before putting this matter to a vote.

Warning when compiling dspec

I realized that I get a warning while compiling dspec (this warning was already present in previous versions of SPEC). The warning does not seem to affect compilation or execution (so far), but I wanted to report it.

The warning is:

mpif90 -fdefault-real-8 -g -fbacktrace -fbounds-check -DDEBUG -ffree-line-length-none -o casing_d.o -c casing.F90 -I/opt/hgw/tools/fftw/3.3.4/gcc49-gfortran49-avx-serial/include
casing.F90:262.93:
; ; if( Wcasing ) write(ounit,1001) cput-cpus, myid, Dxyz(1:3,jk), gBn, absest(1:4), id01eaf, mincalls, maxcalls
1
Warning: Upper array reference at (1) is out of bounds (4 > 1) in dimension 1

is the iterfree testcase really iter?

A Poincare plot of the converged iterfree.sp testcase does not show any X-point; the field lines intersect the divertor in what I would consider "a non-realistic manner"; and the computational boundary seems to be a flux surface, see below:

iterfree_poincare_vessel.pdf

Is that supposed to be an actual ITER equilibrium?

Substitution of NAG libraries

The NAG libraries often cause problems with code portability: NAG is not freely available, and simple changes in release number (R23 vs. R24) can break codes. It is proposed that we fork a branch of SPEC in which the NAG calls are replaced one by one. Once this code has been sufficiently tested, it can be merged into the master branch. Where possible, routines will be replaced by BLAS or LAPACK calls, since highly optimized versions of these libraries are available and we can freely include unoptimized versions with SPEC so that at least basic functionality is always possible.
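As an illustration of the intended pattern, a NAG dense linear solve maps directly onto LAPACK's dgesv; a minimal sketch (not code from SPEC), to be linked against -llapack:

program solve_demo
  implicit none
  integer, parameter :: n = 3
  real(8) :: a(n,n), b(n)
  integer :: ipiv(n), info
  ! small symmetric test system standing in for a SPEC matrix
  a = reshape([2d0,1d0,0d0, 1d0,3d0,1d0, 0d0,1d0,2d0], [n,n])
  b = [1d0, 2d0, 3d0]
  call dgesv(n, 1, a, n, ipiv, b, n, info)   ! LAPACK: solves A x = b; x is returned in b
  if (info /= 0) stop 'dgesv failed'
  print *, b
end program solve_demo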

Running debug SPEC version (dspec)

Running dspec at both IPP and PPPL gives an execution error of the kind:

** INIT = 'S', but N and TRIGN array incompatible.
** ABNORMAL EXIT from NAG Library routine C06FUF: IFAIL = 5
** NAG hard failure - execution terminated

Initialization of inner surfaces with Linitialize=1

For the LHD case I am working on, I found it very difficult to get SPEC to converge (reach force balance) with the Linitialize = 1 option. One problem with this option is that, in the current version of SPEC, the initial position of the inner surfaces does not change with the input axis position Rac and Zas.

For example, with two volumes, the inner surface is initialized only according to the boundary and has nothing to do with the specified axis.

Detailed inspection found that the following two lines in preset.h are relevant (lines 969 and 970).

!if( im(ii).eq.0 ) then ; psifactor(ii,vvol) = Rscale * tflux(vvol)**half       ! 29 Apr 15;
if( im(ii).eq.0 ) then ; psifactor(ii,vvol) = Rscale * tflux(vvol)**zero       ! 08 Feb 16;

It seems that if we swap which of the two lines is commented out, the inner surfaces are initialized according to the specified Rac and Zas. The inner surfaces are later initialized as

;iRbc(1:mn,vvol) = iRbc(1:mn,0) + ( iRbc(1:mn,lvol) - iRbc(1:mn,0) ) * ( psifactor(1:mn,vvol) / Rscale ) / tflux(lvol)**halfmm(1:mn)
;iZbs(2:mn,vvol) = iZbs(2:mn,0) + ( iZbs(2:mn,lvol) - iZbs(2:mn,0) ) * ( psifactor(2:mn,vvol) / Rscale ) / tflux(lvol)**halfmm(2:mn)


It seems that the second line was introduced on 08 Feb 16. I don't know the reason for this change.

Bug in SPEC (introduced by Joaquim)

I found another bug, this time introduced by me when replacing the NAG routines used to integrate field lines for generating Poincare data. The bug is present in all versions since the commit:

commit 5772ab5
Author: Joaquim [email protected]
Date: Thu Nov 16 13:58:57 2017 +0100
Replaced NAG routines D02BJF, D02BJW, D02BJX with netlib rksuite.f package routines in pp00ab.h

The error appears, as if by magic, only for very specific resolutions (e.g., a single-volume Cartesian run with Mpol=0, Ntor=2) during field-line tracing (after the Beltrami field has been computed) and looks like:

** You have made a call to UT with a TWANT that does
** not lie between the previous value of TGOT (TSTART
** on the first call) and TEND. This is not permitted.
** Check your program carefully.
**
** Catastrophic error detected in UT .
**
**
** Execution of your program is being terminated.
**

I am working on it. I think I know where the error lies.

Retrieving old (working) master version

As explained in the last comments (Oct 25) of the issue "robust convergence", a bug was identified that was introduced on Aug 7 when merging ma00aa into master.

We need to retrieve that version so that the new master corresponds to the old working version. I think the cleanest way is for me to check out that version and (after testing) make a new commit with those files. Then the history is preserved, and the latest version is a working one.

Also, I checked and that does not seem to affect the NAG_REPLACE branch - since, I guess, ma00aa was not merged into it.

Please let me know if this strategy makes sense to you.

sign error in the poloidal flux

While testing the code in order to verify exactly that the Hessian is calculated correctly, I realized that the poloidal flux in each relaxed volume is not calculated correctly.

Please test running this simple slab-1volume-axisymmetric case,

cartesian_N1m0n0.sp.txt

(rename it by removing .txt extension)

The output (.end) poloidal flux has the opposite sign to the one expected in each volume.

Hessian in single-precision?

After having resolved a number of important bugs in the last weeks, I am pursuing the verification of SPEC in a simple geometry.

I am documenting this exercise (which could eventually become a publication or be embedded in Stuart's upcoming free-boundary paper):

Hessian_verification.pdf

As you can see from Fig.2 in this document, we can show now that the solution for the equilibrium interface geometry in SPEC approaches the analytical prediction, with an error reaching machine precision as the resolution (here, only Lrad) is increased.

HOWEVER, the error in the Hessian, while small, saturates at about 1e-8. This seems to indicate that the Hessian in SPEC (written to the .DF file) is perhaps stored in single precision only; single precision carries only about eight significant digits, which would be consistent with the observed floor.
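A quick sanity check on that hypothesis: casting a double to single precision and back loses all but about eight significant digits, consistent with the observed floor. A minimal sketch (the value is a stand-in, not an actual Hessian element):

program precision_floor
  implicit none
  real(8) :: h8
  real(4) :: h4
  h8 = 0.123456789012345678d0       ! stand-in for a Hessian element
  h4 = real(h8, kind=4)             ! what a single-precision write would keep
  print *, 'relative error :', abs(real(h4,8) - h8) / abs(h8)   ! ~1e-8
  print *, 'single epsilon :', epsilon(h4)                      ! ~1.2e-7
end program precision_floor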

Implementing Sequential Quadratic Programming

I made a naive attempt to implement the SQP algorithm, using the NLOPT package. I have pushed the branch "sqp".
To use the branch, please check whether NLOPT is installed on your cluster and change the path to NLOPT in the Makefile accordingly. Otherwise, it is very easy to install from https://nlopt.readthedocs.io/en/latest/

The current problem is that the dMA matrix is not positive definite (as can easily be seen by computing the eigenvalues of dMA; see the sketch below), so the final optimization result is always negative infinity.
It is possible that the Lagrange-multiplier formalism for the boundary condition needs to change, or that some constraints should be placed on the Lagrange multipliers a, b, c, d, e, f. This is annoying.
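For reference, the positive-definiteness claim can be checked directly with LAPACK's symmetric eigensolver; a minimal sketch with a hypothetical stand-in matrix (link with -llapack):

program pd_check
  implicit none
  integer, parameter :: n = 3
  real(8) :: a(n,n), w(n), work(3*n-1)
  integer :: info
  ! hypothetical symmetric matrix standing in for dMA
  a = reshape([4d0,1d0,0d0, 1d0,3d0,1d0, 0d0,1d0,-2d0], [n,n])
  call dsyev('N', 'U', n, a, n, w, work, size(work), info)  ! eigenvalues only
  if (info /= 0) stop 'dsyev failed'
  print *, 'eigenvalues       :', w
  print *, 'positive definite :', all(w > 0d0)
end program pd_check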

Verification folder to guarantee reproducibility

Dear all,

I have added a folder InputFiles/Verification where we can organize, document, and store the verification cases we produce for SPEC.

I already added two cases, documented in vacuum/biot and vacuum/self. I suggest you do the same if you carry out any verification exercise. Please always write README.txt files with some information.

I will now start some force-free verification cases (for these I will create a folder forcefree next to vacuum), for which the pressure is zero but there is plasma (parallel) current. This should test the virtual casing and the Picard free-boundary iterations.

exploiting symmetries

Dear Speculators,

I am still working on reducing the computation time in ma00aa and other subroutines. I see that the ma00aa thread has vanished, but not to worry: I am becoming quite adept at opening new issues!

If you examine
http://w3.pppl.gov/~shudson/Spec/ma00aa.pdf
you will notice that DToocc, DToocs, DToosc and DTooss do not depend on geometry. They need to be calculated only once during the entire calculation. (The Chebyshev resolution may differ in each volume, so some re-indexing of these arrays will be required.)

Also, there are various symmetries of the other arrays constructed in ma00aa that could be exploited. Now that I am getting back into the fine print of SPEC, this should not be too difficult, and I intend to pursue this task. It may be that I can further reduce the time of ma00aa by a factor of 4.

However, I would like to measure timing before and after the changes. You are already aware, I believe, of the timing improvements that I made to ma00aa. This was no big achievement really, as this routine had not yet exploited stellarator symmetry. Also, I believe that the l2stell input file is too tiny to really see the impact of the speed improvements that I intend to pursue, and whether the bottleneck in SPEC when we use large Fourier resolution and a large number of interfaces is really with ma00aa or somewhere else. We need to get SPEC fast on experimentally relevant input files. (Eventually, we want every part of the code to be maximally efficient. This is just a question of priority.)

So, I suggest that Joaquim consider the following:

  1. create an appropriately named subdirectory, e.g. 2016PoPLoizu, of the horrendously named TESTCASES directory, and into this directory place some of the input files for W7-X that were used in "Verification of the SPEC code in stellarator geometries", J. Loizu, S.R. Hudson & C. Nührenberg, Phys. Plasmas 23, 112505 (2016);
  2. construct one or more multi-volume W7-X input files. At this stage, it does not even matter if these files converge.

I will proceed with exploiting all the symmetries and so on associated with ma00aa and the related routines.

ma01ag

Dear Thunderbirds,

ma01ag.h is redundant. It was an early version of what is now called matrix.h.

Just for some background: originally I adopted a naming convention as follows. The first version of a subroutine was called e.g. ma00aa. The first two letters were supposed to give some idea of what the subroutine did, e.g. ma = matrix, and the two numbers and the final two letters served as a "version" label. When I introduced an improved, modified version I would keep the old subroutine and call the new one ma00ab or ma00ac, for example, and for a major revision I would use ma01aa. Ultimately, I would delete the subroutines that had become completely redundant.

More recently, now that SPEC is becoming more stable, I began to change the names to something more intuitive, e.g. ma01ag.h was renamed matrix.h, as this routine constructs the matrix that defines the Beltrami fields, and al00aa.h (where al = allocate) was renamed to preset.h.

Sorry for all this background information.

Anyway, it seems that ma01ag is in the repository but is used nowhere. It is not even in the Makefile.

I suggest that someone should delete this file.

Optimization of virtual casing method in SPEC

The virtual casing method in SPEC is far from optimal for the problem at hand. We should address this.

Background links

The integration of the equations of propagation of electric waves
Use of the virtual-casing principle in calculating the containing magnetic field in toroidal plasma systems
The virtual-casing principle for 3D toroidal systems
A magnetic diagnostic code for 3D fusion equilibria
The virtual-casing principle and Helmholtz's theorem

Background info from @SRHudson

It seems that there was some bug in replacing the previous NAG routine with DCUHRE, but this is not the main problem. Anyone is welcome to fix this bug (I started, and have committed/pushed my changes, but I need to go home now).

In SPEC we need to calculate the magnetic field on a regular grid, Nt in theta and Nz in zeta, on the computational boundary. For each one of these Ntz = Nt * Nz virtual casing integrals, we need to perform a surface integral over the plasma boundary. I think that the best algorithm in SPEC would be to compute the geometry and tangential field on the plasma boundary just once, and then to use this information for all of the virtual casing integrals required on the Nt * Nz grid on the computational boundary. The point is that there is a lot of information that is common, so if we compute the common information only once then SPEC will be faster.

Presently I am using a slow Fourier transform to compute the magnetic field at whatever point DCUHRE requires, and because DCUHRE is adaptive, the points on the surface where DCUHRE requires the geometry/tangential field of the plasma boundary will vary as the position on the computational boundary varies. One approach, which I first suggested, is to use/develop a quadrature algorithm that only requires the integrand on a fixed regular grid. This will probably be fastest, but we will potentially lose the accuracy that adaptive routines usually guarantee. Another approach is to first compute the geometry/tangential field on the plasma boundary on a regular grid and thereafter compute the geometry/tangential field using splines (or something fast). I am not sure that this will really solve the problem, because if we need such adaptivity it probably means we need to run SPEC at higher Fourier resolution in any case.
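One point in favor of the fixed-grid option: for smooth periodic integrands, the uniform-grid trapezoidal rule converges spectrally, so the accuracy loss relative to an adaptive routine may be small. A toy sketch of the idea, with a smooth placeholder integrand standing in for the virtual-casing kernel:

program fixed_grid_quadrature
  implicit none
  integer, parameter :: nt = 64, nz = 64
  real(8), parameter :: twopi = 8d0*atan(1d0)
  real(8) :: f(nt,nz), total
  integer :: it, iz
  ! evaluate the integrand once on a fixed regular (theta,zeta) grid;
  ! a smooth periodic placeholder stands in for the geometry/tangential-field kernel
  do iz = 1, nz
    do it = 1, nt
      f(it,iz) = 1d0 + 0.3d0*cos(twopi*(it-1)/nt)*sin(twopi*(iz-1)/nz)
    end do
  end do
  ! uniform trapezoidal rule; for periodic integrands this is spectrally accurate
  total = sum(f)*(twopi/nt)*(twopi/nz)
  print *, total          ! should be close to (2*pi)**2
end program fixed_grid_quadrature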

VMEC-SPEC benchmark

@zhisong @jloizu CC: @SRHudson
I am working on the SPEC-VMEC benchmark. Right now, to get free-boundary SPEC to converge, you need to provide the exact Vns from the coils (calculated by the Biot-Savart law) and a good initial guess for Bns from the plasma current on the computational boundary.

Zhisong did the following to get it to converge.

  • Set Bns=0, Lfindzero=0, and run the virtual casing 30 times to get an initial guess for Bns.
  • Set Lfindzero=2 and, starting from gBnbld=0.9, run the free-boundary iteration 50 times, halving gBnbld every 10 iterations.

But there is a friendlier way, in which SPEC calculates the initial guess for Bns from a fixed-boundary run. This will normally be a good guess. I added the following lines to make SPEC run fixed-boundary, do the virtual casing, and update Bns at the first step of the free-boundary calculation.

SPEC/xspech.h

Lines 172 to 176 in 5e0e4c7

if (nfreeboundaryiterations .eq. 0) then ! first iteration only run fixed-boundary
Mvol = Nvol
else
Mvol = Nvol + Lfreebound
endif

But somehow, after the fixed-boundary run, the virtual casing does not converge, as shown below.

casing :     179.39 : myid= 14 ; [x,y,z]=[  5.39E+00 ,  0.00E+00 ,  1.14E+00 ]; gBn=  4.2676E-01 , err=  3.E-03 ; ifail=  1 ; min/max calls=    16777215    33554432 ; maxpts too smal;        
casing :     179.52 : myid=  5 ; [x,y,z]=[  5.75E+00 ,  0.00E+00 ,  4.37E-01 ]; gBn= -1.3139E-01 , err=  2.E-02 ; ifail=  1 ; min/max calls=    16777215    33554432 ; maxpts too smal;        
casing :     179.69 : myid=  0 ; [x,y,z]=[  5.80E+00 ,  0.00E+00 ,  0.00E+00 ]; gBn= -7.3118E-10 , err=  3.E-03 ; ifail=  1 ; min/max calls=    16777215    33554432 ; maxpts too smal;        
casing :     180.36 : myid=  8 ; [x,y,z]=[  5.66E+00 ,  0.00E+00 ,  6.89E-01 ]; gBn=  1.5996E-01 , err=  3.E-03 ; ifail=  1 ; min/max calls=    16777215    33554432 ; maxpts too smal;        
casing :     180.51 : myid=  2 ; [x,y,z]=[  5.79E+00 ,  0.00E+00 ,  1.76E-01 ]; gBn=  3.5361E-02 , err=  3.E-03 ; ifail=  1 ; min/max calls=    16777215    33554432 ; maxpts too small;

I need your help, as you are more familiar with the SPEC source code. Please pull the branch free_bound, compile it, and run the example in InputFiles/Verification/forcefree/solovev/solovev_fb_1r.sp. In this input file, I provide the required Vns from FOCUS and zero for Bns, which is the general case for users. I set gBnbld=0 to use the virtual casing from the fixed-boundary run, but we probably need to reset it to a larger number once Bns has been obtained.

standard input files

Dear Speculators,

I suggest that we, in the very near future, create a standard input file that will test the non-axisymmetric capability, i.e. Lstellsym = 0.

Regards,

comparing two output files

When we make changes to the code, sometimes we expect that the code should reproduce the same result for a given input file, e.g. if we are simply cleaning up the code or speeding it up. However, bugs can unintentionally be introduced while making these changes.

For this reason, it is good to have some automatic checking procedure against a "control case". I wrote a simple MATLAB routine (which should be uploaded some time soon) that compares two SPEC output files, "fname1.sp.h5" and "fname2.sp.h5", and calculates the maximum difference in the geometrical coefficients of all interfaces. One can calculate the maximum absolute difference, Dabs, and the maximum relative difference, Drel.
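The core of such a check is just two array reductions; here is a sketch in Fortran (the actual routine is in MATLAB), assuming the interface-geometry arrays have already been read from the two .sp.h5 files:

program compare_geometry
  implicit none
  integer, parameter :: mn = 10, nvol = 2       ! hypothetical array sizes
  real(8) :: r1(mn,nvol), r2(mn,nvol), d_abs, d_rel
  ! in practice r1 and r2 would be read from fname1.sp.h5 and fname2.sp.h5
  call random_number(r1)
  r2 = r1*(1d0 + 1d-14)                         ! mimic round-off-level differences
  d_abs = maxval(abs(r1 - r2))
  d_rel = maxval(abs(r1 - r2)/max(abs(r1), tiny(1d0)))
  print '(a,es10.2,a,es10.2)', ' Dabs = ', d_abs, ' ,  Drel = ', d_rel
end program compare_geometry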

If the geometry were exactly the same we would have Dabs=Drel=0. Also, if Dabs~1e-16, we should still consider them "the same" because of machine precision. However, when I applied this to the l2stell testcase, run with (1) the master branch and (2) the ma00aa branch, I get

Dabs = 1e-14
Drel = 1e-9

which could be because, depending on the compiler, the loop order of the algorithm, etc., the accumulated round-off can differ between two versions of the code that "presumably do the same thing".

So the question is: how do we decide whether two outputs are to be considered "the same"?

Compiling and running SPEC in IPP

The current version of SPEC compiles neither on the IPP workstations nor on the IPP cluster.

The IPP workstations use Intel compilers (e.g. version 15), and the IPP cluster uses the GCC compiler (e.g. version 4.9.2).

Both can load NAG library version Mk24.

Several (small) problems are:

  • compiler options, e.g. the unrecognized command-line option -check
  • syntax errors, e.g. syntax error in WRITE statement, invalid characters, etc.
  • etc.

Some (more severe) problems may arise because of NAG calls which refer to routines that have changed name over version updates. I still need to identify those.
nag_replace.pdf

Attached is the Makefile that is used in the "working version" of SPEC at IPP.
Makefile.ipp.txt

Important personal note: the IPP version of SPEC has been thoroughly verified in fully 3D geometries (http://aip.scitation.org/doi/pdf/10.1063/1.4967709). Any new version should pass equivalent tests for me to consider it reliable.

Problem with restart in free-boundary

Usually, when a converged equilibrium is found, one can copy the ".sp.end" output file into the ".sp" input file and set Linitialize=0. Then running the ".sp" input makes SPEC start from a converged equilibrium, and thus it does not need to iterate on the geometry to reduce the force imbalance.

I realized that this does not work in free-boundary, unless Lconstraint=-1. By "does not work" I mean that the restarted input has a smaller force imbalance but not zero, and one needs to repeat the "copy .end into .sp and run again" procedure a couple of times in order to (finally) obtain a ".sp" file that is at equilibrium from the start.

This may be due to how the equilibrium poloidal and toroidal fluxes in the vacuum region are written to the ".end" file. Perhaps they are not written with enough precision?
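If lack of precision is indeed the cause, the fix is simply to write the fluxes with enough digits for an exact round trip; a minimal illustration (not the actual .end writer):

program flux_roundtrip
  implicit none
  real(8) :: pflux, back
  character(len=32) :: buf
  pflux = 4d0*atan(1d0)              ! stand-in for a vacuum-region flux
  write(buf,'(es13.5)') pflux        ! too few digits: the value is rounded
  read(buf,*) back
  print *, 'error with es13.5 :', abs(back - pflux)
  write(buf,'(es24.16)') pflux       ! 17 significant digits round-trip an IEEE double
  read(buf,*) back
  print *, 'error with es24.16:', abs(back - pflux)
end program flux_roundtrip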

All output quantities into one HDF5 file

I open this issue to discuss the changes to SPEC in order to write all output quantities into a single HDF5 file. The work happens on the branch "issue68".

In order not to spam the issue tracker, I will introduce the following changes in this branch as well:

  1. allow specifying ext.sp in addition to ext as the argument to xspec/dspec, to enable tab completion on the command line (see the sketch below this list)
  2. remove spaces after \\ to get rid of compiler warnings
  3. rename Fortran source files from *.h to *.f90
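Item 1 can be as simple as stripping a trailing ".sp" from the first command-line argument; a sketch of the idea (the variable names are hypothetical):

program ext_argument
  implicit none
  character(len=256) :: ext
  integer :: l
  call get_command_argument(1, ext)
  l = len_trim(ext)
  ! accept both "ext" and "ext.sp", so shell tab completion works
  if (l > 3 .and. ext(l-2:l) == '.sp') ext = ext(:l-3)
  print *, 'input file: ', trim(ext)//'.sp'
end program ext_argument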

documentation

As you know, I think it is very beneficial to have detailed documentation:
http://w3.pppl.gov/~shudson/Spec/subroutines.html

Shall we just continue with the format that I have been using, or do you have any other suggestions?

Should we have "mirror" websites, or should we just keep the documentation on my website?

Purging branches

I think we should delete the free-boundary branch that @zhucaoxiang once created, and then test and merge the fixsigns branch into master. Note: I included in the latter the initial solovev cases that @zhucaoxiang uploaded to the free-boundary branch.

Please, if you agree, proceed to do that.

Beltrami field error higher in outer region than inner region

I recently found that the error of the Beltrami field (computed by jo00aa) in the outer region is 2-3 orders of magnitude higher than in the inner region, for the same Lrad.
Take "InputFiles/TestCases/G3V02L1Fi.001.sp" for example. The error computed in the inner/outer region is as follows.
Inner:

jo00aa : 13.07 : myid= 0 ; lvol = 1 ; lrad = 4 ; E^\s= 4.086146766806381E-04 , E^\t= 1.438951130002829E-04 , E^\z= 2.179158203226244E-07 ; time= 0.00s ;

Outer:

jo00aa : 13.07 : myid= 1 ; lvol = 2 ; lrad = 4 ; E^\s= 4.374610040764162E-02 , E^\t= 7.041557273449774E-03 , E^\z= 4.593364553587003E-04 ; time= 0.00s ;

This also appears in the verification paper by @jloizu, Figure 11.

Does anyone have a clue? Presumably the difference is from the different radial basis functions used in singular/non-singular volumes. But one would expect the latter to be more robust.

W7X input file

Dear all,

Would anyone have an input file with the W7X geometry for the (R,Z) components representing the boundary? That would be useful for benchmarking tests I would like to run. Are we allowed to post this information on the GitHub site, or are there proprietary issues?

Thanks,
Antoine

Virtual casing de-NAGged routine shows problems

Sam,

You replaced the virtual casing routine D01EAF with DCUHRE on July 20, 2017:

7f0194f

When I try running a free-boundary case (with mfreeits>0) I get screen output saying:

casing : 1275.28 : myid= 0 ; [x,y,z]=[ 3.34E+00 , 9.43E+00 , -1.58E+00 ]; gBn= -1.6503E-01 , err= 2.E+01 ; ifail= 1 ; min/max calls= 8 195 ; maxpts too smal;
casing : 1275.28 : myid= 0 ; [x,y,z]=[ 3.34E+00 , 9.43E+00 , -1.58E+00 ]; gBn= -1.6503E-01 , err= 2.E+01 ; ifail= 11 ; min/max calls= 195 390 ; NW is too small;

and the code still runs, but the normal field on the computational boundary does not seem to converge during the Picard iterations.

Any clue?

Field-line-tracing starting at nonzero theta

Hi Stuart,

I just realized that the new version of SPEC (compared to the one we used for the fixed-boundary verification) no longer has the input parameter "pqt", which determined the starting poloidal angle for the field-line tracing. This was very useful for visualizing islands depending on their phase.

Is it really not there anymore? If so, shall we add it back?

Joaquim

Potentially Big Bug in free-boundary SPEC

While testing free-boundary SPEC in a simple circular tokamak force-free equilibrium, I discovered that the boundary condition on the normal field at the computational boundary, which should be the superposition of the "vacuum part" and the "plasma part", is enforced as a difference,

dMG(id ) = - ( iVns(ii) - iBns(ii) )

in matrix.h, and so the total normal field is zero only when the plasma contribution is equal (and not opposite!) to the vacuum contribution. In particular, I observe that as the free-boundary iterations advance, the error, bnserr, between the imposed "plasma normal-field contribution" and the "calculated plasma normal-field contribution" (using the virtual casing) does not converge. However, if I flip the sign in the above statement,

dMG(id ) = - ( iVns(ii) + iBns(ii) )

a converged solution is obtained in a few Picard free-boundary iterations.

@SRHudson, please tell me what you think about this point. It seems quite crucial.

robust convergence

Continuing here the discussion that started with the iter free-boundary testcase, which does not "robustly converge".

Below, I copy a comment by @SRHudson that summarizes this problem:

I think that the failure to robustly converge for this free-boundary iter-like case is associated with a regularization problem near the coordinate/magnetic axis. The geometry of the innermost interface goes crazy. Please see
http://w3.pppl.gov/~shudson/Spec/packxi.pdf
where I have given a brief description of the coordinate pre-conditioning factor that I have included.
Also, I think that the spectral constraints,
http://w3.pppl.gov/~shudson/Spec/lforce.pdf
are an absolute mess. The spectral constraints need to be revised. I think I know what to do, and I think that I am close to having this problem fixed once and for all, but I have not had the opportunity to concentrate on this problem. If this matter is deemed a priority, then I will work on this.
Note that I can get spec to converge for this case. It just requires a little human intervention and playing around with different choices of Linitialize. See
http://w3.pppl.gov/~shudson/Spec/global.pdf
for the possible values of Linitialize.
There is, however, potentially a more serious problem with this iter-like free-boundary case. Do I have the correct sign on the plasma current? Why is the rotational transform negative? Do I have the correct normalization on the magnetic field at the computational boundary produced by the plasma current (computed using the virtual casing)?
I uploaded this input file for reference. It would be a lot easier to verify that free-boundary SPEC is working for tokamaks using a circular cross section boundary configuration.

Utilities

I have created a folder Utilities/matlabtools in which I will, in the next days, upload a series of user-friendly (and documented) routines that can be run with MATLAB in order to diagnose output from SPEC, but also to create new inputs or even to run SPEC sequentially.

To start, I just added a routine that takes two HDF5 file names as input and returns the difference in the results in terms of interface geometry. This is useful to test whether two versions of the code that are supposed to solve the same problem (only faster, perhaps) do indeed produce the same results.

Fundamental flaws in Makefile

The Makefile, as it stands, does not properly handle the standard/debugging options represented by the flags RFLAGS/DFLAGS, which should be used when compiling xspec and dspec, respectively.

In fact, it always ends up using the debugging flags DFLAGS, even when compiling the xspec executable.

This should be corrected.

InputFiles

I have created a folder InputFiles/TestCases and have added two cases that I have been using to test the code. I propose, inspired by Stuart's suggestion, to follow this convention for any input file uploaded in that folder:

Two files per input shall always be imported, named as:
GxVyLzFu.nnn.sp
GxVyLzFu.nnn.info
where the .sp is the actual input file and the .info is a text file with a sentence or two describing the purpose of this input, and with
x=1,2,3 for the geometry used (slab, cylindrical, toroidal)
y=number of volumes
z=0,1,2... for Lconstraint (fixed mu, fixed iota, fixed helicity...)
u=i,r for fixed-boundary (i) or free-boundary (r)
nnn=001,002,etc. to number the files if there is more than one with the same G,V,L,F properties.
For example, G3V02L1Fi.001.sp is the first file with toroidal geometry, two volumes, Lconstraint=1, and a fixed boundary.

Please let me know if that works for you.

Change the name of subroutines

Stuart mentioned today that we can change the names of the subroutines that are of the form xx00aa.
The length of the new names must be 6 characters. I propose the following:
Old      New      Description
pp00aa   poinca   Poincare plot
pp00ab   ftrace   field-line tracing
ma00aa   intgrl   volume integrals
ma02aa   bltrmi   compute the local Beltrami solution
mp00ac   iotait   iteration over the rotational transform
jo00aa   flderr   compute the field error
tr00ab   denrgy   compute the derivative of energy in each volume
sc00aa   covfld   compute the covariant field B
ra00aa   wrtfld   write the vector potential to file

Please let me know if you have any ideas.

Bug in SPEC (introduced by Sam)

I found that there is some bug in SPEC, because it produces a segmentation fault (or a runtime error, depending on the compiler) when running a simple vacuum case at a particular resolution.

More precisely, running a single-volume, l=2 stellarator calculation with Lconstraint=0 (mu=0) and Fourier resolution M=N=10 crashes, while other combinations I tried for M,N resolutions work!

The error appears, e.g., like this:

At line 267 of file ma00aa.F90
Fortran runtime error: Index '222' of dimension 2 of array 'kijs' above upper bound of 221
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.

I checked that this error disappears when using the version of ma00aa.h prior to Sam's commit:

Mild speedup modifications to ma00aa.h
commit 69af93a @lazersos lazersos committed on Jul 19

Sam, could you try to fix this?

LaTeX documentation

I made some changes in the source regarding the LaTeX documentation. Usually, I go to

https://w3.pppl.gov/~shudson/Spec/subroutines.html

to have a look at the documentation, which I think is compiled from the source. How is this updated when the source changes? Or perhaps the better question is: how do I compile the source so that it generates the documentation in my local repository?

Sorry for my ignorance.

ma00aa

Hi,

I plan to spend some time revising ma00aa, which you can see from
http://w3.pppl.gov/~shudson/Spec/ma00aa.pdf
computes the volume integrals of the metric elements.

Sam has already advised on this, suggesting that divisions be removed from within loops and that some loops be eliminated (see the sketch below). I will also look at exploiting symmetries more efficiently. Restructuring the loops should be easy enough, but the symmetries will take longer to optimize fully.
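For the division hoisting, the transformation is the standard one: divide once outside the loop and multiply inside. A minimal illustration (not SPEC's actual loops):

program hoist_division
  implicit none
  integer, parameter :: n = 1000000
  real(8), allocatable :: x(:)
  real(8) :: scl, rscl
  integer :: i
  allocate(x(n)); call random_number(x); scl = 3.7d0
  ! before: one (expensive) division per iteration
  do i = 1, n
    x(i) = x(i)/scl
  end do
  ! after: divide once, multiply inside the loop
  rscl = 1d0/scl
  do i = 1, n
    x(i) = x(i)*rscl
  end do
  print *, x(1)
end program hoist_division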

As I am still getting used to git, please give me some suggestions. Should I create a new branch? If so, how?
