800.pot3d_s (POT3D)
Solar Physics: Finite difference method, Preconditioned conjugate gradient solver
800.pot3d_s was submitted to the SPEC CPU v8 Benchmark Search Program by Ron Caplan <caplanr[at]predsci [dot] com>.
POT3D computes potential field solutions used to approximate the 3D solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It is used for numerous studies of coronal structure and dynamics. It is also part of the CORHEL package at NASA's CCMC where it is used to generate WSA solar wind solutions for use in analysis and providing boundary conditions for heliospheric simulations.
POT3D uses a preconditioned conjugate gradient sparse matrix solver for the Laplace equation in 3D spherical coordinates. The Laplacian is evaluated using finite differences on a logically-rectangular non-uniform spherical grid. Polar boundary conditions involve small collective operations to compute polar averages.
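A minimal sketch of the preconditioned conjugate gradient iteration, shown here for a simple 1D Laplacian with a diagonal (Jacobi) preconditioner rather than POT3D's 3D spherical-coordinate operator and actual preconditioner; all names are illustrative and none of this is POT3D source:

```fortran
! Sketch only: diagonally preconditioned CG solving A*x = b for the
! 1D Laplacian A = tridiag(-1, 2, -1).  POT3D applies the same
! iteration to its finite-difference Laplacian on a spherical grid.
program pcg_sketch
  implicit none
  integer, parameter :: n = 64
  real(8) :: x(n), b(n), r(n), z(n), p(n), q(n)
  real(8) :: rz, rz_old, alpha, beta
  integer :: it
  b = 1.0d0
  x = 0.0d0
  r = b                       ! residual r = b - A*x with x = 0
  z = r / 2.0d0               ! apply M**-1 with M = diag(A) = 2
  p = z
  rz = dot_product(r, z)
  do it = 1, 1000
    q = matvec(p)
    alpha = rz / dot_product(p, q)
    x = x + alpha*p
    r = r - alpha*q
    if (sqrt(dot_product(r, r)) < 1.0d-10) exit
    z = r / 2.0d0             ! re-apply the preconditioner
    rz_old = rz
    rz = dot_product(r, z)
    beta = rz / rz_old
    p = z + beta*p
  end do
  print *, 'iterations to convergence:', it
contains
  function matvec(v) result(w)  ! w = A*v for the tridiagonal Laplacian
    real(8), intent(in) :: v(n)
    real(8) :: w(n)
    integer :: i
    w(1) = 2*v(1) - v(2)
    w(n) = 2*v(n) - v(n-1)
    do i = 2, n-1
      w(i) = 2*v(i) - v(i-1) - v(i+1)
    end do
  end function matvec
end program pcg_sketch
```

The dot products in this iteration are the kind of reductions that, in the benchmark itself, are expressed with "do concurrent" and its "reduce" clause.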
The benchmark was contributed to SPEChpc 2021 and then modified for SPEC CPU by removing MPI and HDF5. The latest upstream version of POT3D uses the Fortran standard parallel "do concurrent" construct for multi-threading. This "do concurrent" construct was ported to the SPEC CPU version of POT3D. In addition, portability options were provided to add "reduce" and "local" clauses for compilers that need them to run successfully.
POT3D uses a namelist in an input file called "pot3d1.dat" to set all parameters of a run. The parameters nr, nt, and np specify the number of r, theta, and phi divisions for the grid topology. The code also requires an input 2D data set in text format (br_input_[tiny|small].txt) to use for the lower boundary condition. The code can optionally output the potential solution by setting "phifile" to a filename. To refrain from writing the solution, set phifile=''.
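A sketch of what such a namelist might look like; the group name "topology" and the grid sizes below are illustrative assumptions, not values taken from the benchmark's input files:

```fortran
! Hypothetical pot3d1.dat fragment -- group name and values are
! illustrative, only the parameter names come from the description.
&topology
  nr = 63          ! number of r divisions
  nt = 91          ! number of theta divisions
  np = 181         ! number of phi divisions
  phifile = ''     ! empty string: do not write the potential solution
/
```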
POT3D outputs the magnetic energy for each component of the derived magnetic field (as well as the total). The code also outputs the number of iterations to convergence for the Conjugate Gradient solver. The energy values and the convergence iteration count are verified using specdiff with a tight tolerance.
Fortran
Fortran standard parallel - "do concurrent"
Various compilers have different levels of support for the Fortran standard parallel "do concurrent" construct. This benchmark uses "reduce" and "local" clauses with "do concurrent" to explicitly list the reduction and local variables inside the parallel regions. Some compilers do not support the "local" and "reduce" clauses, so a portability flag, SPEC_SUPPRESS_LOCAL_AND_REDUCE, is provided to suppress the use of these clauses.
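A sketch of the pattern (loop bounds and variable names are illustrative; only the flag name comes from the benchmark): "local" is a Fortran 2018 locality specifier and "reduce" was added in Fortran 2023, so the portability flag simply compiles the loop without them and relies on the compiler to infer the reduction:

```fortran
! Illustrative only -- not POT3D source.  A sum-of-squares residual
! over a 3D grid.  "local(tmp)" makes tmp private to each iteration;
! "reduce(+:err)" declares the sum reduction explicitly.
#ifdef SPEC_SUPPRESS_LOCAL_AND_REDUCE
      do concurrent (k=1:np, j=1:nt, i=1:nr)
#else
      do concurrent (k=1:np, j=1:nt, i=1:nr) local(tmp) reduce(+:err)
#endif
        tmp = x(i,j,k) - x_old(i,j,k)
        err = err + tmp*tmp
      end do
```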
As mentioned in system-requirements.html#memory, you may need to adjust your stack size(s). For the main process, Linux and Unix users may want to set

ulimit -s unlimited

and users of the Intel Compiler on Windows might need compiler options such as

/F1800000000

to adjust the main process stack size. For the OpenMP child processes you may need to adjust OMP_STACKSIZE using preenv. The SPEC CPU 2017 FAQ item for 627.cam4 may be helpful to understand how to set the stack sizes.
The benchmark was contributed to SPEChpc 2021 and modified for SPEC CPU. A public version of the code can be found at https://github.com/predsci/POT3D .
License: Apache 2.0 with the following additional notice:
POT3D's development and support requires substantial effort and we therefore request that Predictive Science Inc. should be acknowledged as the creators of POT3D by any authors who use it (or derivatives thereof) in their published works.
Copyright © 2026 Standard Performance Evaluation Corporation (SPEC®)