Here we share the experiences that the natESM Research Software Engineers (RSEs) have gathered during individual sprints or while dealing with specific HPC environments. In addition, the RSEs provide solutions to coding challenges for your benefit.

User-Defined Derived Type Input/Output with the NVHPC compiler

Author: Enrico Degregori (DKRZ)

User-Defined Derived Type Input/Output (UDTIO) allows the programmer to specify how a derived type is read from or written to a file. This allows the user of an object to perform input/output operations without any knowledge of the object's internal layout.

Case study: Unformatted I/O

Consider the following mesh class:

type t_mesh
    integer :: nod2D
    real(kind=WP) :: ocean_area
    real(kind=WP), allocatable, dimension(:,:) :: coord_nod2D
contains
    procedure, private :: write_t_mesh
    procedure, private :: read_t_mesh
    generic :: write(unformatted) => write_t_mesh
    generic :: read(unformatted) => read_t_mesh
end type t_mesh

The subroutines for IO of this derived data type are:

subroutine write_t_mesh(mesh, unit, iostat, iomsg)

    class(t_mesh), intent(in) :: mesh
    integer, intent(in) :: unit
    integer, intent(out) :: iostat
    character(*), intent(inout) :: iomsg

    write(unit, iostat=iostat, iomsg=iomsg) mesh%nod2D
    write(unit, iostat=iostat, iomsg=iomsg) mesh%ocean_area
    call write_bin_array(mesh%coord_nod2D, unit, iostat, iomsg)

end subroutine write_t_mesh

subroutine read_t_mesh(mesh, unit, iostat, iomsg)

    class(t_mesh), intent(inout) :: mesh
    integer, intent(in) :: unit
    integer, intent(out) :: iostat
    character(*), intent(inout) :: iomsg

    read(unit, iostat=iostat, iomsg=iomsg) mesh%nod2D
    read(unit, iostat=iostat, iomsg=iomsg) mesh%ocean_area
    call read_bin_array(mesh%coord_nod2D, unit, iostat, iomsg)

end subroutine read_t_mesh

In the main program, the object can then be written or read without knowing the layout of the derived data type:

program main

    use mesh_module, only: t_mesh   ! hypothetical module name hosting t_mesh

    implicit none
    type(t_mesh) :: mesh
    integer :: fileunit
    character(len=256) :: path_in   ! output file path, set as needed

    ! mesh derived type: a plain unformatted WRITE dispatches to write_t_mesh
    open(newunit = fileunit, &
         file = trim(path_in), &
         status = 'replace', &
         form = 'unformatted')
    write(fileunit) mesh
    close(fileunit)

end program main

The NVHPC compiler requires the write/read subroutines to be declared as private type-bound procedures. If the generic write/read bindings are also declared private, the program will not fail with a runtime error, but inconsistent behavior can be observed (such as wrong data being read from files).
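
For illustration, a minimal sketch of the binding pattern to avoid: it is identical to the t_mesh definition above, except that the generic bindings are also declared private.

type t_mesh
    integer :: nod2D
    real(kind=WP) :: ocean_area
    real(kind=WP), allocatable, dimension(:,:) :: coord_nod2D
contains
    procedure, private :: write_t_mesh
    procedure, private :: read_t_mesh
    ! With NVHPC, making the generic bindings private as well still compiles and
    ! runs, but we observed inconsistent behavior such as wrong data read back.
    generic, private :: write(unformatted) => write_t_mesh
    generic, private :: read(unformatted) => read_t_mesh
end type t_mesh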

Compilation errors

Private components of a derived data type are not accessible with traditional IO operations, so the following code generates a compilation error:

module my_mod
    type t
        integer :: x
        integer, private :: y
    end type t
end module my_mod

program prg1
    use my_mod
    type(t) :: obj
    write(*,*) obj   ! illegal due to private component y
end program prg1
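
A defined-I/O binding removes this restriction, because the I/O is then performed by a type-bound procedure that does have access to the private component. A minimal sketch (the routine name write_t_formatted and the default initializations are our additions):

module my_mod
    implicit none
    type t
        integer :: x = 1
        integer, private :: y = 2
    contains
        procedure, private :: write_t_formatted
        generic :: write(formatted) => write_t_formatted
    end type t
contains
    subroutine write_t_formatted(dtv, unit, iotype, v_list, iostat, iomsg)
        class(t), intent(in) :: dtv
        integer, intent(in) :: unit
        character(*), intent(in) :: iotype
        integer, intent(in) :: v_list(:)
        integer, intent(out) :: iostat
        character(*), intent(inout) :: iomsg
        ! The type-bound procedure can access the private component y.
        write(unit, '(i0, 1x, i0)', iostat=iostat, iomsg=iomsg) dtv%x, dtv%y
    end subroutine write_t_formatted
end module my_mod

program prg1_udtio
    use my_mod
    implicit none
    type(t) :: obj
    write(*,*) obj   ! now legal: dispatched to write_t_formatted
end program prg1_udtio

The same mechanism also lifts the restriction for the pointer-component case shown next.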

F2003 also does not allow IO operations on entire objects that have pointer components, so the following code generates a compilation error:

module my_mod
    type t
        integer :: x
        integer, pointer :: y
    end type t
end module my_mod

program prg2
    use my_mod
    type(t) :: obj
    write(*,*) obj   ! illegal due to pointer component y
end program prg2

If a derived data type contains an allocatable array of another derived data type, the UDTIO operation fails at compile time with the NVHPC compiler. For example:

TYPE T_TRACER_DATA
    real(kind=WP), allocatable, dimension(:,:) :: values, valuesAB
    real(kind=WP) :: gamma0_tra, gamma1_tra, gamma2_tra
    integer :: ID

contains
    procedure, private :: WRITE_T_TRACER_DATA
    procedure, private :: READ_T_TRACER_DATA
    generic :: write(unformatted) => WRITE_T_TRACER_DATA
    generic :: read(unformatted) => READ_T_TRACER_DATA
END TYPE T_TRACER_DATA

TYPE T_TRACER_WORK
    real(kind=WP), allocatable :: del_ttf_advhoriz(:,:), del_ttf_advvert(:,:)

contains
    procedure, private :: WRITE_T_TRACER_WORK
    procedure, private :: READ_T_TRACER_WORK
    generic :: write(unformatted) => WRITE_T_TRACER_WORK
    generic :: read(unformatted) => READ_T_TRACER_WORK
END TYPE T_TRACER_WORK

TYPE T_TRACER
    integer :: num_tracers=2
    type(t_tracer_data), allocatable :: data(:)
    type(t_tracer_work) :: work
contains
    procedure, private :: WRITE_T_TRACER
    procedure, private :: READ_T_TRACER
    generic :: write(unformatted) => WRITE_T_TRACER
    generic :: read(unformatted) => READ_T_TRACER
END TYPE T_TRACER
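
A minimal sketch of the operation that fails, assuming the three types above are hosted in a module together with their UDTIO routines (the module name tracer_module and the file name are placeholders):

program udtio_tracer_sketch

    use tracer_module, only: t_tracer   ! hypothetical module hosting the types above

    implicit none
    type(t_tracer) :: tracer
    integer :: fileunit

    allocate(tracer%data(tracer%num_tracers))

    open(newunit = fileunit, &
         file = 'tracer_restart.bin', &
         status = 'replace', &
         form = 'unformatted')
    ! In our experience, NVHPC rejects this UDTIO write at compile time because
    ! t_tracer contains an allocatable array of another derived type (data(:)).
    write(fileunit) tracer
    close(fileunit)

end program udtio_tracer_sketch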

Debugging during GPU porting with OpenACC

Author: Enrico Degregori (DKRZ)

Parallel Compiler Assisted Software Testing (PCAST) is a feature available in the NVIDIA HPC Fortran, C++, and C compilers. It compares the GPU computation against the same program running on the CPU. In this case, all compute constructs are done redundantly, on both CPU and GPU. The GPU results can then be compared against the CPU results and the differences reported.

A semi-automatic method which can be used with OpenACC is to allow the runtime to automatically compare values when they are downloaded from device memory. This is enabled with the -gpu=autocompare compiler flag, which also enables the redundant option. This runs each compute construct redundantly on CPU and GPU and compares the results, with no changes to the program itself.

The autocompare feature only compares data at the point where it would be downloaded to system memory. To compare data after a compute construct inside a data region, where the data is already present on the device, there are three ways to trigger the comparison at any point in the program.

  • First, you can insert an update self directive to download the data to compare. With the autocompare option enabled, any data downloaded with an update self directive will be downloaded from the GPU and compared to the values computed on the host CPU.

  • Alternatively, you can add a call to acc_compare, which compares the values then present on the GPU with the corresponding values in host memory. The acc_compare routine has only two arguments: the address of the data to be compared and the number of elements to compare. The data type is available in the OpenACC runtime, so doesn’t need to be specified.

  • Finally, you can use the acc compare directive. All three options are illustrated in the sketch below.
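
A minimal sketch of the three options, assuming an array a that is present on the device inside a data region and a program compiled with -acc -gpu=autocompare (module and interface details follow the NVHPC documentation summarized above):

program pcast_compare_sketch

    use openacc          ! NVHPC OpenACC runtime module (assumed to be sufficient here)
    implicit none
    integer, parameter :: n = 1000
    real :: a(n)
    integer :: i

    a = 0.0

    !$acc data copy(a)

    ! With -gpu=autocompare the compute construct runs redundantly on CPU and GPU.
    !$acc parallel loop
    do i = 1, n
        a(i) = real(i)
    end do

    ! Option 1: update self; the downloaded values are compared automatically.
    !$acc update self(a)

    ! Option 2: acc_compare with the address and the number of elements;
    ! the data type is known to the OpenACC runtime.
    call acc_compare(a, n)

    ! Option 3: the acc compare directive.
    !$acc compare(a)

    !$acc end data

end program pcast_compare_sketch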

Autocompare is useful for debugging differences between CPU and GPU results during porting. When the developer faces a runtime error during porting, the NVHPC compiler often does not provide much information. In this case, it is useful to set export NVCOMPILER_ACC_NOTIFY=3 in the batch script to get information about data movements and kernel launches, which helps to locate the part of the code responsible for the crash.

Port and enable HIP as a Kokkos backend of the ParFlow model using eDSL

Author: Jörg Benke (JSC)

The main idea of the ParFlow sprint was to port the hydrologic model ParFlow by enabling HIP as a Kokkos backend, using ParFlow's eDSL (embedded Domain-Specific Language). Using the eDSL and Kokkos in principle allows very easy adaptation to different architectures and parallel programming models; this was the case when porting ParFlow to CUDA with Kokkos. Below we briefly list a few points we learnt in this sprint regarding the porting efforts for HIP and Kokkos.

  • Kokkos v4.0 breaks backwards compatibility with Kokkos v3.7 for AMD support. The update to Kokkos v4.0 required some updates to the Kokkos API calls in ParFlow.
  • The Kokkos support team responds very quickly via its Slack channel.
  • The tools of the Kokkos ecosystem (e.g., the Kokkos kernel logger) and the AMD debugging/logging flags (export AMD_LOG_LEVEL=4) are in general very helpful for logging and debugging ParFlow.
  • HIPification via the eDSL and Kokkos is in principle relatively easy. We modified the ParFlow eDSL so that the Kokkos API reaches the HIP backend, mimicking the approach already in place for CUDA.
  • Compiling ParFlow with the HIP Fortran interface of the ROCm environment was challenging, because compilation against the standard installation was not possible. This was also the case on LUMI-G; we circumvented it by cloning HIP Fortran and compiling it ourselves.

Modular Supercomputing Architecture (MSA) concept at JSC

Author: Catrin Meyer (JSC)

The Modular Supercomputing Architecture concept is a novel way of organizing computing resources in HPC systems, developed at the Jülich Supercomputing Centre (JSC) in the course of the DEEP projects. Its key idea is to segregate hardware resources (e.g. CPUs, GPUs, other accelerators) into separate modules such that the nodes within each module are maximally homogeneous. This approach has the potential to bring substantial benefits for heterogeneous applications in which each sub-component has either been tailored to, or by the nature of its numerical solution is better suited to, run efficiently on a specific type of hardware.

General commands to perform heterogeneous simulations on the Jülich HPC systems
Submission commands
  • Multiple independent job specifications are identified on the command line, separated by “:”.
    • salloc <options 0 + common> : <options 1> [ : <options 2>... ]
  • The resulting heterogeneous job will have as many job components as there are blocks of command-line arguments.
  • The first block of arguments contains the job options of the first job component as well as common job options that will apply to all other components. The second block contains options for the second job component and so on.
  • Example:
    • salloc -A budget -p partition_a -N 1 : -p partition_b -N 16
Job scripts
  • Submitting non-interactive heterogeneous jobs through sbatch works similarly, but the syntax for separating blocks of options in a batch script is slightly different.
  • Use #SBATCH hetjob between the other #SBATCH options in the job script to separate job packs and their groups of resources.
  • With srun and the “:” format you can spawn job steps using heterogeneous resources.
  • With srun’s --het-group option you can define which group of resources is used for each job step.
    • srun <options and command 0> : <options and command 1> [ : <options and command 2> ]
    • The first block applies to the first component, the second block to the second component, and so on. If there are fewer blocks than job components, the resources of the latter job components go unused, as no application is launched there.
    • The option --het-group=<expr> can be used to explicitly assign a block of command line arguments to a job component. It takes as its argument <expr> either a single job component index in the range 0 ... n - 1, where n is the number of job components, a range of indices like 1-3, or a comma-separated list of both indices and ranges like 1,3-5.
Example of a jobscript

#!/bin/bash

#SBATCH -N 32 --ntasks-per-node=8 -p batch

#SBATCH hetjob

#SBATCH -N 1 --ntasks-per-node=1 -p batch

#SBATCH hetjob

#SBATCH -N 16 --gres=gpu:4 --ntasks-per-node=4 -p gpus

srun --het-group=0 exec0 : --het-group=1 exec1 : --het-group=2 exec2

Example of a scientific use case

The ICON model was ported to and optimized on the MSA system Jülich Wizard for European Leadership Science (JUWELS) at the Jülich Supercomputing Centre, in such a manner that the atmosphere component of ICON runs on GPU-equipped nodes, while the ocean component runs on standard CPU-only nodes. Both the atmosphere and the ocean run globally at a resolution of 5 km. In our test case, an optimal configuration in terms of model performance (core hours per simulation day) was found for the combination of 84 GPU nodes on the JUWELS Booster module and 80 CPU nodes on the JUWELS Cluster module, of which 63 nodes were used for the ocean simulation and the remaining 17 nodes were reserved for I/O. With this configuration the waiting times of the coupler were minimized. Compared to a simulation performed on CPUs only, the energy consumption of the MSA approach was nearly halved with comparable runtimes.

More details can be found in: Bishnoi, A., Stein, O., Meyer, C. I., Redler, R., Eicker, N., Haak, H., Hoffmann, L., Klocke, D., Kornblueh, L., and Suarez, E.: Earth system modeling on Modular Supercomputing Architectures: coupled atmosphere-ocean simulations with ICON 2.6.6-rc, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2023-1476, 2023.

Poor implementation of the OpenACC PRIVATE clause in the NVHPC compiler

Author: Enrico Degregori (DKRZ)

A single-column component of a climate model is characterized by dependencies in the vertical coordinate. In this case it is usual to allocate temporary arrays that are reused on every iteration over the surface points. An example with ICON data structures:

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    DO jc = start_index, end_index
        levels = dolic_c(jc,jb)
        IF (dolic_c(jc,jb) > 0) THEN
            DO level = 1, levels
                pressure(level) = fill_pressure_ocean_physics()
            END DO
            DO level = 1, levels-1
                rho_up(level) = calculate_density(&
                              & temp(jc,level,jb), &
                              & salt(jc,level,jb), &
                              & pressure(level+1))
            END DO
            DO level = 2, levels
                rho_down(level) = calculate_density(&
                                & temp(jc,level,jb), &
                                & salt(jc,level,jb), &
                                & pressure(level))
            END DO
            DO level = 2, levels
                Nsqr(jc,level,jb) = gravConstant * (rho_down(level) - rho_up(level-1)) * &
                                    dzi(jc,level,jb) / s_c(jc,jb)
            END DO
        END IF
    END DO
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

A possible way to port this short example is to parallelize over the jc loop given that the range start_index:end_index is large enough to fill the GPU.

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    !$ACC PARALLEL LOOP GANG VECTOR PRIVATE(rho_up, rho_down, pressure) DEFAULT(PRESENT)
    DO jc = start_index, end_index
        levels = dolic_c(jc,jb)
        IF (dolic_c(jc,jb) > 0) THEN
            DO level = 1, levels
                pressure(level) = fill_pressure_ocean_physics()
            END DO
            DO level = 1, levels-1
                rho_up(level) = calculate_density(&
                              & temp(jc,level,jb), &
                              & salt(jc,level,jb), &
                              & pressure(level+1))
            END DO
            DO level = 2, levels
                rho_down(level) = calculate_density(&
                                & temp(jc,level,jb), &
                                & salt(jc,level,jb), &
                                & pressure(level))
            END DO
            DO level = 2, levels
                Nsqr(jc,level,jb) = gravConstant * (rho_down(level) - rho_up(level-1)) * &
                                    dzi(jc,level,jb) / s_c(jc,jb)
            END DO
        END IF
    END DO
    !$ACC END PARALLEL LOOP
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

This parallelization strategy gives coalesced memory access and, assuming that the horizontal resolution is high enough, maximal throughput can be achieved. However, when profiling the kernel (using Nsight Systems or Nsight Compute), the performance is poor and the occupancy is quite low because of high register utilization. The reason is that the compiler does not allocate the PRIVATE variables in global memory; since their size is equal to the number of vertical levels, the register utilization increases and, as a consequence, the occupancy decreases.

A possible workaround (purely for GPU performance) is to extend the temporary variables by an extra dimension, so that they do not have to be PRIVATE and can be allocated in global memory:

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    !$ACC PARALLEL LOOP GANG VECTOR DEFAULT(PRESENT)
    DO jc = start_index, end_index
        levels = dolic_c(jc,jb)
        IF (dolic_c(jc,jb) > 0) THEN
            DO level = 1, levels
                pressure(jc,level) = fill_pressure_ocean_physics()
            END DO
            DO level = 1, levels-1
                rho_up(jc,level) = calculate_density(&
                              & temp(jc,level,jb), &
                              & salt(jc,level,jb), &
                              & pressure(jc,level+1))
            END DO
            DO level = 2, levels
                rho_down(jc,level) = calculate_density(&
                                & temp(jc,level,jb), &
                                & salt(jc,level,jb), &
                                & pressure(jc,level))
            END DO
            DO level = 2, levels
                Nsqr(jc,level,jb) = gravConstant * (rho_down(jc,level) - rho_up(jc,level-1)) * &
                                    dzi(jc,level,jb) / s_c(jc,jb)
            END DO
        END IF
    END DO
    !$ACC END PARALLEL LOOP
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

The kernel performance now shows a significant improvement, but it is still suboptimal because the memory access of rho_up, rho_down and pressure is not coalesced. As a result, it would be necessary to fully restructure the loops until this compiler bug is fixed.

GPU porting towards throughput or time-to-solution

Author: Enrico Degregori (DKRZ)

When porting code to GPU, two different approaches are possible: achieving maximum throughput or minimum time-to-solution. Let's consider a simple example based on ICON data structures:

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    DO jc = start_index, end_index
        levels = dolic_c(jc,jb)
        IF (dolic_c(jc,jb) > 0) THEN
            DO level = 1, levels
                Nsqr(jc,level,jb) = gravConstant * (rho_down(jc,level,jb) - rho_up(jc,level-1,jb)) * &
                                    dzi(jc,level,jb)
            END DO
        END IF
    END DO
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

GPU porting towards maximum throughput would look like this:

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    !$ACC PARALLEL LOOP GANG VECTOR DEFAULT(PRESENT)
    DO jc = start_index, end_index
        levels = dolic_c(jc,jb)
        IF (dolic_c(jc,jb) > 0) THEN
            DO level = 1,levels
                Nsqr(jc,level,jb) = gravConstant * (rho_down(jc,level,jb) - rho_up(jc,level-1,jb)) * &
                                    dzi(jc,level,jb)
            END DO
        END IF
    END DO
    !$ACC END PARALLEL LOOP
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

While porting towards minimum time-to-solution would look like this:

!ICON_OMP_PARALLEL_DO PRIVATE(start_index, end_index) ICON_OMP_DEFAULT_SCHEDULE
DO jb = cells_in_domain%start_block, cells_in_domain%end_block
    CALL get_index_range(cells_in_domain, jb, start_index, end_index)
    max_levels = maxval(dolic_c(start_index:end_index,jb))
    !$ACC PARALLEL LOOP GANG VECTOR COLLAPSE(2) DEFAULT(PRESENT)
    DO level = 1, max_levels
        DO jc = start_index, end_index
            IF (dolic_c(jc,jb) > 0 .AND. level <= dolic_c(jc,jb)) THEN
                Nsqr(jc,level,jb) = gravConstant * (rho_down(jc,level,jb) - rho_up(jc,level-1,jb)) * &
                                    dzi(jc,level,jb)
            END IF
        END DO
    END DO
    !$ACC END PARALLEL LOOP
END DO ! end loop over blocks
!ICON_OMP_END_PARALLEL_DO

In both cases the memory access is coalesced, but the main difference is the work done by each thread.

- In the first case, the maximum number of threads executing in parallel is equal to the number of surface cells, and each thread computes the operation for every vertical level of its column.

- In the second case, the maximum number of threads executing in parallel is equal to the total number of cells in the full 3D domain (surface cells times vertical levels), and each thread computes the operation only once.

In a high-resolution experiment, if the number of surface cells is enough to use all the SMs on a GPU, the first case gives the maximum throughput. However, if we want to scale this setup to decrease the time-to-solution, the amount of work per GPU decreases and the second case allows us to achieve the minimum time-to-solution (better scaling).
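
For illustration with hypothetical numbers: with one million surface cells and 64 vertical levels, the first variant exposes at most one million parallel threads, while the second exposes up to 64 million. Once the domain is distributed across many GPUs and the per-GPU share of surface cells shrinks, only the second variant still exposes enough parallelism to keep all SMs busy.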