HPC_Imaging

To compile the code, activate or deactivate options in the Makefile as needed. The code options are listed first, followed by the acceleration options.
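As a rough sketch of how the toggling works (the option names below are placeholders, except -fopenmp and -D_OPENMP which are mentioned later in this README; check the Makefile for the real ones), an option is enabled or disabled simply by uncommenting or commenting its line:

############################################

# Code options (placeholder names, for illustration only)
OPT += -fopenmp -D_OPENMP        # enable the OpenMP code paths
#OPT += -DSOME_CODE_OPTION       # hypothetical code option, currently disabled

# Acceleration options (placeholder names, for illustration only)
#OPT += -DSOME_GPU_OPTION        # hypothetical GPU-offload option, currently disabled

############################################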

You can then build the code simply with the command:

############################################

make w-stacking

############################################

The Makefile points to the file Build/Makefile.local, which is essentially complete apart from library paths that may differ on your system; feel free to use it as is or to change SYSTYPE. The aim was to make compilation as simple as possible.
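As an illustration only (the SYSTYPE name, variable names, and paths below are hypothetical, not taken from the repository), a per-system block in Build/Makefile.local typically looks like this:

############################################

# Hypothetical per-system block; adapt names and paths to your machine
SYSTYPE = "my-cluster"

ifeq ($(SYSTYPE),"my-cluster")
CC       = mpicc
CXX      = mpic++
INC_PATH = -I/opt/local/include     # adjust include paths for your system
LIB_PATH = -L/opt/local/lib         # adjust library paths for your system
endif

############################################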

When using GPU offloading with OpenMP, do not compile the CPU part with NVC. If the default compiler is NVC, this is easily fixed by setting the environment variable:

############################################

export OMPI_CC=gcc

###########################################

The Makefile is able to work out which parts must be compiled with NVC for the OpenMP offloading; the final linking step, however, is still performed with NVC/NVC++.
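Assuming the MPI library is Open MPI (whose compiler wrappers honour the OMPI_CC / OMPI_CXX variables), you can verify which backend compiler the wrapper will actually invoke:

############################################

# With Open MPI, override the backend C compiler and check the wrapper
export OMPI_CC=gcc
mpicc --showme        # prints the full command line the wrapper will use

############################################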

The problem does not arise on AMD platforms, because clang/clang++ is used for both CPUs and GPUs.

The extension of the executable changes depending on the acceleration options selected.

To run the code, a parameter file is provided in data/paramfile.txt. Feel free to change the parameters, e.g. the path to the visibilities, which reduce implementation to use, the number of pixels, the number of OpenMP threads, and so on.
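As a purely illustrative sketch (the key names below are placeholders; refer to the actual data/paramfile.txt shipped with the code for the real ones), the parameter file is a plain-text list of key/value pairs along these lines:

############################################

# Hypothetical parameter names, for illustration only
visibilities_path   /path/to/visibilities/
num_pixels          2048       # image size in pixels
num_threads         4          # OpenMP threads
reduce_method       0          # which reduce implementation to use

############################################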

Once you have compiled the code, simply run it with the command:

###########################################

mpirun -np [n] [executable] data/paramfile.txt

###########################################
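For instance, assuming the build produced an executable named w-stacking (the actual name and extension depend on the acceleration options chosen), a run on 4 MPI ranks would look like:

############################################

# 4 MPI ranks; adjust the executable name to match your build
mpirun -np 4 ./w-stacking data/paramfile.txt

############################################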

If the code has been compiled without the -fopenmp or -D_OPENMP options, it is forced to use the standard MPI_Reduce implementation, since our reduce works only with OpenMP.