## Preparing the dataset
To be digested by RICK, the input Measurement Set must be converted to a binary format. To do this, a Python script named `create_binMS.py` can be found in the `/scripts` directory. The supported input format is the Measurement Set V2.0 standard. Since the script needs to read some columns of the input data, we recommend using a Singularity image to deal with the `casacore` dependencies; you can find one [here](https://lofar-webdav.grid.sara.nl/software/shub_mirror/tikk3r/lofar-grid-hpccloud/lofar_sksp_v3.5_x86-64_generic_noavx512_ddf.sif?action=show). The script has to be edited to point to your input Measurement Set.
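For example, a possible invocation through the Singularity image might look like this (the image filename and script path are assumptions; edit the script first so it points to your data):
```
> singularity exec lofar_sksp_v3.5_x86-64_generic_noavx512_ddf.sif python scripts/create_binMS.py
```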
## How to compile the code
The latest version of the code is in the *merge* branch. To compile it, you first need to specify in the Makefile the required configuration for the RICK execution.<br>
To use **cuFFTMp** with **nvhpc 23.5** you have to add the following paths to your environment:
```
> export LD_LIBRARY_PATH="/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-11.3.0/nvhpc-23.5-pdmwq3k5perrhdqyrv2hspv4poqrb2dr/Linux_x86_64/23.5/math_libs/11.8/lib64/:$LD_LIBRARY_PATH"
```
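Once the Makefile is configured, the build itself should reduce to a plain `make` invocation (this is an assumption based on the Makefile-driven setup described above; check the repository for the exact targets):
```
> # after selecting the desired configuration in the Makefile
> make
```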
Once you have compiled the code, run it simply with the command:
```
> mpirun -np [n] [executable] data/paramfile.txt
```
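For example, with 4 MPI tasks and assuming the executable produced by the build is named `rick` (the name is an assumption):
```
> mpirun -np 4 ./rick data/paramfile.txt
```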
When hybrid MPI+OpenMP parallelism on CPU is requested, you have to select the number of threads by setting **--cpus-per-task=** in your batch script, then add the following lines:
```
> export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
> export OMP_PLACES=cores
> export OMP_PROC_BIND=close
```
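Putting it together, a minimal sketch of the corresponding Slurm batch fragment (task and thread counts are placeholders to adapt to your node):
```
#!/bin/bash
#SBATCH --ntasks=4          # number of MPI tasks (placeholder)
#SBATCH --cpus-per-task=8   # OpenMP threads per task (placeholder)

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
```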
Then, to fully fill all the available cores in the node, add **--ntasks-per-socket=** to your batch script and run:
```
> mpirun -np [n] --bind-to core --map-by ppr:${SLURM_NTASKS_PER_SOCKET}:socket:pe=${SLURM_CPUS_PER_TASK} -x OMP_NUM_THREADS [executable] data/paramfile.txt
```
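For concreteness, a hypothetical layout on a node with 2 sockets of 16 cores each, running 4 MPI tasks with 8 OpenMP threads each (all numbers and the executable name are assumptions):
```
> # with --ntasks-per-socket=2 and --cpus-per-task=8 in the batch script
> mpirun -np 4 --bind-to core --map-by ppr:2:socket:pe=8 -x OMP_NUM_THREADS ./rick data/paramfile.txt
```
This binds each MPI task to 8 consecutive cores, so the 4 tasks with 8 threads each exactly fill the 32 cores of the node.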
## Contacts
For feedback, suggestions, and requests, [contact us](mailto:emanuele.derubeis2@unibo.it): emanuele.derubeis2@unibo.it.