Commit 22bf485a authored by Giacomo Mulas

Merge branch 'prepare_md_docs' into 'master'

Prepare MarkDown documents

See merge request giacomo.mulas/np_tmcode!2
parents 801bdba4 3a658826
# Folder instructions
This directory collects all the output of `make` builds.
## Instructions
The original code produces output in the current working directory (the path where the code is executed from). The build directory is intended to collect local builds and test run output in a safe place, without cluttering the code development folders, thus helping `git` to filter out unnecessary logs through `.gitignore`.
## Code work-flow
This section describes the use of the pre-existing programs, once the binaries have been properly built by a successful run of `make` in the `src` folder.
### cluster
1. cd to the `build/cluster` folder
2. run `edfb`
> ./edfb
3. run `clu`
> ./clu
*NOTE:* both `edfb` and `clu` expect their input in a folder named `../../test_data/cluster/` (i.e. two levels above the current execution path)
*TODO:* set up a code variable to locate the input data (data file paths should not be hard-coded)
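The *TODO* above asks for a configurable data location. As an illustration of the idea only (the original programs are not written in Python, and `NP_TMCODE_DATA` is a hypothetical variable name, not one the real code recognises), an environment-variable override could look like:

```python
import os

def input_dir(default="../../test_data/cluster/"):
    """Return the data directory, letting an environment variable override
    the hard-coded relative path (NP_TMCODE_DATA is a hypothetical name)."""
    return os.environ.get("NP_TMCODE_DATA", default)

print(input_dir())  # falls back to the hard-coded default when unset
```

Until something along these lines exists, the relative path above must be respected.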
### sphere
1. cd to the `build/sphere` folder
2. run `edfb`
> ./edfb
3. run `sph`
> ./sph
*NOTE:* both `edfb` and `sph` expect their input in a folder named `../../test_data/sphere/` (i.e. two levels above the current execution path)
*TODO:* set up a code variable to locate the input data (data file paths should not be hard-coded)
### trapping
The execution of trapping programs requires at least one of the previous programs to have produced a complete output set.
*TODO:* investigate which conditions allow `clu` or `sph` to write `TTMS` output files.
# Folder instructions
This directory collects the source code of the original programs and the development folders.
## Instructions
The original code is contained in the folders named `cluster`, `sphere` and `trapping`. Each folder contains a `Makefile` to compile either the whole program set or the single programs. A global `Makefile`, which contains instructions to build all the original source code, is available directly in the `src` folder.
In all cases, build commands executed through `make` will output the object files and the linked binaries in the proper folders under the build directory.
# Folder instructions
This directory collects the input files to test the code.
## Instructions
The execution of the original code can be controlled through a set of configuration files that define the characteristics of the problem and affect the type of output. The following sections describe the contents of the configuration files and, to some extent, those of the output files, presenting one code in each section.
### cluster
`cluster` is designed to calculate a complex geometry made up of many spheres. These can be either fully embedded in a larger sphere or separated within the external medium. Compenetration (partial overlap) of spheres is not accounted for.
*TODO:* add the description of the cluster configuration files
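The geometric constraint above can be made concrete with a small check; the sketch below is illustrative Python, not part of the `cluster` code:

```python
import math

def compenetrate(c1, r1, c2, r2):
    """Return True if two spheres partially overlap (compenetration),
    which the cluster geometry does not allow; full embedding of one
    sphere in the other and complete separation are both acceptable."""
    d = math.dist(c1, c2)                          # distance between centres
    fully_embedded = d + min(r1, r2) <= max(r1, r2)
    separated = d >= r1 + r2
    return not (fully_embedded or separated)
```

A configuration in which `compenetrate` returns `True` for any pair of spheres would fall outside what the program accounts for.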
### sphere
`sphere` is designed to perform the simplest case calculation, namely the scattering of incident radiation on a single sphere. To perform the calculation, the following two formatted files need to be provided:
*TODO:* write the DEDFB documentation
- DSPH
```
SPHERE_NUMBER MAXIMUM_L_ORDER POLARIZATION_STATUS TRANSITION_SHARPNESS_1 TRANSITION_SHARPNESS_2 GEOMETRY
STARTING_INC_THETA INC_THETA_STEP FINAL_INC_THETA STARTING_SCA_THETA SCA_THETA_STEP FINAL_SCA_THETA
STARTING_INC_PHI INC_PHI_STEP FINAL_INC_PHI STARTING_SCA_PHI SCA_PHI_STEP FINAL_SCA_PHI
WRITE_INTERMEDIATE_FILE
EOF_CODE
```
where the different lines have the following roles:
1. general configuration of the scattering problem, with some specification of the transition between materials
2. definition of the elevation angle arrays for the incident and scattered radiation fields
3. definition of the azimuth angle arrays for the incident and scattered radiation fields
4. a flag to set whether the intermediate data should be written to output files
5. an end-of-file code (generally 0)
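To make the layout concrete, a minimal Python sketch that splits such a file into its five documented records; the sample values are hypothetical placeholders, not taken from any shipped test case:

```python
def parse_dsph(text):
    """Split a DSPH-style file into its five documented records."""
    rows = [line.split() for line in text.strip().splitlines()]
    general, theta, phi, flag, eof = rows
    return {
        "general": [int(v) for v in general],        # line 1: scattering setup
        "theta": [float(v) for v in theta],          # line 2: elevation angle arrays
        "phi": [float(v) for v in phi],              # line 3: azimuth angle arrays
        "write_intermediate": int(flag[0]),          # line 4: intermediate-output flag
        "eof_code": int(eof[0]),                     # line 5: end-of-file code
    }

sample = """\
1 4 0 1 1 1
0.0 5.0 180.0 0.0 5.0 180.0
0.0 10.0 360.0 0.0 10.0 360.0
1
0
"""
config = parse_dsph(sample)
print(config["eof_code"])  # → 0, the usual end-of-file code
```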