Simple communication test that involves 5 docker containers:
- client (container_name: client, commands available: 'vos_data')
- server (container_name: transfer_service)
- RabbitMQ (container_name: rabbitmq)
- Redis (container_name: redis)
- File catalog (container_name: file_catalog), now available here: 
  https://www.ict.inaf.it/gitlab/vospace/vospace-file-catalog

In addition to these containers, Sonia Zorba modified 'docker-compose.yml' by adding the REST portion.
The images used for this purpose are:
- git.ia2.inaf.it:5050/vospace/vospace-oats
- git.ia2.inaf.it:5050/vospace/vospace-file-service
- git.ia2.inaf.it:5050/vospace/vospace-ui

The web interface is available in your browser at http://localhost:8080/ once all the containers are up and
running (see the section below).
  
###############################################################################################################

You can start the whole environment from the 'vos-ts' directory with:
$ docker-compose up

Once all the containers are up and running, open another shell and access the 'client' container:
$ docker exec -it client /bin/bash

Now you can launch the 'vos_data' command.
Launching the client without any argument will show you how to use it:

client@28970a09202d:~$ vos_data

NAME
       vos_data

SYNOPSYS
       vos_data COMMAND USERNAME

DESCRIPTION
       The purpose of this client application is to notify to the VOSpace backend that
       data is ready to be saved somewhere.
       
       The client accepts only one (mandatory) command at a time.
       A list of supported commands is shown here below:

       cstore
              performs a 'cold storage' request, data will be saved on tape

       hstore
              performs a 'hot storage' request, data will be saved to disk

       The client also needs to know the username associated to a storage request process.
       The username must be the same used for accessing the transfer node.

       
For example, if we want to perform a 'cold storage' request for the 'curban' user, we do:
client@28970a09202d:~$ vos_data cstore curban

Choose one of the following storage locations:

----------------------------------------------------------------------
[*] storage_id: 1    =>   hostname: tape-fe.ia2.inaf.it
----------------------------------------------------------------------

Please, insert a storage id: 1

!!!!!!!!!!!!!!!!!!!!!!!!!!WARNING!!!!!!!!!!!!!!!!!!!!!!!!!!!
If you confirm, all your data on the transfer node will be
available in read-only mode for all the time the storage
process is running.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Are you sure to proceed? [yes/no]: yes

JobID: c63697eafbf711eaa44d0242ac1c0008
Storage process started successfully!

client@28970a09202d:~$


After receiving this request the application will:
1) Create a job object, insert it into the job table of the file catalog database and push a copy into the
   'write_pending' queue stored in Redis for scheduling purposes
2) Scan the content of '/home/curban/store/' to find crowded 'leaf' directories and replace them with an
   uncompressed tar archive, according to constraints defined in the global configuration file
3) Re-scan the folder, move the content into a temporary folder if needed and compute recursive MD5 checksums
4) Re-scan the folder one last time to obtain the final directory structure
5) Insert information about files and folders into the Node table of the file catalog, according to the VOSpace
   specification
6) If all the previous steps succeeded, move the job from the 'write_pending' queue to the 'write_ready' queue
   in Redis
7) Obtain the physical paths from the VOSpace paths of the nodes and copy all the data to the right destination,
   according to the information previously provided by the user
8) Clean up the '/home/curban/store/' directory (remove data and set the right permissions) and update the
   database (the async_trans flag is set to 'true').
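The recursive MD5 checksum of step 3 can be sketched as follows. This is only a minimal illustration of the
technique, not the actual archiver code:

```python
import hashlib
import os

def file_md5(path, chunk_size=65536):
    """MD5 of a single file, read in chunks to keep memory bounded."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def recursive_md5(root):
    """Walk a directory tree and return {relative_path: md5} for every file."""
    checksums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            checksums[os.path.relpath(full, root)] = file_md5(full)
    return checksums
```

The per-file digests are what ends up in the content_md5 column of the Node table shown further below.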

   
You can also import nodes into the VOSpace file catalog from data that is already stored somewhere.
For example, suppose we have a hot storage mounted on /mnt/hot_storage/users and visible from the transfer node.
Our user folder will then be, for example, /mnt/hot_storage/users/curban.

On the transfer node you will find a directory called 'test_import' containing some data to be used for an import
test.

First of all, launch vos_import without any argument in order to see how to use it:

client@28970a09202d:~$ vos_import 

NAME
       vos_import

SYNOPSYS
       vos_import DIR_PATH USERNAME

DESCRIPTION
       This tool recursively imports nodes on the VOSpace file catalog.
       
       Two parameters are required:

       DIR_PATH:
           the physical absolute path of a directory located within the 
           user directory for a given mount point.
           
       USERNAME:
           the username used for accessing the transfer node.
           
EXAMPLE
      The following command will import recursively all the nodes contained
      in 'mydir' on the VOSpace for the 'jsmith' user:
      
      # vos_import /mnt/storage/users/jsmith/mydir jsmith   
    
client@28970a09202d:~$

Now, launch the import command to import the 'test_import' directory:

client@28970a09202d:~$ vos_import /mnt/hot_storage/users/curban/test_import curban

Import procedure completed!

client@28970a09202d:~$

This kind of operation works only for directories located at the first level of your user folder.
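The first-level restriction can be stated precisely: the imported directory's parent must be the user folder
itself. A hypothetical client-side check (function and argument names are illustrative; the real validation is
performed by the service):

```python
import os

def is_first_level(dir_path, user_dir):
    """True if dir_path sits directly inside the user's folder.

    vos_import only accepts directories whose parent is the user folder,
    e.g. /mnt/hot_storage/users/curban/test_import for user 'curban'."""
    return os.path.dirname(os.path.normpath(dir_path)) == os.path.normpath(user_dir)
```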


###############################################################################################################
     
You can access the RabbitMQ web interface from your browser:
    1) Find the IP address of the RabbitMQ broker:
    $ docker network inspect vos-ts_backend_net | grep -i -A 3 rabbitmq
    2) Open your browser and point it to http://IP_ADDRESS:15672 (user: guest, password: guest)
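If you prefer not to grep, the inspect output (a JSON array of network objects) can be parsed directly. A minimal
sketch, assuming the standard 'docker network inspect' JSON layout (a "Containers" map whose entries carry "Name"
and a netmask-suffixed "IPv4Address"):

```python
import json

def container_ip(inspect_output, container_name):
    """Extract a container's IP address from `docker network inspect` output.

    The output is a JSON array of networks; each network has a "Containers"
    map keyed by container id, whose values include "Name" and an
    "IPv4Address" like "172.18.0.5/16" (netmask stripped here)."""
    for network in json.loads(inspect_output):
        for container in network.get("Containers", {}).values():
            if container.get("Name") == container_name:
                return container["IPv4Address"].split("/")[0]
    return None
```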

You can access the redis server from the 'client' container:
    1) Use redis-cli to connect to redis:
    $ redis-cli -h redis
    2) You can obtain some info about the jobs by inspecting the 'write_pending' and 'write_ready' queues
       with the LRANGE command. For example, a few seconds after launching three jobs with 'dataArchiverCli.py',
       you should see an output similar to the following one:
    redis:6379[2]> lrange write_ready 0 5
    1) "{\"jobId\": \"56577c8645da11ebbbfe356e379843eb\", \"jobType\": \"other\", \"ownerId\": \"2386\", \"phase\": \"PENDING\", 
    \"quote\": null, \"startTime\": null, \"endTime\": null, \"executionDuration\": null, \"destruction\": null, \"parameters\": null, 
    \"results\": null, \"jobInfo\": {\"requestType\": \"HSTORE\", \"userName\": \"szorba\"}}"
    2) "{\"jobId\": \"53d2f2a545da11ebb7bd356e379843eb\", \"jobType\": \"other\", \"ownerId\": \"2048\", \"phase\": \"PENDING\", 
    \"quote\": null, \"startTime\": null, \"endTime\": null, \"executionDuration\": null, \"destruction\": null, \"parameters\": null, 
    \"results\": null, \"jobInfo\": {\"requestType\": \"CSTORE\", \"userName\": \"sbertocco\"}}"
    3) "{\"jobId\": \"502afdca45da11eb9676356e379843eb\", \"jobType\": \"other\", \"ownerId\": \"3354\", \"phase\": \"PENDING\", 
    \"quote\": null, \"startTime\": null, \"endTime\": null, \"executionDuration\": null, \"destruction\": null, \"parameters\": null, 
    \"results\": null, \"jobInfo\": {\"requestType\": \"CSTORE\", \"userName\": \"curban\"}}"
            
You can access the file catalog from the 'client' container:
    1) Access the db via psql client:
    $ psql -h file_catalog -d vospace_testdb -U postgres
    2) You can now run a query, for example to display some fields of all the tuples in the Node table:
    vospace_testdb=# SELECT node_id, parent_path, path, name, type, owner_id, creator_id, content_MD5 FROM Node;
    
The default output of the query after the container initialization should be something like this:
 
vospace_testdb=# SELECT node_id, parent_path, path, name, tstamp_wrapper_dir, type, owner_id, creator_id, content_MD5 FROM node;
 node_id | parent_path |  path   |    name    | tstamp_wrapper_dir |   type    | owner_id | creator_id | content_md5 
---------+-------------+---------+------------+--------------------+-----------+----------+------------+-------------
       1 |             |         |            |                    | container | 0        | 0          | 
       2 |             | 2       | curban     |                    | container | 3354     | 3354       | 
       3 |             | 3       | sbertocco  |                    | container | 2048     | 2048       | 
       4 |             | 4       | szorba     |                    | container | 2386     | 2386       | 
       5 |             | 5       | test       |                    | container | 2386     | 2386       | 
       6 | 5           | 5.6     | f1         |                    | container | 2386     | 2386       | 
       7 | 5.6         | 5.6.7   | f2_renamed |                    | container | 2386     | 2386       | 
       8 | 5.6.7       | 5.6.7.8 | f3         |                    | data      | 2386     | 2386       | 
(8 rows)
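Note that the path column is a materialized path of node ids: '5.6.7.8' means node 8 is a child of node 7, which
is a child of node 6, and so on up to the root. Reconstructing a human-readable VOSpace path from these rows can
be sketched like this (rows copied from the table above):

```python
# (node_id, path, name) tuples taken from the Node table shown above.
rows = [
    (2, "2", "curban"),
    (3, "3", "sbertocco"),
    (4, "4", "szorba"),
    (5, "5", "test"),
    (6, "5.6", "f1"),
    (7, "5.6.7", "f2_renamed"),
    (8, "5.6.7.8", "f3"),
]

names = {node_id: name for node_id, _path, name in rows}
paths = {node_id: path for node_id, path, _name in rows}

def vospace_path(node_id):
    """Resolve a node's dotted id path into a slash-separated name path."""
    return "/" + "/".join(names[int(seg)] for seg in paths[node_id].split("."))
```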
       
A few seconds after launching three jobs with 'dataArchiverCli.py', the database will be populated and, by
running the previous SQL query, you will see an output like the one here below:

vospace_testdb=# SELECT node_id, parent_path, path, name, tstamp_wrapper_dir, type, owner_id, creator_id, content_MD5 FROM node;
 node_id | parent_path |    path    |       name       | tstamp_wrapper_dir  |   type    | owner_id | creator_id |           content_md5
---------+-------------+------------+------------------+---------------------+-----------+----------+------------+----------------------------------
       1 |             |            |                  |                     | container | 0        | 0          | 
       2 |             | 2          | curban           |                     | container | 3354     | 3354       | 
       3 |             | 3          | sbertocco        |                     | container | 2048     | 2048       | 
       4 |             | 4          | szorba           |                     | container | 2386     | 2386       | 
       5 |             | 5          | test             |                     | container | 2386     | 2386       | 
       6 | 5           | 5.6        | f1               |                     | container | 2386     | 2386       | 
       7 | 5.6         | 5.6.7      | f2_renamed       |                     | container | 2386     | 2386       | 
       8 | 5.6.7       | 5.6.7.8    | f3               |                     | data      | 2386     | 2386       | 
       9 | 2           | 2.9        | mydir            | 2021_01_12-14_48_07 | container | 3354     | 3354       | 
      10 | 2           | 2.10       | foo2.txt         | 2021_01_12-14_48_07 | data      | 3354     | 3354       | e07f37a6bfe96ad66e408380a5e3a899
      11 | 2.9         | 2.9.11     | another_foo2.txt | 2021_01_12-14_48_07 | data      | 3354     | 3354       | e048e5108d71191158b50052d531b0ca
      12 | 3           | 3.12       | foo4.txt         | 2021_01_12-14_48_22 | data      | 2048     | 2048       | 5f429d803340bb7748c52b3931ed54cf
      13 | 4           | 4.13       | aaa              |                     | container | 2386     | 2386       | 
      14 | 4.13        | 4.13.14    | bbb              |                     | container | 2386     | 2386       | 
      15 | 4.13.14     | 4.13.14.15 | foo5.txt         |                     | data      | 2386     | 2386       | 262214d5cde30a74997199fb4e220a26
(15 rows)
 
 
From the file catalog database you can also obtain information about jobs, according to the UWS specification.
Just try the following query:
vospace_testdb=# SELECT * FROM job;

###############################################################################################################

Stop the whole environment:
$ docker-compose down

Cleanup:
$ docker image prune -a
$ docker volume prune
