Simple communication test that involves 5 Docker containers:
- client (container_name: test_client, files to run: 'vos_rest_client.py' and 'dataArchiverCli.py')
- server (container_name: transfer_service)
- RabbitMQ (container_name: rabbitmq)
- Redis (container_name: redis)
- File catalog (container_name: file_catalog)

###############################################################################################################

You can start the whole environment by launching the following command from the 'docker' dir:
$ docker-compose up

Once all the containers are up and running, open another shell and access the 'client' container:
$ docker exec -it client /bin/bash

At this point you can launch 'vos_rest_client.py' within the 'client' container using the following syntax:
$ python vos_rest_client.py QUEUE_NAME

For example:
$ python vos_rest_client.py start_job_queue

The output should be something like this:

     client@a89c0bb962f7:~$ python vos_rest_client.py start_job_queue
     Sending transfer request:
     {
        "transfer": {
           "@version": "2.1",
           "target": "vos://example.com!vospace/mydata1",
           "direction": "pullFromVoSpace",
           "protocol": {
              "@uri": "ivo://ivoa.net/vospace/core#httpget"
           }
        }
     }
     Response:
     {
        "jobId": "3ff92acedc9611eabf140242ac1f0007",
        "phase": "PENDING",
        "quote": null,
        "startTime": null,
        "endTime": null,
        "executionDuration": null,
        "destruction": null,
        "parameters": null,
        "jobInfo": {
           "transfer": {
              "@version": "2.1",
              "target": "vos://example.com!vospace/mydata1",
              "direction": "pullFromVoSpace",
              "protocol": {
                 "@uri": "ivo://ivoa.net/vospace/core#httpget"
              }
           }
        }
     }
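
The request document above follows the IVOA VOSpace 2.1 transfer representation. A minimal sketch of how such a payload could be built before being published to the queue (the function and parameter names here are illustrative, not taken from 'vos_rest_client.py'):

```python
import json

def build_transfer_request(target,
                           direction="pullFromVoSpace",
                           protocol_uri="ivo://ivoa.net/vospace/core#httpget"):
    # Build the VOSpace 2.1 transfer document shown in the output above.
    return {
        "transfer": {
            "@version": "2.1",
            "target": target,
            "direction": direction,
            "protocol": {"@uri": protocol_uri},
        }
    }

print(json.dumps(build_transfer_request("vos://example.com!vospace/mydata1"),
                 indent=3))
```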

After processing the request, the server starts an internal thread that, after a 10-second delay, changes the
state of the job from "PENDING" to "EXECUTING".
You can easily verify this change by launching the client again in this other way:
$ python vos_rest_client.py QUEUE_NAME JOB_ID

For example, in our case:
$ python vos_rest_client.py poll_job_queue 3ff92acedc9611eabf140242ac1f0007

The output should be something like this:

     client@a89c0bb962f7:~$ python vos_rest_client.py poll_job_queue 3ff92acedc9611eabf140242ac1f0007
     Sending poll request:
     {
        "jobId": "3ff92acedc9611eabf140242ac1f0007"
     }
     Response:
     {
        "jobId": "3ff92acedc9611eabf140242ac1f0007",
        "phase": "EXECUTING",
        "quote": null,
        "startTime": null,
        "endTime": null,
        "executionDuration": null,
        "destruction": null,
        "parameters": null,
        "jobInfo": {
           "transfer": {
              "@version": "2.1",
              "target": "vos://example.com!vospace/mydata1",
              "direction": "pullFromVoSpace",
              "protocol": {
                 "@uri": "ivo://ivoa.net/vospace/core#httpget"
              }
           }
        }
     }
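
The PENDING-to-EXECUTING transition can be watched with a simple polling loop. A sketch, where the 'poll' callable is a placeholder for whatever actually sends the message on 'poll_job_queue':

```python
import time

def wait_for_phase(poll, job_id, target="EXECUTING",
                   timeout=30.0, interval=1.0):
    # Repeatedly poll the job until it reaches the target phase,
    # or raise TimeoutError once the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = poll(job_id)
        if job["phase"] == target:
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not reach {target}")
```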

---------------------------------------------------------------------------------------------------------------

Another thing you can do is launch the 'dataArchiverCli.py' client.
Launching the client without any arguments will show you how to use it:

client@28970a09202d:~$ python3 dataArchiverCli.py

NAME
       dataArchiverCli.py

SYNOPSIS
       python3.x dataArchiverCli.py COMMAND USERNAME

DESCRIPTION
       The purpose of this client application is to notify the VOSpace backend that
       data is ready to be saved somewhere.
       
       The client accepts only one command at a time. This command is mandatory.
       A list of supported commands is shown below:

       cstore
              performs a 'cold storage' request, data will be saved on the tape library

       hstore
              performs a 'hot storage' request, data will be saved on a standard server

       The client also needs to know the username associated with a storage request process.
       The username must be the same one used for accessing the transfer node.

       
For example, if we want to perform a 'cold storage' request for the 'transfer_service' user, we do:
client@28970a09202d:~$ python3 dataArchiverCli.py cstore transfer_service

Sending CSTORE request...

WARNING!!! WARNING!!! WARNING!!! WARNING!!! WARNING!!!
If you confirm, all your data on the transfer node will be
available in read-only mode for all the time the archiving
process is running.
WARNING!!! WARNING!!! WARNING!!! WARNING!!! WARNING!!!

Are you sure to proceed? [yes/no]: yes

JobID: c63697eafbf711eaa44d0242ac1c0008
Store process started successfully!
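
The help text above implies a strict COMMAND USERNAME argument shape. A sketch of the corresponding argument validation (this is not the actual 'dataArchiverCli.py' code, just an illustration of the documented interface):

```python
# Supported commands, as listed in the help text above.
COMMANDS = {
    "cstore": "cold storage: data goes to the tape library",
    "hstore": "hot storage: data goes to a standard server",
}

def parse_args(argv):
    # Expect exactly COMMAND USERNAME, with COMMAND one of the above.
    if len(argv) != 2 or argv[0] not in COMMANDS:
        raise SystemExit("usage: dataArchiverCli.py {cstore|hstore} USERNAME")
    return argv[0], argv[1]
```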

###############################################################################################################
     
You can access the RabbitMQ web interface via a browser:
    1) Find the IP address of the RabbitMQ broker:
    $ docker network inspect vos-ts_backend_net | grep -i -A 3 rabbitmq
    2) Open your browser and point it to http://IP_ADDRESS:15672 (user: guest, password: guest)

You can access the redis server from the 'client' container:
    1) Use redis-cli to connect to redis:
    $ redis-cli -h redis -n DB_INDEX
    NOTE: DB_INDEX is a non-negative number representing the db to work on:
    - 0: jobs that retrieve data (pullFromVOSpace) 
    - 1: jobs that store data (push)
    - 2: scheduling queues
    2) You can now perform a query based on the job ID; for example, to show the job object info stored on db = 1:
    get JOB_ID (with the last example above: "get c63697eafbf711eaa44d0242ac1c0008")
    3) This will return all the information regarding the job
            
You can access the file catalog from the 'client' container:
    1) Access the db via psql client:
    $ psql -h file_catalog -U postgres -d vospace_testdb
    2) You can now perform a query; for example, show selected fields for all the tuples of the Node table:
    vospace_testdb=# SELECT node_id, parent_path, path, name, type, owner_id, creator_id, content_MD5 FROM Node;
    
 The default output of the query after the container initialization should be something like this:
 
  node_id | parent_path | path |  name  |   type    | owner_id | creator_id | content_md5 
 ---------+-------------+------+--------+-----------+----------+------------+--------------
        1 |             |      |        | container | 3354     | 3354       |             
        2 |             | 2    | home   | container | 3354     | 3354       |             
        3 | 2           | 2.3  | curban | container | 3354     | 3354       |             
 (3 rows)
       
 Now, open a shell on the 'transfer_service' container and launch 'store_preprocessor.py', with:
 $ docker exec -it transfer_service /bin/bash
 $ python3 store_preprocessor.py 
 
 The new output of the query will be the following, because a new file called 'foo.txt' was placed in /home/curban/store:

  node_id | parent_path |   path    |        name         |   type    | owner_id | creator_id |           content_md5            
 ---------+-------------+-----------+---------------------+-----------+----------+------------+----------------------------------
        1 |             |           |                     | container | 3354     | 3354       | 
        2 |             | 2         | home                | container | 3354     | 3354       | 
        3 | 2           | 2.3       | curban              | container | 3354     | 3354       | 
        4 | 2.3         | 2.3.4     | store               | container | 3354     | 3354       | 
        5 | 2.3.4       | 2.3.4.5   | 2020_12_02-11_31_56 | container | 3354     | 3354       | 
        6 | 2.3.4.5     | 2.3.4.5.6 | foo.txt             | data      | 3354     | 3354       | d41d8cd98f00b204e9800998ecf8427e
 (6 rows)
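
 The 'path' and 'parent_path' columns encode the tree as materialized paths of node_ids (node 6, '2.3.4.5.6', is foo.txt inside 2020_12_02-11_31_56, inside store, and so on). Assuming that interpretation, the full pathname of a node can be reconstructed from the rows like this:

```python
def full_path(rows, node_id):
    # rows maps node_id -> (path, name), where path is the dotted
    # chain of node_ids from the table above (e.g. "2.3.4.5.6").
    path = rows[node_id][0]
    parts = [rows[int(i)][1] for i in path.split(".")]
    return "/" + "/".join(p for p in parts if p)

# Rows taken from the query output above (root node 1 omitted,
# since its path is empty).
rows = {
    2: ("2", "home"),
    3: ("2.3", "curban"),
    4: ("2.3.4", "store"),
    5: ("2.3.4.5", "2020_12_02-11_31_56"),
    6: ("2.3.4.5.6", "foo.txt"),
}
print(full_path(rows, 6))   # /home/curban/store/2020_12_02-11_31_56/foo.txt
```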
 
 
 To stop the whole environment:
 $ docker-compose down
 
 Cleanup (note: these prune commands remove ALL unused images and volumes on the host, not just this project's):
 $ docker image prune -a
 $ docker volume prune
