CSP.LMC project
===========================
[![Documentation Status](https://readthedocs.org/projects/csp-lmc/badge/?version=latest)](https://developer.skatelescope.org/projects/csp-lmc/en/latest/?badge=latest)
[![coverage report](https://gitlab.com/ska-telescope/csp-lmc/badges/master/coverage.svg)](https://ska-telescope.gitlab.io/csp-lmc/)
[![pipeline status](https://gitlab.com/ska-telescope/csp-lmc/badges/master/pipeline.svg)](https://gitlab.com/ska-telescope/csp-lmc/pipelines)
## Table of contents
* [Introduction](#introduction)
* [Repository](#repository)
* [CSP.LMC Common Package](#csp-lmc-common)
* [Create the CSP.LMC Common Software python package](#python-package)
* [CSP_Mid LMC](#CSP_Mid.LMC)
* [CSP_Mid LMC Deployment in Kubernetes](#mid-CSP.LMC-Kubernetes-Deployment-via-Helm-Charts)
* [CSP_Low LMC](#csp-low-lmc)
* [Run in containers](#how-to-run-in-docker-containers)
* [Known bugs](#known-bugs)
* [Troubleshooting](#troubleshooting)
* [License](#license)
## Introduction
General requirements for the monitor and control functionality are the same for both the SKA MID and LOW telescopes.
In addition, two of the three other CSP Sub-elements, namely the `Pulsar Search` and the `Pulsar Timing`, have the same functionality and use the same design in both telescopes.
Functionality common to `CSP_Low.LMC` and `CSP_Mid.LMC` includes: communication framework, logging, archiving, alarm generation,
sub-arraying, some of the functionality related to handling observing mode changes, `Pulsar Search` and
`Pulsar Timing`, and to some extent Very Long Baseline Interferometry (`VLBI`).
The difference between `CSP_Low.LMC` and `CSP_Mid.LMC` is mostly due to different receivers (dishes vs stations) and
different `CBF` functionality and design.
To maximize code reuse, the software common to `CSP_Low.LMC` and `CSP_Mid.LMC` is developed by the work
package `CSP_Common.LMC` and provided to work packages `CSP_Low.LMC` and `CSP_Mid.LMC`, to
be used as a base for telescope-specific `CSP.LMC` software.
## Repository organization
To simplify access to the whole CSP.LMC software, the `CSP_Common.LMC`, `CSP_Low.LMC` and `CSP_Mid.LMC` software packages are hosted in the same SKA GitLab repository, named `CSP.LMC`.
The `CSP.LMC` repository is organized in three main folders, `csp-lmc-common`, `csp-low-lmc` and `csp-mid-lmc`, each with
the same internal organization:
* project source: contains the project-specific TANGO Device Class files
* pogo: contains the POGO files of the TANGO Device Classes of the project
* tests: contains the tests
* charts: contains the Helm charts to deploy the Mid CSP.LMC system in a Kubernetes environment
* docker: contains the `docker`, `docker-compose` and `dsconfig` configuration files as well as
the Makefile to generate the Docker image and run the tests.
To get a local copy of the repository:
```bash
git clone https://gitlab.com/ska-telescope/csp-lmc.git
```
## Prerequisites
* A TANGO development environment properly configured, as described in [SKA developer portal](https://developer.skatelescope.org/en/latest/tools/tango-devenv-setup.html)
* [SKA Base classes](https://gitlab.com/ska-telescope/lmc-base-classes)
* Access to a K8s/minikube cluster.
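
As a quick sanity check of the local environment before deploying the charts (assuming `kubectl`, `helm` and `minikube` are already installed and on the PATH), one can run for example:
```bash
# Check the local cluster tooling; these commands only report
# versions/status and do not modify the cluster.
kubectl version --client
helm version
minikube status
```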
# CSP_Mid.LMC
The TANGO devices of the CSP_Mid.LMC prototype run in a containerised environment.
Currently only a limited number of CSP_Mid.LMC and CBF_Mid.LMC devices are run in Docker containers:
* the MidCspMaster and MID CbfMaster
* the MidCspCapabilityMonitor devices
* three instances of the CSP_Mid and CBF_Mid subarrays
* four instances of the Very Coarse Channelizer (VCC) devices
* four instances of the Frequency Slice Processor (FSP) devices
* two instances of the TM TelState Simulator devices
* one instance of the TANGO database
## Containerised Mid CSP.LMC in Kubernetes
The Mid CSP.LMC containerised TANGO servers are managed via Kubernetes.
The system is set up so that each k8s Pod has only one Docker container, which in turn
runs only one TANGO Device Server application.
Mid CSP.LMC TANGO Servers rely on two different Docker images: `mid-csplmc` and `mid-cbf-mcs`.
The first one runs the CSP.LMC TANGO devices and the second those of the Mid CBF.LMC prototype.
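To quickly see which image each pod is running (a sketch that assumes the deployment described in the next section, with the pods in the `csp-proto` namespace):
```bash
# List every pod together with the image of its (single) container.
kubectl get pods -n csp-proto \
  -o custom-columns=POD:.metadata.name,IMAGE:.spec.containers[0].image
```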
### Mid CSP.LMC Kubernetes Deployment via Helm Charts
The deployment of the system is handled by Helm, via Helm charts: a set of YAML files describing
how the Kubernetes resources are related.
The Mid CSP.LMC Helm charts are stored in the `charts` directory, organized in two sub-folders:
* csp-proto with the Helm chart to deploy only the CSP.LMC devices (MidCspCapabilityMonitor, MidCspMaster and MidCspSubarray)
* mid-csp with the Helm chart to deploy the whole Mid CSP.LMC system, including the TANGO Database and the Mid CBF.LMC devices.
In particular, the `mid-csp` chart depends on the CSP.LMC, CBF.LMC and Tango DB charts; these dependencies are
dynamically linked by specifying the `dependencies` field in `Chart.yaml`.
The `Makefile` in the csp-lmc-mid root directory provides the targets to deploy the system, stop the running services and run
the tests locally, on a k8s/minikube machine.
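If the chart dependencies need to be refreshed outside the Makefile, the standard Helm command can be used; the chart path below is an assumption based on the `charts` layout described above:
```bash
# Resolve and download the charts listed under the `dependencies`
# field of Chart.yaml (chart path assumed from the repository layout).
helm dependency update charts/mid-csp
```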
To deploy the whole Mid CSP.LMC system run:
``` bash
make deploy
```
which installs the `mid-csp` Helm chart with `test` as the release name in the `csp-proto` namespace.
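Under the hood this is roughly equivalent to a plain Helm install; the exact flags and chart path used by the Makefile may differ, so the command below is only an approximation:
```bash
# Approximate equivalent of `make deploy`: install the mid-csp chart
# as release `test` into the `csp-proto` namespace.
helm install test charts/mid-csp --namespace csp-proto --create-namespace
```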
Running the command:
```bash
helm list -n csp-proto
```
produces an output like the one below:
```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
test csp-proto 1 2020-09-21 10:07:19.308839059 +0200 CEST deployed mid-csp-0.1.0 0.6.8
```
To list all the pods and services in the `csp-proto` namespace, issue the command:
```bash
kubectl get all -n csp-proto
```
which produces the following output:
```
NAME READY STATUS RESTARTS AGE
pod/databaseds-tango-base-test-0 1/1 Running 0 2m48s
pod/mid-cbf-cbf-proto-cbfmaster-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-cbfsubarray01-test-0 1/1 Running 1 2m50s
pod/mid-cbf-cbf-proto-cbfsubarray02-test-0 1/1 Running 1 2m48s
pod/mid-cbf-cbf-proto-cbfsubarray03-test-0 1/1 Running 1 2m48s
pod/mid-cbf-cbf-proto-fsp01-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-fsp02-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-fsp03-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-fsp04-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test-0 1/1 Running 0 2m48s
pod/mid-cbf-cbf-proto-vcc001-test-0 1/1 Running 3 2m47s
pod/mid-cbf-cbf-proto-vcc002-test-0 1/1 Running 3 2m50s
pod/mid-cbf-cbf-proto-vcc003-test-0 1/1 Running 3 2m50s
pod/mid-cbf-cbf-proto-vcc004-test-0 1/1 Running 3 2m49s
pod/mid-cbf-configurator-cbf-proto-test-m6j2p 0/1 Error 0 2m50s
pod/mid-cbf-configurator-cbf-proto-test-qm8xg 0/1 Completed 0 2m15s
pod/midcsplmc-configurator-csp-proto-test-d7hmp 0/1 Completed 0 2m15s
pod/midcsplmc-configurator-csp-proto-test-qnks4 0/1 Error 0 2m50s
pod/midcsplmc-csp-proto-midcapabilitymonitor-test-0 1/1 Running 3 2m48s
pod/midcsplmc-csp-proto-midcspmaster-test-0 1/1 Running 0 2m50s
pod/midcsplmc-csp-proto-midcspsubarray01-test-0 1/1 Running 1 2m50s
pod/midcsplmc-csp-proto-midcspsubarray02-test-0 1/1 Running 1 2m50s
pod/midcsplmc-csp-proto-midcspsubarray03-test-0 1/1 Running 1 2m50s
pod/tango-base-tangodb-0
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/databaseds-tango-base-test NodePort 10.103.37.75 10000:31664/TCP 2m50s
service/mid-cbf-cbf-proto-cbfmaster-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray01-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray02-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray03-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp01-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp02-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp03-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp04-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc001-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc002-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc003-test ClusterIP None 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc004-test ClusterIP None 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcapabilitymonitor-test ClusterIP None 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspmaster-test ClusterIP None 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray01-test ClusterIP None 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray02-test ClusterIP None 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray03-test ClusterIP None 1234/TCP 2m50s
service/tango-base-tangodb NodePort 10.102.174.225 3306:30633/TCP 2m50s
NAME READY AGE
statefulset.apps/databaseds-tango-base-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfmaster-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray01-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray02-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray03-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp01-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp02-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp03-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp04-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc001-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc002-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc003-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc004-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcapabilitymonitor-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspmaster-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray01-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray02-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray03-test 1/1 2m50s
statefulset.apps/tango-base-tangodb 1/1 2m50s
NAME COMPLETIONS DURATION AGE
job.batch/mid-cbf-configurator-cbf-proto-test 1/1 61s 2m50s
job.batch/midcsplmc-configurator-csp-proto-test 1/1 59s 2m50s
```
The helm release can be deleted and the application stopped using the command:
```bash
make delete
```
which uninstalls the `mid-csp` chart and deletes the `test` release in the `csp-proto` namespace.
Other Makefile targets, such as `describe` and `logs`, provide some useful information when the system has been properly deployed.
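The same information can also be obtained directly with `kubectl`; for example (the pod name is taken from the listing above):
```bash
# Inspect a single pod and follow its logs.
kubectl describe pod/midcsplmc-csp-proto-midcspmaster-test-0 -n csp-proto
kubectl logs -f pod/midcsplmc-csp-proto-midcspmaster-test-0 -n csp-proto
```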
## Run integration tests on a local k8s/minikube cluster
The project includes a set of tests for the `MidCspMaster` and `MidCspSubarray` TANGO Devices
that can be found in the project `tests` folder.
To run the tests on the local k8s cluster, issue the command:
```bash
make k8s_test
```
from the root project directory.
This command first deploys the system and then executes the integration tests.
After tests end, run the command:
```bash
make delete
```
to uninstall the Helm charts of the `test` release.
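For reference, a complete local cycle therefore looks like this (using the Makefile targets described above):
```bash
# Deploy the charts, run the integration tests, then clean up.
make k8s_test   # deploys the system and runs the tests
make delete     # uninstalls the `test` release
```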
## GitLab continuous integration tests
Continuous integration tests in GitLab rely on the `.gitlab-ci.yml` configuration file, which provides all the scripts to build, test and deploy the application.
This file has been updated to run the tests in a K8s environment, and any reference to the use of `docker-compose` as container manager
has been removed.
A new job has been added to the pipeline `publish` stage to release the `csp-proto` Helm chart to the SKA Helm chart repository hosted under `nexus`.
## Docker-compose support
Support for `docker-compose` has not been completely removed, even though all the main operations are
performed in a Kubernetes environment.
Use of `docker-compose` has been maintained only to simplify development on machines that
cannot run minikube in a virtual machine.
The `docker` folder of the project contains all the files required to run the system via the
docker-compose tool.
From the `docker` folder of the project, one can still:
* build the image by running `make build`
* start the system containers with docker-compose by executing `make up`
* run the tests on the local machine by calling `make test`
The Docker containers running the CBF_Mid devices are instantiated by pulling the `mid-cbf-mcs:test` project image from the [Nexus repository](https://nexus.engageska-portugal.pt).
The CSP_Mid.LMC project provides a [Makefile](Makefile) to start the system containers and the tests.
The containerised environment relies on three YAML configuration files:
* `mid-csp-tangodb.yml`
* `mid-csp-lmc.yml`
* `mid-cbf-mcs.yml`
Each file includes the stages to run the `CSP_Mid.LMC` TANGO DB, the `CSP_Mid.LMC` devices and the `Mid-CBF.LMC` TANGO Devices inside separate Docker containers.
These YAML files are used by `docker-compose` to run both the CSP_Mid.LMC and CBF.LMC TANGO device
instances, that is, to run the whole `CSP_Mid.LMC` prototype.
In this way, it is possible to execute some preliminary integration tests, such as the assignment/release of receptors to a `CSP_Mid Subarray` and its configuration to execute a scan in Imaging mode.
The `CSP_Mid.LMC` and `Mid-CBF.LMC TANGO` Devices are registered with the same TANGO DB, and its
configuration is performed via the `dsconfig` TANGO Device provided by the [dsconfig project](https://gitlab.com/MaxIV-KitsControls/lib-maxiv-dsconfig).
To run the `CSP_Mid.LMC` prototype inside Docker containers, issue the command:
```bash
make up
```
from the `docker` folder of the project. At the end of the procedure the command `docker ps`
shows the list of the running containers:
```
mid-csp-lmc-tangodb: the MariaDB database with the TANGO database tables
mid-csp-lmc-databaseds: the TANGO DB device server
mid-csp-lmc-cbf_dsconfig: the dsconfig container to configure CBF.LMC devices in the TANGO DB
mid-csp-lmc-csp_dsconfig: the dsconfig container to configure CSP.LMC devices in the TANGO DB
mid-csp-lmc-midcspmaster: the CspMaster TANGO device
mid-csp-lmc-midcapabilitymonitor: the monitor devices of the CSP_Mid.LMC Capabilities
mid-csp-lmc-midcspsubarray[01-03]: three instances of the CspSubarray TANGO device
mid-csp-lmc-rsyslog-csplmc: the rsyslog container for the CSP.LMC devices
mid-csp-lmc-rsyslog-cbf : the rsyslog container for the CBF.LMC devices
mid-csp-lmc-cbfmaster: the CbfMaster TANGO device
mid-csp-lmc-cbfsubarray[01-03]: three instances of the CbfSubarray TANGO device
mid-csp-lmc-vcc[001-004]: four instances of the Mid-CBF VCC TANGO device
mid-csp-lmc-fsp[01-04]: four instances of the Mid-CBF FSP TANGO device
mid-csp-lmc-tmcspsubarrayleafnodetest/2: two instances of the TelState TANGO Device
simulator provided by the CBF project to support scan
configuration for Subarray1/2
```
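For reference, `make up` wraps `docker-compose`; a rough manual equivalent, run from the `docker` folder, would be the following (the actual Makefile target may pass different files or options):
```bash
# Start the TANGO DB, the CSP_Mid.LMC devices and the Mid-CBF.LMC devices
# by combining the three YAML files listed above (approximation of `make up`).
docker-compose -f mid-csp-tangodb.yml -f mid-csp-lmc.yml -f mid-cbf-mcs.yml up -d
```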
To stop and remove the Docker containers, issue the command `make down` from the prototype root directory.
__NOTE__
>Docker containers are run with the `--network=host` option.
>In this case there is no isolation between the host machine and the containers.
>This means that the TANGO DB running in the container is available on port 10000 of the host machine.
>Running `jive` on the local host, the `CSP.LMC` and `Mid-CBF.LMC` TANGO Devices registered with the TANGO DB (running in a Docker container)
>can be visualized and explored.
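
For example, to browse the registered devices with a locally installed `jive`, point the TANGO tools at the containerised database (port 10000 on the host, as described above):
```bash
# Make local TANGO tools use the TANGO DB running in the container,
# then start jive to browse the registered devices.
export TANGO_HOST=localhost:10000
jive &
```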
## Known bugs
## Troubleshooting
## License
See the LICENSE file for details.