# Introduction

This repository holds the configuration of the PRISM View Server (PVS).

This README describes the architecture and conventions, the relevant
configuration, installation instructions, as well as canonical references.

# Architecture

The PRISM View Server (PVS) uses various Docker images; `core`, `cache`,
`client`, and `preprocessor` are built from this repository, while the others
are pulled from Docker Hub.

## Prerequisites

### Object Storage (OBS)

Access keys to store preprocessed items and caches, used by all services.

Access key to read the input items, used by the preprocessor.

## Networks

One internal and one external network per stack.
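
Once a stack is deployed (see the usage section below), its networks can be listed via the Swarm namespace label; a quick check, assuming the `vhr18-pvs` stack name used later in this README:

```bash
docker network ls --filter label=com.docker.stack.namespace=vhr18-pvs
```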

## Volumes

In the base stack:

* traefik-data

Per collection:

* db-data used by database
* redis-data used by redis
* instance-data used by registrar and renderer

## Services

The following services are defined via docker compose files.

### reverse-proxy

* based on the external traefik image
* data stored in local volume on swarm master
* reads swarm changes from /var/run/docker.sock on swarm master
* provides the endpoint for external access
* configured via docker labels (see the inspection example below)
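
Since routing is configured purely via Docker labels, the labels traefik evaluates can be inspected on a running service. A minimal check, assuming the `vhr18-pvs` stack from the usage section and that the renderer service carries `traefik.*` labels:

```bash
docker service inspect -f '{{ json .Spec.Labels }}' vhr18-pvs_renderer
```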

### database

* based on external postgis:10 image
* DB stored in local volume on swarm master
* provides database to all other services

### redis

* based on external redis image
* data stored in local volume on swarm master
* holds these keys (a manual interaction example follows this list)
    * preprocessing
        * preprocess-md_queue
            * holds metadata in JSON, including the object path of the image to be preprocessed
            * `lpush` by ingestor or manually
            * `brpop` by preprocessor
        * preprocess_queue
            * holds items (tar object path) to be preprocessed
            * `lpush` by ingestor or manually
            * `brpop` by preprocessor
        * preprocessing_set
            * holds ids of items currently being preprocessed
            * `sadd` by preprocessor
        * preprocess-success_set
            * holds ids for successfully preprocessed items
            * `sadd` by preprocessor
        * preprocess-failure_set
            * holds ids of items whose preprocessing failed
            * `sadd` by preprocessor
    * registration
        * register_queue
            * holds items (metadata and data objects prefix - same as tar object path above) to be registered
            * `lpush` by preprocessor or manually
            * `brpop` by registrar
        * registering_set
            * holds ids of items currently being registered
            * `sadd` by registrar
        * register-success_set
            * holds ids for successfully registered items
            * `sadd` by registrar
        * register-failure_set
            * holds ids of items whose registration failed
            * `sadd` by registrar
    * seeding
        * seed_queue
            * time intervals to pre-seed
            * `lpush` by registrar or manually
            * `brpop` by seeder
        * seed-success_set
        * seed-failure_set
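
For debugging or manual ingestion, these keys can be manipulated directly with `redis-cli` inside the redis container. A minimal sketch, assuming the `vhr18-pvs` stack from the usage section below; the object path is only a placeholder:

```bash
# convenience alias for an interactive shell (the stack name is an example)
alias redis_exec='docker exec -i $(docker ps -qf "name=vhr18-pvs_redis") redis-cli'

# manually enqueue a package for preprocessing (placeholder object path)
redis_exec lpush preprocess_queue "path/to/package.tar"

# inspect queue lengths and status sets
redis_exec llen preprocess_queue
redis_exec smembers preprocessing_set
redis_exec smembers preprocess-success_set
redis_exec smembers preprocess-failure_set
```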

### TODO: ingestor

see new service in #7

### TODO: seeder

* based on cache image
* connects to DB
* `brpop` time interval from seed_queue
* for each seed time and extent from DB
    * pre-seed using renderer

### preprocessor

* based on preprocessor image (GDAL 3.1)
* connects to OBS
* `brpop` item from preprocess_queue or preprocess-md_queue (see the sketch after this list)
    * `sadd` to preprocessing_set
    * downloads image or package from OBS
    * translates to COG
    * translates to GSC if needed
    * uploads COG & GSC to OBS
    * adds item (metadata and data object paths) to register_queue
    * `sadd` to preprocess-{success|failure}\_set
    * `srem` from preprocessing_set
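
A rough sketch of a single preprocessor iteration in shell terms (the actual implementation lives in the preprocessor image; the redis host name `redis`, the file names, and the omitted OBS transfer commands are assumptions):

```bash
# blocking pop of the next tar object path and mark it as "in progress"
item=$(redis-cli -h redis brpop preprocess_queue 0 | tail -n 1)
redis-cli -h redis sadd preprocessing_set "$item"

# ... download the package from the OBS input bucket and extract it (tooling omitted) ...

# translate the image to a Cloud Optimized GeoTIFF (the COG driver is available since GDAL 3.1)
gdal_translate -of COG -co COMPRESS=DEFLATE input.tif output_cog.tif

# ... translate to GSC if needed and upload the results to OBS (tooling omitted) ...

# hand the item over to registration and record the outcome
redis-cli -h redis lpush register_queue "$item"
redis-cli -h redis sadd preprocess-success_set "$item"
redis-cli -h redis srem preprocessing_set "$item"
```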

### registrar

* based on core image
* connects to OBS & database
* uses instance-data volume
* `brpop` item from register_queue (see the sketch after this list)
    * `sadd` ...
    * register in DB
    * (optional) store time:start/time:end in seed_queue
    * `sadd/srem` ...
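
A rough sketch of a single registrar iteration in shell terms (the actual implementation lives in the core image; the redis host name and the omitted registration command are assumptions, the sets used are the ones listed in the redis section above):

```bash
# blocking pop of the next item (metadata and data objects prefix) and mark it as "in progress"
item=$(redis-cli -h redis brpop register_queue 0 | tail -n 1)
redis-cli -h redis sadd registering_set "$item"

# ... register the product in the database via the instance on the instance-data volume (command omitted) ...

# optionally enqueue the product's time interval for pre-seeding (placeholder value)
redis-cli -h redis lpush seed_queue "<time:start>/<time:end>"

# record the outcome
redis-cli -h redis sadd register-success_set "$item"
redis-cli -h redis srem registering_set "$item"
```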

### cache

* based on cache image
* connects to OBS & database
* provides external service for WMS & WMTS
* either serves WMTS/WMS requests from the cache or retrieves them on demand from
  the renderer to store in the cache and serve (see the example request below)
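
For illustration, a WMS GetCapabilities request against the cache endpoint could look like this (host name and path are placeholders that depend on the deployed stack and the traefik configuration):

```bash
curl "https://<cache-host>/ows?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities"
```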

### renderer

* based on core image
* connects to OBS & database
* provides external service for OpenSearch, WMS, & WCS
* renders WMS requests received from cache or seeder (see the example request below)
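
An analogous WCS GetCapabilities request against the renderer (host name and path are again placeholders depending on the deployment):

```bash
curl "https://<renderer-host>/ows?SERVICE=WCS&VERSION=2.0.1&REQUEST=GetCapabilities"
```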

### TODO: ELK stack

see #9

# Usage

## Test locally using docker swarm

Initialize the swarm:

```bash
docker swarm init                               # initialize swarm
```

Build images:
```bash
docker build core/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_core -t registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker build cache/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_cache -t registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker build preprocessor/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor -t registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker build client/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_client -t registry.gitlab.eox.at/esa/prism/vs/pvs_client
docker build ingestor/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_ingestor -t registry.gitlab.eox.at/esa/prism/vs/pvs_ingestor
```
Or pull them from the registry:
```bash
docker login -u {DOCKER_USER} -p {DOCKER_PASSWORD} registry.gitlab.eox.at
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_client
```

Deploy the stack:
```bash
docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.dev.yml vhr18-pvs  # start VHR_IMAGE_2018 stack in dev mode, for example to use local sources
docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml emg-pvs  # start Emergency stack in dev mode, for example to use local sources

# when using the dev stack, start the development server manually inside the renderer container:
docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") /bin/bash
cd /var/www/pvs/dev/pvs_instance
python manage.py runserver 0.0.0.0:8080
```
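
To verify that all services of a stack came up, the usual Swarm commands can be used, for example:

```bash
docker stack ps vhr18-pvs                  # list the tasks of the stack and their state
docker service ls                          # overview of services and replica counts
docker service logs vhr18-pvs_renderer     # show the logs of a single service
```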

Tear down the stack including data:

```bash
docker stack rm vhr18-pvs                      # stop stack
docker volume rm vhr18-pvs_db-data                        # delete volumes
docker volume rm vhr18-pvs_redis-data
docker volume rm vhr18-pvs_traefik-data
docker volume rm vhr18-pvs_cache-db
docker volume rm vhr18-pvs_instance-data
```

Generate mapcache.sqlite:

```bash
docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") python3 /var/www/pvs/dev/pvs_instance/manage.py mapcache sync -f
docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") mv VHR_IMAGE_2018.sqlite /cache-db/vhr18_mapcache_cache.sqlite

docker exec -it $(docker ps -qf "name=emg-pvs_renderer") python3 /var/www/pvs/dev/pvs_instance/manage.py mapcache sync -f
docker exec -it $(docker ps -qf "name=emg-pvs_renderer") mv Emergency.sqlite /cache-db/emg_mapcache_cache.sqlite
```

# Documentation

## Installation

```bash
python3 -m pip install sphinx recommonmark sphinx-autobuild
```

## Generate html and synchronize with client/html/user-guide

```bash
make html

# For watched html automatic building
make html-watch

# For pdf output and sync it to client/html/
make latexpdf
# To shrink size of pdf
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dPrinted=false -q -o View-Server_-_User-Guide_small.pdf View-Server_-_User-Guide.pdf
# make latexpdf and make html combined
make build
```

The documentation is generated in the respective *_build/html* directory.

# Create software releases

## Source code release

Create a TAR from source code:

```bash
git archive --prefix release-1.0.0.rc.1/ -o release-1.0.0.rc.1.tar.gz -9 master
```

Save Docker images:

```bash
docker save -o pvs_core.tar registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker save -o pvs_cache.tar registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker save -o pvs_preprocessor.tar registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker save -o pvs_client.tar registry.gitlab.eox.at/esa/prism/vs/pvs_client
```
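
On a target host, the saved images can then be restored with `docker load`, for example:

```bash
docker load -i pvs_core.tar
docker load -i pvs_cache.tar
docker load -i pvs_preprocessor.tar
docker load -i pvs_client.tar
```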