Introduction

This repository holds the configuration of the PRISM View Server (PVS).

This README describes the architecture, conventions, relevant configuration, and installation instructions, as well as canonical references.

Architecture

The PRISM View Server (PVS) uses various Docker images. The core, cache, client, and preprocessor images are built from this repository, while the others are pulled from Docker Hub.

Prerequisites

Object Storage (OBS)

Access keys to store preprocessed items and caches, used by all services.

Access key to read input items, used by the preprocessor.

Networks

One internal and one external network per stack.

Volumes

In base stack

  • traefik-data

Per collection

  • db-data used by database
  • redis-data used by redis
  • instance-data used by registrar and renderer

Services

The following services are defined in the Docker Compose files.

reverse-proxy

  • based on the external traefik image
  • data stored in local volume on swarm master
  • reads swarm changes from /var/run/docker.sock on swarm master
  • provides the endpoint for external access
  • configured via docker labels

database

  • based on external postgis:10 image
  • DB stored in local volume on swarm master
  • provides database to all other services

redis

  • based on external redis image
  • data stored in local volume on swarm master
  • holds these keys (a usage sketch follows this list)
    • preprocessing
      • preprocess-md_queue
        • holds metadata in JSON, including the object path of the image to be preprocessed
        • lpush by ingestor or manually
        • brpop by preprocessor
      • preprocess_queue
        • holds items (tar object path) to be preprocessed
        • lpush by ingestor or manually
        • brpop by preprocessor
      • preprocessing_set
        • holds ids of items currently being preprocessed
        • sadd by preprocessor
      • preprocess-success_set
        • holds ids for successfully preprocessed items
        • sadd by preprocessor
      • preprocess-failure_set
        • holds ids of items that failed preprocessing
        • sadd by preprocessor
    • registration
      • register_queue
        • holds items to be registered (metadata and data object prefix, the same as the tar object path above)
        • lpush by preprocessor or manually
        • brpop by registrar
      • registering_set
        • holds ids of items currently being registered
        • sadd by registrar
      • register-success_set
        • holds ids for successfully registered items
        • sadd by registrar
      • register-failure_set
        • holds ids of items that failed registration
        • sadd by registrar
    • seeding
      • seed_queue
        • time intervals to pre-seed
        • lpush by registrar or manually
        • brpop by seeder
      • seed-success_set
      • seed-failure_set
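
The queues and sets above can also be filled and inspected manually. The following is a minimal sketch using the redis-py client; the connection parameters and the object path are assumptions and depend on how the stack's redis service is reached:

import redis

# connect to the stack's redis service (host/port are assumptions;
# inside the stack network the service is typically reachable as "redis")
r = redis.Redis(host="redis", port=6379, decode_responses=True)

# enqueue an item for preprocessing (the tar object path is a placeholder)
r.lpush("preprocess_queue", "data/VHR_IMAGE_2018/example_item.tar")

# inspect progress: pending items and the status sets
print(r.lrange("preprocess_queue", 0, -1))
print(r.smembers("preprocessing_set"))
print(r.smembers("preprocess-success_set"))
print(r.smembers("preprocess-failure_set"))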

TODO: ingestor

see new service in #7 (closed)

TODO: seeder

  • based on cache image
  • connects to DB
  • brpop time interval from seed_queue
  • for each seed time and extent from DB
    • pre-seed using renderer
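
Since the seeder does not exist yet, the following is only a rough sketch of the intended loop, assuming redis-py; query_extents and seed_via_renderer are hypothetical placeholders for the database lookup and the WMS pre-seeding calls:

import redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)

while True:
    # block until a time interval to pre-seed arrives
    _, interval = r.brpop("seed_queue")
    try:
        # hypothetical: look up times/extents in the DB and pre-seed via the renderer
        for time, extent in query_extents(interval):
            seed_via_renderer(time, extent)
        r.sadd("seed-success_set", interval)
    except Exception:
        r.sadd("seed-failure_set", interval)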

preprocessor

  • based on preprocessor image (GDAL 3.1)
  • connects to OBS
  • brpop item from preprocess_queue or preprocess-md_queue
    • sadd to preprocessing_set
    • downloads image or package from OBS
    • translates to COG
    • translates to GSC if needed
    • uploads COG & GSC to OBS
    • adds item (metadata and data object paths) to register_queue
    • sadd to preprocess-{success|failure}_set
    • srem from preprocessing_set
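
A simplified sketch of this loop, assuming redis-py; the download, COG/GSC translation, and upload steps are collapsed into a hypothetical preprocess() helper standing in for the actual OBS and GDAL handling:

import json
import redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)

while True:
    # block until an item arrives on either queue (tar object path or JSON metadata)
    queue, item = r.brpop(["preprocess_queue", "preprocess-md_queue"])
    r.sadd("preprocessing_set", item)     # the item doubles as the id in this sketch
    try:
        # hypothetical helper: download from OBS, translate to COG (and GSC if
        # needed), upload the results, return metadata and data object paths
        paths = preprocess(item)
        r.lpush("register_queue", json.dumps(paths))
        r.sadd("preprocess-success_set", item)
    except Exception:
        r.sadd("preprocess-failure_set", item)
    finally:
        r.srem("preprocessing_set", item)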

registrar

  • based on core image
  • connects to OBS & database
  • uses instance-data volume
  • brpop item from register_queue
    • sadd ...
    • register in DB
    • (optional) store time:start/time:end in seed_queue
    • sadd/srem ...
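
A corresponding sketch for the registrar, again assuming redis-py; register_in_db is a hypothetical helper standing in for the actual registration in the database, and the seed_queue interval format is an assumption:

import json
import redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)

while True:
    _, item = r.brpop("register_queue")
    r.sadd("registering_set", item)       # the item doubles as the id in this sketch
    try:
        record = json.loads(item)
        product = register_in_db(record)  # hypothetical helper
        # optionally schedule pre-seeding of the registered time span
        r.lpush("seed_queue", f"{product.begin_time}/{product.end_time}")
        r.sadd("register-success_set", item)
    except Exception:
        r.sadd("register-failure_set", item)
    finally:
        r.srem("registering_set", item)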

cache

  • based on cache image
  • connects to OBS & database
  • provides external service for WMS & WMTS
  • serves WMTS/WMS requests either directly from the cache or by retrieving them on demand from the renderer, storing them in the cache, and serving them
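
For illustration, a WMS GetMap request to the cache endpoint might look as follows; the hostname, layer name, and bounding box are placeholders that depend on the deployed stack and collection:

import requests

# hypothetical endpoint and layer name; adjust to the actual deployment
response = requests.get(
    "https://vhr18.example.com/ows",
    params={
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "VHR_IMAGE_2018",
        "styles": "",
        "crs": "EPSG:4326",
        "bbox": "34.0,16.0,35.0,17.0",
        "width": "512",
        "height": "512",
        "format": "image/png",
    },
)
# the cache answers either from its tile cache or by fetching the rendered
# image from the renderer, storing it in the cache, and returning it
with open("map.png", "wb") as f:
    f.write(response.content)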

renderer

  • based on core image
  • connects to OBS & database
  • provides external service for OpenSearch, WMS, & WCS
  • renders WMS requests received from cache or seeder

TODO: ELK stack

see #9 (closed)

Usage

Test locally using docker swarm

Initialize swarm & stack:

docker swarm init                               # initialize swarm
# build images
    docker build core/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_core -t registry.gitlab.eox.at/esa/prism/vs/pvs_core
    docker build cache/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_cache -t registry.gitlab.eox.at/esa/prism/vs/pvs_cache
    docker build preprocessor/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor -t registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
    docker build client/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_client -t registry.gitlab.eox.at/esa/prism/vs/pvs_client
# or pull the prebuilt images instead of building them
    docker login -u {DOCKER_USER} -p {DOCKER_PASSWORD} registry.gitlab.eox.at
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_core
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_cache
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_client
docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.dev.yml vhr18-pvs  # start VHR_IMAGE_2018 stack in dev mode, for example to use local sources
docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml emg-pvs  # start Emergency stack in dev mode, for example to use local sources

docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") /bin/bash    # open a shell in the renderer container
cd /var/www/pvs/dev/pvs_instance
python manage.py runserver 0.0.0.0:8080    # run the Django development server

Tear down the stack including data:

docker stack rm vhr18-pvs                      # stop stack
docker volume rm vhr18-pvs_db-data                        # delete volumes
docker volume rm vhr18-pvs_redis-data
docker volume rm vhr18-pvs_traefik-data
docker volume rm vhr18-pvs_cache-db

Generate mapcache.sqlite

docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") python3 /var/www/pvs/dev/pvs_instance/manage.py mapcache sync -f
docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") mv VHR_IMAGE_2018.sqlite /cache-db/vhr18_mapcache_cache.sqlite

docker exec -it $(docker ps -qf "name=emg-pvs_renderer") python3 /var/www/pvs/dev/pvs_instance/manage.py mapcache sync -f
docker exec -it $(docker ps -qf "name=emg-pvs_renderer") mv Emergency.sqlite /cache-db/emg_mapcache_cache.sqlite

Documentation

Installation

python3 -m pip install sphinx recommonmark sphinx-autobuild

Generation

make html

# For watched html automatic building
make html-watch

# For pdf output run:
make latex
make latexpdf

The documentation is generated in the respective _build/html directory.