# Introduction

This repository holds the configuration of the PRISM View Server (PVS).

This README covers the architecture, conventions, relevant configuration, installation instructions, and canonical references.
# Architecture

The PRISM View Server (PVS) uses various Docker images: `core`, `cache`, `client`, and `preprocessor` are built from this repository, while the others are pulled from Docker Hub.
## Prerequisites

### Object Storage (OBS)

Access keys to store preprocessed items and caches, used by all services.

Access key to read input items, used by the preprocessor.
### Networks

One internal and one external network per stack.
### Volumes

In the base stack:

- `traefik-data`

Per collection:

- `db-data` used by the database
- `redis-data` used by redis
- `instance-data` used by the registrar and renderer
## Services

The following services are defined via Docker Compose files.
### reverse-proxy

- based on the external traefik image
- data stored in a local volume on the swarm master
- reads swarm changes from `/var/run/docker.sock` on the swarm master
- provides the endpoint for external access
- configured via Docker labels
### database

- based on the external `postgis:10` image
- DB stored in a local volume on the swarm master
- provides the database to all other services
### redis

- based on the external redis image
- data stored in a local volume on the swarm master
- holds these keys:
    - preprocessing
        - `preprocess-md_queue`
            - holds metadata in JSON, including the object path of the image to be preprocessed
            - `lpush` by ingestor or manually, `brpop` by preprocessor
        - `preprocess_queue`
            - holds items (tar object path) to be preprocessed
            - `lpush` by ingestor or manually, `brpop` by preprocessor
        - `preprocessing_set`
            - holds ids of items currently being preprocessed
            - `sadd` by preprocessor
        - `preprocess-success_set`
            - holds ids of successfully preprocessed items
            - `sadd` by preprocessor
        - `preprocess-failure_set`
            - holds ids of items whose preprocessing failed
            - `sadd` by preprocessor
    - registration
        - `register_queue`
            - holds items (metadata and data objects prefix, same as the tar object path above) to be registered
            - `lpush` by preprocessor or manually, `brpop` by registrar
        - `registering_set`
            - holds ids of items currently being registered
            - `sadd` by registrar
        - `register-success_set`
            - holds ids of successfully registered items
            - `sadd` by registrar
        - `register-failure_set`
            - holds ids of items whose registration failed
            - `sadd` by registrar
    - seeding
        - `seed_queue`
            - holds time intervals to pre-seed
            - `lpush` by registrar or manually, `brpop` by seeder
        - `seed-success_set`
        - `seed-failure_set`
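The queue/set handoff between the services can be sketched as below. This is a minimal in-memory stand-in for the Redis commands involved, not the actual redis client, and the item path is a hypothetical example:

```python
from collections import deque

class FakeRedis:
    """Minimal in-memory stand-in for the Redis commands used above (sketch only)."""

    def __init__(self):
        self.queues = {}
        self.sets = {}

    def lpush(self, key, value):
        self.queues.setdefault(key, deque()).appendleft(value)

    def brpop(self, key):
        # real BRPOP blocks until an item arrives; here we assume one is present
        return self.queues[key].pop()

    def sadd(self, key, value):
        self.sets.setdefault(key, set()).add(value)

    def srem(self, key, value):
        self.sets.get(key, set()).discard(value)

r = FakeRedis()

# ingestor (or a manual operator) enqueues a tar object path
r.lpush("preprocess_queue", "data/item-001.tar")

# preprocessor picks the item up and marks it as in progress
item = r.brpop("preprocess_queue")
r.sadd("preprocessing_set", item)

# ... preprocessing happens here ...

# on success: hand the item over to the registrar and update the bookkeeping sets
r.lpush("register_queue", item)
r.sadd("preprocess-success_set", item)
r.srem("preprocessing_set", item)
```

The same `lpush`/`brpop` pair links each producer/consumer stage (ingestor → preprocessor → registrar → seeder), while the sets record in-flight, succeeded, and failed items.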
### TODO: ingestor

See the new service in #7 (closed).
### TODO: seeder

- based on the cache image
- connects to the DB
- `brpop` time interval from seed_queue
- for each seed, time and extent from the DB
- pre-seeds using the renderer
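One iteration of the planned seeder loop could look like the sketch below. Since the service is still a TODO, every name here is an assumption; the DB lookup and the renderer call are stubs:

```python
# Hypothetical sketch of one seeder iteration; not the real PVS code.

def fetch_extent_from_db(interval):
    # stub: a real implementation would query the database for the
    # extent of the products falling into the time interval
    return {"bbox": [0.0, 0.0, 10.0, 10.0], "interval": interval}

def seeder_step(seed_queue, renderer):
    # BRPOP equivalent: take the oldest time interval from the queue
    interval = seed_queue.pop()
    extent = fetch_extent_from_db(interval)
    # pre-seed by asking the renderer for the tiles of that extent
    return renderer(extent)

seed_queue = [("2020-01-01", "2020-01-31")]
result = seeder_step(seed_queue, lambda extent: f"seeded {extent['bbox']}")
```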
### preprocessor

- based on the preprocessor image (GDAL 3.1)
- connects to OBS
- `brpop` item from preprocess_queue or preprocess-md_queue
- `sadd` to preprocessing_set
- downloads image or package from OBS
- translates to COG
- translates to GSC if needed
- uploads COG & GSC to OBS
- adds item (metadata and data object paths) to register_queue
- `sadd` to preprocess-{success|failure}_set
- `srem` from preprocessing_set
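The bookkeeping around one preprocessed item can be sketched as follows. The actual download/COG/GSC/upload work is collapsed into a stubbed `translate` callable, and all names and paths are illustrative assumptions:

```python
def preprocess(item, translate, sets, register_queue):
    """Bookkeeping around one item; `translate` stands in for the real
    download / COG / GSC / upload steps (stub, not the actual code)."""
    sets["preprocessing_set"].add(item)               # sadd to preprocessing_set
    try:
        translate(item)                               # download, translate, upload (stub)
        register_queue.insert(0, item)                # lpush to register_queue
        sets["preprocess-success_set"].add(item)
    except Exception:
        sets["preprocess-failure_set"].add(item)
    finally:
        sets["preprocessing_set"].discard(item)       # srem once done either way

def failing_translate(item):
    raise RuntimeError("simulated corrupt package")

sets = {name: set() for name in
        ("preprocessing_set", "preprocess-success_set", "preprocess-failure_set")}
register_queue = []

preprocess("items/ok.tar", lambda item: None, sets, register_queue)
preprocess("items/bad.tar", failing_translate, sets, register_queue)
```

The `try`/`finally` shape guarantees that an item always leaves preprocessing_set and ends up in exactly one of the success or failure sets.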
### registrar

- based on the core image
- connects to OBS & database
- uses the instance-data volume
- `brpop` item from register_queue
- `sadd` ...
- registers in DB
- (optional) stores time:start/time:end in seed_queue
- `sadd`/`srem` ...
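One registrar iteration can be sketched as below. The metadata fields and function names are invented for illustration, and the database is stubbed with a plain list instead of PostGIS:

```python
# Hypothetical sketch of one registrar iteration; not the real PVS code.

def register(item, db, seed_queue=None):
    metadata = {"identifier": item, "begin": "2020-01-01", "end": "2020-01-02"}
    db.append(metadata)  # "register in DB" (stub)
    if seed_queue is not None:
        # optional: enqueue the product's time interval for pre-seeding
        seed_queue.insert(0, (metadata["begin"], metadata["end"]))
    return metadata

db, seed_queue = [], []
register_queue = ["products/P001"]

item = register_queue.pop()  # brpop from register_queue
register(item, db, seed_queue)
```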
### cache

- based on the cache image
- connects to OBS & database
- provides external service for WMS & WMTS
- either serves WMTS/WMS requests from the cache or retrieves them on demand from the renderer to store in the cache and serve
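The serve-or-render decision can be sketched as below; the cache is a plain dict and the renderer call is a stub, with all names assumed for illustration:

```python
def serve(request, cache, render):
    """Serve from the cache if present, otherwise render on demand and store."""
    if request in cache:
        return cache[request]          # cache hit: serve directly
    tile = render(request)             # cache miss: retrieve from the renderer
    cache[request] = tile              # store in the cache for next time
    return tile

cache = {}
render_calls = []

def render(request):
    render_calls.append(request)       # record each renderer round-trip (stub)
    return f"tile for {request}"

first = serve("wmts/0/0/0", cache, render)
second = serve("wmts/0/0/0", cache, render)  # second request hits the cache
```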
### renderer

- based on the core image
- connects to OBS & database
- provides external service for OpenSearch, WMS, & WCS
- renders WMS requests received from the cache or seeder
### TODO: ELK stack

See #9 (closed).