This repository holds the configuration of the PRISM View Server (PVS).
This README describes the architecture and conventions, the relevant
configuration, installation instructions, and canonical references.
# Architecture
The PRISM View Server (PVS) uses various Docker images. The `core`,
`cache`, `client`, and `preprocessor` images are built from this repository,
while the others are pulled from Docker Hub.
## Prerequisites
### Object Storage (OBS)
Access keys to store preprocessed items and caches, used by all services.
An access key to the input items, used by the preprocessor.
## Networks
One internal and one external network per stack.
## Volumes
In base stack
* traefik-data
Per collection
* db-data used by database
* redis-data used by redis
* instance-data used by registrar and renderer
* cache-db used by cache
## Services
The following services are defined via docker compose files.
### reverse-proxy
* based on the external traefik image
* data stored in local volume on swarm master
* reads swarm changes from /var/run/docker.sock on swarm master
* provides the endpoint for external access
* configured via docker labels
### database
* based on external postgis:10 image
* DB stored in local volume on swarm master
* provides database to all other services
### redis
* based on external redis image
* data stored in local volume on swarm master
* holds these keys (a `redis-cli` walk-through follows after this list)
  * preprocessing
    * preprocess-md_queue
      * holds metadata in json including object path for image to be preprocessed
      * `lpush` by ingestor or manually
      * `brpop` by preprocessor
    * preprocess_queue
      * holds items (tar object path) to be preprocessed
      * `lpush` by ingestor or manually
      * `brpop` by preprocessor
    * preprocessing_set
      * holds ids of items currently being preprocessed
      * `sadd` by preprocessor
    * preprocess-success_set
      * holds ids of successfully preprocessed items
      * `sadd` by preprocessor
    * preprocess-failure_set
      * holds ids of items that failed preprocessing
      * `sadd` by preprocessor
  * registration
    * register_queue
      * holds items (metadata and data objects prefix - same as tar object path above) to be registered
      * `lpush` by preprocessor or manually
      * `brpop` by registrar
    * registering_set
      * holds ids of items currently being registered
      * `sadd` by registrar
    * register-success_set
      * holds ids of successfully registered items
      * `sadd` by registrar
    * register-failure_set
      * holds ids of items that failed registration
      * `sadd` by registrar
  * seeding
    * seed_queue
      * time intervals to pre-seed
      * `lpush` by registrar or manually
      * `brpop` by seeder
    * seed-success_set
    * seed-failure_set
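A minimal sketch of how an item travels through these keys, assuming the `vhr18-pvs` stack from the usage section below is running and that the redis container can be located by service name; the object path is a made-up example:
```bash
# shorthand for running redis-cli inside the redis container of the stack
REDIS="docker exec -i $(docker ps -qf "name=vhr18-pvs_redis") redis-cli"

# enqueue a tar object path for preprocessing (the path is illustrative only)
$REDIS lpush preprocess_queue "OA/PL00/example_package.tar"

# follow the item through the queues and sets
$REDIS llen preprocess_queue            # still waiting for the preprocessor
$REDIS smembers preprocessing_set       # currently being preprocessed
$REDIS smembers preprocess-success_set  # preprocessing finished
$REDIS lrange register_queue 0 -1       # handed over to the registrar
$REDIS smembers register-success_set    # registered successfully
```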
### TODO: ingestor
see new service in #7
### TODO: seeder
* based on cache image
* connects to DB
* `brpop` time interval from seed_queue
* for each seed time and extent from DB
* pre-seed using renderer
### preprocessor
* based on preprocessor image (GDAL 3.1)
* connects to OBS
* `brpop` item from preprocess_queue or preprocess-md_queue
* `sadd` to preprocessing_set
* downloads image or package from OBS
* translates to COG (see the sketch after this list)
* translates to GSC if needed
* uploads COG & GSC to OBS
* adds item (metadata and data object paths) to register_queue
* `sadd` to preprocess-{success|failure}\_set
* `srem` from preprocessing_set
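The COG translation is essentially a GDAL conversion; a minimal sketch using `gdal_translate` with the COG driver available since GDAL 3.1 (file names and creation options are illustrative, not the preprocessor's actual settings):
```bash
# convert a downloaded scene to a Cloud Optimized GeoTIFF (requires GDAL >= 3.1)
gdal_translate input_scene.tif output_scene_cog.tif \
    -of COG \
    -co COMPRESS=DEFLATE \
    -co BLOCKSIZE=512
```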
### registrar
* based on core image
* connects to OBS & database
* uses instance-data volume
* `brpop` item from register_queue
* `sadd` ...
* register in DB
* (optional) store time:start/time:end in seed_queue
* `sadd/srem` ...
### cache
* based on cache image
* connects to OBS & database
* provides external service for WMS & WMTS (an example request is sketched after this list)
* either serves WMTS/WMS requests from cache or retrieves on-demand from
renderer to store in cache and serve
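For illustration, a request against the cache service could look as follows; the hostname and path are placeholders, since the actual endpoint depends on the traefik configuration of the deployed stack:
```bash
# WMS capabilities via the reverse-proxy (hostname and path are placeholders)
curl "https://vhr18.pvs.example.com/ows?service=WMS&request=GetCapabilities"
```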
### renderer
* based on core image
* connects to OBS & database
* provides external service for OpenSearch, WMS, & WCS
* renders WMS requests received from cache or seeder
## TODO: ELK stack
see #9
# Usage
## Test locally using docker swarm
Initialize the swarm:
```bash
docker swarm init # initialize swarm
```
Build images:
```
docker build core/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_core -t registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker build cache/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_cache -t registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker build preprocessor/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor -t registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker build client/ --cache-from registry.gitlab.eox.at/esa/prism/vs/pvs_client -t registry.gitlab.eox.at/esa/prism/vs/pvs_client
```
Or pull them from the registry:
```
docker login -u {DOCKER_USER} -p {DOCKER_PASSWORD} registry.gitlab.eox.at
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_client
```
Deploy the stack:
```
docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.dev.yml vhr18-pvs # start VHR_IMAGE_2018 stack in dev mode, for example to use local sources
docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml emg-pvs # start Emergency stack in dev mode, for example to use local sources
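# dev mode only: open a shell in the renderer service and start the Django development server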
docker exec -it $(docker ps -qf "name=vhr18-pvs_renderer") /bin/bash
cd /var/www/pvs/dev/pvs_instance
python manage.py runserver 0.0.0.0:8080
```
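To verify that the services came up, the usual swarm commands can be used, for example:
```bash
docker service ls                        # list services of all deployed stacks
docker stack ps vhr18-pvs                # tasks and their state for the VHR_IMAGE_2018 stack
docker service logs vhr18-pvs_renderer   # show logs of a single service
```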
Tear down the stack including data:
```bash
docker stack rm vhr18-pvs # stop stack
docker volume rm vhr18-pvs_db-data # delete volumes
docker volume rm vhr18-pvs_redis-data
docker volume rm vhr18-pvs_traefik-data
docker volume rm vhr18-pvs_cache-db
docker volume rm vhr18-pvs_instance-data
```