# Introduction

This repository holds the configuration of the PRISM View Server (PVS).

This README.md covers the architecture, conventions, relevant configuration, and installation instructions, as well as canonical references.

# Architecture

The PRISM View Server (PVS) uses various Docker images. The `core`, `cache`, `client`, `ingestor`, `fluentd`, and `preprocessor` images are built from this repository, while the others are pulled from Docker Hub.

## Prerequisites

### Object Storage (OBS)

Access keys to store preprocessed items and caches, used by all services.

An access key to read input items, used by the preprocessor.

## Networks

One internal and one external network per stack.

## Volumes

In the base stack:

* traefik-data

In the logging stack:

* logging_es-data

Per collection:

* db-data, used by the database
* redis-data, used by redis
* instance-data, used by registrar and renderer
* report-data, sftp output of the reporting interface
* from-fepd, sftp input to the **ingestor**

## Services

The following services are defined via docker compose files.

### reverse-proxy

* based on the external traefik image
* data stored in local volume on swarm master
* reads swarm changes from /var/run/docker.sock on swarm master
* provides the endpoint for external access
* configured via docker labels (see the inspection example below)
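
For example, the docker labels that traefik evaluates on a running service can be inspected as follows (the service name is a placeholder):

```bash
# show the labels of a deployed service as JSON
docker service inspect --format '{{json .Spec.Labels}}' vhr18-pvs_renderer
```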

### shibauth

* based on shibauth image derived from the external unicon/shibboleth-sp:3.0.4 Apache + Shibboleth SP3 image
* provides authentication and authorization via SAML2
* docker configuration files set access control rules
* traefik labels determine which services are protected via Shib

### database

* based on external postgis:10 image
* DB stored in local volume on swarm master
* provides database to all other services

### redis

* based on external redis image
* data stored in local volume on swarm master
* holds these keys (see the `redis-cli` example after this list)
    * preprocessing
        * preprocess-md_queue
            * holds metadata in json including object path for image to be preprocessed
            * `lpush` by ingestor or manually
            * `brpop` by preprocessor
        * preprocess_queue
            * holds items (tar object path) to be preprocessed
            * `lpush` by ingestor or manually
            * `brpop` by preprocessor
        * preprocessing_set
            * holds ids for currently preprocessed items
            * `sadd` by preprocessor
        * preprocess-success_set
            * holds ids for successfully preprocessed items
            * `sadd` by preprocessor
        * preprocess-failure_set
            * holds ids for failed preprocessed items
            * `sadd` by preprocessor
    * registration
        * register_queue
            * holds items (metadata and data objects prefix - same as tar object path above) to be registered
            * `lpush` by preprocessor or manually
            * `brpop` by registrar
        * registering_set
            * holds ids for currently registered items
            * `sadd` by registrar
        * register-success_set
            * holds ids for successfully registered items
            * `sadd` by registrar
        * register-failure_set
            * holds ids for failed registered items
            * `sadd` by registrar
    * seeding
        * seed_queue
            * time intervals to pre-seed
            * `lpush` by registrar or manually
            * `brpop` by seeder
        * seed-success_set
        * seed-failure_set
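
For illustration, these queues and sets can be populated and inspected manually with `redis-cli`; a minimal sketch using the key names above (run inside the redis container):

```bash
# enqueue an item for preprocessing, as the ingestor would
redis-cli lpush preprocess_queue "<tar-object-path>"

# inspect the queue and the bookkeeping sets
redis-cli llen preprocess_queue
redis-cli smembers preprocess-success_set
redis-cli smembers preprocess-failure_set
```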

### ingestor

* based on ingestor image
* by default a flask app listening on the `/` endpoint for `POST` requests with reports (see the example below)
* can alternatively be overridden to act as an inotify watcher on a configured folder for newly appearing reports
* accepts browse reports with references to images on Swift
* extracts the browse metadata (id, time, footprint, image reference)
* `lpush` metadata into a `preprocess-md_queue`
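
For a quick test, a report can be submitted to the ingestor endpoint; a minimal sketch, where host, port, and the report file are placeholders (the actual payload format is defined by the ingestion interface):

```bash
# POST a browse report to the ingestor's `/` endpoint
curl -X POST http://localhost:8000/ --data-binary @browse-report.xml
```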

### TODO: seeder

* based on cache image
* connects to DB
* `brpop` time interval from seed_queue
* for each seed time and extent from DB
    * pre-seed using renderer

### preprocessor

* based on preprocessor image (GDAL 3.1)
* connects to OBS
* `brpop` item from preprocess_queue or preprocess-md_queue
    * `sadd` to preprocessing_set
    * downloads image or package from OBS
    * translates to COG (a sketch of this step follows the list)
    * translates to GSC if needed
    * uploads COG & GSC to OBS
    * adds item (metadata and data object paths) to register_queue
    * `sadd` to preprocess-{success|failure}\_set
    * `srem` from preprocessing_set
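
The COG translation step can be pictured as a plain GDAL call; a minimal sketch (file names are placeholders, and the actual creation options used by the preprocessor may differ):

```bash
# GDAL 3.1 ships a native COG driver
gdal_translate input.tif output_cog.tif -of COG -co COMPRESS=DEFLATE
```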

### registrar

* based on core image
* connects to OBS & database
* uses instance-data volume
* `brpop` item from register_queue
    * `sadd` ...
    * register in DB
    * (optional) store time:start/time:end in seed_queue
    * `sadd/srem` ...

### cache

* based on cache image
* connects to OBS & database
* provides external service for WMS & WMTS
* either serves WMTS/WMS requests from the cache or retrieves them on demand from the renderer to store in the cache and serve (see the example request below)
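
As an illustration, a WMS request against the cache could look like the following; the hostname, path, and layer name are placeholders, and the exact endpoint depends on the traefik routing rules:

```bash
curl "https://<stack-host>/ows?service=WMS&version=1.3.0&request=GetMap&layers=<layer>&styles=&crs=EPSG:4326&bbox=-90,-180,90,180&width=512&height=256&format=image/png" -o map.png
```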

### renderer

* based on core image
* connects to OBS & database
* provides external service for OpenSearch, WMS, & WCS
* renders WMS requests received from cache or seeder

### logging stack

* uses elasticsearch:7.9 & kibana:7.9 external images
* the fluentd image is built and published to the registry because of additional plugins
* ES data stored in local volume on swarm master
* external access allowed to kibana through traefik
* log parsing enabled for cache and core

### sftp

* uses external atmoz/sftp image
* provides sftp access to two volumes for report exchange: registration result XMLs and ingest requirement XMLs
* accessible on the swarm master on ports 2222-22xx
* credentials supplied via config

# Usage

## Test locally using docker swarm

Initialize the swarm:

```bash
docker swarm init                               # initialize swarm
```

Build images:

Note: we use the **dev** tag for local development, so the images need to be built locally.

```
docker build core/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_core:dev
docker build cache/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_cache:dev
docker build preprocessor/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:dev
docker build -f client/Dockerfile.dev -t registry.gitlab.eox.at/esa/prism/vs/pvs_client:dev client/
docker build fluentd/ -t registry.gitlab.eox.at/esa/prism/vs/fluentd:dev
docker build ingestor/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_ingestor:dev
docker build sftp/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_sftp:dev
docker build shibauth/ -t registry.gitlab.eox.at/esa/prism/vs/pvs_shibauth:dev
```

For production deployment this step can be skipped: since the registry is open to the public, the later step `Deploy the stack in production` pulls the necessary images automatically.

Create the external network for each stack to run:
```
docker network create -d overlay vhr18-extnet
docker network create -d overlay emg-extnet
docker network create -d overlay dem-extnet
```
Add the following .env files with credentials to the `config/<stack>/` folder of the cloned repository: `vhr18_db.env`, `vhr18_obs.env`, `vhr18_django.env`.

Create docker secrets:

Sensitive environment variables are not included in the .env files and must be created as docker secrets. All stacks currently share these secret names, so they must stay the same for all stacks; the same goes for the sftp configuration values. To create the docker secrets and configs, run:

```bash
# secret creation
# replace the "<variable>" with the value of the secret
printf "<OS_PASSWORD_DOWNLOAD>" | docker secret create OS_PASSWORD_DOWNLOAD -
printf "<DJANGO_PASSWORD>" | docker secret create DJANGO_PASSWORD -
printf "<OS_PASSWORD>" | docker secret create OS_PASSWORD -
printf "<DJANGO_SECRET_KEY>" | docker secret create DJANGO_SECRET_KEY -

# configs creation
printf "<user>:<password>:<UID>:<GID>" | docker config create sftp_users_<name> -

# for production base stack deployment, an additional basic authentication
# credentials list needs to be created
# the format of such a list used by traefik is username:hashedpassword (MD5, SHA1, BCrypt)
sudo apt-get install apache2-utils
htpasswd -nb <username> <password> >> auth_list.txt
docker secret create BASIC_AUTH_USERS_AUTH auth_list.txt
docker secret create BASIC_AUTH_USERS_APIAUTH auth_list_api.txt
```
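
The created secrets and configs can be verified with:

```bash
docker secret ls
docker config ls
```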

Currently all deployments use the same certificates for the **shibauth** service. If more need to be created, then for each new stack where **shibauth** is deployed two more secrets need to be created. These ensure that the SP is recognized and its identity confirmed by the IdP. They are configured as **<stack-name-capitalized>_SHIB_KEY** and **<stack-name-capitalized>_SHIB_CERT**. In order to create them, use the attached **keygen.sh** command-line tool in the */config* folder.

```bash
SPURL="https://emg.pass.copernicus.eu" # service initial access point made accessible by traefik
./config/keygen.sh -h $SPURL -y 20 -e $SPURL/shibboleth -n sp-signing -f
docker secret create EMG_SHIB_CERT sp-signing-cert.pem
docker secret create EMG_SHIB_KEY sp-signing-key.pem
```
Additionally, a docker config `idp_metadata` containing the metadata of the used IdP needs to be added:
```bash
docker config create idp_metadata idp-metadata-received.xml
```

Deploy the stack in the dev environment:
```
docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.dev.yml -c docker-compose.logging.yml -c docker-compose.logging.dev.yml vhr18-pvs  # start VHR_IMAGE_2018 stack in dev mode, for example to use local sources
docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml -c docker-compose.logging.yml -c docker-compose.logging.dev.yml emg-pvs  # start Emergency stack in dev mode, for example to use local sources
```
Deploy the base & logging stacks in the production environment:
```
docker stack deploy -c docker-compose.logging.yml -c docker-compose.logging.ops.yml logging
docker stack deploy -c docker-compose.base.ops.yml base-pvs
```
Deploy the stack in the production environment:
Please note that in order to reuse existing database volumes, <stack-name> needs to be the same. Here we use `vhr18-pvs`, but in the operational service `vhr18-pdas` is used.
```
docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.ops.yml vhr18-pvs
```
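
To check that all services of a deployed stack came up correctly:

```bash
docker service ls                       # desired vs. running replica counts
docker stack ps vhr18-pvs --no-trunc    # per-task state and errors, if any
```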

First steps:

To register the first data, use the following command inside the registrar container:
```
UPLOAD_CONTAINER=<product_bucket_name> && python3 registrar.py --objects-prefix <product_object_storage_item_prefix>
```
If you want to develop changes for github.com/eoxc/eoxc against a deployed dev stack via the webpack dev-server, you need to mount the eoxc folder into the client container's node_modules and apply a marionette monkeypatch to your locally cloned `eoxc`:
```
services:
  client:
    volumes:
      - type: bind
        source: /home/lubomir/projects/eoxc
        target: /node_modules/eoxc
```
```
patch _cloned_eoxc/node_modules/backbone.marionette/lib/core/backbone.marionette.js client/eoxc_marionette.patch
```

Tear down stack including data:

```bash
docker stack rm vhr18-pvs                      # stop stack
docker volume rm vhr18-pvs_db-data                        # delete volumes
docker volume rm vhr18-pvs_redis-data
docker volume rm vhr18-pvs_traefik-data
docker volume rm vhr18-pvs_instance-data
```

### Setup logging

To access the logs, navigate to http://localhost:5601. Ignore all of the fancy enterprise capabilities and select Kibana > Discover in the hamburger menu.

On first run, you need to define an index pattern to select the data source for kibana in elastic search.
Since we only have fluentd, you can just use `*` as index pattern.
Select `@timestamp` as the time field
([see also](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html)).
Example of a kibana query to discover logs of a single service:
```
https://<kibana-url>/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(columns:!(path,size,code,log),filters:!(),index:<index-id>,interval:auto,query:(language:kuery,query:'%20container_name:%20"<service-name>"'),sort:!())
```
Development service stacks keep logging to stdout/stderr unless the `logging` dev stack is used, so logs can also be read directly from docker, as shown below.
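
A minimal sketch of reading such logs directly (the service name is a placeholder for any service of a deployed stack):

```bash
docker service logs -f vhr18-pvs_renderer
```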
On the production machine, `fluentd` is set as the logging driver for the docker daemon by modifying `/etc/docker/daemon.json` to:
```
{
    "log-driver": "fluentd",
    "log-opts": {
        "fluentd-sub-second-precision": "true"
    }
}
```
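
After changing `daemon.json`, the docker daemon needs to be restarted for the new logging driver to take effect:

```bash
sudo systemctl restart docker
```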
### Setup sftp

The `SFTP` image allows remote access into two logging folders. You can define (edit/add) users, passwords, and UID/GID using the `docker config create` command mentioned above.

In the example below the username is `eox`. Once the stack is deployed, you can sftp into the logging folders through port 2222 for `vhr18` (`emg` and `dem` use ports 2223 and 2224, respectively) if you are running the dev stack on localhost:

```bash
sftp -P 2222 eox@127.0.0.1
```
You will be logged into the `/home/eox/data` directory, which contains the two logging directories: `to/panda` and `from/fepd`.

**NOTE:** The mounted directory you are directed into is `/home/user`, where `user` is the username; hence, when setting or editing the username in the configs, the sftp mounted volume paths in `docker-compose.<collection>.yml` must be changed accordingly.

# Documentation

The `user-guide` and `operator-guide` documentation is built on each commit in the CI step `pages` (for master, staging, and tags) and deployed to our GitLab pages at https://esa.pages.eox.at/prism/vs/<user|operator>/<branch>/index.html, or in the CI step `review-docs`, deployed to https://esa.pages.eox.at/-/prism/vs/-/jobs/$CI_JOB_ID/artifacts/public/master/index.html.

## Installation

If you want to build it locally, do the following:

```bash
python3 -m pip install sphinx recommonmark sphinx-autobuild
```

## Generate html

```bash
make html

# For watched html automatic building
make html-watch

# For pdf output
make latexpdf
# To shrink size of pdf
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dPrinted=false -q -o View-Server_-_User-Guide_small.pdf View-Server_-_User-Guide.pdf
# make latexpdf and make html combined
make build
```

The documentation is generated in the respective *_build/html* directory.

# Create software releases

## Release a new vs version

We use [bump2version](https://github.com/c4urself/bump2version) to increment versions of individual docker images and create git tags. Pushed tags trigger the CI `docker push` action for the versioned images. It also updates the image versions used in the `.ops` docker compose files.

Pushing to the `master` branch updates the `latest` images, while pushing to the `staging` branch updates the `staging` images.
For **versions** in general, we use semantic versioning with the format {major}.{minor}.{patch}-{release}.{build}.
First check the deployed staging version on the staging platform (TBD); if no problems are found, proceed.
The following operation should be done on the `staging` or `master` branch.
```
bump2version <major/minor/patch/release/build>
git push
git push --tags
```
If it was done on the `staging` branch, it should then be merged to `master`, unless it is only a patch to a previous major version.
A hotfix to production is developed in a branch initiated from `master`, then merged to `staging` for verification, and finally merged to `master` for release.
## Source code release

Create a TAR from source code:

```bash
git archive --prefix release-1.0.0/ -o release-1.0.0.tar.gz -9 master
```

Save Docker images:

```bash
docker save -o pvs_core.tar registry.gitlab.eox.at/esa/prism/vs/pvs_core
docker save -o pvs_cache.tar registry.gitlab.eox.at/esa/prism/vs/pvs_cache
docker save -o pvs_preprocessor.tar registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
docker save -o pvs_client.tar registry.gitlab.eox.at/esa/prism/vs/pvs_client
docker save -o pvs_ingestor.tar registry.gitlab.eox.at/esa/prism/vs/pvs_ingestor
docker save -o fluentd.tar registry.gitlab.eox.at/esa/prism/vs/fluentd
396
```
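
On a target host, the saved images can be loaded back with, for example:

```bash
docker load -i pvs_core.tar
```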
# Terrain data generation for Cesium

## Mount swift as a file system

The following steps were implemented with the cloud storage mounted as a file system. To mount swift into a directory called `mount`:

```bash
# mount swift bucket
docker run -ti --rm --device /dev/fuse --cap-add SYS_ADMIN --privileged     --env MODE=0777 \
--env ARGS="--debug --allow-other --os-storage-url=<swift_auth_url> --os-auth-token=<swift_auth_token>" \
--volume=$(pwd)/mount:/ovh:shared jeromebreton/svfs
```
where `<swift_auth_url>` and `<swift_auth_token>` are the outputs of the `swift auth` command.

## Pre-processing of the DEM data

First we need to create a global mosaic of the desired resolution. Having a list of the raster `*.tif` files (e.g. 30 meter `30_DTE` in this case), you can use GDAL to generate the mosaic:

```bash
# to create the mosaic
docker run -v $(pwd):/data geodata/gdal gdalbuildvrt -te -180 -90 180 90 -a_srs EPSG:4326 -input_file_list ./mount/terrain-src/30_DTE_list.txt ./mount/terrain-src/30_DTE_mosaic.vrt
```
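
To sanity-check the resulting mosaic before tiling, `gdalinfo` can be used; a minimal sketch reusing the same image (the path assumes the mount layout above):

```bash
docker run -v $(pwd):/data geodata/gdal gdalinfo /data/mount/terrain-src/30_DTE_mosaic.vrt
```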

## Terrain Generation

To generate Cesium-friendly terrain data (up to zoom level 7), we need to generate the quantized-mesh tiles plus the `layer.json`, which is placed in the root of the terrain tiles directory:

```bash
# for creating the layer.json (limited to zoom level 7)
docker run -i -v $(pwd)/mount/:/data tumgis/ctb-quantized-mesh ctb-tile -f Mesh  -s 7 -e 0 -R -C -l -o /data/terrain-mesh /data/terrain-src/30_DTE_mosaic.vrt

```
It is recommended to generate terrain data zoom level by zoom level:

```bash
# for mesh creation ( only zoom level 7)
docker run -i -v $(pwd)/mount/:/data tumgis/ctb-quantized-mesh ctb-tile -f Mesh -s 7 -e 7 -R -C -o /data/terrain-mesh /data/terrain-src/30_DTE_mosaic.vrt

```
We repeat the above step for each lower zoom level, setting `n` to the respective zoom level (`-s <n> -e <n>`), e.g. with a loop as sketched below.
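
A minimal sketch of that repetition (for the lowest levels, substitute the reduced-resolution mosaic described next):

```bash
# generate each remaining zoom level separately, from 6 down to 0
for n in 6 5 4 3 2 1 0; do
  docker run -i -v $(pwd)/mount/:/data tumgis/ctb-quantized-mesh \
    ctb-tile -f Mesh -s $n -e $n -R -C -o /data/terrain-mesh /data/terrain-src/30_DTE_mosaic.vrt
done
```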

For the lower zoom levels (e.g. <= 4), we need to reduce the resolution of the mosaic to avoid `ctb-tile` errors; we can use `gdalwarp`:

```bash
docker run -i -v $(pwd)/mount/:/data tumgis/ctb-quantized-mesh gdalwarp -tr 0.01 0.01 /data/terrain-src/30_DTE_mosaic.vrt /data/terrain-src/REDUCED_30_DTE_mosaic.vrt
```