This section details the data ingestion and later management in the VS.

Redis queues
------------
The central synchronization component in the VS is the ``redis`` key-value store.
It provides various queues, on which the services listen. For
operators it provides a high-level interface through which data products can be
registered and managed.
Via Redis, the ingestion can be triggered and observed.

Since the container ID can change, it is better to retrieve it for every
command instead of relying on a variable:

.. code-block:: bash

    # run redis-cli inside the running redis container
    docker exec -it $(docker ps -qf "name=<stack-name>_redis") redis-cli

For the sake of brevity, the next commands in this chapter use either of
the above techniques and will just print the final commands inside the redis
container.

``Sets`` are used to denote that an element is part of a particular group,
e.g. being preprocessed, or having failed registration.
``Lists`` are used as a task queue. It is possible to add items to either
end of the queue, but by convention items are pushed on the "left" and
popped from the "right" end of the list, resulting in a first-in-first-out
(FIFO) queue. It is entirely possible to push elements to the "right" end
as well.

For example, the following command pushes a
new path of an object to preprocess on the ``preprocess_queue``:

.. code-block:: bash

    redis-cli lpush preprocess_queue "/data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar"

Usually, with a preprocessor service running and no other items in the
``preprocess_queue``, this value will be immediately popped from the list and
processed. For the sake of demonstration this command would print the contents
of the ``preprocess_queue``:

.. code-block:: bash

    $ redis-cli lrange preprocess_queue 0 -1
    /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar

Now that the product is being preprocessed, it should be visible in the
``preprocessing_set``. As the name indicates, this uses the ``Set``
datatype, thus requiring the ``SMEMBERS`` subcommand to list its contents:
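
.. code-block:: bash

    # sketch: SMEMBERS lists all members of the named set
    redis-cli smembers preprocessing_set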

Once the preprocessing of the product is finished, the preprocessor will remove
the currently worked on path from the ``preprocessing_set`` and add it either
to the ``preprocess-success_set`` or the ``preprocess-failure_set`` depending
on whether the processing succeeded or not. They can be inspected using the
same ``SMEMBERS`` subcommand with one of the set names as a parameter.
Additionally, upon success, the preprocessor places the same product path on
the ``register_queue``, where it can be inspected with the following command:
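
.. code-block:: bash

    # sketch: the register_queue is a list, so LRANGE shows its items
    redis-cli lrange register_queue 0 -1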

Collections can be deleted without affecting the contained products. Note,
however, that if other services rely on a particular
collection, deleting said collections without a replacement can lead to
service disruptions.
In certain scenarios it may be useful to add specific products to or exclude
them from a collection. For this, the Product identifier needs to be known. To
find out the Product identifier, either a query of an existing
collection via OpenSearch or the CLI command ``id list`` can be used.
When the identifier is obtained, the following management command inserts a
product into a collection:
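
.. code-block:: bash

    # hypothetical invocation, assuming an EOxServer-style management CLI
    # inside the stack; command and placeholder names may differ
    python3 manage.py collection insert <collection-id> <product-id>
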
Initialization
==============
In order to set up an instance of the View Server (VS), the separate
``pvs_starter`` utility is recommended.
Running the Initialization
--------------------------
The ``pvs_starter`` utility is distributed as a Python package and easily
installed via ``pip``.
.. code-block:: bash

    pip3 install pvs_starter # TODO: git url

Now a new VS instance can be set up like this:
.. code-block:: bash

    python3 -m pvs_starter.cli config.yaml out/ -f

This takes the initialization configuration ``config.yaml`` to generate
the required structure of a new VS instance in the ``out/`` directory.
Initialization config
---------------------
The important part of the initialization is the configuration. The file is
structured in YAML as detailed below. It contains the following
sections:
``database``
~~~~~~~~~~~~
Here, access details and credentials of the database are stored. It defines the
internal database name, user, and password that will be created when the stack
is deployed. Note that there is no ``host`` setting, as this will be handled
automatically within the Docker Swarm.
.. code-block:: yaml
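
    # illustrative sketch only: the exact keys may differ
    database:
      name: vs_db
      user: vs_user
      password: <change-me>
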
``products``
~~~~~~~~~~~~
This section defines ``product_type`` related information. The two most important
settings here are the ``type_extractor`` and ``level_extractor`` structures
which specify how the product type and product level should be extracted from
the metadata. For this, an XPath (or multiple) can be specified to retrieve
that information.
The ``types`` section defines the available ``product_types`` and which ``browse``
and ``mask`` types are to be generated.
.. code-block:: yaml
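
    # illustrative sketch only: the exact keys and XPaths may differ
    type_extractor:
      xpath: //metadata/productType
    level_extractor:
      xpath: //metadata/productLevel
    types:
      PL00:
        browses: [TRUE_COLOR]
        masks: [validity]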

``collections``
~~~~~~~~~~~~~~~
In the ``collections`` section, the collections are set up and it is defined
which products based on ``product_type`` and ``product_level`` will be inserted into them. The
``product_types`` must list types defined in the ``products`` section.
.. code-block:: yaml
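
    # illustrative sketch only: types must be defined in the products section
    collections:
      VHR_IMAGE_2018:
        product_types: [PL00]
        product_levels: [Level_3]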

``storages``
~~~~~~~~~~~~
Here, the three relevant storages can be configured: the ``source``,
``preprocessed``, and ``cache`` storages.
The ``source`` storage defines the location from which the original files will be
downloaded to be preprocessed. Preprocessed images and metadata will then be
uploaded to the ``preprocessed`` storage. The cache service will cache images on
the ``cache`` storage.
Each storage definition uses the same structure and can target various types
of storage backends.
``cache``
~~~~~~~~~
This section defines the exposed services and how the layers shall be cached
internally.
.. code-block:: yaml

    title: VHR Image 2018 Level 3 NDVI
    abstract: VHR Image 2018 Level 3 NDVI
    style: earth
    # TODO grids? cache options?

Once the initialization is finished, the next step is to deploy the Docker Swarm
stack as described in the section :ref:`setup`.

The following configuration files impact the behavior of the View Server:
published layers.
- ``init-db.sh``: This file sets up the registrar and renderer side of the VS.
Initialization and Setup
------------------------
In order to help with the initial setup of a VS, the ``pvs_starter`` package
described in the section :ref:`initialization` allows quickly establishing the
required structure of configuration files.
The section :ref:`setup` describes how to deploy a Docker Swarm stack using the
configuration files generated in the initialization step.

Updating the service software is done using previously established tools. To
update the service in question, it needs to be scaled to zero replicas. Then
the new image can be pulled, and the service can be scaled back to its original
value. This forces the start of the service from the newly fetched image.
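
As a sketch, assuming the service normally runs three replicas, this could
look like the following (service and image names are placeholders):

.. code-block:: bash

    # stop all instances of the service
    docker service scale <stack-name>_<service-name>=0
    # fetch the newer image
    docker pull <image>:<tag>
    # scale the service back to its original replica count
    docker service scale <stack-name>_<service-name>=3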

Another option, which keeps the service running during the upgrade procedure,
is to sequentially restart the individual instances of the service after
pulling a newer image, using the following command:
.. code-block:: bash

    docker service update --force <stack-name>_<service-name>

Updating configurations or environment files
--------------------------------------------
Updating the service configurations or environment files in use cannot be done
just by rescaling the impacted services to 0 and rerunning them. The whole
stack needs to be shut down using the command:
.. code-block:: bash

    docker stack rm <stack-name>

A new deployment of the stack will then use the updated configuration. Note
that this process necessarily involves a certain service downtime between
shutting down the stack and the new deployment.
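
For illustration, assuming the stack was originally deployed from a compose
file, the redeployment could look like this (file and stack names are
placeholders):

.. code-block:: bash

    docker stack deploy -c <compose-file.yml> <stack-name>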

Setup
=====
In this chapter the setup of a new VS stack is detailed. Before this step can
be done, the configuration and environment files need to be present. These
files can be added manually or be created in the :ref:`initialization` step.

Docker
------

When pulling
images from the default repository, this happens automatically. When private
repositories are used, they need to be configured beforehand.
Currently, all images used in VS that are not off-the-shelf are hosted on the
``registry.gitlab.eox.at`` registry. It can be configured to be used with this
command with the correct ``username`` and ``password`` filled in:
.. code-block:: bash
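
    # fill in the actual credentials for registry.gitlab.eox.at
    docker login -u <username> -p <password> registry.gitlab.eox.at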