Commit 4bb8ac2f authored by Fabian Schindler

Updates to docs

parent 23412f55
Now that the volume was deleted, the stack can be re-deployed as described
above, which will trigger the automatic re-creation and initialization of the
volume. For the ``instance-data``, it means that the instance will be
re-created and all database models with it.
Docker Compose Settings
-----------------------
VS Environment Variables
^^^^^^^^^^^^^^^^^^^^^^^^
These environment variables are used by the VS itself to configure various
parts.
.. note::
These variables are used during the initial stack setup. When these
This section details the data ingestion and later management in the VS.
Redis queues
------------
The central synchronization component in the VS is the ``redis`` key-value
store. It provides various queues, which the services are listening to. For
operators it provides a high-level interface through which data products can be
registered and managed.
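
As an illustrative sketch only (the actual queue names and paths depend on the
stack configuration), a product path can be pushed onto such a queue with
``redis-cli``:

.. code-block:: bash

    # push a product path onto a queue (queue name and path are hypothetical)
    redis-cli -h redis lpush register_queue "/data/products/example_product.tar"
    # inspect how many items are currently waiting
    redis-cli -h redis llen register_queue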
It is passed as a command line argument, which is then processed normally.
.. code-block:: bash
python3 /preprocessor.py \
--mode standard \
--replace \
--tar-object-path /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
In this mode, the item will not be placed in the resulting set
(``preprocessing_set``, ``preprocess-success_set``, and
The following alias is assumed:
.. code-block:: bash
alias manage.py='python3 /var/www/pvs/dev/pvs_instance/manage.py'
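
With this alias in place, standard Django management commands can be run
against the instance, for example:

.. code-block:: bash

    manage.py help     # list all available management commands
    manage.py shell    # open a Python shell with the instance configured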
A collection is a grouping of earth observation products, accessible as a
The ``pvs_starter`` package can be installed via ``pip``:
.. code-block:: bash
pip3 install git+ssh://git@gitlab.eox.at/esa/prism/pvs_starter.git
Now a new VS instance can be set up like this:
Here, the preprocessing can be configured in detail.
.. TODO
``products``
~~~~~~~~~~~~
This section defines ``product_type`` related information. The two most
important settings here are the ``type_extractor`` and ``level_extractor``
structures which specify how the product type and product level should be
extracted from the metadata. For this, an XPath (or multiple) can be specified
to retrieve that information.
The ``types`` section defines the available ``product_types`` and which
``browse`` and ``mask`` types are to be generated.
.. code-block:: yaml
``collections``
~~~~~~~~~~~~~~~
In the ``collections`` section, the collections are set up and it is defined
which products based on ``product_type`` and ``product_level`` will be inserted
into them. The ``product_types`` must list types defined in the ``products``
section.
.. code-block:: yaml
``storages``
~~~~~~~~~~~~
Here, the three relevant storages can be configured: the ``source``,
``preprocessed``, and ``cache`` storages.
The ``source`` storage defines the location from which the original files will
be downloaded to be preprocessed. Preprocessed images and metadata will then be
uploaded to the ``preprocessed`` storage. The cache service will cache images on
the ``cache`` storage.
Various types of storages are supported, such as OpenStack swift.
These storage definitions will be used in the appropriate sections.
.. code-block:: yaml
storages:
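
The example above is elided in this document; the following is a purely
hypothetical sketch of what such a section could look like. All keys and
values are illustrative; the authoritative schema comes from ``pvs_starter``.

.. code-block:: yaml

    storages:
      source:
        # hypothetical: where original product files are downloaded from
        type: swift
        container: source-products
      preprocessed:
        # hypothetical: where preprocessed images and metadata are uploaded
        type: swift
        container: preprocessed-products
      cache:
        # hypothetical: where the cache service stores rendered tiles
        type: local
        path: /cache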
``cache``
~~~~~~~~~
This section defines the exposed services, and how the layers shall be cached
internally.
.. code-block:: yaml
title: VHR Image 2018 Level 3 NDVI
abstract: VHR Image 2018 Level 3 NDVI
style: earth
Once the initialization is finished the next step is to deploy the Docker Swarm
stack as described in the section :ref:`setup`.
The following images are used:
- mdillon/postgis:10
- redis
- traefik:2.1
- fluent/fluentd
- registry.gitlab.eox.at/esa/prism/vs/pvs_core:latest
- registry.gitlab.eox.at/esa/prism/vs/pvs_cache:latest
- registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest
Updating the service software is done using previously established tools. To
update the service in question, it needs to be scaled to zero replicas. Then
the new image can be pulled, and the service can be scaled back to its original
value. This forces the start of the service from the newly fetched image.
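
These steps can be sketched as follows; the service name, image, and replica
count are illustrative:

.. code-block:: bash

    # scale the service down, pull the newer image, then scale back up
    docker service scale <stack-name>_preprocessor=0
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest
    docker service scale <stack-name>_preprocessor=3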
Another option to keep the service running during the upgrade procedure is to
sequentially restart the individual instances of the services after pulling a
newer image using a command:
.. code-block:: bash
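
    # (sketch) The concrete command is not shown in this document; a
    # sequential restart after pulling a newer image can use
    # ``docker service update`` with a hypothetical service name:
    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest
    docker service update --force <stack-name>_preprocessor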
Updating configurations or environment files
--------------------------------------------
Updating the service configurations or environment files used can not be done
just by rescaling the impacted services to 0 and rerunning. The whole stack
needs to be shut down using the command:
.. code-block:: bash
docker stack rm <stack-name>
A new deployment of the stack will use the updated configuration. The
above-mentioned process necessarily involves a certain service downtime
between the shutdown of the stack and the new deployment.
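
As a sketch, the redeployment then uses the same command as the initial
deployment; the compose file and stack name are illustrative:

.. code-block:: bash

    docker stack deploy -c docker-compose.yml <stack-name>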
The next section :ref:`ingestion` explains how to get data into the VS.