Commit 4bb8ac2f authored by Fabian Schindler

Updates to docs

parent 23412f55
@@ -59,8 +59,8 @@ all containers have actually stopped) the next step is to delete the
Now that the volume has been deleted, the stack can be re-deployed as described
above, which will trigger the automatic re-creation and initialization of the
volume. For the ``instance-data`` volume, this means that the instance will be
re-created and all database models with it.
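A minimal sketch of the full cycle, assuming a stack named ``vhr18-pvs`` and a
compose file ``docker-compose.vhr18.yml`` (both hypothetical):

.. code-block:: bash

   # remove the stack and wait until all containers have actually stopped
   docker stack rm vhr18-pvs

   # delete the named volume; run this on the node that holds it
   docker volume rm vhr18-pvs_instance-data

   # re-deploy; the volume is re-created and initialized automatically
   docker stack deploy -c docker-compose.vhr18.yml vhr18-pvs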
Docker Compose Settings
@@ -177,7 +177,8 @@ retrieve the original product files:

VS Environment Variables
^^^^^^^^^^^^^^^^^^^^^^^^

These environment variables are used by the VS itself to configure various
parts.
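Such variables are typically injected into the services via the compose file;
a minimal sketch, where the service name, env file, and variable shown are
assumptions rather than the actual VS configuration:

.. code-block:: yaml

   services:
     registrar:
       env_file:
         - env/vs.env                # hypothetical file holding VS variables
       environment:
         COLLECTION: VHR_IMAGE_2018  # hypothetical variable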
.. note::
   These variables are used during the initial stack setup. When these

...
@@ -8,8 +8,8 @@ This section details the data ingestion and later management in the VS.
Redis queues
------------
The central synchronization component in the VS is the ``redis`` key-value
store. It provides various queues to which the services listen. For operators,
it provides a high-level interface through which data products can be
registered and managed.
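For illustration, an operator could enqueue a product directly with
``redis-cli``; the queue name ``preprocess_queue`` and the container lookup are
assumptions here:

.. code-block:: bash

   # push a product path onto the preprocessing queue inside the redis container
   docker exec -it $(docker ps -qf "name=<stack-name>_redis") \
       redis-cli lpush preprocess_queue "<path/to/product.tar>"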
@@ -160,7 +160,10 @@ it is passed as a command line argument, which is then processed normally.
.. code-block:: bash

   python3 /preprocessor.py \
       --mode standard \
       --replace \
       --tar-object-path /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
In this mode, the item will not be placed in the resulting set
(``preprocessing_set``, ``preprocess-success_set``, and
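The contents of these sets can be inspected with standard Redis commands, for
example:

.. code-block:: bash

   # list the items currently in the success set
   redis-cli smembers preprocess-success_set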
@@ -184,7 +187,7 @@ alias is assumed:
.. code-block:: bash

   alias manage.py='python3 /var/www/pvs/dev/pvs_instance/manage.py'
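With the alias in place, the instance behaves like any Django project, so the
setup can be verified by listing the available management commands:

.. code-block:: bash

   # show all management commands registered by the instance
   manage.py help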
A collection is a grouping of earth observation products, accessible as a

...
@@ -14,7 +14,7 @@ installed via ``pip``.
.. code-block:: bash

   pip3 install git+ssh://git@gitlab.eox.at/esa/prism/pvs_starter.git
Now a new VS instance can be set up like this:
@@ -65,20 +65,20 @@ later access the admin panel to inspect the registered data.
Here, the preprocessing can be configured in detail.

.. TODO
``products``
~~~~~~~~~~~~
This section defines ``product_type``-related information. The two most
important settings here are the ``type_extractor`` and ``level_extractor``
structures, which specify how the product type and product level should be
extracted from the metadata. For this, an XPath (or multiple) can be specified
to retrieve that information.
The ``types`` section defines the available ``product_types`` and which
``browse`` and ``mask`` types are to be generated.
.. code-block:: yaml
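   # Illustrative sketch only; the original example is not shown in this
   # hunk, and all names and XPath expressions below are hypothetical.
   products:
     type_extractor:
       xpath:
         - //Product/ProductType/text()
     level_extractor:
       xpath:
         - //Product/ProcessingLevel/text()
     types:
       PL00:
         browses:
           - TRUE_COLOR
         masks:
           - validity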
@@ -133,8 +133,9 @@ and ``mask`` types are to be generated.
~~~~~~~~~~~~~~~
In the ``collections`` section, the collections are set up, and it is defined
which products, based on ``product_type`` and ``product_level``, will be
inserted into them. The ``product_types`` must list types defined in the
``products`` section.
.. code-block:: yaml
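   # Illustrative sketch only; names are hypothetical. The ``product_types``
   # entries must match types defined in the ``products`` section above.
   collections:
     VHR_IMAGE_2018:
       product_types:
         - PL00
       product_levels:
         - Level_3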
@@ -152,8 +153,8 @@ which products based on ``product_type`` and ``product_level`` will be inserted
Here, the three relevant storages can be configured: the ``source``,
``preprocessed``, and ``cache`` storages.
The ``source`` storage defines the location from which the original files will
be downloaded to be preprocessed. Preprocessed images and metadata will then be
uploaded to the ``preprocessed`` storage. The cache service will cache images
on the ``cache`` storage.
@@ -162,10 +163,6 @@ of storages, such as OpenStack swift.
These storage definitions will be used in the appropriate sections.

.. code-block:: yaml

   storages:
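     # Illustrative continuation; the original example is truncated in this
     # hunk, and all values below are hypothetical.
     source:
       type: swift
       container: source-data
     preprocessed:
       type: swift
       container: preprocessed-data
     cache:
       type: local
       root: /cache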
@@ -206,7 +203,8 @@ TODO: improve example
``cache``
~~~~~~~~~
This section defines the exposed services, and how the layers shall be cached
internally.
.. code-block:: yaml
@@ -274,7 +272,6 @@ This section defines the exposed services, and how the layers shall be cached in
          title: VHR Image 2018 Level 3 NDVI
          abstract: VHR Image 2018 Level 3 NDVI
          style: earth
Once the initialization is finished, the next step is to deploy the Docker
Swarm stack as described in the section :ref:`setup`.
@@ -51,6 +51,7 @@ the used images:
- mdillon/postgis:10
- redis
- traefik:2.1
- fluent/fluentd
- registry.gitlab.eox.at/esa/prism/vs/pvs_core:latest
- registry.gitlab.eox.at/esa/prism/vs/pvs_cache:latest
- registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest
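To fetch these ahead of a deployment, the images can be pulled explicitly; the
``registry.gitlab.eox.at`` images may require a prior ``docker login``:

.. code-block:: bash

   docker login registry.gitlab.eox.at
   docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_core:latest
   docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_cache:latest
   docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest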
...
@@ -36,8 +36,9 @@ Updating the service software is done using previously established tools. To
update the service in question, it needs to be scaled to zero replicas. Then
the new image can be pulled, and the service can be scaled back to its original
value. This forces the start of the service from the newly fetched image.
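As a sketch, for a hypothetical ``preprocessor`` service running three
replicas:

.. code-block:: bash

   docker service scale <stack-name>_preprocessor=0
   # pull on each node that runs the service
   docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor:latest
   docker service scale <stack-name>_preprocessor=3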
Another option to keep the service running during the upgrade procedure is to
sequentially restart the individual instances of the service after pulling a
newer image, using the following command:
.. code-block:: bash
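   # Illustrative; the original command is not shown in this hunk.
   # ``docker service update --force`` restarts the service's tasks one at a
   # time, so they start from the freshly pulled image.
   docker service update --force <stack-name>_<service-name>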
@@ -46,14 +47,16 @@ restart the individual instances of the services after pulling a newer image usi
Updating configurations or environment files
--------------------------------------------
Updating the service configurations or environment files used cannot be done
just by rescaling the impacted services to 0 and rerunning. The whole stack
needs to be shut down using the command:
.. code-block:: bash

   docker stack rm <stack-name>
A new deployment of the stack will use the updated configuration. The
above-mentioned process necessarily involves a certain service downtime between
the shutdown of the stack and the new deployment.
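A subsequent deployment then picks up the changed configuration; for example,
assuming the compose file used for the original deployment:

.. code-block:: bash

   docker stack deploy -c docker-compose.<name>.yml <stack-name>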
The next section :ref:`ingestion` explains how to get data into the VS.