From fbfb5c7a98bf688d23cf3c86212dc74987ec7eec Mon Sep 17 00:00:00 2001
From: Lubomir Bucek <lubomir.bucek@eox.at>
Date: Mon, 29 Jun 2020 16:04:09 +0200
Subject: [PATCH] minor typos, add note about updating config / env

---
 documentation/operator-guide/ingestion.rst   | 16 +++++++-------
 .../operator-guide/initialization.rst        | 21 +++++++++----------
 documentation/operator-guide/management.rst  | 19 +++++++++++++++++
 documentation/operator-guide/setup.rst       |  4 ++--
 4 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/documentation/operator-guide/ingestion.rst b/documentation/operator-guide/ingestion.rst
index 58c49949..7571c6d8 100644
--- a/documentation/operator-guide/ingestion.rst
+++ b/documentation/operator-guide/ingestion.rst
@@ -8,9 +8,9 @@ This section details the data ingestion and later management in the VS.
 Redis queues
 ------------

-The central synchronization component in the VS is the redis key-value store.
-It provides various queues, which are listened on by the services. For
-operators it provides a high-level interface through wich data products can be
+The central synchronization component in the VS is the ``redis`` key-value store.
+It provides various queues, which the services are listening to. For
+operators it provides a high-level interface through which data products can be
 registered and managed.

 Via the Redis, the ingestion can be triggered and observed. In order to
@@ -51,7 +51,7 @@ is better to retrieve it for every command instead of relying on a variable:

     docker exec -it $(docker ps -qf "name=<stack-name>_redis")

-For the sake of brevity, the next commands in this chaptere are using either of
+For the sake of brevity, the next commands in this chapter are using either of
 the above techniques and will just print the final commands inside the redis
 container.

@@ -63,7 +63,7 @@ container.
     element is part of a particular group, e.g: being preprocessed, or having
     failed registration.

-    ``Lists`` are used as a task queue. It is possible to add items to eithre
+    ``Lists`` are used as a task queue. It is possible to add items to either
     end of the queue, but by convention items are pushed on the "left" and
     popped from the "right" end of the list resulting in a last-in-first-out
     (LIFO) queue. It is entirely possible to push elements to the "right" end
@@ -84,7 +84,7 @@ new path of an object to preprocess on the ``preprocess_queue``:

     redis-cli lpush preprocess_queue "/data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar"

 Usually, with a preprocessor service running and no other items in the
-``preprocess_queue`` this value will be immediatly popped from the list and
+``preprocess_queue`` this value will be immediately popped from the list and
 processed. For the sake of demonstration this command would print the contents
 of the ``preprocess_queue``:

@@ -93,7 +93,7 @@ of the ``preprocess_queue``:

     $ redis-cli lrange preprocess_queue 0 -1
     /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar

-Now that the product is beeing preprocessed, it should be visible in the
+Now that the product is being preprocessed, it should be visible in the
 ``preprocessing_set``.
 As the name indicates, this is using the ``Set`` datatype, thus requiring
 the ``SMEMBERS`` subcommand to list:
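A minimal sketch of such a listing, assuming a shell inside the redis container and the ``preprocessing_set`` name used above:

.. code-block:: bash

    redis-cli smembers preprocessing_set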
@@ -106,7 +106,7 @@ Once the preprocessing of the product is finished, the preprocessor will remove
 the currently worked on path from the ``preprocessing_set`` and add it either
 to the ``preprocess-success_set`` or the ``preprocess-failure_set`` depending
 on whether the processing succeeded or not. They can be inspected using the
-same ``SMEMBERS`` subcommand but either name as parameter.
+same ``SMEMBERS`` subcommand with one of the set names as a parameter.

 Additionally, upon success, the preprocessor places the same product path on
 the ``register_queue``, where it can be inspected with the following command.
diff --git a/documentation/operator-guide/initialization.rst b/documentation/operator-guide/initialization.rst
index 6b1418fb..ad81c5a0 100644
--- a/documentation/operator-guide/initialization.rst
+++ b/documentation/operator-guide/initialization.rst
@@ -32,7 +32,7 @@ sections:
 ``database``
 ~~~~~~~~~~~~

-Here, there access credentials of the database are stored. It defines the
+Here, the access credentials of the database are stored. It defines the
 internal database name, user and password that will be created when the stack
 is deployed. Note that there is no ``host`` setting, as this will be handled
 automatically.
@@ -68,14 +68,14 @@ TODO

 ``products``
 ~~~~~~~~~~~~

-This section defines product type related information. The two most important
+This section defines ``product_type`` related information. The two most important
 settings here are the ``type_extractor`` and ``level_extractor`` structures
-which specify how the product type and product level will be extracted from
+which specify how the product type and product level should be extracted from
 the metadata. For this, an XPath (or multiple) can be specified to retrieve
 that information.

-The ``types`` section defines the available product types and what browse
-and mask types are to be generated.
+The ``types`` section defines the available ``product_types`` and which ``browse``
+and ``mask`` types are to be generated.

 .. code-block:: yaml
@@ -130,7 +130,7 @@ and mask types are to be generated.
 ~~~~~~~~~~~~~~~

 In the ``collections`` section, the collections are set up and it is defined
-which products of what type and level will be inserted into them. The
+which products, based on ``product_type`` and ``product_level``, will be inserted into them. The
 ``product_types`` must list types defined in the ``products`` section.

 .. code-block:: yaml
@@ -149,9 +149,9 @@ which products of what type and level will be inserted into them. The
 Here, the three relevant storages can be configured: the ``source``,
 ``preprocessed`` and ``cache`` storages.

-The source storage defines the location from which the original files will be
-pulled to be preprocessed. Preprocessed images and metadata will then be
-pushed to the ``preprocessed`` storage. The cache service will cache images on
+The ``source`` storage defines the location from which the original files will be
+downloaded to be preprocessed. Preprocessed images and metadata will then be
+uploaded to the ``preprocessed`` storage. The cache service will cache images on
 the ``cache`` storage.

 Each storage definition uses the same structure and can target various types
@@ -203,8 +203,7 @@ TODO: improve example
 ``cache``
 ~~~~~~~~~

-This section defines the exposed services layers of the cache, and how the
-internal layers shall be cached.
+This section defines the exposed services, and how the layers shall be cached internally.

 .. code-block:: yaml

diff --git a/documentation/operator-guide/management.rst b/documentation/operator-guide/management.rst
index b90e6492..2b247691 100644
--- a/documentation/operator-guide/management.rst
+++ b/documentation/operator-guide/management.rst
@@ -36,3 +36,22 @@ Updating the service software is done using previously established tools.
 To update the service in question, it needs to be scaled to zero replicas.
 Then the new image can be pulled, and the service can be scaled back to its
 original value. This forces the start of the service from the newly fetched image.
+Another option, which keeps the service running during the upgrade procedure, is to
+sequentially restart the individual instances of the service after pulling a newer image, using the following command:
+
+.. code-block:: bash
+
+    docker service update --force <stack-name>_<service-name>
+
+Updating configurations or environment files
+--------------------------------------------
+
+Updating the configurations or environment files used by the services cannot be done just by
+scaling the impacted services to 0 and back up. The whole stack needs to be shut down with the following command:
+
+.. code-block:: bash
+
+    docker stack rm <stack-name>
+
+A new deployment of the stack will then use the updated configuration. The above-mentioned process
+necessarily involves a certain service downtime between shutting down the stack and the new deployment.
diff --git a/documentation/operator-guide/setup.rst b/documentation/operator-guide/setup.rst
index 5d27a838..8db89eda 100644
--- a/documentation/operator-guide/setup.rst
+++ b/documentation/operator-guide/setup.rst
@@ -5,7 +5,7 @@ Setup

 In this chapter the setup of a new VS stack is detailed. Before this step can
 be done, the configuration and environment files need to be present. These
-files can be added manually, or be created in the initialization step.
+files can be added manually or be created in the :ref:`initialization` step.


 Docker
@@ -81,7 +81,7 @@ images from the default repository, this happens automatically. When private
 repositories are used, they need to be configured beforehand. Currently, all
 images used in VS that are not off-the-shelf are hosted on the
 ``registry.gitlab.eox.at`` registry. It can be configured to be used with this
-command with the correct username and password filled in:
+command with the correct ``username`` and ``password`` filled in:

 .. code-block:: bash

-- 
GitLab
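The configuration update described in the ``management.rst`` addition above amounts to a full
stop-and-redeploy cycle. A minimal sketch of that cycle, assuming the stack was originally deployed
from a compose file (hypothetically named ``docker-compose.yml``) as ``<stack-name>``:

.. code-block:: bash

    # remove the running stack so that configuration and environment files are re-read on deploy
    docker stack rm <stack-name>

    # check that no tasks are left; once the stack is fully removed this reports nothing found
    docker stack ps <stack-name>

    # redeploy; the new deployment starts with the updated configuration
    docker stack deploy -c docker-compose.yml <stack-name>

Between the removal and the new deployment the services are down, which is the downtime mentioned above.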