Commit 40bf5cfa authored by Lubomir Dolezal

operators guide additions

parent 01a9d382
2 merge requests: !36 Staging to master to prepare 1.0.0 release, !34 Shib auth

@@ -397,7 +397,7 @@ preprocessing

force_north_up
TODO
Circumvents the interpretation of the corner names and assumes a north-up orientation of the image.
tps

@@ -162,12 +162,23 @@ it is passed as a command line argument, which is then processed normally.

.. code-block:: bash

    preprocess \
        --config-file /preprocessor_config.yml \
        --validate \
        --use-dir /tmp \
        data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar

In order to preprocess an ngEO Ingest Browse Report, an additional ``--browse-report`` parameter needs to be added:
.. code-block:: bash

    preprocess \
        --config-file /preprocessor_config.yml \
        --browse-report \
        --use-dir /tmp \
        browse_report_test1.json

In this "one-off" mode, the item will not be placed in the resulting set
(``preprocessing_set``, ``preprocess-success_set``, and
``preprocess-failure_set``).

@@ -273,3 +284,12 @@ Deregistration

.. code-block:: bash

    manage.py coverage deregister "${product_id}_coverage"

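As a hypothetical example, assuming the product identifier from the preprocessing example above:

.. code-block:: bash

    # hypothetical example; the identifier is taken from the ingestion example above
    product_id="urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7"
    manage.py coverage deregister "${product_id}_coverage"
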
Preprocessing vs registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The preprocessing step aims to ensure that cloud optimized GeoTIFF (COG) files are created in order to significantly speed up the viewing of large amounts of data at lower zoom levels. There are several cases where such preprocessing is not necessary or wanted.
- If the data already are COGs and in a favorable projection, which will be presented to the user most of the time, direct registration should be used. This means that paths to individual products are pushed directly to the register queues (see the sketch after this list).
- Direct registration can also be preferred in cases where the preprocessing step would take too much time, as it provides immediate access to the metadata and catalog functions, at the cost of slower rendering times.
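
A minimal sketch of such direct registration, assuming the registration queue is a Redis list named ``register_queue`` reachable on a host called ``redis`` (both names are assumptions, check the instance configuration):

.. code-block:: bash

    # push a product path directly onto the registration queue (sketch;
    # queue name "register_queue" and redis host are assumptions)
    redis-cli -h redis lpush register_queue \
        "data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar"
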

@@ -62,9 +62,9 @@ shutting down of the stack and new deployment.

Inspecting reports
------------------
Once registered, an XML report containing the WCS and WMS GetCapabilities of the registered product is generated and can be accessed by connecting to the `SFTP` image
via the SFTP protocol.
In order to log into the logging folders through port 2222 on the hosting IP (e.g. localhost if you are running the dev stack), the following command can be used:
.. code-block:: bash
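
    # a minimal sketch; "user" stands for the username configured in the `.conf` file
    sftp -P 2222 user@localhost
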

@@ -74,8 +74,8 @@ this will direct the user into `/home/<username>/data` directory which contains

.. Note:: The mounted directory that the user is directed into is *`/home/user`*, where `user` is the username; hence, when changing the username in the `.conf` file, the `sftp` mounted volume paths in `docker-compose.<collection>.yml` must be changed accordingly.
Inspecting logs in development
------------------------------
All service components are running inside Docker containers and it is therefore possible to inspect the logs for anomalies via standard ``docker logs`` calls, redirected for example to the ``less`` command to allow paging through them.
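
For example, paging through the logs of a single container (the container name placeholder follows the convention used below):

.. code-block:: bash

    # page through all logs of one container
    docker logs <container-name> 2>&1 | less
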

@@ -95,8 +95,24 @@ It is possible to show logs of all containers belonging to a service from a master node:

.. code-block:: bash

    docker service logs <stack-name>_<service-name> -t 2>&1 | sort -k 1 2>&1 | tail -n <number-of-last-lines> 2>&1 | less

The ``docker service logs`` command is intended as a quick way to view the latest log entries of all tasks of a service, but should not be used as the main way to collect these logs. For that, on the production setup, an additional EFK (Elasticsearch, Fluentd, Kibana) stack is deployed.
Inspecting logs in production
-----------------------------
Fluentd is configured as the main logging driver of the Docker daemon on the virtual machine level. Therefore, for other services to run, the Fluentd service must be running too. To access the logs, the interactive and multi-purpose Kibana interface is available and exposed externally by Traefik.
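
A minimal sketch of such a Docker daemon configuration in ``/etc/docker/daemon.json``, assuming Fluentd listens on its default port 24224 (the actual driver options of the stack may differ):

.. code-block:: bash

    # show the assumed Docker daemon logging configuration (sketch)
    cat /etc/docker/daemon.json
    {
      "log-driver": "fluentd",
      "log-opts": {
        "fluentd-address": "localhost:24224"
      }
    }
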
For a simple listing of the filtered, time-sorted logs as an equivalent to the `docker service logs` command, the basic ``Discover`` app can be used. The main panel to interact with the logs is the ``Search`` bar, allowing filtered field-data and free-text searches, modifying the time range etc. The individual log results will then appear in the ``Document table`` panel at the bottom of the page.
For specific help, please consult the `Kibana official documentation <https://www.elastic.co/guide/en/kibana/current/discover.html>`_.
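
For example, a hypothetical ``Search`` query narrowing the results down to error entries of one service could be ``container_name : *registrar* and log : *ERROR*`` (the field and service names are assumptions and depend on the Fluentd configuration).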
.. Kibana Discover screenshot placeholder.
Kibana also allows aggregating log data based on a search query in two modes of operation:

- ``Bucketing``, grouping the matched documents into buckets, e.g. by field values, ranges, or time intervals,
- ``Metrics``, keeping track of computed metrics over a set of documents (buckets).
Increasing logging level
------------------------

@@ -112,7 +128,9 @@ A restart of the respective service for the change to be applied is also necessary.

.. code-block:: bash

    cd ${INSTALL_DIR}/pvs_instance
    sed -i 's/DEBUG = False/DEBUG = True/g' settings.py

In order to increase the logging level of the registrar and preprocessor services to `DEBUG`, the respective Python commands need to be run with the optional parameter **--debug**.
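
For example, a sketch of the one-off preprocessing call from above with increased verbosity (the flag placement is an assumption):

.. code-block:: bash

    preprocess \
        --config-file /preprocessor_config.yml \
        --validate \
        --debug \
        --use-dir /tmp \
        data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
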
The ingestor service logs its messages in DEBUG mode by default.
The cache service internally uses the MapCache software, which usually incorporates an Apache 2 HTTP Server. Due to that, the logging level is shared throughout the whole service and is based on the Apache `.conf` file, whose path is stored in the $APACHE_CONF environment variable. To change the logging level, edit this file by setting **LogLevel debug** and then gracefully restart the Apache component (this way, the cache service itself will not restart and renew its default configuration).
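
A sketch of that procedure inside the cache container (the exact ``LogLevel`` line in the configuration may differ):

.. code-block:: bash

    # raise the Apache logging level to debug (sketch)
    sed -i 's/^LogLevel .*/LogLevel debug/' "$APACHE_CONF"
    # gracefully restart the Apache component only, keeping the container running
    apachectl -k graceful
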