From ea1b630700a576af2114c361fb55925dbf31a6a6 Mon Sep 17 00:00:00 2001
From: Lubomir Bucek <lubomir.bucek@eox.at>
Date: Tue, 20 Oct 2020 17:15:13 +0200
Subject: [PATCH] update readme

---
 README.md | 50 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 40 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 3cce22ff..8276c1be 100644
--- a/README.md
+++ b/README.md
@@ -104,9 +104,14 @@ The following services are defined via docker compose files.
         * seed-success_set
         * seed-failure_set
 
-### TODO: ingestor
+### ingestor
 
-see new service in #7
+* based on the ingestor image
+* by default a Flask app listening on the `/` endpoint for `POST` requests carrying browse reports
+* can alternatively be overridden to run as an inotify watcher on a configured folder, picking up new reports as they appear
+* accepts browse reports with references to images on Swift
+* extracts the browse metadata (id, time, footprint, image reference)
+* `lpush`es the extracted metadata onto the `preprocess-md_queue` Redis queue (see the sketch below)
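+
+A minimal sketch of that flow, assuming a Redis-backed queue and a plain Flask app (the `redis` host name and response handling are illustrative, not the actual implementation):
+```
+import json
+
+import redis
+from flask import Flask, request
+
+app = Flask(__name__)
+queue = redis.Redis(host="redis")  # assumed name of the redis service
+
+
+@app.route("/", methods=["POST"])
+def ingest():
+    # the real service parses the browse report here and extracts the
+    # browse metadata (id, time, footprint, image reference)
+    report = request.get_data(as_text=True)
+    queue.lpush("preprocess-md_queue", json.dumps({"report": report}))
+    return "", 200
+```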
 
 ### TODO: seeder
 
@@ -156,9 +161,20 @@ see new service in #7
 * provides external service for OpenSearch, WMS, & WCS
 * renders WMS requests received from cache or seeder
 
-### TODO: ELK stack
+### logging stack
 
-see #9
+* uses the external elasticsearch:7.9 and kibana:7.9 images
+* the fluentd image is built and published to the registry because additional plugins are needed (see the sketch below)
+* Elasticsearch data is stored in a local volume on the swarm master
+* external access to Kibana is allowed through Traefik
+* log parsing is enabled for the cache and core services
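+
+The additional plugins typically include an Elasticsearch output; a minimal `<match>` section forwarding all logs could look like the following sketch (assuming the fluent-plugin-elasticsearch plugin; host, port and format are placeholders, not the shipped configuration):
+```
+<match **>
+  @type elasticsearch
+  host elasticsearch
+  port 9200
+  logstash_format true
+</match>
+```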
+
+### sftp
+
+* uses the external atmoz/sftp image
+* provides SFTP access to two volumes used for report exchange: registration result XMLs and ingest requirement XMLs
+* accessible on the swarm master on port 2222
+* credentials are supplied via a config file
 
 # Usage
 
@@ -211,7 +227,11 @@ printf "<OS_PASSWORD>" | docker secret create OS_PASSWORD -
 Deploy the stack in dev environment:
 ```
 docker stack deploy -c docker-compose.vhr18.yml -c docker-compose.vhr18.dev.yml -c docker-compose.logging.yml -c docker-compose.logging.dev.yml vhr18-pvs  # start VHR_IMAGE_2018 stack in dev mode, for example to use local sources
-docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml emg-pvs -c docker-compose.logging.yml -c docker-compose.logging.dev.yml # start Emergency stack in dev mode, for example to use local sources
+docker stack deploy -c docker-compose.emg.yml -c docker-compose.emg.dev.yml -c docker-compose.logging.yml -c docker-compose.logging.dev.yml emg-pvs # start Emergency stack in dev mode, for example to use local sources
+```
+Deploy the base stack in the production environment:
+```
+docker stack deploy -c docker-compose.base.ops.yml base-pvs
 ```
 First steps:
 ```
@@ -238,8 +258,20 @@ On first run, you need to define an index pattern to select the data source for
 Since we only have fluentd, you can just use `*` as index pattern.
 Select `@timestamp` as time field
 ([see also](https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html)).
-
-
+Example of a Kibana query to discover the logs of a single service:
+```
+https://<kibana-url>/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(columns:!(path,size,code,log),filters:!(),index:<index-id>,interval:auto,query:(language:kuery,query:'%20container_name:%20"<service-name>"'),sort:!())
+```
+Development service stacks keep logging to stdout/stderr unless the `logging` dev stack is used.
+On the production machine, `fluentd` is set as the logging driver for the docker daemon by modifying `/etc/docker/daemon.json` (a restart of the docker daemon is required afterwards) to
+```
+{
+    "log-driver": "fluentd",
+    "log-opts": {
+        "fluentd-sub-second-precision": "true"
+    }
+}
+```
 ### setup sftp
 
 The `SFTP` image allow remote access into 2 logging folders, you can define (edit/add) users, passwords and (UID/GID) in the respective configuration file ( e.g  *config/vhr_sftp_users.conf* ).
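+
+Entries in that file follow the atmoz/sftp `user:password:uid:gid` convention; an illustrative line (the values are placeholders, not the shipped credentials) could look like:
+```
+eox:changeme:1001:100
+```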
@@ -251,9 +283,7 @@ sftp -P 2222 eox@127.0.0.1
 ``` 
 You will log in  into`/home/eox/data` directory which contains the 2 logging directories : `to/panda` and `from/fepd`
 
- **NOTE:**  The mounted directory that you are directed into is *`/home/user`*, where `user` is the username, hence when changing the username in the `.conf` file, the `sftp` mounted volumes path in `docker-compse.<collection>.yml` must change respectively.
-
-Once a product is registered, a xml report that contains `WMS` and `WCS` getCapabilities links is generated and saved in the same volume which `to/panda` is mounted to, once you successfuly sftp into the "sftp image" you can navigate to the generated reports.
+ **NOTE:** The mounted directory you are directed into is *`/home/user`*, where `user` is the username; hence, when changing the username in the `.conf` file, the `sftp` mounted volume paths in `docker-compose.<collection>.yml` must be changed accordingly.
  
 
 # Documentation
-- 
GitLab