diff --git a/documentation/operator-guide/conf.py b/documentation/operator-guide/conf.py
index c4839a6c6d62719bd824cee3e225baab8e05d895..bd0fda7971468c754f8672eed9da12a702a31439 100644
--- a/documentation/operator-guide/conf.py
+++ b/documentation/operator-guide/conf.py
@@ -27,8 +27,6 @@ author = u'EOX IT Services GmbH'
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    'recommonmark',
-
 ]
 
 # Add any paths that contain templates here, relative to this directory.
diff --git a/documentation/operator-guide/ingestion.rst b/documentation/operator-guide/ingestion.rst
new file mode 100644
index 0000000000000000000000000000000000000000..2a50654a27c58abf1729691b7ea58c60d6a273ad
--- /dev/null
+++ b/documentation/operator-guide/ingestion.rst
@@ -0,0 +1,213 @@
+.. _ingestion:
+
+Data Ingestion
+==============
+
+This section details how data is ingested into the VS and how it is managed
+afterwards.
+
+Redis queues
+------------
+
+The central synchronization component in the VS is the Redis key-value store.
+It provides various queues that the services listen on. For example, the
+``preprocessor`` service instances listen on the ``preprocess_queue`` in
+Redis. Whenever an item is added to the queue, it is eventually consumed by
+one of the ``preprocessor`` replicas, which performs the preprocessing. When
+completed, it pushes the processed item onto the ``register_queue``, which is
+in turn listened on by the ``registrar``.
+
+Via Redis, the ingestion can thus be triggered and observed. In order to
+eventually start the preprocessing of a product, its path on the configured
+object storage has to be pushed onto the ``preprocess_queue``, as will be
+explained in detail in this chapter.
+
+As the Redis store is not publicly accessible from outside of the stack, the
+operator has to run commands from within one of the services to interact with
+it. Conveniently, the service running Redis also has the ``redis-cli`` tool
+installed, which lets users interact with the store.
+
+For one-off commands, it is usually most convenient to execute them on a
+running service. For this, the ``docker ps`` command can be used to obtain the
+identifier of the running Docker container of the Redis service.
+
+.. code-block:: bash
+
+    container_id=$(docker ps -qf "name=<stack-name>_redis")
+
+With this identifier, a command can be issued:
+
+.. code-block:: bash
+
+    docker exec -it $container_id redis-cli ...
+
+When performing more than one command, it can be simpler to open a shell on the
+service instead:
+
+.. code-block:: bash
+
+    docker exec -it $container_id bash
+
+As the container ID may change (for example when the replica is restarted) it
+is better to retrieve it for every command instead of relying on a variable:
+
+.. code-block:: bash
+
+    docker exec -it $(docker ps -qf "name=<stack-name>_redis") redis-cli ...
+
+For the sake of brevity, the remaining commands in this chapter use either of
+the above techniques and only show the final commands to be run inside the
+Redis container.
+
+.. note::
+
+    For the VS, only the ``List`` and ``Set`` `Redis data types
+    <https://redis.io/topics/data-types>`_ are actually used. ``Sets`` are an
+    unordered collection of string elements. In the VS they are used to denote
+    that an element is part of a particular group, e.g. being preprocessed, or
+    having failed registration.
+
+    ``Lists`` are used as task queues. It is possible to add items to either
+    end of the queue, but by convention items are pushed on the "left" and
+    popped from the "right" end of the list, resulting in a first-in-first-out
+    (FIFO) queue. It is entirely possible to push elements to the "right" end
+    as well, and an operator may want to do so in order to have an element
+    processed as soon as possible, instead of waiting until all other elements
+    before it are processed.
+
+    The full list of available commands can be found for both `Lists
+    <https://redis.io/commands#list>`_ and `Sets
+    <https://redis.io/commands#set>`_.
+
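+To jump the queue as described in the note above, a path can be pushed onto
+the "right" end of the list instead. A minimal sketch, using a placeholder
+path:
+
+.. code-block:: bash
+
+    redis-cli rpush preprocess_queue "<path/to/product>"
+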
+For a more concrete example: the following ``redis-cli lpush`` command,
+executed inside the Redis container, adds the path of a new object to
+preprocess onto the ``preprocess_queue``:
+
+.. code-block:: bash
+
+    redis-cli lpush preprocess_queue "/data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar"
+
+Usually, with a preprocessor service running and no other items in the
+``preprocess_queue``, this value will be immediately popped from the list and
+processed. For the sake of demonstration, this command would print the
+contents of the ``preprocess_queue``:
+
+.. code-block:: bash
+
+    $ redis-cli lrange preprocess_queue 0 -1
+    /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
+
+Now that the product is being preprocessed, it should be visible in the
+``preprocessing_set``. As the name indicates, this uses the ``Set``
+datatype, thus requiring the ``SMEMBERS`` subcommand to list its contents:
+
+.. code-block:: bash
+
+    $ redis-cli smembers preprocessing_set
+    /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
+
+Once the preprocessing of the product is finished, the preprocessor removes
+the currently worked on path from the ``preprocessing_set`` and adds it either
+to the ``preprocess-success_set`` or the ``preprocess-failure_set``, depending
+on whether or not the processing succeeded. They can be inspected using the
+same ``SMEMBERS`` subcommand with either name as parameter.
+
+Additionally, upon success, the preprocessor places the same product path on
+the ``register_queue``, where it can be inspected with the following command.
+
+.. code-block:: bash
+
+    $ redis-cli lrange register_queue 0 -1
+    /data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar
+
+If an operator wants to trigger the re-registration of a product, only the
+product path needs to be pushed onto this queue:
+
+.. code-block:: bash
+
+    redis-cli lpush register_queue "/data25/OA/PL00/1.0/00/urn:eop:DOVE:MULTISPECTRAL_4m:20180811_081455_1054_3be7/0001/PL00_DOV_MS_L3A_20180811T081455_20180811T081455_TOU_1234_3be7.DIMA.tar"
+
+Very similarly to preprocessing, during registration the product path is
+added to the ``registering_set``; afterwards the path is placed in either the
+``register-success_set`` or the ``register-failure_set``. Again, these queues
+and sets can be inspected using the ``LRANGE`` and ``SMEMBERS`` subcommands.
+
+Data Management
+---------------
+
+Sometimes it is necessary to interact with the registrar/renderer directly.
+The following section shows which tasks can be accomplished on the registrar.
+
+For all intents and purposes in this section it is assumed that the operator
+is logged into a shell on the ``registrar`` service. This can be achieved via
+the following command (assuming at least one registrar replica is running):
+
+.. code-block:: bash
+
+    docker exec -it $(docker ps -qf "name=<stack-name>_registrar") bash
+
+The contents of the shared registrar/renderer database can be managed using
+the registrar instance's ``manage.py`` script. For brevity, the following bash
+alias is assumed:
+
+.. code-block:: bash
+
+    alias manage.py='python3 ..../manage.py' # TODO
+
+
+Manual Data Registration
+------------------------
+
+.. warning::
+
+    This approach is not recommended for production use, as it circumvents the
+    Redis sets to track what products have been registered and where the
+    registration failed.
+
+
+Collection Management
+---------------------
+
+A collection is a grouping of earth observation products, accessible as a
+single entity via various service endpoints. Depending on the configuration,
+multiple collections are created when the service is set up. They can be listed
+using the ``collection list`` command.
+
+New collections can be created using the ``collection create`` command. This
+can refer to a ``Collection Type``, which will restrict the collection in terms
+of insertable products: only products of an allowed ``Product Type`` can be
+added. Detailed information about the available Collection management commands
+can be found in the `CLI documentation <https://docs.eoxserver.org/en/master/users/coverages.html#command-line-interfaces>`__.
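+
+As a sketch (the collection and type names are placeholders, and the exact
+option names should be verified against the linked CLI documentation):
+
+.. code-block:: bash
+
+    manage.py collection list
+    manage.py collection create <collection-id> --type <collection-type>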
+
+Collections can be deleted without affecting the contained products.
+
+.. warning::
+
+    Since the other services have a fixed configuration and depend on specific
+    collections, deleting said collections without a replacement can lead to
+    service disruptions.
+
+In certain scenarios it may be useful to add specific products to or exclude
+them from a collection. For this, the product identifier needs to be known. To
+find it out, either OpenSearch on an existing collection or the CLI command
+``id list`` can be used.
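+
+For example, assuming the command prints one identifier per line, the output
+can be filtered with standard shell tools:
+
+.. code-block:: bash
+
+    manage.py id list | grep <search-term>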
+
+When the identifier is obtained, the following management command inserts a
+product into a collection:
+
+.. code-block:: bash
+
+    manage.py collection insert <collection-id> <product-id>
+
+Multiple products can be inserted in one pass by providing more than one
+identifier.
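+
+For example, with placeholder identifiers:
+
+.. code-block:: bash
+
+    manage.py collection insert <collection-id> <product-id-1> <product-id-2>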
+
+The reverse command excludes a product from a collection:
+
+.. code-block:: bash
+
+    manage.py collection exclude <collection-id> <product-id>
+
+Again, multiple products can be excluded in a single call.
+
+
+
diff --git a/documentation/operator-guide/management.rst b/documentation/operator-guide/management.rst
new file mode 100644
index 0000000000000000000000000000000000000000..b90e6492cfc6be2bc55154381e03ff155a5231e8
--- /dev/null
+++ b/documentation/operator-guide/management.rst
@@ -0,0 +1,38 @@
+.. _management:
+
+Service Management
+==================
+
+This section shows how a deployed VS stack can be interacted with.
+
+
+Scaling
+-------
+
+Scaling is a handy tool to ensure stable performance, even when dealing with
+increased usage of any service. For example, the preprocessor and registrar
+can be scaled to a higher replica count to enable better throughput when
+ingesting data into the VS.
+
+The following command scales the ``renderer`` service to 5 replicas:
+
+.. code-block:: bash
+
+    docker service scale <stack-name>_renderer=5
+
+A service can also be scaled to zero replicas, effectively disabling the
+service.
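+
+For example, the following disables the preprocessor service until it is
+scaled up again:
+
+.. code-block:: bash
+
+    docker service scale <stack-name>_preprocessor=0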
+
+.. warning::
+
+    The ``redis`` and ``database`` services should never be scaled (their
+    replica count should remain 1), as this can lead to service disruptions
+    and corrupted data.
+
+
+Updating Images
+---------------
+
+Updating the service software is done using the previously established tools.
+To update a service, it first needs to be scaled to zero replicas. Then the
+new image can be pulled, and the service can be scaled back to its original
+replica count. This forces the service to start from the newly fetched image.
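+
+The steps above can be sketched as follows, using the ``preprocessor`` service
+as an example (the image name is taken from the image retrieval section of the
+setup chapter; which image backs which service depends on the deployment):
+
+.. code-block:: bash
+
+    docker service scale <stack-name>_preprocessor=0
+    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
+    docker service scale <stack-name>_preprocessor=1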
diff --git a/documentation/operator-guide/operator-guide.rst b/documentation/operator-guide/operator-guide.rst
index 961097770870012db9a7dada38ffe3779cc4f9ad..e7dab9fb865e2ff58efca9307b3b00e1b33be277 100644
--- a/documentation/operator-guide/operator-guide.rst
+++ b/documentation/operator-guide/operator-guide.rst
@@ -1,7 +1,14 @@
 Operator Guide
 ==============
 
-TODO
+
+
+.. toctree::
+   :maxdepth: 3
+
+   management
+   ingestion
+
 
 
 Admin
diff --git a/documentation/operator-guide/setup.rst b/documentation/operator-guide/setup.rst
index 2517af903817d10537794780c7521cfb9ca78bda..5d27a838691e0cfaf5410bdf5f17f31283a0fc64 100644
--- a/documentation/operator-guide/setup.rst
+++ b/documentation/operator-guide/setup.rst
@@ -72,3 +72,59 @@ Additional information for swarm management can be obtained in the official
 documentation of the project:
 https://docs.docker.com/engine/reference/commandline/swarm/
 
+
+Image retrieval
+---------------
+
+Before the Docker images can be used, they have to be retrieved first. For
+images from the default repository, this happens automatically. When private
+repositories are used, access to them needs to be configured beforehand.
+Currently, all images used in the VS that are not off-the-shelf are hosted on
+the ``registry.gitlab.eox.at`` registry. Access to it can be configured with
+the following command, with the correct username and password filled in:
+
+.. code-block:: bash
+
+    docker login -u <username> -p <password> registry.gitlab.eox.at
+
+Now the relevant images can be pulled:
+
+.. code-block:: bash
+
+    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_core
+    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_cache
+    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_preprocessor
+    docker pull registry.gitlab.eox.at/esa/prism/vs/pvs_client
+
+# TODO: ingestor image?
+
+
+Stack Deployment
+----------------
+
+Now that a Docker Swarm is established, it is time to deploy the VS as a stack.
+This is done using the created Docker Compose configuration files. In order to
+enhance the re-usability, these files are split into multiple parts to be used
+for both development and final service deployment.
+
+For a development deployment one would run the following (replacing ``name``
+with the actual service identifier):
+
+.. code-block:: bash
+
+    docker stack deploy -c docker-compose.<name>.yml -c docker-compose.<name>.dev.yml <name>-pdas
+
+
+This command actually performs a variety of tasks. First off, it obtains any
+missing images, such as the image for the reverse proxy, the database or the
+redis key-value store.
+
+When all relevant images have been pulled from their respective repositories,
+the services of the stack are initialized. In the default setting, each
+service is represented by a single container of its respective service type.
+When starting for the first time, the startup procedure takes some time, as
+everything needs to be initialized. This includes the creation of the
+database, the database user, the required tables, and the Django instance.
+
+That process can be supervised using the ``docker service ls`` command, which
+lists all available services and their respective status.
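+
+For example, to follow the startup progress until all replicas are up
+(assuming the ``watch`` utility is available on the host):
+
+.. code-block:: bash
+
+    watch docker service ls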