.. _initialization:

Initialization
==============

To set up an instance of the View Server (VS), the separate
``pvs_starter`` utility is recommended.

Running the Initialization
--------------------------

The ``pvs_starter`` utility is distributed as a Python package and easily
installed via ``pip``.

.. code-block:: bash

    pip3 install git+ssh://git@gitlab.eox.at/esa/prism/pvs_starter.git

Now a new VS instance can be set up like this:

.. code-block:: bash

    python3 -m pvs_starter.cli config.yaml out/ -f

This takes the initialization configuration ``config.yaml`` to generate
the required structure of a new VS instance in the ``out/`` directory.
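For orientation, the top-level layout of such a ``config.yaml`` mirrors the
sections described below. The following skeleton only names the sections;
see the respective subsections for the actual keys and values:

.. code-block:: yaml

    # Top-level skeleton of the initialization configuration.
    # Each section is described in detail below.
    database:      # internal database name and credentials
    django_admin:  # Django admin account
    preprocessor:  # preprocessing configuration
    products:      # product types, browse and mask definitions
    collections:   # collections and the product types/levels they contain
    storages:      # source, preprocessed and cache storage access
    cache:         # exposed services and tile cache layout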

Configuration of the Initialization
-----------------------------------

The central part of the initialization is the configuration file. It is
structured in YAML as detailed below and contains the following sections:

``database``
~~~~~~~~~~~~

Here, access details and credentials of the database are stored. It defines the
internal database name, user, and password that will be created when the stack
is deployed. Note that there is no ``host`` setting, as this will be handled
automatically within the Docker Swarm.

.. code-block:: yaml

    database:
      name: vs_db
      user: vs_user
      password: Go-J_eOUvj2k

``django_admin``
~~~~~~~~~~~~~~~~

This section deals with the setup of the Django admin account. This account
is later used to access the admin panel and inspect the registered data.

.. code-block:: yaml

    django_admin:
      user: admin
      mail: office@eox.at
      password: jvLwv_20x-69

``preprocessor``
~~~~~~~~~~~~~~~~

Here, the preprocessing can be configured in detail.

.. TODO


``products``
~~~~~~~~~~~~

This section defines ``product_type`` related information. The two most
important settings here are the ``type_extractor`` and ``level_extractor``
structures, which specify how the product type and product level are
extracted from the metadata. For this, one or more XPath expressions can be
specified to retrieve that information.
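For illustration, a filled-in extractor might look like the following. The
XPath expression and namespace mapping here are hypothetical and must be
adapted to the metadata schema of the actual products:

.. code-block:: yaml

    # Hypothetical example: extract the product type from EOP metadata.
    # Adjust the XPath and namespace_map to the metadata format in use.
    type_extractor:
      xpath:
        - //eop:EarthObservationMetaData/eop:productType/text()
      namespace_map:
        eop: http://www.opengis.net/eop/2.0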

The ``types`` section defines the available ``product_types`` and which
``browse`` and ``mask`` types are to be generated.

.. code-block:: yaml

    products:
      type_extractor:
        xpath:
        namespace_map:
      level_extractor:
        xpath:
        namespace_map:
      types:
        PL00:
          default_browse: TRUE_COLOR
          browses:
            TRUE_COLOR:
              red:
                expression: red
                range: [1000, 15000]
                nodata: 0
              green:
                expression: green
                range: [1000, 15000]
                nodata: 0
              blue:
                expression: blue
                range: [1000, 15000]
                nodata: 0
            FALSE_COLOR:
              red:
                expression: nir
                range: [1000, 15000]
                nodata: 0
              green:
                expression: red
                range: [1000, 15000]
                nodata: 0
              blue:
                expression: green
                range: [1000, 15000]
                nodata: 0
            NDVI:
              grey:
                expression: (nir-red)/(nir+red)
                range: [-1, 1]
          masks:
            validity:
              validity: true
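
The ``range`` values in the browse definitions can be read as the raw-value
interval that is linearly stretched to the 8-bit output, with ``nodata``
pixels kept transparent. A minimal Python sketch of this interpretation
(an illustration of the concept, not the actual preprocessor code):

.. code-block:: python

    def stretch(value, low=1000, high=15000, nodata=0):
        """Linearly map a raw pixel value in [low, high] to [1, 255].

        Values equal to ``nodata`` stay 0 so they remain transparent
        in the generated browse image.
        """
        if value == nodata:
            return 0
        clipped = min(max(value, low), high)
        return 1 + round((clipped - low) / (high - low) * 254)

    print(stretch(1000))   # lower bound of the range maps to 1
    print(stretch(15000))  # upper bound of the range maps to 255
    print(stretch(0))      # nodata stays 0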



``collections``
~~~~~~~~~~~~~~~

The ``collections`` section sets up the collections and defines which
products, based on their ``product_type`` and ``product_level``, will be
inserted into them. The listed ``product_types`` must be types defined in
the ``products`` section.

.. code-block:: yaml

    collections:
      COLLECTION:
        product_types:
          - PL00
        product_levels:
          - Level_1
          - Level_3

``storages``
~~~~~~~~~~~~

Here, the three relevant storages can be configured: the ``source``,
``preprocessed``, and ``cache`` storages.

The ``source`` storage defines the location from which the original files will
be downloaded to be preprocessed. Preprocessed images and metadata will then be
uploaded to the ``preprocessed`` storage. The cache service will cache images on
the ``cache`` storage.

Each storage definition uses the same structure and can target various types
of storage back-ends, such as OpenStack Swift.

These storage definitions will be used in the appropriate sections.

.. code-block:: yaml

    storages:
      source:
        auth_type: keystone
        auth_url:
        version: 3
        username:
        password:
        tenant_name:
        tenant_id:
        region_name:
        container:
      preprocessed:
        auth_type: keystone
        auth_url:
        version: 3
        username:
        password:
        tenant_name:
        tenant_id:
        region_name:
        container:
      cache:
        type: swift
        auth_type: keystone
        auth_url: https://auth.cloud.ovh.net/v3/
        auth_url_short: https://auth.cloud.ovh.net/
        version: 3
        username:
        password:
        tenant_name:
        tenant_id:
        region_name:
        container:


``cache``
~~~~~~~~~

This section defines the exposed services and how the layers are cached
internally.

.. code-block:: yaml

    cache:
      metadata:
        title: PRISM Data Access Service (PASS) developed by EOX
        abstract: PRISM Data Access Service (PASS) developed by EOX
        url: https://vhr18.pvs.prism.eox.at/cache/ows
        keyword: view service
        accessconstraints: UNKNOWN
        fees: UNKNOWN
        contactname: Stephan Meissl
        contactphone: Please contact via mail.
        contactfacsimile: None
        contactorganization: EOX IT Services GmbH
        contactcity: Vienna
        contactstateorprovince: Vienna
        contactpostcode: 1090
        contactcountry: Austria
        contactelectronicmailaddress: office@eox.at
        contactposition: CTO
        providername: EOX
        providerurl: https://eox.at
        inspire_profile: true
        inspire_metadataurl: TBD
        defaultlanguage: eng
        language: eng
      services:
        wms:
          enabled: true
        wmts:
          enabled: true
      connection_timeout: 10
      timeout: 120
      expires: 3600
      key: /{tileset}/{grid}/{dim}/{z}/{x}/{y}.{ext}
      tilesets:
        VHR_IMAGE_2018__TRUE_COLOR:
          title: VHR Image 2018 True Color
          abstract: VHR Image 2018 True Color
        VHR_IMAGE_2018__FALSE_COLOR:
          title: VHR Image 2018 False Color
          abstract: VHR Image 2018 False Color
        VHR_IMAGE_2018__NDVI:
          title: VHR Image 2018 NDVI
          abstract: VHR Image 2018 NDVI
          style: earth
        VHR_IMAGE_2018_Level_1__TRUE_COLOR:
          title: VHR Image 2018 Level 1 True Color
          abstract: VHR Image 2018 Level 1 True Color
        VHR_IMAGE_2018_Level_1__FALSE_COLOR:
          title: VHR Image 2018 Level 1 False Color
          abstract: VHR Image 2018 Level 1 False Color
        VHR_IMAGE_2018_Level_1__NDVI:
          title: VHR Image 2018 Level 1 NDVI
          abstract: VHR Image 2018 Level 1 NDVI
          style: earth
        VHR_IMAGE_2018_Level_3__TRUE_COLOR:
          title: VHR Image 2018 Level 3 True Color
          abstract: VHR Image 2018 Level 3 True Color
        VHR_IMAGE_2018_Level_3__FALSE_COLOR:
          title: VHR Image 2018 Level 3 False Color
          abstract: VHR Image 2018 Level 3 False Color
        VHR_IMAGE_2018_Level_3__NDVI:
          title: VHR Image 2018 Level 3 NDVI
          abstract: VHR Image 2018 Level 3 NDVI
          style: earth
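
The ``key`` entry above can be read as a template for the path under which
each cached tile is stored. A small Python sketch of how such a template
expands; the ``grid``, ``dim``, and ``ext`` values here are hypothetical
placeholders for illustration:

.. code-block:: python

    # Expand the cache key template for one tile. The placeholder names
    # follow the template in the configuration above.
    key = "/{tileset}/{grid}/{dim}/{z}/{x}/{y}.{ext}"
    path = key.format(
        tileset="VHR_IMAGE_2018__TRUE_COLOR",
        grid="WGS84",
        dim="default",
        z=3, x=5, y=2,
        ext="png",
    )
    print(path)  # /VHR_IMAGE_2018__TRUE_COLOR/WGS84/default/3/5/2.png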

Once the initialization is finished, the next step is to deploy the Docker
Swarm stack as described in the section :ref:`setup`.