Chart for the View Server (VS) bundling all services

All commands must be run from the repository folder. You need the kubectl and helm utilities and one of the k3s or minikube clusters.

Useful commands:

helm dependency update # updates the dependencies

helm template testing . --output-dir ../tmp/ -f values.custom.yaml # renders the helm template files. Used in conjunction with vs-starter

The template command outputs the rendered yaml files for debugging and checking.

Installing chart locally

In order to test the services together, here is a way to install the chart locally in your k3s/minikube cluster. This install is based on the default values in the repo and customizes them.

Prerequisites

When running k3s or minikube in Docker, paths do not refer to your machine but to the Docker container in which the cluster runs. The following sections show how to set up each solution.

Minikube

Minikube needs to be started with the following:

minikube start --mount --mount-string /home/$user/:/minikube-host/
minikube addons enable ingress

Here the full home directory is bound to the cluster at /minikube-host in order to make the mounts necessary for development available. The ingress addon is enabled as well.

k3s

k3s running in Docker (via k3d) is likewise created with a volume specified:

k3d cluster create --volume /home/$user/:/k3s-host/

Here the full home directory is bound to the cluster at /k3s-host in order to make the mounts necessary for development available.

values.yaml

The default values.yaml should be enough for a very basic setup, but overriding with a separate one is required for more elaborate setups and deployments.

Persistent volume claims

For the services to start, you need the PersistentVolumeClaims (PVCs) below. Create a file pvc.yaml with the following content and run kubectl apply -f pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-db
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-redis
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
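If you deploy into more than one namespace, the manifest above can also be generated with a small shell sketch. The namespace value and output file name here are assumptions; the storage class comment mirrors the one above:

```shell
# Generate pvc.yaml with both claims for a chosen namespace.
NAMESPACE=vs-test            # assumption: pick your own namespace
STORAGE_CLASS=local-path     # this should be `standard` for minikube

: > pvc.yaml                 # start with an empty file
for name in data-access-db data-access-redis; do
  cat >> pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${name}
  namespace: ${NAMESPACE}
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ${STORAGE_CLASS}
  resources:
    requests:
      storage: 2Gi
---
EOF
done
# afterwards: kubectl apply -f pvc.yaml
```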

Deploying the stack

Install:

helm install test . --values values.custom.yaml

Upgrade with overrides values:

helm upgrade test . --values values.custom.yaml --values your-values-override-file.yaml

If you specify multiple values files, helm merges them together. You can even delete keys by setting them to null in the override file.
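As a sketch of that null-deletion behavior, using the ingress keys from this README: an override file like the following removes a tls section that an earlier values file defined:

```yaml
global:
  ingress:
    tls: null
```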

A useful override is to set the ingress host e.g. to nip.io:

global:
  ingress:
    hosts:
      - host: data-access.<IP-ADDRESS>.nip.io
    tls:
      - hosts:
          - data-access.<IP-ADDRESS>.nip.io

Where <IP-ADDRESS> is one of

  • 127.0.0.1
  • Output of minikube ip

You might also want to change the number of replicas:

renderer:
  replicaCount: 1
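The two overrides above can live in a single file, e.g. your-values-override-file.yaml (the hostname below is illustrative):

```yaml
global:
  ingress:
    hosts:
      - host: data-access.127.0.0.1.nip.io
    tls:
      - hosts:
          - data-access.127.0.0.1.nip.io
renderer:
  replicaCount: 1
```

Pass it as the last --values argument of the helm upgrade command shown above so its keys win the merge.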

For development, it's useful to mount the code as volume. This can be done via a values override.

When using k3s or minikube, it's enough to define a volume like this:

registrar:
  volumes:
    - name: eoxserver
      hostPath:
        path: /<app>-host/path/to/eoxserver
        type: DirectoryOrCreate
  volumeMounts:
    - mountPath: /usr/local/lib/python3.8/dist-packages/eoxserver/
      name: eoxserver

where <app> is k3s or minikube; the volumeMounts entry mounts the volume into the container.
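For example, with minikube set up as above (home directory mounted at /minikube-host), the hostPath starts with /minikube-host; the location of the eoxserver checkout is illustrative:

```yaml
registrar:
  volumes:
    - name: eoxserver
      hostPath:
        path: /minikube-host/path/to/eoxserver
        type: DirectoryOrCreate
  volumeMounts:
    - mountPath: /usr/local/lib/python3.8/dist-packages/eoxserver/
      name: eoxserver
```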