Chart for the View Server (VS) bundling all services
All commands must be run from the repo folder. You need the `kubectl` and `helm` utilities installed, and a `k3s` or `minikube` cluster.
Useful commands:

```bash
helm dependency update  # updates the dependencies
helm template testing . --output-dir ../tmp/ -f values.custom.yaml  # renders the helm template files; used in conjunction with vs-starter
```
The `template` command outputs the rendered YAML files for debugging and inspection.
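For a quick look at what was rendered (the output path matches the command above):

```bash
find ../tmp -name '*.yaml'  # list the rendered manifests
```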
## Installing the chart locally
To test the services together, here's how to install the chart locally in your `k3s` or `minikube` cluster. This install is based on the default values in the repo and customizes them.
### Prerequisites
When running `k3s` or `minikube` in Docker, the paths do not refer to your machine, but to the Docker container in which the cluster runs. The following shows how to set up each solution.
#### Minikube
Minikube needs to be started with the following:

```bash
minikube start --mount --mount-string /home/$USER/:/minikube-host/
minikube addons enable ingress
```
Here the full home directory is mounted into the cluster at `/minikube-host` so that the bind mounts needed for development are available. The ingress addon is enabled as well.
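To verify that the mount is in place (a quick check, not part of the setup):

```bash
minikube ssh -- ls /minikube-host  # should list your home directory
```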
#### k3s
A `k3s` cluster is also created by specifying a volume (the `cluster create` subcommand is provided by `k3d`, which runs k3s in Docker):

```bash
k3d cluster create --volume /home/$USER/:/k3s-host/
```
Here the full home directory is mounted into the cluster at `/k3s-host` so that the bind mounts needed for development are available.
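To verify the mount from inside the cluster node (this assumes the default k3d cluster name `k3s-default`; adjust the container name otherwise):

```bash
docker exec k3d-k3s-default-server-0 ls /k3s-host  # should list your home directory
```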
### values.yaml
The default `values.yaml` should be enough for a very basic setup; for more elaborate setups and deployments you need to override it with a separate file.
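As a starting point, a `values.custom.yaml` only needs to contain the keys you want to override. The two keys below appear later in this guide; anything else depends on the chart:

```yaml
# values.custom.yaml — a minimal sketch; extend with the chart's own keys as needed
renderer:
  replicaCount: 1
global:
  storage:
    target:
      container: <containerID>  # output bucket for preprocessing, see below
```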
#### Persistent volume claims
For the services to start, you need the PVCs below. Create a file `pvc.yaml` and run `kubectl apply -f pvc.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-db
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-redis
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
```
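To check that the claims were created (and, once the pods start, bound):

```bash
kubectl get pvc -n <your-namespace>
```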
### Deploying the stack
Install:

```bash
helm install test . --values values.custom.yaml
```

Upgrade with override values:

```bash
helm upgrade test . --values values.custom.yaml --values your-values-override-file.yaml
```
If you specify multiple values files, helm merges them together. You can even delete keys by setting them to `null` in the override file.
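For example, an override file might look like this (the key names are illustrative, not taken from the chart):

```yaml
# your-values-override-file.yaml
renderer:
  replicaCount: 2   # overrides the value from values.custom.yaml
someOptionalKey: null  # deletes a key merged from an earlier values file
```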
#### Dev domain name
To use the default domain name http://dev.local, you should add the domain name to your hosts file (e.g. on Linux, add to `/etc/hosts`):
```
<IP-ADDRESS> dev.local
```
where `<IP-ADDRESS>` is the output of `minikube ip`. You can then simply navigate to http://dev.local to access the client.
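On minikube this can be done in one line (requires sudo):

```bash
echo "$(minikube ip) dev.local" | sudo tee -a /etc/hosts
```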
You might also want to change the number of replicas:

```yaml
renderer:
  replicaCount: 1
```
For development, it's useful to mount the code as a volume. This can be done via a values override. When using `k3s` or `minikube`, it's enough to define a volume and a matching `volumeMounts` entry for the container, like this:
```yaml
registrar:
  volumes:
    - name: eoxserver
      hostPath:
        path: /<app>-host/path/to/eoxserver
        type: DirectoryOrCreate
  volumeMounts:
    - mountPath: /usr/local/lib/python3.8/dist-packages/eoxserver/
      name: eoxserver
```
where `<app>` is `k3s` or `minikube`.
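To confirm the code is mounted in the running container (the deployment name follows the `helm install test .` release used above):

```bash
kubectl exec deployment/test-registrar -- ls /usr/local/lib/python3.8/dist-packages/eoxserver/
```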
The current deployment is configured to use a local storage directory called `data`, which should be mounted when creating the cluster. The folder contains a number of products. For testing and registering, you should change the mount path for the `registrar` and the `renderer` by replacing the host path of the mounted volumes, for example (for the `registrar`):
```yaml
registrar:
  volumes:
    - name: local-storage
      hostPath:
        path: /<app>-host/path/to/vs-deployment/data
        type: Directory
  volumeMounts:
    - mountPath: /mnt/data
      name: local-storage
```
where `<app>` is either `k3s` or `minikube`.
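A quick way to check that the products are visible from inside the pod:

```bash
kubectl exec deployment/test-registrar -- ls /mnt/data
```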
### Preprocessing products
To preprocess the products listed in `testing/preprocessed_list.csv` (which exist on a Swift bucket), you need to specify an output bucket for the preprocessing result. You can set an arbitrary bucket (to be removed afterwards):

```bash
helm upgrade test . --values values.custom.yaml --set global.storage.target.container=<containerID>
kubectl exec deployment/test-preprocessor -- preprocessor preprocess --config-file /config.yaml "RS02_SAR_QF_SLC_20140518T050904_20140518T050909_TRS_33537_0000.tar"
```
Note: if no container is specified, the preprocessor creates a container named after the item.
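To follow the preprocessing progress, you can tail the preprocessor logs:

```bash
kubectl logs deployment/test-preprocessor -f
```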
### Registering products
You can register products by executing commands either directly through the `registrar` or through the `redis` component (e.g. use the STAC item inside `testing/product_list.json`):

- registrar:

  ```bash
  kubectl exec deployment/test-registrar -- registrar --config-file /config.yaml register items '<stac_item>'
  ```

- redis:

  ```bash
  kubectl exec test-redis-master-0 -- redis-cli lpush register_queue '<stac_item>'
  ```
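When going through redis, you can check how many items are still waiting in the queue:

```bash
kubectl exec test-redis-master-0 -- redis-cli llen register_queue
```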
## Demo deployment
There is a set of sample data in the `data` directory; the list of STAC items is stored in `testing/demo_product_list.json`. The configuration of the sample data is included in the `/testing/values-testing.yaml` config file.
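To push all demo items onto the register queue in one go, something along these lines works, assuming `jq` is installed locally and `demo_product_list.json` is a JSON array of STAC items:

```bash
# Hedged sketch: push every STAC item from the demo list onto the redis register queue
jq -c '.[]' testing/demo_product_list.json | while read -r item; do
  kubectl exec test-redis-master-0 -- redis-cli lpush register_queue "$item"
done
```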