# Chart for the View Server (VS) bundling all services
All commands must be run from the repository folder. You need the `kubectl` and `helm` utilities and either a `k3s` or `minikube` cluster.

Useful commands:

```bash
helm dependency update # updates the dependencies

helm template testing . --output-dir ../tmp/ -f values.custom.yaml # renders the helm template files. Used in conjunction with vs-starter
```

The template command outputs the rendered yaml files for debugging and checking.

## Installing chart locally

In order to test the services together, here's a way to install the chart locally in your `k3s`/`minikube` cluster. This install is based on the default values in the repo and customizes them.

### Prerequisites

When running k3s or minikube in Docker, paths don't refer to your machine but to the Docker container where the cluster runs. The following sections show how to set up each solution.

#### Minikube
Minikube needs to be started with the following:

```bash
minikube start --mount-string /home/$user/:/minikube-host/
minikube addons enable ingress
```

Here the full home directory is mounted into the cluster at `/minikube-host` so that the bind mounts needed for development are available. The ingress addon is also enabled.
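As a quick sanity check (assuming minikube is running with the mount above), you can verify that the host directory is visible inside the cluster node:

```bash
# List the mounted home directory from inside the minikube node
minikube ssh -- ls /minikube-host
```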

#### k3s
A k3s cluster is likewise created with a volume mapping; when k3s runs in Docker this is done through `k3d`:

```bash
k3d cluster create --volume /home/$user/:/k3s-host/
```

Here the full home directory is mounted into the cluster at `/k3s-host` so that the bind mounts needed for development are available.

#### values.yaml
The default values.yaml should be enough for a very basic setup, but overriding with a separate one is required for more elaborate setups and deployments.
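As an illustration, a minimal `values.custom.yaml` override could look like this, using only keys that appear elsewhere in this README (the container name is a placeholder):

```yaml
# values.custom.yaml -- illustrative override
global:
  storage:
    target:
      container: my-output-container   # placeholder swift container name
renderer:
  replicaCount: 1
```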

#### Persistent volume claims
For the services to start, you need the PersistentVolumeClaims (PVCs) below. Create a file `pvc.yaml` with the following content and run `kubectl apply -f pvc.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-db
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-access-redis
  namespace: <your-namespace>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path # this should be `standard` for minikube
  resources:
    requests:
      storage: 2Gi
```
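Once applied, you can check that both claims were created and bound (replace the namespace placeholder with your own):

```bash
# Show the status of the two claims defined in pvc.yaml
kubectl get pvc -n <your-namespace> data-access-db data-access-redis
```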

### Deploying the stack

Install:
```bash
helm install test . --values values.custom.yaml
```

Upgrade with overrides values:
```bash
helm upgrade test . --values values.custom.yaml --values your-values-override-file.yaml
```

If you specify multiple values files, helm will merge them together. [You can even delete keys by setting them to `null` in the override file](https://helm.sh/docs/chart_template_guide/values_files/).
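For example, a hypothetical override file could bump the replica count and delete a key set in the base values (`ingress.annotations` is an assumed key for illustration):

```yaml
# your-values-override-file.yaml -- illustrative override
renderer:
  replicaCount: 2
ingress:
  annotations: null  # removes the annotations key from the merged values
```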

#### Dev domain name

To use the default domain name `http://dev.local`, add the domain name to your hosts file (e.g. on Linux, add to `/etc/hosts`):
```
<IP-ADDRESS> dev.local
```
where `<IP-ADDRESS>` is the output of `minikube ip`. You can then simply navigate to `http://dev.local` to access the client.

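One way to produce this hosts entry (assuming minikube is running) is to print it and append it manually:

```bash
# Print the hosts entry for dev.local; append it to /etc/hosts yourself
echo "$(minikube ip) dev.local"
```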
You might also want to change the number of replicas:
```yaml
renderer:
  replicaCount: 1
```

For development, it's useful to mount the code as a volume. This can be done via a values override.

When using k3s or minikube, it's enough to define a volume and a corresponding `volumeMount` for the container like this:

```yaml
registrar:
  volumes:
    - name: eoxserver
      hostPath:
        path: /<app>-host/path/to/eoxserver
        type: DirectoryOrCreate
  volumeMounts:
  - mountPath: /usr/local/lib/python3.8/dist-packages/eoxserver/
    name: eoxserver
```
where `<app>` is `k3s` or `minikube`.

The current deployment is configured to use a `swift` storage container called `pvs_testing`, which contains a number of products for testing and registering.

#### Preprocessing products

To preprocess the products listed in `testing/preprocessed_list.csv` (which exists in the swift bucket), you need to specify an output container for the preprocessing results.

You can set an arbitrary container (to be removed afterwards):

```bash
helm upgrade test . --values values.custom.yaml --set global.storage.target.container=<containerID>

kubectl exec deployment/test-preprocessor -- preprocessor preprocess --config-file /config.yaml "RS02_SAR_QF_SLC_20140518T050904_20140518T050909_TRS_33537_0000.tar"
```

Note: if no container is specified, the preprocessor will create a container named after the item.

#### Registering products

You can register products by executing commands either directly through the `registrar` or through the `redis` component:

* registrar:
```bash
kubectl exec deployment/test-registrar -- registrar --config-file /config.yaml register items '<stac_item>'
```

* redis:
```bash
kubectl exec test-redis-master-0 -- redis-cli lpush register_queue '<stac_item>'
```
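
To verify that the item was queued (a sketch, assuming the same release name as above), you can inspect the queue length with `redis-cli`:

```bash
# Number of items currently waiting in the registration queue
kubectl exec test-redis-master-0 -- redis-cli llen register_queue
```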