This will install Supervisor on the selected worker node as a pod.
The installation can be performed by running the following commands in order:
1. Create a directory on the desired worker node to store the Supervisor configuration and license:
mkdir -p /home/user/supervisor-data/
This is only a reference path used in this documentation. If you use a different path, adjust the following commands accordingly.
2. Copy the provided license file into the data directory on the worker node:
cp SFM-010010-10.lic /home/user/supervisor-data/license.lic
Replace the file name in this command with that of the actual license file provided.
3. Load the provided Supervisor Docker image on the worker node (replace X.Y.Z with the appropriate version number).
Kubernetes uses containerd's own image store, so the image must be imported into containerd for Kubernetes to be able to see it.
sudo ctr -n k8s.io images import profitap-supervisor-vX.Y.Z.tar
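To confirm the import succeeded, you can list the images in containerd's k8s.io namespace (the grep filter below is illustrative):

```shell
# List the images visible to Kubernetes and filter for the Supervisor image
sudo ctr -n k8s.io images ls | grep supervisor
```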
4. Create the following YAML file (e.g. sv.yml) on the master node, adjusting the image, hostPath path, and nodeSelector hostname to match your setup:
apiVersion: v1
kind: Pod
metadata:
  name: supervisor-pod
  labels:
    app: supervisor
spec:
  hostNetwork: true # Use host networking
  containers:
  - name: supervisor
    image: profitap-supervisor:v1.1.0 # Use the image name after loading
    imagePullPolicy: IfNotPresent # Use IfNotPresent since the image is local
    # env:
    # - name: SUPERVISOR_THREADS
    #   value: "8"
    volumeMounts:
    - mountPath: /data # Path in the container
      name: supervisor-storage # Name of the volume
  volumes:
  - name: supervisor-storage # Volume name
    hostPath:
      path: /home/user/supervisor-data # Local data directory created in step 1
  nodeSelector:
    kubernetes.io/hostname: workernode1 # Hostname of the target worker node
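The nodeSelector value must match the node's kubernetes.io/hostname label exactly. If in doubt, the node labels can be listed from the master node:

```shell
# Each node's kubernetes.io/hostname label is what nodeSelector matches against
kubectl get nodes --show-labels
```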
5. Apply the sv.yml file with kubectl on the master node:
kubectl apply -f sv.yml
At this point, the Supervisor pod should be running. To verify that the deployment has proceeded correctly, check the running pods with the following command:
kubectl get pods
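If the pod is not in the Running state, the following commands can help pinpoint the cause:

```shell
# Show the pod together with the node it was scheduled on
kubectl get pods -o wide

# Inspect pod events (scheduling, image, and volume errors appear here)
kubectl describe pod supervisor-pod

# Follow the Supervisor container logs
kubectl logs -f supervisor-pod
```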
This will install Supervisor on the selected worker node as a deployment.
The installation can be performed by running the following commands in order:
1. Create a directory on the desired worker node to store the Supervisor configuration and license:
mkdir -p /home/user/supervisor-data/
This is only a reference path used in this documentation. If you use a different path, adjust the following commands accordingly.
2. Copy the provided license file into the data directory on the worker node:
cp SFM-010010-10.lic /home/user/supervisor-data/license.lic
Replace the file name in this command with that of the actual license file provided.
3. Load the provided Supervisor Docker image on the worker node (replace X.Y.Z with the appropriate version number).
Kubernetes uses containerd's own image store, so the image must be imported into containerd for Kubernetes to be able to see it.
sudo ctr -n k8s.io images import profitap-supervisor-vX.Y.Z.tar
4. Create the following YAML file (e.g. sv-deployment.yml) on the master node, adjusting the image, hostPath path, and nodeSelector hostname to match your setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supervisor-deployment
  labels:
    app: supervisor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: supervisor
  template:
    metadata:
      labels:
        app: supervisor
    spec:
      hostNetwork: true # Use host networking
      containers:
      - name: supervisor
        image: profitap-supervisor:v1.1.0
        imagePullPolicy: IfNotPresent # Use local image if available
        # env:
        # - name: SUPERVISOR_THREADS
        #   value: "8"
        volumeMounts:
        - mountPath: /data
          name: supervisor-storage
      volumes:
      - name: supervisor-storage
        hostPath:
          path: /home/user/supervisor-data # Local data directory created in step 1
      nodeSelector:
        kubernetes.io/hostname: workernode1
5. Apply the sv-deployment.yml file with kubectl on the master node:
kubectl apply -f sv-deployment.yml
At this point, the Supervisor deployment should be running. To verify that the deployment has proceeded correctly, check the running pods and deployments with the following commands:
kubectl get pods
kubectl get deployments
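The rollout status can also be checked directly, which waits until the deployment's pod is ready:

```shell
# Blocks until the deployment has finished rolling out (or fails)
kubectl rollout status deployment/supervisor-deployment

# Show which node the Supervisor pod landed on
kubectl get pods -l app=supervisor -o wide
```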
This will install Supervisor on any worker node as a deployment with a remote data directory. The remote data directory must be an NFS share (CIFS is not supported).
The installation can be performed by running the following commands in order:
1. Create the following YAML file (e.g. pv-nfs.yml), adjusting the server IP and path to match your setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /volume1/remotedata
    server: 10.10.10.233
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
Apply the pv-nfs.yml file with kubectl:
kubectl apply -f pv-nfs.yml
This will create a remote persistent volume.
Note: the NFS client tools need to be present on all worker nodes. On Debian-based systems, they can be installed with:
sudo apt update
sudo apt install -y nfs-common
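As a sanity check, the share can be mounted manually from a worker node before creating the PersistentVolume (the server IP and export path below are the example values used in this documentation; substitute your own):

```shell
# Temporarily mount the NFS export to confirm it is reachable
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.10.10.233:/volume1/remotedata /mnt/nfs-test
ls /mnt/nfs-test
sudo umount /mnt/nfs-test
```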
2. Create the following YAML file (e.g. pvc-nfs.yml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: supervisor-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-storage
  volumeName: nfs-pv # Explicitly bind to the specific PV
Apply the pvc-nfs.yml file with kubectl:
kubectl apply -f pvc-nfs.yml
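You can verify that the claim has bound to the volume; both should report STATUS Bound before continuing:

```shell
# The PVC should be bound to the nfs-pv volume created in step 1
kubectl get pv nfs-pv
kubectl get pvc supervisor-pvc
```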
This will create the PVC on the Kubernetes cluster.
3. Create the following YAML file (e.g. sv-deployment-remote.yml), adjusting the image name to match your setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supervisor-deployment
  labels:
    app: supervisor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: supervisor
  template:
    metadata:
      labels:
        app: supervisor
    spec:
      hostNetwork: true # Use host networking
      containers:
      - name: supervisor
        image: profitap-supervisor:v1.1.0
        imagePullPolicy: IfNotPresent # Use local image if available
        # env:
        # - name: SUPERVISOR_THREADS
        #   value: "8"
        volumeMounts:
        - name: supervisor-storage
          mountPath: /data # Same container path
      volumes:
      - name: supervisor-storage
        persistentVolumeClaim:
          claimName: supervisor-pvc # Use the PVC instead of hostPath
Apply the sv-deployment-remote.yml file with kubectl:
kubectl apply -f sv-deployment-remote.yml
At this point, the Supervisor deployment should be running. To verify that the deployment has proceeded correctly, check the running pods and deployments with the following commands:
kubectl get pods
kubectl get deployments
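If the pod remains stuck in ContainerCreating, a failing NFS mount is the most common cause; the mount error is reported in the pod's events:

```shell
# Mount failures appear in the Events section at the bottom of the output
kubectl describe pod -l app=supervisor
```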