Details
- Type: New Feature
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.5.0
- Fix Version/s: None
- Component/s: None
Description
Today Spark supports the following Kubernetes volume types: hostPath, emptyDir, nfs, and persistentVolumeClaim.
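For context, these volume types are configured through Spark's Kubernetes volume properties of the form `spark.kubernetes.{driver,executor}.volumes.[VolumeType].[VolumeName].*`. A minimal example for persistentVolumeClaim (the volume name `data` and claim name `my-claim` are illustrative):

```
# Mount an existing PVC into the driver pod (volume name "data" is arbitrary)
spark.kubernetes.driver.volumes.persistentVolumeClaim.data.mount.path=/data
spark.kubernetes.driver.volumes.persistentVolumeClaim.data.mount.readOnly=false
spark.kubernetes.driver.volumes.persistentVolumeClaim.data.options.claimName=my-claim
```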
In our case, the Kubernetes cluster is multi-tenant, and we cannot make cluster-wide changes when deploying our application to it. Our application requires a static shared file system. So we cannot use hostPath (we do not control the hosting VMs) or persistentVolumeClaim (deploying a PV is a cluster-wide change), and our security department does not allow nfs.
What would help in our case is support for inline CSI volumes (example taken from https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/e2e_usage.md#option3-inline-volume):
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azurefile-inline-volume
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
      name: nginx-azurefile
      command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 1; done
      volumeMounts:
        - name: persistent-storage
          mountPath: "/mnt/azurefile"
          readOnly: false
  volumes:
    - name: persistent-storage
      csi:
        driver: file.csi.azure.com
        volumeAttributes:
          shareName: EXISTING_SHARE_NAME  # required
          secretName: azure-secret  # required
          mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock"  # optional
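A sketch of what the requested feature could look like in Spark configuration, following the naming scheme of the existing volume types. This is a hypothetical proposal, not an existing Spark API; the `csi` volume type and all of the `options.*` keys below are assumptions for illustration:

```
# Hypothetical: mount an inline CSI volume into the driver pod
# (none of these "csi" keys exist in Spark today; this mirrors the
# existing spark.kubernetes.*.volumes.[VolumeType].[VolumeName].* scheme)
spark.kubernetes.driver.volumes.csi.shared.mount.path=/mnt/azurefile
spark.kubernetes.driver.volumes.csi.shared.mount.readOnly=false
spark.kubernetes.driver.volumes.csi.shared.options.driver=file.csi.azure.com
spark.kubernetes.driver.volumes.csi.shared.options.volumeAttributes.shareName=EXISTING_SHARE_NAME
spark.kubernetes.driver.volumes.csi.shared.options.volumeAttributes.secretName=azure-secret
```

Such a mapping would let a multi-tenant application mount CSI-backed storage without any cluster-wide PV or hostPath changes, since inline CSI volumes are declared entirely within the pod spec.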