Hi,
I have set up a Ceph cluster (Octopus) and installed the rbd
plugins/provisioner in my Kubernetes cluster.
I can dynamically create FS and Block volumes, which is fine. For that I
have created the following StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <clusterID>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
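For example, a Block-mode PVC bound to this class looks roughly like the
following (the claim name and size are just placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: ceph
  resources:
    requests:
      storage: 1Gi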
This works fine for ephemeral, dynamically created volumes. But now I want
to use a durable volume with reclaimPolicy: Retain. I expect that I
need to create the image in my kubernetes pool on the Ceph cluster first
- which I have done.
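For reference, creating such an image boils down to something like this
(the image name here is an assumption and should match the volumeHandle in
the PV further below; --size is in MB):

$ rbd create kubernetes/demo-internal-index --size 1024   # 1024 MB = 1Gi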
I defined the following new StorageClass with reclaimPolicy 'Retain':
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-durable
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <clusterID>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-system
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard
And finally I created the following PersistentVolume and
PersistentVolumeClaim:
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: demo-internal-index
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: office-demo-internal
    name: index
  csi:
    driver: driver.ceph.io
    fsType: ext4
    volumeHandle: demo-internal-index
  storageClassName: ceph-durable
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: index
  namespace: office-demo-internal
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-durable
  resources:
    requests:
      storage: 1Gi
  volumeName: "demo-internal-index"
But this does not seem to work, and I can see the following warning on the
deployment:

  attachdetach-controller  AttachVolume.Attach failed for volume
  "demo-internal-index" : attachdetachment timeout for volume
  demo-internal-index
But the PV exists:
$ kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
demo-internal-index   1Gi        RWO            Retain           Bound    office-demo-internal/index   ceph-durable            2m35s
and also the PVC exists:
$ kubectl get pvc -n office-demo-internal
NAME    STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
index   Bound    demo-internal-index   1Gi        RWO            ceph-durable   53m
I guess my PV object is nonsense? Can someone provide me with an example
of how to set up the PV object in Kubernetes? I have only found examples
where the Ceph monitor IPs and the user/password are configured within
the PV object itself, but I would expect that this is already covered by
the StorageClass?
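(For reference, the examples I found look roughly like this in-tree rbd
PV; the monitor address, user and secret name are placeholders:)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - <mon-ip>:6789
    pool: kubernetes
    image: demo-internal-index
    user: kubernetes
    secretRef:
      name: ceph-secret
    fsType: ext4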
Thanks for your help
===
Ralph