hi,
I figured it out.
1) The image created in Ceph should only have the feature 'layering'. It can be created with the command:
$ rbd create test-image --size=1024 --pool=kubernetes --image-feature layering
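Whether the image really ended up with only that feature can be checked with 'rbd info'; if the pool defaults added extra features, they can be stripped again with 'rbd feature disable'. A minimal sketch, using the pool and image names from above (only the features that are actually enabled need to be disabled):
$ rbd info kubernetes/test-image
$ rbd feature disable kubernetes/test-image object-map fast-diff deep-flatten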
2) Now the PersistentVolumeClaim and PersistentVolume should look like this:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-static-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  volumeName: rbd-static-pv
  storageClassName: ceph
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-static-pv
spec:
  volumeMode: Filesystem
  storageClassName: ceph
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-system
    volumeAttributes:
      clusterID: "<clusterID>"
      pool: "kubernetes"
      staticVolume: "true"
      # The imageFeatures must match the created ceph image exactly!
      imageFeatures: "layering"
    volumeHandle: test-image
The <clusterID> must be replaced with the fsid of the Ceph cluster. I wonder why this is not documented on the Ceph homepage.
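The fsid itself can be read directly from the cluster, for example with:
$ ceph fsid
or looked up in the 'fsid' line of /etc/ceph/ceph.conf.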
I have written my own documentation here:
https://github.com/imixs/imixs-cloud/tree/master/management/ceph
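For completeness, a minimal Pod that mounts the static PVC could look like the following sketch (image name and mount path are only placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: rbd-static-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-static-pvc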
===
Ralph
On 12.06.21 15:37, Ralph Soika wrote:
Hi,
I have set up a Ceph cluster (Octopus) and installed the RBD plugins/provisioner in my Kubernetes cluster.
I can dynamically create FS and block volumes, which is fine. For that I have created the following StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <clusterID>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
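A dynamically provisioned volume then only needs a plain PVC referencing that class, for example (the PVC name is just a sample):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph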
This works fine for ephemeral, dynamically created volumes. But now I want to use a durable volume with the reclaimPolicy 'Retain'. I expect that I need to create the image in my 'kubernetes' pool on the Ceph cluster first, which I have done.
I defined the following new StorageClass with the reclaimPolicy 'Retain':
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-durable
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <clusterID>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-system
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard
And finally I created the following PersistentVolume and
PersistentVolumeClaim:
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: demo-internal-index
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: office-demo-internal
    name: index
  csi:
    driver: driver.ceph.io
    fsType: ext4
    volumeHandle: demo-internal-index
  storageClassName: ceph-durable
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: index
  namespace: office-demo-internal
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-durable
  resources:
    requests:
      storage: 1Gi
  volumeName: "demo-internal-index"
But this does not seem to work, and I can see the following deployment warning:
attachdetach-controller  AttachVolume.Attach failed for volume "demo-internal-index" : attachdetachment timeout for volume demo-internal-index
But the PV exists:
$ kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
demo-internal-index   1Gi        RWO            Retain           Bound    office-demo-internal/index   ceph-durable            2m35s
and also the PVC exists:
$ kubectl get pvc -n office-demo-internal
NAME    STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
index   Bound    demo-internal-index   1Gi        RWO            ceph-durable   53m
I guess my PV object is nonsense? Can someone provide me with an example of how to set up the PV object in Kubernetes? I only found examples where the Ceph monitor IPs and the user/password are configured within the PV object, but I would expect that this is already covered by the storage class.
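One thing I could still check is whether a VolumeAttachment object was created for the volume at all, e.g.:
$ kubectl get volumeattachment
$ kubectl describe pvc index -n office-demo-internal
(the PVC name and namespace are the ones from above).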
Thanks for your help
===
Ralph
--
*Imixs Software Solutions GmbH*
*Web:* www.imixs.com *Phone:* +49 (0)89-452136 16
*Timezone:* Europe/Berlin - CET/CEST
*Office:* Agnes-Pockels-Bogen 1, 80992 München
Registergericht: Amtsgericht München, HRB 136045
Geschäftsführer: Gaby Heinle u. Ralph Soika
*Imixs* is an open source company, read more: www.imixs.org
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx