Re: High number of Cephfs Subvolumes compared to Cephfs persistent volumes in K8S environment

Hi Edouard, 

For each subvolume listed by 'ceph fs subvolume ls cephfs csi', you can retrieve its PV name with 'rados listomapvals' and then check whether that PV still exists in K8s: 

$ ceph fs subvolume ls cephfs csi
[
    {
        "name": "csi-vol-fab753bf-c4c0-42d0-98d4-8dd1caf5055f"
    }
]

$ rados listomapvals csi.volume.fab753bf-c4c0-42d0-98d4-8dd1caf5055f --pool=<cephfs-metadata-pool> --namespace=csi 
csi.imagename 
value (44 bytes) : 
00000000 63 73 69 2d 76 6f 6c 2d 66 61 62 37 35 33 62 66 |csi-vol-fab753bf| 
00000010 2d 63 34 63 30 2d 34 32 64 30 2d 39 38 64 34 2d |-c4c0-42d0-98d4-| 
00000020 38 64 64 31 63 61 66 35 30 35 35 66 |8dd1caf5055f| 
0000002c 

csi.volname 
value (40 bytes) : 
00000000 70 76 63 2d 34 35 33 61 36 35 65 33 2d 61 31 33 |pvc-453a65e3-a13| 
00000010 64 2d 34 64 34 39 2d 62 65 34 37 2d 33 32 62 37 |d-4d49-be47-32b7| 
00000020 34 36 30 63 30 62 39 33 |460c0b93| 
00000028 

csi.volume.owner 
value (15 bytes) : 
00000000 70 77 65 62 2d 6a 75 6c 69 65 69 6e 66 37 37 |pweb-julieinf77| 
0000000f 

The csi.volname should be the name of the PV (here pvc-453a65e3-a13d-4d49-be47-32b7460c0b93) and csi.volume.owner (if present) should be the namespace where the PVC was created. 
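
If it helps, here is a rough loop to automate that check for every subvolume. This is only a sketch: it assumes 'jq' is installed, that the ceph and kubectl CLIs are both reachable from the same host, and that you replace <cephfs-metadata-pool> with your actual metadata pool name.

# Sketch: flag subvolumes whose csi.volname no longer matches an existing PV in K8s.
for sv in $(ceph fs subvolume ls cephfs csi | jq -r '.[].name'); do
    uuid=${sv#csi-vol-}                      # omap object is named csi.volume.<uuid>
    rados getomapval "csi.volume.${uuid}" csi.volname /tmp/volname \
        --pool=<cephfs-metadata-pool> --namespace=csi 2>/dev/null || continue
    pv=$(cat /tmp/volname)                   # raw value written to the file, e.g. pvc-453a65e3-...
    if ! kubectl get pv "$pv" >/dev/null 2>&1; then
        echo "candidate for cleanup: $sv (PV $pv not found in K8s)"
    fi
done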

You could also check the "created_at" and "mtime" fields, and browse the "path" reported by the command below, to evaluate whether the data is still worth keeping: 

$ ceph fs subvolume info cephfs csi-vol-fab753bf-c4c0-42d0-98d4-8dd1caf5055f csi | grep -E 'created_at|mtime|path' 
"created_at": "2024-05-17 07:24:14", 
"mtime": "2024-10-23 02:00:07", 
"path": "/volumes/csi/csi-vol-fab753bf-c4c0-42d0-98d4-8dd1caf5055f/1bec4120-7c84-4268-b653-8e6421456df9", 

Regards, 
Frédéric. 

----- On 23 Oct 24, at 13:59, Edouard FAZENDA <e.fazenda@xxxxxxx> wrote: 

> Dear Ceph Community,

> Maybe you can help me with this

> I have an inconsistency between the number of CephFS subvolumes in my Ceph
> cluster and the number of Persistent Volumes of the CephFS StorageClass in my
> K8S cluster.

> My Kubernetes cluster runs version 1.25 with the Ceph CSI CephFS driver 3.8.1,
> and my Ceph cluster is on the 16.2.15 Pacific release.

> I have 35 Persistent Volumes of type CephFS

> $ kubectl get pv -o
> custom-columns=name:.metadata.name,subvolume:.spec.csi.volumeAttributes.subvolumeName,storageclass:.spec.storageClassName
> | grep cephfs

> pvc-0062e845-4a87-4a32-8f19-cbfff3b2789d
> csi-vol-919a2302-f964-11ed-b5e0-b615b8ae8847 csi-cephfs-sc

> pvc-0298fc13-bc5f-489a-b88e-64d7cdc61f1e
> csi-vol-7ce2f152-d96c-4f54-b335-9ede672ce320 csi-cephfs-sc

> pvc-09991b1e-b9f6-4a7c-b710-25131433c6e9
> csi-vol-ef2e7be0-8044-11ec-b79c-fa4d817fb9f0 csi-cephfs-sc

> pvc-19b8f683-b3e8-4476-abc3-9d7d7e4d8941
> csi-vol-5a9ce4b8-1edb-11ed-8f9a-e6ec00c2f5e5 csi-cephfs-sc

> pvc-2fc90e06-22b4-4e80-b33a-c4c5dd6a0baa
> csi-vol-760f34d1-4e26-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-316aa495-3d52-45a5-aac3-4f64bf598aa3
> csi-vol-9d135b18-8b43-402b-a606-a2c426527b20 csi-cephfs-sc

> pvc-342b66f9-f6d5-413c-8634-e51a25d638a8
> csi-vol-7e3aca09-8d76-11eb-95b9-ee71e85dde57 csi-cephfs-sc

> pvc-48c664a6-073e-4498-9370-d4a718f11729
> csi-vol-72d71655-9518-11ee-a81d-b2c17fca41c9 csi-cephfs-sc

> pvc-5349d4ec-7e62-48b7-bd8c-8f718a13906b
> csi-vol-9748018f-4df9-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-5db458a4-2dfb-46d7-8b23-2bde45a9bf46
> csi-vol-bd87eade-e726-4a5d-993c-56dbece42420 csi-cephfs-sc

> pvc-5fd1f3a5-d120-4283-8e2c-b6e1e3e4cbd1
> csi-vol-e7ec3159-9d1c-11eb-82b9-aaa0cbaeebfa csi-cephfs-sc

> pvc-669c35f9-9927-4e44-94d2-a9ddd0eda914
> csi-vol-ef9539f6-9c5a-40d8-a89b-087ad7c6e9b3 csi-cephfs-sc

> pvc-6909652a-2da6-4a13-a111-59a21ed685b2
> csi-vol-a9a1fff7-550a-43a1-aa52-4f6710e1ca9d csi-cephfs-sc

> pvc-72c2f70e-2976-4188-8d1a-f2928745097b
> csi-vol-51045501-f17b-4f4d-af08-100ea3db3db3 csi-cephfs-sc

> pvc-73b1d3f3-f861-44d7-bbd0-43fc16f9e7a9
> csi-vol-439769d7-9b94-4350-bfa2-fe5e9a182650 csi-cephfs-sc

> pvc-76b04010-1e00-4fea-82b7-855ff83ab820
> csi-vol-28852a9b-79c0-4eb7-afcf-b443ff9f867d csi-cephfs-sc

> pvc-7e0b08ea-9d31-45a7-a098-588563c18f19
> csi-vol-4a5086d9-62ad-11ee-80a7-323a25995550 csi-cephfs-sc

> pvc-83853b0b-ab55-46be-ae3a-78f8d431e780
> csi-vol-ef2e919c-8044-11ec-b79c-fa4d817fb9f0 csi-cephfs-sc

> pvc-89c85899-c26a-4efd-91dd-cc891d303ad6
> csi-vol-e8b89f50-fdf9-11ec-8f9a-e6ec00c2f5e5 csi-cephfs-sc

> pvc-8a4c12b1-b7d8-4337-b218-7de2bd6eadde
> csi-vol-718d31a6-9b4a-11ee-a81d-b2c17fca41c9 csi-cephfs-sc

> pvc-8b56ae01-1809-4552-9192-930bd1db95f8
> csi-vol-a72f2d4c-1f8b-4a71-a846-2b7a6febbf82 csi-cephfs-sc

> pvc-97b80840-fd60-436a-8223-5cf41bfe56ef
> csi-vol-0289ed46-4df6-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-984eadce-c908-4e21-a558-70d12bcff3c2
> csi-vol-ef2e8a6b-8044-11ec-b79c-fa4d817fb9f0 csi-cephfs-sc

> pvc-9de21d9d-80ae-49f8-bea0-2a265945cfa6
> csi-vol-c7b4f302-4d71-4777-b6c0-82720ebe8950 csi-cephfs-sc

> pvc-9f5651b5-0cb2-47d2-889e-69b09d96fe36
> csi-vol-9ba650df-3fa5-43f0-abea-5231643fb1bf csi-cephfs-sc

> pvc-a2607a68-62a9-47a0-8f15-871359460556
> csi-vol-028a40c4-4df6-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-a84158f8-1d21-475a-9987-a0e8a3347c42
> csi-vol-22835b9b-c31c-46c6-aec2-67b554c683d3 csi-cephfs-sc

> pvc-b3d81d90-eaec-4d8a-8065-790d9c0e6ea7
> csi-vol-c908a216-cf78-11eb-a202-52b7fddc54da csi-cephfs-sc

> pvc-bcc5979b-f9cb-4c10-ab64-24424749856b
> csi-vol-e8dd361e-b6fb-4a19-94af-7cae07d4702c csi-cephfs-sc

> pvc-bd09916e-a47d-429f-9371-ac779d3959b3
> csi-vol-cc9573bc-6733-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-dac9a949-cd28-4448-9c66-7bab365c3d25
> csi-vol-21b0d822-6fc3-11ec-96cb-dec0a2aa44f8 csi-cephfs-sc

> pvc-e0783e40-f49c-4d7c-a404-4554ce35341c
> csi-vol-0289e975-4df6-11ed-af16-d67e0f01e63a csi-cephfs-sc

> pvc-f660eb28-2af4-45de-b00b-72f73e08745d
> csi-vol-4b6e4fd4-9e4e-4b0d-a022-305b8e1d8e84 csi-cephfs-sc

> pvc-fd994df6-9ca5-47d5-af64-afec037fd5d1
> csi-vol-ef2e42e3-8044-11ec-b79c-fa4d817fb9f0 csi-cephfs-sc

> pvc-fe72f0a2-c260-4a40-acca-50f47d31c11a
> csi-vol-6c0efe87-d02c-11eb-a202-52b7fddc54da csi-cephfs-sc

> I have 985 subvolumes in the CephFS 'csi' subvolume group, and I have no idea
> why there is this difference.

> $ ceph fs subvolume ls cephfs csi | awk '{if ($2 != "") print $2}' | sed
> 's/"//g' | wc -l

> 1020

> If I check the number of VolumeSnapshots of the CephFS StorageClass via the
> command "kubectl get volumesnapshot -A | grep cephfs | wc -l", I get 177.

> If I check the snapshots of the CephFS subvolumes that are mapped to K8S PVs
> via the command "for subvolume in $(cat list.txt | xargs); do ceph fs subvolume
> snapshot ls cephfs $subvolume csi | awk '{if ($2 != "") print $2}' | sed
> 's/"//g'; done | wc -l", I get 178.

> That seems correct and consistent.

> Are my checks valid for identifying subvolumes that are no longer used by my
> K8S workloads?

> Can I safely delete the subvolumes that do not seem to be associated with a
> Persistent Volume in Kubernetes?

> Thanks in advance for the help

> Have a nice day

> Best Regards, Edouard Fazenda.

> Edouard FAZENDA

> Technical Support

> https://www.csti.ch/

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



