Hi,
Have you checked whether the image is in the trash?
rbd -p {pool} trash ls
If there is one, you can restore the image, blocklist the
client to release the watcher, and then delete the image again.
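Roughly, the sequence would be something like this (take the image id
from the trash listing and the client address from your "rbd status"
output; note that blocklist entries expire after an hour by default):

rbd trash restore {pool}/{image-id}
ceph osd blocklist add 10.160.0.245:0/2076588905
rbd status {pool}/{image}    # the watcher should be gone now
rbd rm {pool}/{image}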
I have to do that from time to time on a customer’s OpenStack cluster.
Quoting Devender Singh <devender@xxxxxxxxxx>:
Hello,
Seeking some help: can I clean up the stale client that is still watching my volume?
rbd status pool/image
Watchers:
watcher=10.160.0.245:0/2076588905 client.12541259 cookie=140446370329088
Issue: the pod is stuck in the Init state.
Events:
  Type     Reason       Age                  From     Message
  ----     ------       ----                 ----     -------
  Warning  FailedMount  96s (x508 over 24h)  kubelet  MountVolume.MountDevice failed for volume "pvc-3a2048f1" : rpc error: code = Internal desc = rbd image k8s-rgnl-disks/csi-vol-945c6a66-9129 is still being used
rbd status shows the client above, but there is no such volume…
Another, similar issue on the dashboard:
CephNodeDiskspaceWarning
Mountpoint /mnt/dst-volume on sea-prod-host01 will be full in less
than 5 days based on the 48 hour trailing fill rate.
Nothing is actually mounted there: I mapped one image yesterday using
rbd map, then unmapped and unmounted everything, but it has been more
than 12 hours now and the message is still showing.
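For reference, what I did was roughly the following (pool and image
names are placeholders; the device name is whatever rbd map prints):

rbd map {pool}/{image}
mount /dev/rbd0 /mnt/dst-volume
# ... used the volume ...
umount /mnt/dst-volume
rbd unmap /dev/rbd0
rbd showmapped    # verify nothing is left mapped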
Ceph version: 18.2.4
Regards
Dev
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx