Am 2023-02-10 09:13, schrieb Victor Rodriguez:
I've seen that happen when an rbd image or a snapshot is being removed
and you cancel the operation, especially if they are big or the storage
is relatively slow. The rbd image will stay "half removed" in the pool.
Check "rbd ls -p POOL" vs "rbd ls -l -p POOL" outputs: the first may
have one or more extra lines in its output. Those extra lines are the
half-removed images that rbd du or rbd ls -l are complaining about. Make
absolutely sure that you don't need them, then remove them manually with
"rbd rm IMAGE -p POOL".
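The comparison above can be scripted by diffing the two listings. A minimal sketch with simulated listings (in practice, plain.txt would hold the output of "rbd ls -p POOL" and long.txt the first column of "rbd ls -l -p POOL" with stderr discarded; the file names and image names here are just illustrative):

```shell
# Stand-in for "rbd ls -p POOL | sort > plain.txt"
sort > plain.txt <<'EOF'
vm-100-disk-0
vm-44815-disk-0
vm-44815-disk-1
EOF

# Stand-in for the image names that "rbd ls -l -p POOL" still lists cleanly
sort > long.txt <<'EOF'
vm-100-disk-0
EOF

# Lines only in the plain listing: candidates for manual "rbd rm"
comm -23 plain.txt long.txt
```

comm -23 suppresses lines common to both files and lines unique to the second file, leaving exactly the images that the plain listing sees but the long listing does not.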
Hello Victor,
thank you very much!
Your commands were an easy way to see which "vdisk" is affected:
root@node35:~# rbd ls -p cephhdd-001-mypool > rbdlsA.txt
root@node35:~# rbd ls -l -p cephhdd-001-mypool > rbdlaB.txt
rbd: error opening vm-44815-disk-0: (2) No such file or directory
rbd: error opening vm-44815-disk-1: (2) No such file or directory
rbd: listing images failed: (2) No such file or directory
I don't know why we have an issue with this disk, but I guess a
colleague made a mistake ^^
I removed the disks with "rbd rm cephhdd-001-mypool/vm-44815-disk-0"
and now rbd du doesn't show any errors anymore.
Have a nice weekend
Mehmet
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx