On Thu, Jul 25, 2024 at 10:10 PM Dan O'Brien <dobrie2@xxxxxxx> wrote:
>
> Ilya -
>
> I don't think images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b is the
> problem; it was just the last RBD image listed in the log before the
> crash. The commands you suggested work fine when using that image:
>
> [root@os-storage ~]# rbd info images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b
> rbd image '144ebab3-b2ee-4331-9d41-8505bcc4e19b':
>         size 0 B in 0 objects
>         order 23 (8 MiB objects)
>         snapshot_count: 1
>         id: f01052f76969e7
>         block_name_prefix: rbd_data.f01052f76969e7
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>         op_features:
>         flags:
>         create_timestamp: Mon Feb 12 17:50:54 2024
>         access_timestamp: Mon Feb 12 17:50:54 2024
>         modify_timestamp: Mon Feb 12 17:50:54 2024
> [root@os-storage ~]# rbd diff --whole-object images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b

I'm sorry, I meant "rbd du images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b".

I suspect that you are hitting [1]. One workaround would be to go
through all images in all RBD pools that you have and remove any of
them that are 0-sized, meaning that "rbd info" reports "size 0 B in
0 objects".

> The other 2 images, related to 2 OpenStack volumes stuck in
> "error_deleting" state, appear to be the cause of the problem:
>
> [root@os-storage ~]# rbd info volumes-gpu/volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f
> rbd: error opening image volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f: (2) No such file or directory
>
> [root@os-storage ~]# rbd diff --whole-object volumes-gpu/volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f
> rbd: error opening image volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f: (2) No such file or directory

I don't think these ENOENT errors are related -- the image isn't
there, so I don't see a way for the assert to be reached.
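The 0-sized-image sweep suggested above could be scripted roughly like this. This is only a sketch and has not been run against a real cluster: the pool names are placeholders for your own RBD pools, and the actual `rbd rm` is left commented out so nothing is deleted before the list has been reviewed.

```shell
#!/bin/bash
# Sketch of the suggested workaround: sweep RBD pools for 0-sized images,
# i.e. images where "rbd info" reports "size 0 B in 0 objects".
# Pool names below are placeholders; the removal step is commented out.

# True (exit 0) if the JSON from "rbd info --format json" has "size": 0.
is_zero_sized() {
    python3 -c 'import json, sys; sys.exit(0 if json.load(sys.stdin)["size"] == 0 else 1)'
}

if command -v rbd >/dev/null 2>&1; then       # guard: only run where rbd exists
    for pool in images-pubos volumes-gpu; do  # replace with your RBD pools
        for img in $(rbd ls "$pool"); do
            if rbd info --format json "$pool/$img" | is_zero_sized; then
                echo "0-sized image: $pool/$img"
                # rbd rm "$pool/$img"         # uncomment after verifying the list
            fi
        done
    done
fi
```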
[1] https://tracker.ceph.com/issues/66418

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx