Update: I wonder if I can follow the advice here: http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
It shows how to delete rbd objects directly via rados:

$ rados -p rbd rm rbd_id.rbdname
$ rados -p rbd rm rbd_header.18b3c2ae8944a
$ rados -p temp1 ls | grep '^rbd_data.18b3c2ae8944a.'

Could that help, and can I run it in parallel with the stuck "rbd rm"?
I can reboot the client that issued the "rbd rm" command (if the rbd rm
process is not simply killable), but would the ceph cluster then drop
that operation, or would I be left with operations hung forever inside
the cluster? I need to delete that rbd anyway.

Ugis

2015-06-06 8:53 GMT+03:00 Ugis <ugis22@xxxxxxxxx>:
> Hi,
>
> I had a recent problem with a flapping hdd, and as a result I need to
> delete the broken rbd.
> The problem is that all operations towards this rbd get stuck. I cannot
> even delete the rbd - it sits at 6% done, and I found this line in one
> of the osd logs:
> 2015-06-06 08:03:31.770812 7fe5002c2700 0 log_channel(default) log
> [WRN] : slow request 30720.717642 seconds old, received at 2015-06-05
> 23:31:31.032740: osd_op(client.2457394.0:8430
> rbd_data.18b3c2ae8944a.00000000000020e5 [delete] 4.fac8e26
> ack+ondisk+write+known_if_redirected e136905) currently reached_pg
>
> How can I remove the broken rbd? A fast way would be preferable, but
> any way that eventually lets me delete the rbd will do.
>
> Ugis
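
For reference, this is a rough sketch of what the cleanup from that blog
post would look like end to end. The pool name ("rbd") and the block-name
prefix ("rbd_data.18b3c2ae8944a") are only taken from the example commands
above; they would have to match the actual image ("rbd info <image>" shows
the block_name_prefix), so treat this as an illustration rather than
something to run verbatim:

# Sketch only, based on the approach in the linked cephnotes post.
# Assumed values - adjust to the real pool and image prefix:
POOL=rbd
PREFIX=rbd_data.18b3c2ae8944a

# List every remaining data object belonging to the image and remove
# them one by one.
rados -p "$POOL" ls | grep "^${PREFIX}\." | while read -r obj; do
    rados -p "$POOL" rm "$obj"
done

# Then drop the image's metadata objects, as in the post
# ("rbdname" is the image name):
# rados -p "$POOL" rm rbd_id.rbdname
# rados -p "$POOL" rm rbd_header.18b3c2ae8944a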