On 27.04.18 at 22:33, Jason Dillaman wrote:
> Do you have any reason for why the OSDs crash? Anything in the logs? Can
> you provide an "rbd info noc_tobedeleted"?

The reason why they are crashing is this assert:
https://github.com/ceph/ceph/blob/luminous/src/osd/PrimaryLogPG.cc#L353

With debug 20 we see this right before the OSD crashes:

2018-04-24 13:59:38.047697 7f929ba0d700 20 osd.4 pg_epoch: 144994 pg[0.103( v 140091'469328 (125640'467824,140091'469328] lb 0:c0e04acc:::rbd_data.221bf2eb141f2.0000000000016379:head (bitwise) local-lis/les=137681/137682 n=9535 ec=115/115 lis/c 144979/49591 les/c/f 144980/49596/0 144978/144979/144979) [4,17,2]/[2,17] r=-1 lpr=144979 pi=[49591,144979)/3 luod=0'0 crt=140091'469328 lcod 0'0 active+remapped] snapset 0=[]:[] legacy_snaps []

2018-04-24 16:34:54.558159 7f1c40e32700 20 osd.11 pg_epoch: 145549 pg[0.103( v 140091'469328 (125640'467824,140091'469328] lb 0:c0e04acc:::rbd_data.221bf2eb141f2.0000000000016379:head (bitwise) local-lis/les=138310/138311 n=9535 ec=115/115 lis/c 145548/49591 les/c/f 145549/49596/0 145547/145548/145548) [11,17,2]/[2,17] r=-1 lpr=145548 pi=[49591,145548)/3 luod=0'0 crt=140091'469328 lcod 0'0 active+remapped] snapset 0=[]:[] legacy_snaps []

That log output is produced by this code:
https://github.com/ceph/ceph/blob/luminous/src/osd/PrimaryLogPG.cc#L349-L350

Unfortunately, rbd info is no longer available for this image, because I
already followed the instructions at
http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
up to 'Remove all rbd data', which also seems to be hanging.

> On Thu, Apr 26, 2018 at 9:24 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:
>> Hi,
>>
>> I am currently trying to delete an rbd image which is seemingly causing
>> our OSDs to crash, but it always gets stuck at 3%.
>>
>> root@ceph4:~# rbd rm noc_tobedeleted
>> Removing image: 3% complete...
>>
>> Is there any way to force the deletion? Any other advice?
>>
>> Best Regards
>>
>> Jan
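
One more data point: the 'Remove all rbd data' step from the cephnotes post that is
hanging for me essentially just lists and deletes the image's data objects by their
block_name_prefix. Since rbd info is gone, I am taking the prefix
rbd_data.221bf2eb141f2 from the log lines above; the pool name 'rbd' below is an
assumption, so please correct me if this is not how it should be done:

    # count the data objects that are still left over from the image
    rados -p rbd ls | grep '^rbd_data\.221bf2eb141f2\.' | wc -l

    # remove the remaining data objects one by one
    # (this is roughly the "Remove all rbd data" step from the post)
    rados -p rbd ls | grep '^rbd_data\.221bf2eb141f2\.' | xargs -r -n 1 rados -p rbd rm

Separately, since the assert always involves the same object
(rbd_data.221bf2eb141f2.0000000000016379 in pg 0.103), would it make sense to stop one
of the affected OSDs and look for that object with ceph-objectstore-tool, roughly like
this (the data path is assumed, and filestore OSDs may additionally need
--journal-path)?

    systemctl stop ceph-osd@4
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-4 --pgid 0.103 --op list | grep 0000000000016379

The ceph-objectstore-tool part is untested on my side, so please treat it as a sketch
rather than something known to be safe.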