Hello, replying to myself in case someone else stumbles upon this thread in the future. I was able to remove the unexpected snapshot. Here is the recipe:

How to remove the unexpected snapshots:

1.) Stop the OSD and flush its journal

ceph-osd -i 14 --flush-journal
... flushed journal /var/lib/ceph/osd/ceph-14/journal for object store /var/lib/ceph/osd/ceph-14

2.) List the object in question

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 --journal-path /dev/disk/by-partuuid/212e9db1-943b-45f9-9d83-cffaeb777db7 --op list rbd_data.59cb9c679e2a9e3.0000000000003096

[wait ... it might take minutes]

["7.374",{"oid":"rbd_data.59cb9c679e2a9e3.0000000000003096","key":"","snapid":171076,"hash":2728045428,"max":0,"pool":7,"namespace":""}]
["7.374",{"oid":"rbd_data.59cb9c679e2a9e3.0000000000003096","key":"","snapid":171797,"hash":2728045428,"max":0,"pool":7,"namespace":""}]
["7.374",{"oid":"rbd_data.59cb9c679e2a9e3.0000000000003096","key":"","snapid":-2,"hash":2728045428,"max":0,"pool":7,"namespace":""}]

3.) Remove the snap from the object (note: the object spec must be quoted so the shell does not eat the brackets and quotes)

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 --journal-path /dev/disk/by-partuuid/212e9db1-943b-45f9-9d83-cffaeb777db7 '["7.374",{"oid":"rbd_data.59cb9c679e2a9e3.0000000000003096","key":"","snapid":171076,"hash":2728045428,"max":0,"pool":7,"namespace":""}]' remove

[wait ... it might take minutes]

remove 7/a29aab74/rbd_data.59cb9c679e2a9e3.0000000000003096/29c44

4.) Start the OSD again

5.) Do this for all OSDs on which the snap exists. If it still exists on one of the other OSDs, it will be synced back during the repair and thus cause harm again. (See the P.S. below for how I would find the OSDs that hold the PG.)

6.) ceph pg repair 7.374

Happy again and in need of sleep,
derjohn
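P.S. A small sketch of the surrounding commands, in case it helps. "ceph pg map" is a standard command for showing which OSDs hold a PG (useful for step 5); the systemctl unit names assume a systemd-based deployment (older setups use the sysvinit/upstart equivalents), and "7.374" / "14" are just the PG and OSD id from my case above:

# Which OSDs hold the PG? (prints the up/acting OSD set for the PG)
ceph pg map 7.374

# Stop the OSD daemon before flushing its journal (step 1), assuming systemd:
systemctl stop ceph-osd@14

# ... run the flush-journal and ceph-objectstore-tool steps from the recipe ...

# Start the OSD again afterwards (step 4):
systemctl start ceph-osd@14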