Hi,

thanks to derJohn I was pointed down the right path. Our setup is a bit
more involved, as we are running more than one OSD per drive. The setup
is managed by Rook, so I followed the docs at
https://github.com/rook/rook/blob/master/Documentation/ceph-osd-mgmt.md#remove-an-osd

In the end we removed all OSDs of the drive:

- using "lsblk" to get the UUIDs of the OSDs placed on that drive
- getting the UUIDs of the OSDs via "ceph osd dump"
- matching them and removing the OSD deployments (a rough sketch of
  this matching step is in the P.S. below)
- waiting for Ceph to shift all data
- zapping the whole drive
  (https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md#delete-the-data-on-hosts)
  and re-adding it to the Rook deployment

Hope this helps other people running into issues like this.

/Fabian

On Wednesday, 13.01.2021 at 22:35 +0100, Andreas John wrote:
> Hello,
>
> I suspect there was unwritten data in RAM which didn't make it to the
> disk. This shouldn't happen; that's why the journal is in place.
>
> If you have size=2 in your pool, there is one copy on the other host.
> To delete the OSD you could probably do
>
> ceph osd crush remove osd.x
> ceph osd rm osd.x
> ceph auth del osd.x
>
> and maybe "wipefs -a /dev/sdxxx" or
> "dd if=/dev/zero of=/dev/sdxxx count=1 bs=1M" ...
>
> Then you should be able to deploy the disk again with the tool that
> you used originally. The disk should be "fresh".
>
> rgds,
> derjohn
>
> On 13.01.21 15:45, Pfannes, Fabian wrote:
> > failed: (22) Invalid argument
>
> --
> Andreas John
> net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
> Managing Director: Andreas John | AG Offenbach, HRB40832
> Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net
>
> Facebook: https://www.facebook.com/netlabdotnet
> Twitter: https://twitter.com/netlabdotnet

--
Dipl.-Ing. Fabian Pfannes
Maon GmbH
Bismarckstraße 10-12, 10625 Berlin
fabian.pfannes@xxxxxxx
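
P.S.: here is a minimal, untested sketch of the matching step, in case
someone wants to script it. It assumes the default ceph-volume LVM
layout, where each OSD's logical volume is named osd-block-<fsid> and
lsblk shows the device-mapper name with doubled dashes; DRIVE is a
placeholder for the drive being replaced:

#!/usr/bin/env bash
# Sketch: print the "osd.<id>" lines for all OSDs living on one drive.
DRIVE=/dev/sdX              # placeholder: the drive to be replaced
DUMP=$(ceph osd dump)       # each osd.<id> line carries the OSD's uuid

lsblk -nr -o NAME "$DRIVE" | grep 'osd--block' | while read -r lv; do
    # undo device-mapper's dash doubling and strip everything up to
    # "osd-block-" to recover the plain fsid
    fsid=$(printf '%s\n' "$lv" | sed -e 's/--/@/g' \
                                     -e 's/.*osd@block@//' \
                                     -e 's/@/-/g')
    # the osd.<id> whose uuid matches is one of the OSDs on this drive
    printf '%s\n' "$DUMP" | grep -w -- "$fsid"
done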
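
And for completeness, derjohn's manual (non-Rook) removal steps from
the quote above, in the order I would run them; osd.x and /dev/sdX are
placeholders, and the waiting matters so no data is lost:

ceph osd out osd.x        # let Ceph migrate the PGs off the OSD
# wait until "ceph -s" reports all PGs active+clean again, then:
ceph osd crush remove osd.x
ceph auth del osd.x
ceph osd rm osd.x
wipefs -a /dev/sdX        # wipe signatures before redeploying the disk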