RBD pool damaged, repair options?

Hi,

a customer lost 5 OSDs at the same time (and replaced them with new disks before we could do anything…). 4 PGs were incomplete but could be repaired with ceph-objectstore-tool. The cluster itself is healthy again.

Now some RBDs are missing. They are still listed in the rbd_directory object but cannot be accessed with e.g. rbd info.

rbd ls says: no such file or directory

I assume that is because the associated rbd_id.<name> object has been lost.

I tried to recreate this object with "rados put", but still no luck:

2020-08-13 11:47:27.557 7fa1e2ffd700 -1 librbd::image::OpenRequest: failed to retrieve image id: (5) Input/output error
2020-08-13 11:47:27.557 7fa1e27fc700 -1 librbd::ImageState: 0x55b9ebb75b10 failed to open image: (5) Input/output error
2020-08-13 11:47:27.557 7fa1e27fc700 -1 librbd::io::AioCompletion: 0x55b9ebadd3a0 fail: (5) Input/output error
rbd: error opening vm-501-disk-2: (5) Input/output error
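
For reference, this is roughly what I did. The pool name "rbd" and the image id "123456789abc" below are placeholders, not the real values; the real id should come out of the rbd_directory omap. My understanding is that rbd_id.<name> must contain the id as a length-prefixed string, so a plain-text put would not decode:

```shell
# Placeholders: pool "rbd", image id "123456789abc" (12 characters).
# These need a running Ceph cluster; adjust names to the real pool/image.

# The rbd_directory omap still maps image names to ids:
rados -p rbd listomapvals rbd_directory
# keys look like "name_vm-501-disk-2" (value: encoded id)
# and "id_<imageid>" (value: encoded name)

# rbd_id.<name> stores the id as a 4-byte little-endian length
# followed by the id bytes. For a 12-character id, length = 0x0c:
printf '\x0c\x00\x00\x00123456789abc' | rados -p rbd put rbd_id.vm-501-disk-2 -
```

Even if that is the correct encoding, I suspect the rbd_header.<id> object (which holds size, features, object prefix etc.) may also have been lost, which could explain the remaining I/O errors.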

What are the options to repair an RBD pool so that at least the
RBDs become available again? Most of their data objects are still there.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: 
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
