Re: Rocksdb: Corruption: missing start of fragmented record(1)

Hi,

"Input/output error"
This is an indication for a hardware error.
So you should check the disk an create a new osd...
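A minimal sketch of that check-and-recreate flow (device names are placeholders, osd.21 and its path are taken from your log below, and the ceph-bluestore-tool step is only an optional salvage attempt before giving up on the OSD):

  # 1) Check the backing device for hardware errors
  smartctl -a /dev/sdX                 # reallocated/pending sectors, device error log
  dmesg -T | grep -i 'i/o error'       # kernel-level read/write failures

  # 2) Optionally test whether BlueStore/RocksDB is salvageable
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/osd-21
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/osd-21

  # 3) If the disk is bad, remove the OSD and recreate it on a new disk
  systemctl stop ceph-osd@21
  ceph osd purge 21 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX
  ceph-volume lvm create --data /dev/sdX --block.db /dev/<your-db-partition>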

Hth
Mehmet

On 26 November 2021 11:02:55 CET, "huxiaoyu@xxxxxxxxxxxx" <huxiaoyu@xxxxxxxxxxxx> wrote:
>Dear Cephers,
>
>One of my Ceph OSD nodes (Luminous 12.2.13) lost power unexpectedly. After restarting the node, two OSDs out of 10 cannot be started and report the errors below; in particular, I see
>
>Rocksdb: Corruption: missing start of fragmented record(1)
>Bluestore(/var/lib/ceph/osd/osd-21) _open_db erroring opening db:
>...
>**ERROR: OSD init failed: (5)  Input/output error
>
>I checked the db/wal SSDs, and they are working fine. So I am wondering about the following:
>1) Is there a method to restore the OSDs?
>2) What could be the potential causes of the corrupted db/wal? The db/wal SSDs have power-loss protection (PLP) and were not damaged during the power loss.
>
>Your help would be highly appreciated.
>
>best regards,
>
>samuel 
>
>huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


