Re: active+recovery_unfound+degraded in Pacific

On 4/29/21 4:58 AM, Lomayani S. Laizer wrote:
Hello,

Any advice on this? I'm stuck because one VM is not working now. It looks
like there is a read error on the primary OSD (15) for this PG. Should I
mark osd 15 down or out? Is there any risk in doing this?

Apr 28 20:22:31 ceph-node3 kernel: [369172.974734] sd 0:2:4:0: [sde] tag#358 CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
Apr 28 20:22:31 ceph-node3 kernel: [369172.974739] blk_update_request: I/O error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
Apr 28 21:14:11 ceph-node3 kernel: [372273.275801] sd 0:2:4:0: [sde] tag#28 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
Apr 28 21:14:11 ceph-node3 kernel: [372273.275809] sd 0:2:4:0: [sde] tag#28 CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
Apr 28 21:14:11 ceph-node3 kernel: [372273.275813] blk_update_request: I/O error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
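
For a PG stuck in active+recovery_unfound+degraded, the state can be
confirmed before touching any OSD. A minimal sketch of the usual inspection
commands, assuming a placeholder PG id of 2.1f (substitute the id reported
by ceph health detail):

    # Which PGs are degraded/unfound, and which OSDs do they map to?
    ceph health detail
    ceph pg dump_stuck unclean

    # Inspect the affected PG and list the objects it cannot find
    ceph pg 2.1f query
    ceph pg 2.1f list_unfound

    # Check the suspect drive itself on ceph-node3
    smartctl -a /dev/sde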

So this looks like a broken disk. I would take it out and let the cluster recover (ceph osd out 15).
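
A minimal sketch of that sequence, assuming osd.15 is indeed the OSD backed
by the failing sde (worth verifying first, e.g. with ceph osd metadata 15 or
ceph-volume lvm list on ceph-node3):

    # Mark the OSD out so its PGs are remapped and recovery pulls from the
    # remaining replicas; leave the daemon running so still-readable objects
    # on the failing disk can serve as a recovery source
    ceph osd out 15

    # Watch recovery progress
    ceph -s
    ceph -w

    # Once the cluster is healthy and the OSD no longer holds data,
    # it can be removed
    ceph osd safe-to-destroy osd.15
    ceph osd purge 15 --yes-i-really-mean-it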

Gr. Stefan