Re: Cephfs too many repaired copies on osds

Your disk is most likely slowly failing. Check its SMART values and the dmesg output for read/write errors, and plan to replace the disk.
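Something like the following, for example (osd.12 and /dev/sdX are placeholders for the affected OSD id and its backing device; `clear_shards_repaired` has been available since Octopus, so it should be present on a 16.2.* cluster):

```shell
# Map the suspect OSD to its backing device (osd.12 is a placeholder id).
ceph device ls-by-daemon osd.12

# Inspect SMART attributes: watch for reallocated/pending/uncorrectable
# sectors and a failing overall health assessment (example device path).
smartctl -a /dev/sdX

# Look for kernel-level read/write errors on that device.
dmesg -T | grep -Ei 'sdX|I/O error|medium error'

# After the disk has been replaced, reset the repaired-reads counter so the
# OSD_TOO_MANY_REPAIRS warning clears.
ceph tell osd.12 clear_shards_repaired
```

The error log lines themselves will stop once the failing disk is out of the cluster; muting the warning without replacing the disk only hides the problem.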

Quoting zxcs <zhuxiongcs@xxxxxxx>:

The OSD also frequently reports these ERROR log messages, which cause slow requests on the OSD. How do we stop them?

“full object read crc *** != expected 0xffffffff on ****:head”
“missing primary copy of ***: will try to read copies on **”



Thanks
xz

On Dec 13, 2023, at 01:20, zxcs <zhuxiongcs@xxxxxxx> wrote:

Hi, Experts,

We are running CephFS on 16.2.* with multiple active MDS daemons, and recently one OSD has been reporting:

“full object read crc *** != expected 0xffffffff on ****:head”
“missing primary copy of ***: will try to read copies on **”

From `ceph -s` we can see:

OSD_TOO_MANY_REPAIRS: Too many repaired reads on ** OSDs.


We don’t know how to fix this. Could you please help shed some light here? Thanks a ton!


Thanks
xz
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx