Re: [Suspicious newsletter] Re: How to clear Health Warning status?

Restart the OSD; the repaired-reads count is held in memory, so restarting the daemon resets it and clears the warning.
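(For reference, a sketch of the restart, assuming a systemd-managed deployment with the standard ceph-osd@<id> unit names; adjust IDs and unit names to your cluster:)

```shell
# Restarting the flagged OSD daemons resets their in-memory repaired-reads counters.
systemctl restart ceph-osd@16
systemctl restart ceph-osd@29

# Confirm the warning has cleared:
ceph health detail
```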

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: jinguk.kwon@xxxxxxxxxxx <jinguk.kwon@xxxxxxxxxxx>
Sent: Monday, March 29, 2021 10:41 AM
To: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: [Suspicious newsletter]  Re: How to clear Health Warning status?

Hello there,

Thank you for your response.
There are no errors in syslog, dmesg, or SMART.

# ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
    osd.29 had 38 reads repaired
    osd.16 had 17 reads repaired

How can I clear this warning?
My Ceph version is 14.2.9 (clear_shards_repaired is not supported).
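(For anyone on a newer release reading this: later Ceph versions add a tell command to reset the counter without restarting the daemon. It is not available on Nautilus 14.2.9, as noted above. A sketch:)

```shell
# On releases that support it, reset the repaired-reads counter in place:
ceph tell osd.29 clear_shards_repaired
ceph tell osd.16 clear_shards_repaired
```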



/dev/sdh1 on /var/lib/ceph/osd/ceph-16 type xfs (rw,relatime,attr2,inode64,noquota)

# dmesg | grep sdh
[   12.990728] sd 5:2:3:0: [sdh] 19531825152 512-byte logical blocks: (10.0 TB/9.09 TiB)
[   12.990728] sd 5:2:3:0: [sdh] Write Protect is off
[   12.990728] sd 5:2:3:0: [sdh] Mode Sense: 1f 00 00 08
[   12.990728] sd 5:2:3:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   13.016616]  sdh: sdh1 sdh2
[   13.017780] sd 5:2:3:0: [sdh] Attached SCSI disk
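(The kernel log is clean, but it is worth pulling the drive's own error counters as well. A sketch using smartctl from smartmontools; note the 5:2:3:0 SCSI address suggests the disk may sit behind a RAID controller, in which case a pass-through option such as -d megaraid,N may be needed:)

```shell
# Full SMART report: health status, error log, and attribute table.
smartctl -a /dev/sdh

# If the drive is behind a MegaRAID controller, address it through the HBA
# (N is the controller's device ID for this disk, a hypothetical example here):
smartctl -a -d megaraid,0 /dev/sdh
```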

# ceph tell osd.29 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.464404,
    "bytes_per_sec": 166100668.21318716,
    "iops": 39.60148530320815
}
# ceph tell osd.16 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 9.6168945000000008,
    "bytes_per_sec": 111651617.26584397,
    "iops": 26.619819942914003
}
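(As a sanity check, the bench fields are mutually consistent: bytes_per_sec = bytes_written / elapsed_sec, and iops = bytes_per_sec / blocksize. For osd.16 above:)

```shell
# Recompute osd.16's reported throughput and IOPS from the raw fields:
awk 'BEGIN {
  bps  = 1073741824 / 9.6168945   # bytes_written / elapsed_sec
  iops = bps / 4194304            # bytes_per_sec / blocksize
  printf "%.0f bytes/s, %.1f iops\n", bps, iops
}'
```

Both OSDs bench in a normal range for spinning disks (osd.16 is somewhat slower), which fits your observation that the drives are not outright failing, though repaired reads still point at media errors worth watching.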

Thank you


> On 26 Mar 2021, at 16:04, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>
> Did you look at syslog, dmesg, or SMART?  Most likely the drives are failing.
>
>
>> On Mar 25, 2021, at 9:55 PM, jinguk.kwon@xxxxxxxxxxx wrote:
>>
>> Hello there,
>>
>> Thank you in advance.
>> My Ceph version is 14.2.9.
>> I have a repair issue too.
>>
>> ceph health detail
>> HEALTH_WARN Too many repaired reads on 2 OSDs
>> OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
>>   osd.29 had 38 reads repaired
>>   osd.16 had 17 reads repaired
>>
>> ~# ceph tell osd.16 bench
>> {
>>   "bytes_written": 1073741824,
>>   "blocksize": 4194304,
>>   "elapsed_sec": 7.1486738159999996,
>>   "bytes_per_sec": 150201541.10217974,
>>   "iops": 35.81083800844663
>> }
>> ~# ceph tell osd.29 bench
>> {
>>   "bytes_written": 1073741824,
>>   "blocksize": 4194304,
>>   "elapsed_sec": 6.9244327500000002,
>>   "bytes_per_sec": 155065672.9246161,
>>   "iops": 36.970537406114602
>> }
>>
>> But it looks like those OSDs are OK. How can I clear this warning?
>>
>> Best regards
>> JG
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>



