Re: Very high read IO during backfilling


 



Are you perhaps a victim of bluefs_buffered_io=false? See: https://www.mail-archive.com/ceph-users@xxxxxxx/msg05550.html
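A quick way to check and change that setting, assuming the centralized config available in Octopus (the OSD id below is just an example; depending on the exact version, an OSD restart may be needed for the change to take effect):

```shell
# Check the current value on one OSD:
ceph config get osd.0 bluefs_buffered_io

# Enable buffered BlueFS reads for all OSDs:
ceph config set osd bluefs_buffered_io true
```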

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Kamil Szczygieł <kamil@xxxxxxxxxxxx>
Sent: 27 October 2020 21:39:22
To: ceph-users@xxxxxxx
Subject:  Very high read IO during backfilling

Hi,

We're running Octopus with 3 control plane nodes (12 cores, 64 GB memory each) running mon, mds and mgr, and 4 data nodes (12 cores, 256 GB memory, 13x10TB HDDs each). We've increased the number of PGs in our pool, which resulted in all OSDs going crazy and reading an average of 900 MB/s constantly (based on iotop).

This has resulted in slow ops and very low recovery speed. Any tips on how to handle this kind of situation? We have osd_recovery_sleep_hdd set to 0.2, osd_recovery_max_active set to 5 and osd_max_backfills set to 4. Some OSDs are reporting slow ops constantly and iowait on the machines sits at 70-80%.
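One common first step in this situation is to throttle backfill until client IO recovers. A sketch using the Octopus centralized config (the values are a conservative starting point, not a recommendation for every cluster):

```shell
# Reduce concurrent backfills and recovery ops, and make each OSD sleep
# longer between recovery operations on HDDs:
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_sleep_hdd 0.5
```

The same options can also be injected into running daemons with `ceph tell 'osd.*' injectargs ...`; the config-set form has the advantage of persisting across OSD restarts.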
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





