Hi,
After some testing, it looks like performance is greatly affected while deep scrubbing is running, even with only one active deep scrub.
After some googling, the common suggestion is to enable the kernel CFQ scheduler on the OSD data disks and then lower the I/O priority of the OSD disk thread, as sketched below.
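The scheduler change we are testing looks roughly like this (sdX is just a placeholder for each OSD data disk, so adjust to your devices):

# check which scheduler the disk is currently using
cat /sys/block/sdX/queue/scheduler

# switch to CFQ at runtime (not persistent across reboots; the ioprio
# options below only take effect when CFQ is the active scheduler)
echo cfq > /sys/block/sdX/queue/scheduler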
The disk thread priority is then lowered with:

ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'

However, the Ceph config documentation (http://docs.ceph.com/docs/mimic/rados/configuration/osd-config-ref/) says:
"Since Jewel scrubbing is no longer carried out by the disk iothread, see osd priority options instead "
Does anyone have more info on these osd priority settings for the Luminous release?
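For what it's worth, the options I suspect the documentation is referring to (this is only my guess) would be checked and adjusted along these lines on a running Luminous OSD; the values below are just examples:

# inspect the current priorities via the admin socket on the OSD host
ceph daemon osd.0 config get osd_scrub_priority
ceph daemon osd.0 config get osd_client_op_priority

# example: lower scrub priority and add a small sleep between scrub chunks
ceph tell osd.* injectargs '--osd_scrub_priority 1'
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'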
Thanks
On Wed, Jun 27, 2018 at 9:47 AM, Phang WM <phang@xxxxxxxxxxxxxxxxxxx> wrote:
On Wed, Jun 27, 2018 at 4:02 AM, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
Have you dumped ops-in-flight to see if the slow requests happen to correspond to scrubs or snap trims?
Hi Anthony,
Yes, we have tried dumping the ops in flight. What we get are osd_op entries with flag_point=delayed and the events initiated, queued_for_pg, reached_pg, waiting for rw locks... There are no scrubs or snap trims.
Thanks
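For reference, the ops were dumped via the OSD admin socket, roughly as below (osd.0 stands in for the affected OSD id):

# list the ops currently in flight on one OSD (run on the host carrying the OSD)
ceph daemon osd.0 dump_ops_in_flight

# recently completed slow ops can be inspected the same way
ceph daemon osd.0 dump_historic_ops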