effectively reducing scrub IO impact

Hi,

We have the following settings applied globally:

osd_client_op_priority = 63
osd_disk_thread_ioprio_class = idle
osd_disk_thread_ioprio_priority = 7
osd_max_scrubs = 1

to limit the performance impact of scrubbing, and

osd_scrub_begin_hour = 1
osd_scrub_end_hour = 7

to restrict the time frame in which scrubbing runs.
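
For reference, the values the OSDs are actually running with can be double-checked via the admin socket on the respective OSD host; a quick sketch, with osd.0 only as an example id:

ceph daemon osd.0 config show | grep scrub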


Now, it seems, this time frame was not enough, so Ceph started
scrubbing all the time, I assume because of the age of the objects.

And it does so with:

4 active+clean+scrubbing+deep

(instead of the configured maximum of 1).
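
For completeness, this is roughly how the concurrent deep scrubs show up and can be counted (the state string comes from the status output; the grep is just an illustration):

ceph -s
ceph pg dump pgs_brief | grep -c scrubbing+deep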


So now we are in a situation where the spinning drives are so
busy that IO performance has become unacceptably bad.

The only reason this is not a catastrophe is that we have a cache tier
in front of it, which lowers the IO load on the spinning drives.

Unfortunately, we also have some pools that go directly to the spinning drives.

So these pools experience very bad IO performance.

So we had to disable scrubbing during business hours (which is not
really a solution).
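
For reference, a minimal sketch of what "disable scrubbing" typically means, assuming the usual noscrub / nodeep-scrub cluster flags (set during business hours, unset afterwards):

ceph osd set noscrub
ceph osd set nodeep-scrub
# ... later, outside business hours ...
ceph osd unset noscrub
ceph osd unset nodeep-scrub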

So, any idea why:

1. we see 4-5 scrubs running, while osd_max_scrubs = 1 is set?
2. the impact on the spinning drives is so severe, even though we lowered
the IO priority for it?


Thank you!



-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the Amtsgericht (district court) Hanau
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



