re-enable scrubbing

It will stick to the config. If you limit the amount of work scrub does
at a time, then you can let it do whatever it wants without issues
(except on 10.2.x, which had a bug that was fixed in 10.2.4; skip
straight to 10.2.5, which fixes a regression on top of that).

For example:
> # less scrub work at a time, with a delay in between
> osd scrub chunk min = 1   # default 5
> osd scrub chunk max = 1   # default 25
> osd scrub sleep = 0.5     # default 0 (seconds)
>
> # lower scrub priority (possibly no effect since Jewel)
> osd disk thread ioprio class = idle
> osd disk thread ioprio priority = 3
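
If you want to try those without restarting the OSDs, injecting them at
runtime should also work (just a sketch; on some releases certain options
will report that a restart is required). Also note the ioprio options only
have an effect when the OSD disks use the CFQ I/O scheduler:
> ceph tell 'osd.*' injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1 --osd_scrub_sleep 0.5'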

And these are already the defaults:
> osd deep scrub stride = 524288  # 512 KiB
> osd max scrubs = 1
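
To double check what a running OSD actually uses, you can ask it over the
admin socket (a sketch; replace osd.0 with one of your OSD ids):
> ceph daemon osd.0 config show | grep scrub
> ceph daemon osd.0 config get osd_max_scrubs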

And I set the following, but I'm not recommending it. I only post it here
to show that the settings above slow scrubbing down enough that everything
still gets scrubbed within this long interval; with a more normal interval
you may need to adjust them:
> # 60 days ... default is 7 days
> osd deep scrub interval = 5259488
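
(For reference: 5259488 s / 86400 s per day is about 60.9 days, roughly two
average months, versus the default osd deep scrub interval of 604800 s = 7
days.)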

More inline answers below.


On 03/08/17 10:46, Laszlo Budai wrote:
> Hello,
>
> is there any risk of cluster overload when scrub is re-enabled after
> being disabled for a certain amount of time?
>
> I am thinking of the following scenario:
> 1. scrub/deep scrub are disabled.
> 2. after a while (a few days) we re-enable them. How will the cluster
> perform? 
It should perform as during normal scrubbing... just with no or only short
breaks in between (use osd scrub sleep to control this).
> Will it run all the scrub jobs that were supposed to run in the
> meantime, or will it just start scheduling scrub jobs according to the
> scrub-related parameters?
It will run them one at a time (or however many you have configured with
osd max scrubs) until all are back within the target time range. Why
shouldn't it obey its config?

And maybe as a side effect, the next time they are scrubbed they will
again be timed closely together.
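
If you want to watch how fast the backlog drains after re-enabling,
something like this should do (a sketch; the pg dump output format differs
a bit between releases):
> ceph -s          # the PG state summary shows how many PGs are scrubbing
> ceph pg dump pgs_brief 2>/dev/null | grep -c scrubbing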
>
>
> Can you point me to some documentation about this topic?
Nothing with interesting descriptions, just the reference manual entries
for the options listed above. Someone on IRC gave me the options and I
tested and fiddled with them to see how Ceph behaves.
>
> Thank you,
> Laszlo


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney at brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------


