Limit scrub impact

Hi,

During scrub I see slow ops like this:

osd.31 [WRN] slow request osd_op(client.115442393.0:2632576137 28.76s0 28:6ed54dc8:::9213182a-14ba-48ad-bde9-289a1c0c0de8.6034919.1_%2fWHITELABEL-1%2fPAGETPYE-7%2fDEVICE-4%2fLANGUAGE-46%2fSUBTYPE-0%2f492210:head [create,setxattr user.rgw.idtag (57) in=71b,setxattr user.rgw.tail_tag (57) in=74b,writefull 0~36883 in=36883b,setxattr user.rgw.manifest (375) in=392b,setxattr user.rgw.acl (123) in=135b,setxattr user.rgw.content_type (10) in=31b,setxattr user.rgw.etag (32) in=45b,setxattr user.rgw.x-amz-meta-storagetimestamp (40) in=76b,call rgw.obj_store_pg_ver in=44b,setxattr user.rgw.source_zone (4) in=24b] snapc 0=[] ondisk+write+known_if_redirected e34043) initiated 2021-10-16T20:01:19.846240+0700 currently started

I'm not sure why this makes RGW go down; my guess is that it wants to write to that specific OSD, which is busy.
I saw a SUSE article (https://www.suse.com/support/kb/doc/?id=000019684) which says that if the system load average is above 0.5, it is worth setting something like this:

osd_max_scrubs=2
osd_scrub_load_threshold=3
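
If I understand the docs correctly, these can be changed at runtime through the mon config database, something like the following (values are just the ones from the article, adjust as needed):

# apply the settings cluster-wide for all OSDs
ceph config set osd osd_max_scrubs 2
ceph config set osd osd_scrub_load_threshold 3
# verify what is currently in effect
ceph config get osd osd_scrub_load_threshold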

My load average is around 1.5-1.7. What I don't understand: if the load average is actually higher than the threshold, won't allowing more scrubs make things even worse?

Or how else can I limit the scrub impact?
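
The only other knob I have found so far (if I read the docs right) is pausing scrubbing entirely with the cluster flags, e.g.:

# stop new scrubs from being scheduled
ceph osd set noscrub
ceph osd set nodeep-scrub
# and re-enable them later
ceph osd unset noscrub
ceph osd unset nodeep-scrub

But that feels more like a workaround than a fix.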

Thx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx