Hello JC,
in short, for the record:
What you can try is changing the following settings on all the
OSDs that host this particular PG and see if it makes things better:
[osd]
[...]
osd_scrub_chunk_max = 5  # maximum number of chunks the scrub will process in one go. Defaults to 25.
osd_deep_scrub_stride = 1048576  # read size during scrubbing operations. The idea here is to do fewer chunks but bigger sequential reads. Defaults to 512KB = 524288.
[...]
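(In case it is useful: to find out which OSDs actually host the PG, the PG map can be queried; 0.223 below is just the PG id from this thread, and the output format is roughly from memory:)

ceph pg map 0.223  # prints the up/acting OSD set for the PG, something like "osdmap eNNN pg 0.223 (0.223) -> up [...] acting [...]"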
I have tried this to see if it would help in cases where the object
sizes are really big ^^ - I know this should not be, and I guess
normally will not be, the case :)
In ceph.conf (on all nodes!) I have added
[...]
[osd]
osd_scrub_chunk_max = 5
osd_deep_scrub_stride = 1048576
and restarted
systemctl restart ceph-mon@*
systemctl restart ceph-osd@*
on each OSD and MON node.
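(As far as I know, the values could also be injected at runtime instead of restarting the daemons - I have not verified whether a scrub that is already running picks them up, so just a sketch:)

ceph tell osd.* injectargs '--osd_scrub_chunk_max 5 --osd_deep_scrub_stride 1048576'  # push the settings to all running OSDs without a restart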
Then I ran "ceph pg deep-scrub 0.223" again.
But again I got slow/blocked requests :*(
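(To see which OSD the blocked requests pile up on, something like this should help - osd.17 here only because it is the one I query below:)

ceph health detail                       # lists the OSDs that currently report slow/blocked requests
ceph daemon osd.17 dump_ops_in_flight    # shows the ops currently stuck on that OSD via its admin socket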
The running config:
root@:~# ceph daemon osd.17 config show | grep scrub
[...]
"osd_scrub_chunk_min": "5",
"osd_scrub_chunk_max": "5",
[...]
"osd_deep_scrub_stride": "1048576",
[...]
root@:~# ceph --show-config | grep scrub
[...]
osd_scrub_chunk_min = 5
osd_scrub_chunk_max = 25
[...]
osd_deep_scrub_stride = 524288
[...]
The last command does not seem to show the actual running config - but
that is another story.
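(If I understand it correctly, "ceph --show-config" only shows the defaults / the config as the local client sees it, not the values inside the running daemons; querying the admin socket should be authoritative - a sketch:)

ceph daemon osd.17 config get osd_deep_scrub_stride   # asks the running OSD daemon for the effective value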
This was only to keep you informed.
Ah :) I see you wrote me an answer just this minute ^^. I will respond
tomorrow - family is waiting ;)