Thanks, Nick. One other data point that has come up is that nearly all of the blocked requests waiting on subops are waiting for OSDs with more PGs than the others. My test cluster has 184 OSDs, 177 of which are 3TB and 7 of which are 4TB. The cluster is well balanced by OSD capacity, so each of those 7 OSDs has 33% more PGs than the others, and they are causing almost all of the blocked requests. It appears that map updates are generally not blocking long enough to show up as blocked requests. I set the reweight on those 7 OSDs to 0.75 and things are backfilling now. I'll test some more when the PG counts per OSD are more balanced and see what I get. I'll also play with the filestore queue. I was telling some of my colleagues yesterday that this looked likely to be related to buffer bloat somewhere. I appreciate the suggestion.
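In case it's useful to anyone else, the PG distribution check and the reweight described above look roughly like this (the OSD ID is made up for illustration):

    # show per-OSD utilization and PG counts (PGS column)
    ceph osd df tree

    # drop the reweight on an overfull 4TB OSD, e.g. osd.42, to 0.75
    ceph osd reweight 42 0.75

Note that this is the override reweight rather than the CRUSH weight, so it doesn't change the capacity-based CRUSH weighting that keeps the cluster balanced by size.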
From: Nick Fisk [mailto:nick@xxxxxxxxxx]

Hi Steve,

From what I understand, the issue is not with the queueing in Ceph, which is correctly moving client IO to the front of the queue. The problem lies below what Ceph controls, i.e. the scheduler and disk layer in Linux. Once the IOs leave Ceph it's a bit of a free-for-all, and the client IOs tend to get lost in large disk queues surrounded by all the snap trim IOs. The workaround Sam is working on will limit the number of snap trims that are allowed to run, which I believe will have a similar effect to the sleep parameters in pre-Jewel clusters, but without pausing the whole IO thread.

Ultimately the solution requires Ceph to be able to control the queueing of IOs at the lower levels of the kernel. Whether this is via some sort of tagging per IO (currently CFQ is only per thread/process) or some other method, I don't know.

I was speaking to Sage and he thinks the easiest method might be to shrink the filestore queue so that you don't get buffer bloat at the disk level. You should be able to test this out pretty easily now by changing the parameter; a queue of around 5-10 would probably be about right for spinning disks. It's a trade-off of peak throughput vs queue latency though.

Nick
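A possible way to test this at runtime, assuming filestore_queue_max_ops is the parameter meant here (the value is only a starting point, and the change may not take effect without an OSD restart):

    # shrink the filestore op queue on all OSDs (illustrative value)
    ceph tell osd.* injectargs '--filestore_queue_max_ops 10'

    # or persistently in ceph.conf under [osd]:
    #   filestore queue max ops = 10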
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Steve Taylor

As I look at more of these stuck ops, it looks like more of them are actually waiting on subops than on osdmap updates, so maybe there is still some headway to be made with the weighted priority queue settings. I do see OSDs waiting for map updates all the time, but they aren't blocking things as much as the subops are. Thoughts?
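The state of blocked ops can be inspected per OSD from the admin socket, e.g. (osd.12 is just an example ID):

    # show currently in-flight ops and what each one is waiting on
    # (e.g. waiting for subops vs. 'op must wait for map')
    ceph daemon osd.12 dump_ops_in_flight

    # recently completed slow ops, with the time spent in each state
    ceph daemon osd.12 dump_historic_ops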
From: Steve Taylor

Sorry, I lost the previous thread on this. I apologize for the resulting incomplete reply.

The issue we're having with Jewel, as David Turner mentioned, is that we can't seem to throttle snap trimming sufficiently to prevent it from blocking I/O requests. On further investigation, I encountered osd_op_pq_max_tokens_per_priority, which, if I understand correctly, can be used in conjunction with 'osd_op_queue = wpq' to govern the availability of queue positions for various operations using costs. I'm testing with RBDs using 4MB objects, so in order to leave plenty of room in the weighted priority queue for client I/O, I set osd_op_pq_max_tokens_per_priority to 64MB and osd_snap_trim_cost to 32MB+1. I figured this should essentially reserve 32MB in the queue for client I/O operations, which are prioritized higher and therefore shouldn't get blocked.

I still see blocked I/O requests, and when I dump in-flight ops, they show 'op must wait for map.' I assume this means that what's blocking the I/O requests at this point is all of the osdmap updates caused by snap trimming, and not the actual snap trimming itself starving the ops of op threads. Hammer is able to mitigate this with osd_snap_trim_sleep by directly throttling snap trimming and therefore causing less frequent osdmap updates, but there doesn't seem to be a good way to accomplish the same thing with Jewel.

First of all, am I understanding these settings correctly? If so, are there other settings that could potentially help here, or do we just need something like Sam already mentioned that can sort of reserve threads for client I/O requests? Even then it seems like we might have issues if we can't also throttle snap trimming. We delete a LOT of RBD snapshots on a daily basis, which we recognize is an extreme use case. Just wondering if there's something else to try or if we need to start working toward implementing something new ourselves to handle our use case better.
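For reference, the settings described above expressed as a ceph.conf sketch (byte values spelled out; this mirrors the test being described, not a recommendation):

    [osd]
    osd op queue = wpq
    # 64 MB of tokens per priority level (64 * 1024 * 1024)
    osd op pq max tokens per priority = 67108864
    # cost of just over 32 MB per snap trim op (32 * 1024 * 1024 + 1)
    osd snap trim cost = 33554433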