Could the 4 GB GET saturate the connection from RGW to Ceph? That's simple to test: just rate-limit the health-check GET.

Did you increase "objecter inflight ops" and "objecter inflight op bytes"? You should absolutely adjust these settings for large RGW setups; the defaults of 1024 ops and 100 MB are far too low for many RGW deployments. We default to 8192 and 800 MB.

Sometimes "ms async op threads" and "ms async max op threads" can help as well (we adjust them by default, but for other reasons).

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Oct 14, 2019 at 9:54 PM Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
>
> We set up a new Nautilus cluster and only have RGW on it. While we had
> a job doing 200k IOPS of really small objects, I noticed that HAProxy
> was kicking out RGW backends because they were taking more than 2
> seconds to return. We GET a large ~4 GB file each minute and use that
> as a health check to determine whether the system is taking too long to
> service requests. It seems that other I/O is being blocked by this
> large transfer. This seems to be the case with both civetweb and
> Beast, but I'm double-checking Beast at the moment because I'm not
> 100% sure we were using it at the start.
>
> Any ideas how to mitigate this? It seems that I/Os are scheduled
> on a thread, and if one is unlucky enough to be scheduled behind a
> big I/O, it is simply stuck; in that case HAProxy can kick out the
> backend before the I/O is returned, and it has to re-request it.
>
> Thank you,
> Robert LeBlanc
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
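
[Editor's sketch] The tuning Paul suggests can be expressed as a ceph.conf fragment applied to the RGW client. This is a hedged sketch, not a recommendation of exact values: the 8192 / 800 MB figures are the croit defaults he quotes, the section name `client.rgw.myhost` is a placeholder for your actual RGW instance, and the messenger thread values shown are examples you should verify against your Ceph release before changing.

```ini
# Sketch: Objecter tuning for a busy RGW, per the suggestion above.
# Nautilus defaults: objecter_inflight_ops = 1024,
# objecter_inflight_op_bytes = 104857600 (100 MB).
[client.rgw.myhost]            ; placeholder: match your RGW instance name
objecter inflight ops = 8192
objecter inflight op bytes = 838860800   ; 800 MB
; Optional messenger thread tuning; uncomment and adjust with care:
; ms async op threads = 3
; ms async max op threads = 5
```

On Nautilus these can also be set at runtime with the config store, e.g. `ceph config set client.rgw objecter_inflight_ops 8192`, followed by an RGW restart where the option is not runtime-changeable.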