Hi,

I have a fresh Nautilus Ceph cluster with radosgw as a front end. I've been testing with a slightly modified version of https://github.com/wasabi-tech/s3-benchmark/. I have 5 storage nodes with 4 OSDs each, for a total of 20 OSDs. I am testing locally on a single rgw node.

First, I uploaded a bunch of 1GB objects. Now I'm attempting to download them in random order and measure the time it takes to fetch each one (roughly the loop sketched at the end of this mail).

My problem is that during the download phase rgw hangs and the process pegs 100% CPU on the civetweb-worker thread (according to top). The logs show that it downloads segments of the object but then stops partway through and never continues. I tried using beast instead of civetweb as the front end, but it still hangs in the same way, which leads me to believe this is a back-end issue.

This is the end of the logs; the first three lines show a successful read, and the last line shows a read attempt that starts but never completes:

2019-10-08 13:35:42.673 7fc6cec40700 20 rados->get_obj_iterate_cb oid=2217f6c8-5a9f-4cfc-a1a7-1ced740afb81.127425.2__shadow_.SCoV2VuKnMkiOqi2n3FcWgveOJYu4Io_18 obj-ofs=75497472 read_ofs=0 len=4194304
2019-10-08 13:35:42.673 7fc6cec40700 20 RGWObjManifest::operator++(): rule->part_size=0 rules.size()=1
2019-10-08 13:35:42.673 7fc6cec40700 20 RGWObjManifest::operator++(): result: ofs=79691776 stripe_ofs=79691776 part_ofs=0 rule->part_size=0
2019-10-08 13:35:42.673 7fc6cec40700 20 rados->get_obj_iterate_cb oid=2217f6c8-5a9f-4cfc-a1a7-1ced740afb81.127425.2__shadow_.SCoV2VuKnMkiOqi2n3FcWgveOJYu4Io_19 obj-ofs=79691776 read_ofs=0 len=4194304

Can someone advise me if I've misconfigured something, or if I've happened to find a bug?

Thanks,
Mike
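
For reference, the download phase of my test is essentially the following loop. This is only a rough Python/boto3 sketch of what the modified s3-benchmark does, not the actual Go code; the endpoint, credentials, and bucket name are placeholders:

#!/usr/bin/env python3
# Rough sketch of the download phase: list the previously uploaded objects,
# fetch them in random order, and time each GET.
import random
import time

import boto3

ENDPOINT = "http://localhost:7480"   # placeholder: local radosgw endpoint
BUCKET = "benchmark-bucket"          # placeholder: bucket holding the 1GB objects

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="ACCESS_KEY",      # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Collect the keys of the previously uploaded 1GB objects.
keys = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    keys.extend(obj["Key"] for obj in page.get("Contents", []))

# Fetch them in random order and time each GET; this is where rgw hangs for me.
random.shuffle(keys)
for key in keys:
    start = time.monotonic()
    resp = s3.get_object(Bucket=BUCKET, Key=key)
    size = 0
    for chunk in resp["Body"].iter_chunks(chunk_size=4 * 1024 * 1024):
        size += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{key}: {size} bytes in {elapsed:.2f}s ({size / elapsed / 1e6:.1f} MB/s)")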