Thanks for the reminder. I turned Tim's initial email into a tracker issue at
https://tracker.ceph.com/issues/53003. You can add any more details there and
follow the progress.

On Thu, Oct 21, 2021 at 1:42 AM <ceph-users@xxxxxxxxxxxxx> wrote:
>
> Hi!
>
> I'm just copying this request from my colleague to this mailing list:
> (Source:
> https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NUPCDV7BC3NEBUPIDYFBSNAEY4KSDOGS/)
>
> We've noticed a massive latency increase on object copy since the pacific
> release. Before pacific, the copy operation always finished in less than a
> second. The reproducer is quite simple:
>
> ```
> s3cmd mb s3://test
> truncate -s 10G test.img
> s3cmd put test.img s3://test/test --multipart-chunk-size-mb=5000
>
> # expect the time to be less than a second (at least for our env;
> # there is a huge gap between latest and latest-octopus)
> time s3cmd modify s3://test/test --add-header=x-amz-meta-foo3:Bar
> ```
>
> We followed the developer instructions to spin up the cluster and
> bisected to the following commit:
>
> https://github.com/ceph/ceph/commit/99f7c4aa1286edfea6961b92bb44bb8fe22bd599
>
> I'm not involved enough to easily identify the cause from this commit, so
> it looks more or less like the issue was introduced earlier and is only
> being exercised after the wide refactoring.
>
> We've also tested with the latest pacific release; the behavior is the same.
>
> Jonas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
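
For the bisect step Jonas describes, a minimal sketch of the workflow (the
repro.sh wrapper is hypothetical, the rebuild/restart of the dev cluster at
each bisect step is elided, and v15.2.0/v16.2.0 are used here only as the
octopus/pacific release boundaries):

```
#!/bin/sh
# repro.sh -- hypothetical pass/fail wrapper around the reproducer above;
# assumes the bucket/object from the reproducer already exist and that the
# cluster was rebuilt and restarted for the commit under test.
start=$(date +%s)
s3cmd modify s3://test/test --add-header=x-amz-meta-foo3:Bar
end=$(date +%s)
# non-zero exit marks the commit "bad" for git bisect
# (coarse 1-second resolution, which matches the reported symptom)
[ $((end - start)) -le 1 ]
```

```
# assuming v15.2.0 (octopus, fast) and v16.2.0 (pacific, slow)
# as the known-good/known-bad endpoints:
git bisect start
git bisect bad v16.2.0
git bisect good v15.2.0
git bisect run ./repro.sh
```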