Re: Performance regression on rgw/s3 copy operation

Hi all,

we noticed a massive latency increase on object copies since the Pacific
release. Prior to Pacific, the copy operation always finished in less
than a second. The reproducer is quite simple:

```
s3cmd mb s3://moo
truncate -s 10G moo.img
s3cmd put moo.img s3://moo/hui --multipart-chunk-size-mb=5000

# expect the time to be less than a second (at least in our env);
# there is a huge gap between latest and latest-octopus
time s3cmd modify s3://moo/hui --add-header=x-amz-meta-foo3:Bar
```

We followed the developer instructions to spin up a cluster and
bisected the regression to the following commit:

https://github.com/ceph/ceph/commit/99f7c4aa1286edfea6961b92bb44bb8fe22bd599
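For anyone who wants to repeat the bisect, here is a minimal sketch. The `v15.2.17` good tag, the `bisect_step.sh` wrapper name, and the 1-second threshold are my assumptions based on our pre-Pacific timings, not anything canonical:

```shell
# git bisect drives a script whose exit status marks each revision
# good (0) or bad (non-zero):
#
#   git bisect start
#   git bisect bad HEAD
#   git bisect good v15.2.17        # last octopus tag we saw fast copies on (assumption)
#   git bisect run ./bisect_step.sh
#
# bisect_step.sh would time the `s3cmd modify` reproducer and classify
# the elapsed time; the classification itself looks like this:
classify() {
    # $1 = elapsed milliseconds for the `s3cmd modify` call
    if [ "$1" -le 1000 ]; then
        echo good    # sub-second: pre-regression behaviour
    else
        echo bad     # multi-second: regressed behaviour
    fi
}
classify 300
classify 5000
```

With `git bisect run`, the wrapper returns 0 for "good" and 1 for "bad", so git narrows the range automatically instead of rebuilding and testing each revision by hand.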

I'm not familiar enough with the code to identify the cause from this
commit alone; it looks more or less like the issue was introduced
earlier and is only being exercised now, after the wide refactoring.

Cheers,

Tim





