Thank you, Robin.
Looking at the video, it doesn't seem like a fix is anywhere near ready.
Am I correct in concluding that Ceph is not the right tool for my use case?
Cheers,
Christian
On Oct 3 2019, at 6:07 am, Robin H. Johnson <robbat2@xxxxxxxxxx> wrote:
On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pedersen wrote:
> Hi Martin,
>
> Even before adding cold storage on HDD, I had the cluster with SSD only.
> That also could not keep up with deleting the files. I am nowhere near
> I/O exhaustion on the SSDs or even the HDDs.

Please see my presentation from Cephalocon 2019 about RGW S3, where I
touch on the slowness in Lifecycle processing and deletion.

The efficiency of the code is very low: it requires a full scan of the
bucket index every single day. Depending on the traversal order
(unordered listing helps), it might take a very long time just to find
the items that can be deleted, and even when it gets to them, it's bound
by the deletion time, which is also slow (the head of an object is
deleted synchronously in many cases, while the tails are
garbage-collected asynchronously).

Fixing this isn't trivial: either you have to scan the entire bucket, or
you have to maintain a secondary index in insertion order for EACH
prefix in a lifecycle policy.

--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robbat2@xxxxxxxxxx
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
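
[Editor's note: as a stopgap while lifecycle processing lags, deletions can
be driven from the client side instead. Below is a minimal sketch using
plain boto3 against RGW's S3 endpoint; the endpoint URL, credentials
source, bucket name, prefix, and 30-day cutoff are all placeholder
assumptions, not values from this thread. It lists a prefix and issues
batched DeleteObjects calls, which amortizes per-request overhead:

  from datetime import datetime, timedelta, timezone
  import boto3

  # Placeholders: adjust endpoint, bucket, prefix, and cutoff to taste.
  # Credentials are picked up from the environment / ~/.aws config.
  s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480")
  BUCKET, PREFIX = "mybucket", "logs/"
  cutoff = datetime.now(timezone.utc) - timedelta(days=30)

  paginator = s3.get_paginator("list_objects_v2")
  for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
      # Keep only objects older than the cutoff, mimicking an
      # expiration rule applied client-side.
      expired = [{"Key": o["Key"]} for o in page.get("Contents", [])
                 if o["LastModified"] < cutoff]
      if expired:
          # DeleteObjects accepts up to 1000 keys per call; pages from
          # list_objects_v2 are at most 1000 keys, so this fits.
          s3.delete_objects(Bucket=BUCKET,
                            Delete={"Objects": expired, "Quiet": True})

This still pays the full listing cost Robin describes, but it removes the
wait for the daily lifecycle pass and can be parallelized across prefixes.]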
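
[Editor's note: to make Robin's proposed fix concrete, a per-prefix
secondary index kept in insertion order would let expiration pop only the
oldest entries instead of rescanning the whole bucket index. The following
is an illustrative in-memory Python sketch of the idea only, not Ceph code;
all names are hypothetical:

  from collections import defaultdict, deque

  # One FIFO queue per lifecycle prefix. Keys are appended at write
  # time, so each queue is already ordered by insertion (creation) time.
  index = defaultdict(deque)

  def record_put(prefix, key, created_at):
      index[prefix].append((created_at, key))

  def expire(prefix, cutoff):
      """Pop and return keys created before cutoff. Stops at the first
      survivor, so the cost is proportional to the number of deletions,
      not to the size of the bucket."""
      q, victims = index[prefix], []
      while q and q[0][0] < cutoff:
          victims.append(q.popleft()[1])
      return victims

The trade-off Robin points at is that such an index must be maintained for
each lifecycle prefix on every write, which adds bucket-index work per PUT.]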