Hi Martin,
Even before adding cold storage on HDD, the cluster was SSD-only, and it also could not keep up with deleting the files.
I am nowhere near I/O exhaustion on the SSDs or even the HDDs.
Cheers,
Christian
On Oct 2 2019, at 1:23 pm, Martin Verges <martin.verges@xxxxxxxx> wrote:
Hello Christian,

the problem is that HDD is not capable of providing the IOPS required for "~4 million small files".

--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Wed, 2 Oct 2019 at 11:56, Christian Pedersen <chripede@xxxxxxxxx> wrote:

Hi,

Using the S3 gateway I store ~4 million small files in my cluster every day. I have a lifecycle set up to move these files to cold storage after a day and delete them after two days. The default storage is SSD-based and the cold storage is HDD.

However, the rgw lifecycle process cannot keep up with this: in a 24-hour period, a little less than a million files are moved ( https://imgur.com/a/H52hD2h ). I have tried enabling only the delete part of the lifecycle, but even though it deleted from SSD storage, the result is the same. The screenshots were taken while there were no incoming files to the cluster.

I'm running 5 rgw servers, but that doesn't really change anything from when I was running fewer. I've tried adjusting rgw lc max objs, but again no change in performance.

Any suggestions on how I can tune the lifecycle process?

Cheers,
Christian
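For reference, a minimal sketch of the kind of lifecycle rule described above (transition to an HDD-backed storage class after one day, expire after two), applied through the RGW S3 API with boto3. The endpoint, credentials, bucket name, and the "COLD" storage class name are illustrative assumptions, not taken from the thread:

# Sketch only: transition objects to an HDD-backed storage class after
# 1 day and expire them after 2 days. Endpoint, keys, bucket and the
# "COLD" storage class are placeholders for whatever the cluster uses.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # RGW S3 endpoint (placeholder)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="mybucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-then-expire",
                "Filter": {"Prefix": ""},          # apply to all objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 1, "StorageClass": "COLD"}   # assumed class name
                ],
                "Expiration": {"Days": 2},
            }
        ]
    },
)

On the tuning side, the "rgw lc max objs" setting mentioned above corresponds to rgw_lc_max_objs in ceph.conf; the lifecycle worker is also constrained by the rgw_lifecycle_work_time window, so widening that window is one knob worth checking when the daily run cannot finish.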