Hello Christian,
the problem is that HDDs are not capable of providing the large number of IOs required for "~4 million small files" per day.
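As a very rough back-of-the-envelope sketch (the I/O cost per transition and the per-drive IOPS below are assumptions, not measurements from your cluster):

# rough estimate only -- cost per transition and per-drive IOPS are assumptions
objects_per_day = 4_000_000
ios_per_transition = 10   # read from SSD, write replicas to HDD (assuming 3x),
                          # bucket index / omap updates, delete of the old copy
iops_per_hdd = 150        # typical 7.2k rpm SATA drive under random I/O

required_iops = objects_per_day * ios_per_transition / 86400
print(required_iops)                 # ~460 IOPS sustained, for lifecycle alone
print(required_iops / iops_per_hdd)  # ~3 drives kept fully busy, before any client traffic

Even with optimistic numbers, a good part of the HDDs' random I/O budget goes to lifecycle work alone.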
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Wed, 2 Oct 2019 at 11:56, Christian Pedersen <chripede@xxxxxxxxx> wrote:
Hi,
Using the S3 gateway I store ~4 million small files in my cluster every day. I have a lifecycle set up to move these files to cold storage after one day and delete them after two days.
The default storage class is SSD-based and the cold storage class is HDD-based.
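For reference, the lifecycle I apply looks roughly like this with boto3 (endpoint, credentials, bucket name and the "COLD" storage class name are placeholders for my actual values):

import boto3

# placeholders for my actual endpoint, credentials, bucket and storage class
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.local:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'tier-then-expire',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            # move to the HDD-backed storage class after one day ...
            'Transitions': [{'Days': 1, 'StorageClass': 'COLD'}],
            # ... and delete after two days
            'Expiration': {'Days': 2},
        }]
    },
)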
However, the rgw lifecycle process cannot keep up with this: in a 24-hour period, a little less than a million files are moved ( https://imgur.com/a/H52hD2h ). I have tried enabling only the delete part of the lifecycle, but even though it then deletes straight from the SSD storage, the result is the same. The screenshots were taken while no files were coming into the cluster.
I'm running 5 rgw servers, but that doesn't really change anything compared to when I was running fewer. I've tried adjusting rgw_lc_max_objs, but again no change in performance.
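For reference, the adjustment I tried looks roughly like this in ceph.conf (the section name and the value are just examples from my tests, not a recommendation), followed by a restart of the rgw daemons:

[client.rgw.gateway1]       # one section per rgw instance; the name is a placeholder
rgw_lc_max_objs = 10000     # increased from the default; this particular value is just what I tried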
Any suggestions on how I can tune the lifecycle process?
Cheers,
Christian
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com