Re: tuning for backup target cluster


 



I have certainly seen cases where the omap data has not stayed within the RocksDB/WAL NVMe space and has been spilling over onto the slower disks.
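
For anyone wanting to check whether that is happening on their own cluster, spillover normally shows up as a BLUEFS_SPILLOVER health warning, and the BlueFS perf counters show how much of the DB has ended up on the slow device. Something like the following should do it (the OSD id is just an example):

  ceph health detail | grep -i BLUEFS_SPILLOVER
  # how much RocksDB data is sitting on the slow (HDD) device for one OSD
  ceph daemon osd.12 perf dump bluefs | grep -E 'slow_used_bytes|db_used_bytes'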

This was on a large cluster with a lot of objects, but the disks that were being used for the non-ec pool were seeing a lot more actual disk activity than the other disks in the system.

Moving the non-ec pool onto NVMe helped a lot with the operations that needed to be done to clean up a large number of orphaned objects.
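
For anyone in a similar situation, the usual approach is a replicated CRUSH rule restricted to the nvme device class and then repointing the pool at it, roughly like this (the rule and pool names here are just examples):

  ceph osd crush rule create-replicated rgw-meta-nvme default host nvme
  ceph osd pool set default.rgw.buckets.non-ec crush_rule rgw-meta-nvme
  # the index pool (e.g. default.rgw.buckets.index) can be moved the same way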

Admittedly, yes, this was a large cluster with a lot of ingress data.

Darren Soothill

Want a meeting with me: https://calendar.app.google/MUdgrLEa7jSba3du9

Looking for help with your Ceph cluster? Contact us at https://croit.io/
 
croit GmbH, Freseniusstr. 31h, 81247 Munich 
CEO: Martin Verges - VAT-ID: DE310638492 
Com. register: Amtsgericht Munich HRB 231263 
Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx




> On 29 May 2024, at 21:24, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
> 
> 
> 
>> You also have the metadata pools used by RGW that ideally need to be on NVME.
> 
> The OP seems to intend shared NVMe for WAL+DB, so that the omaps are on NVMe that way.
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


