Re: Best practice and expected benefits of using separate WAL and DB devices with Bluestore

Hello Anthony,

Do you have any data on the reliability of QLC NVMe drives? How old is
your deep-archive cluster, how many NVMes does it have, and how many
have you had to replace?

On Sun, Apr 21, 2024 at 11:06 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>
> A deep archive cluster benefits from NVMe too. You can use QLC drives of up to 60 TB; 32 of those in one RU makes for a cluster that doesn't take up the whole DC.
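> For scale, that is roughly 32 x 60 TB ≈ 1.9 PB of raw capacity per rack unit.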
>
> > On Apr 21, 2024, at 5:42 AM, Darren Soothill <darren.soothill@xxxxxxxx> wrote:
> >
> > Hi Niklaus,
> >
> > Lots of questions here, but let me try to get through some of them.
> >
> > Personally, unless a cluster is for deep archive, I would never suggest configuring or deploying a cluster without the RocksDB and WAL on NVMe.
> > There are a number of benefits to this in terms of performance and recovery. Small writes go to the NVMe first before being written to the HDD, and it makes many recovery operations far more efficient.
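> > As a minimal sketch of that layout (not from Darren's message; it assumes a cephadm-managed cluster where the HDDs report as rotational and the NVMe does not), an OSD service spec along these lines places the DB/WAL of every HDD-backed OSD on the shared NVMe:
> >
> >     service_type: osd
> >     service_id: hdd_osds_with_nvme_db
> >     placement:
> >       host_pattern: '*'
> >     spec:
> >       data_devices:
> >         rotational: 1
> >       db_devices:
> >         rotational: 0
> >
> > Apply it with "ceph orch apply -i <spec-file>.yml"; when no separate wal_devices section is given, BlueStore keeps the WAL on the same device as the DB.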
> >
> > As to how much faster it makes things, that very much depends on the type of workload you have on the system. Lots of small writes will see a significant difference; very large writes, not as much.
> > Things like compactions of the RocksDB database are a lot faster, as they now run from NVMe rather than from the HDD.
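> > As a rough way to see this on a running cluster (a sketch, not from Darren's message; exact metadata field names vary across Ceph releases), you can check that an OSD's DB sits on a non-rotational device and trigger a manual compaction to compare how long it takes:
> >
> >     ceph osd metadata 0 | grep -E 'bluefs_db|rotational'
> >     ceph tell osd.0 compact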
> >
> > We normally work with up to a 1:12 ratio, i.e. one NVMe for every 12 HDDs. This assumes the NVMe drives being used are good mixed-use enterprise NVMe drives with power-loss protection.
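> > For a non-cephadm deployment, a roughly equivalent ceph-volume invocation (device names here are placeholders for 12 HDDs and one NVMe) would be:
> >
> >     ceph-volume lvm batch --bluestore /dev/sd[b-m] --db-devices /dev/nvme0n1
> >
> > which slices the NVMe into one DB/WAL logical volume per HDD OSD; with a 7.68 TB NVMe, that works out to roughly 640 GB of DB space per OSD.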
> >
> > As to failures: yes, a failure of the NVMe would mean the loss of 12 OSDs, but this is no worse than the failure of an entire node, which is something Ceph is designed to handle.
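> > One quick way to sanity-check that (assuming the default replicated CRUSH rule; rule names differ on customised clusters) is to confirm that the failure domain is host rather than osd, so no two copies of a PG sit behind the node that just lost its NVMe:
> >
> >     ceph osd crush rule dump replicated_rule | grep '"type"'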
> >
> > I certainly wouldn't be thinking about putting the NVMe drives into RAID sets, as that degrades their performance when the whole point is to get better performance.
> >
> >
> >
> > Darren Soothill
> >
> >
> > Looking for help with your Ceph cluster? Contact us at https://croit.io/
> >
> > croit GmbH, Freseniusstr. 31h, 81247 Munich
> > CEO: Martin Verges - VAT-ID: DE310638492
> > Com. register: Amtsgericht Munich HRB 231263
> > Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
> >
> >
> >
> >



-- 
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



