Best practices and expected benefits of using separate WAL and DB devices with BlueStore

Dear all,

We have an HDD Ceph cluster that could do with some more IOPS. One solution we are considering is installing NVMe SSDs in the storage nodes and using them as WAL and/or DB devices for the BlueStore OSDs.

However, we have some questions about this and are looking for guidance and advice.

The first one is about the expected benefits. Before we undertake the effort involved in the transition, we are wondering whether it is even worth it. How much of a performance boost can one expect when adding NVMe SSDs as WAL devices to an HDD cluster? And how much faster does it get when the DB is on SSD as well? Are there rule-of-thumb numbers for that? Or maybe someone has done benchmarks in the past?
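
To make this concrete, here is the kind of before/after measurement I would run on a throwaway pool (the pool name and parameters below are just placeholders I picked):

    # create a disposable pool and measure small-block write performance
    ceph osd pool create bench 64 64
    rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup
    # random reads against the objects left behind by the write run
    rados bench -p bench 60 rand -t 16
    # remove the test pool when done
    ceph osd pool delete bench bench --yes-i-really-really-mean-it

The idea would be to run this once now and once after the migration, and compare IOPS and latency. Does that sound like a sensible way to quantify the gain?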

The second question is of a more practical nature. Are there any best practices on how to implement this? I was thinking we won't do one SSD per HDD - surely an NVMe SSD is fast enough to handle the traffic of multiple OSDs. But what is a good ratio? Do I use one NVMe SSD per 4 HDDs? Per 6, or even 8? Also, how should I chop up the SSD: with partitions or with LVM?

Last but not least, if one SSD handles the WAL and DB for multiple OSDs, losing that SSD means losing multiple OSDs at once. How do people deal with this risk? Is it generally deemed acceptable, or is it something people tend to mitigate, and if so, how? Do I run multiple SSDs in RAID?
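
For the partitioning question, what I had pictured is letting ceph-volume carve up the NVMe itself, roughly like this (device names and the 1:4 ratio are placeholders; my understanding is that the WAL automatically ends up on the DB device when only a DB device is given):

    # four HDD OSDs sharing one NVMe for their DB (and thus WAL) volumes;
    # ceph-volume creates the logical volumes on the NVMe automatically
    ceph-volume lvm batch --bluestore \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        --db-devices /dev/nvme0n1

    # alternatively, one OSD at a time, with a pre-created LV on the NVMe
    ceph-volume lvm create --bluestore --data /dev/sde \
        --block.db ceph-db-vg/db-osd-5

Or are there reasons to prefer manual partitions over letting ceph-volume manage the LVM here?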

I do realize that for some of these questions there might not be one perfect answer that fits all use cases. I am looking for best practices and, in general, just trying to avoid obvious mistakes.

Any advice is much appreciated.

Sincerely

Niklaus Hofer
--
stepping stone AG
Wasserwerkgasse 7
CH-3011 Bern

Telefon: +41 31 332 53 63
www.stepping-stone.ch
niklaus.hofer@xxxxxxxxxxxxxxxxx