Re: Ceph All-SSD Cluster & WAL/DB Separation

Depends.

In theory, each OSD will have access to 1/4 of the separate WAL/DB device, so to get better performance you need to find an NVMe device that delivers significantly more than 4x the IOPS rate of a single PM1643, which is not common.

That assumes the PM1643 devices are connected to a high-quality, well-configured 12 Gb SAS controller that really can deliver the combined IOPS rate of all 4 drives. The only way to find that out is to benchmark.
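
If it helps, something along these lines is how I'd do the comparison. It's just a rough Python sketch, assuming fio is installed; /dev/nvme0n1 and /dev/sdb are placeholder paths for candidate test devices, and the job writes straight to the raw devices, so it destroys whatever is on them:

import json
import subprocess

def fio_randwrite_iops(device, runtime_s=60, iodepth=32, numjobs=4):
    """Run a 4k random-write fio job against a raw device and return its IOPS.

    WARNING: this writes directly to the device and destroys its contents.
    """
    cmd = [
        "fio",
        "--name=randwrite-test",
        f"--filename={device}",
        "--rw=randwrite",
        "--bs=4k",
        "--direct=1",
        "--ioengine=libaio",
        f"--iodepth={iodepth}",
        f"--numjobs={numjobs}",
        f"--runtime={runtime_s}",
        "--time_based",
        "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    return json.loads(out)["jobs"][0]["write"]["iops"]

# Placeholder device paths -- replace with your own (unused!) test devices.
nvme_iops = fio_randwrite_iops("/dev/nvme0n1")
sas_iops = fio_randwrite_iops("/dev/sdb")
print(f"NVMe: {nvme_iops:.0f} IOPS, SAS SSD: {sas_iops:.0f} IOPS")
print(f"Ratio: {nvme_iops / sas_iops:.1f}x (you want comfortably above 4x)")

It's also worth repeating the test with queue depth 1 and fsync'd writes (fio's --iodepth=1 --fsync=1), since the WAL cares more about sync write latency than about raw queue-depth IOPS.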

Having said that, for a storage cluster where write performance is expected to be the main bottleneck, I would be hesitant to use drives with only 1 DWPD endurance, since Ceph has fairly high write amplification factors. With 3-fold replication, this cluster might only be able to handle a few TB of client writes per day without wearing out the drives prematurely.
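
To put rough numbers on that, here is the back-of-the-envelope arithmetic; the write amplification values are assumptions for illustration, not measurements of your workload:

# Endurance budget for 40 x 1.8 TB PM1643 at 1 DWPD with 3-fold replication.
drives = 40
capacity_tb = 1.8        # per drive
dwpd = 1.0               # rated drive writes per day
replication = 3

raw_tb_per_day = drives * capacity_tb * dwpd       # 72 TB/day of device writes
after_replication = raw_tb_per_day / replication   # 24 TB/day

# Bluestore's own write amplification (WAL, DB compaction, small-write overhead)
# varies a lot with block size; 2-10x is a plausible illustrative range.
for waf in (2, 5, 10):
    print(f"WAF {waf:>2}x: ~{after_replication / waf:.1f} TB/day of client writes "
          f"within rated endurance")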

In practice we've been quite happy with Samsung drives that have often far exceeded their warranty endurance, but that's not something I would like to rely on when providing a commercial service.

Cheers,

Erik




--
Erik Lindahl <erik.lindahl@xxxxxxxxx>
On 2 Jan 2023 at 10:25 +0100, hosseinz8050@xxxxxxxxx <hosseinz8050@xxxxxxxxx> wrote:
> Hi Experts,
> I am trying to find out whether there is a significant, achievable write performance improvement from separating the WAL/DB in a Ceph cluster where all OSDs are SSDs. I have a cluster of 40 SSDs (Samsung PM1643 1.8 TB enterprise drives), spread over 10 storage nodes with 4 OSDs each. Can I get better write IOPS and throughput if I add one NVMe device per node and move the WAL/DB onto it? Would this separation give a meaningful performance improvement or not?
> My Ceph cluster is the block storage back-end of OpenStack Cinder in a public cloud service.
>
> Thanks in advance.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx