Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

On 16/09/2020 07:26, Danni Setiawan wrote:
Hi all,

I'm trying to find the performance penalty for HDD OSDs when the WAL/DB is on a faster device (SSD/NVMe) versus on the same HDD, for different workloads (RBD, RGW with the index bucket in an SSD pool, and CephFS with metadata in an SSD pool). I want to know whether giving up a disk slot for a WAL/DB device is worth it versus adding more OSDs.

Unfortunately, I cannot find benchmarks for these kinds of workloads. Has anyone ever done such a benchmark?

For everything except CephFS, fio looks like the best tool for benchmarking. It can benchmark Ceph at all levels: rados, rbd, and http/S3. Moreover, it has excellent configuration options and detailed metrics, and it can run multi-server workloads (one fio client driving many fio servers). fio itself can sustain roughly 15M IOPS per fio server (with the null engine), and it scales horizontally.
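
As a minimal sketch of the rbd case, a fio job file could look something like the one below. The pool name "rbd", image "bench-image", client "admin", and the runtime/queue-depth values are placeholders I've chosen for illustration, not taken from this thread; the RBD image must already exist.

  [global]
  ; use fio's built-in librbd engine
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=bench-image
  ; don't invalidate the image's cache between jobs
  invalidate=0
  time_based
  runtime=300
  group_reporting

  [4k-randwrite]
  rw=randwrite
  bs=4k
  iodepth=32

For the multi-server mode, start "fio --server" on each load-generator host and then drive them all from a single controller, e.g. "fio --client=hostA --client=hostB rbd-bench.fio" (the host names and job file name here are again just placeholders). Each fio server runs the same job file and the client aggregates the results.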
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


