Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

Does fio handle S3 objects spread across many buckets well? I think bucket listing performance was maybe missing too, but it's been a while since I looked at fio's S3 support, so maybe they have those use cases covered now. A while back I wrote a Go-based benchmark called hsbench, based on the wasabi-tech benchmark, that tries to cover some of those cases, but I haven't touched it in a while:


https://github.com/markhpc/hsbench
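
If it helps as a starting point, an invocation looks roughly like this (the endpoint and credentials are placeholders, and the flags below are from memory, so check the README in the repo for the real spellings and the -m mode string):

    ./hsbench -a ACCESS_KEY -s SECRET_KEY -u http://rgw.example.com:7480 \
              -b 100 -z 4K -t 16 -d 60

The idea is to spread objects across many buckets (-b) and run each phase for a fixed duration (-d) so the put/get/list/delete numbers are comparable.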


FWIW, fio can be used for CephFS as well, and it works reasonably well if you give it a long enough run time and only expect hero-run scenarios from it. For metadata-intensive workloads you'll need to use mdtest or smallfile. At this point I mostly just use the io500 suite, which includes both ior for hero runs and mdtest for metadata (but you need MPI to coordinate it across multiple nodes).
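
For reference, a hero-run style fio job against a CephFS mount is just a plain filesystem job, something along these lines (the path, sizes, and runtime are only an example):

    # hero-run style sequential write against a CephFS mount
    [cephfs-seq-write]
    # any directory on the CephFS mount (kernel or fuse client)
    directory=/mnt/cephfs/fio
    ioengine=libaio
    direct=1
    rw=write
    bs=4M
    size=16G
    numjobs=4
    iodepth=16
    # long, time-based run so cache warmup doesn't dominate the numbers
    time_based=1
    runtime=600
    group_reporting=1

The main things are the long runtime and time_based run; mdtest is still the tool to reach for if you care about creates/stats/deletes rather than streaming bandwidth.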


Mark


On 9/17/20 3:35 AM, George Shuklin wrote:
On 16/09/2020 07:26, Danni Setiawan wrote:
Hi all,

I'm trying to find the performance penalty for HDD OSDs when the WAL/DB is on a faster device (SSD/NVMe) vs the WAL/DB on the same device (HDD), for different workloads (RBD, RGW with the bucket index in an SSD pool, and CephFS with metadata in an SSD pool). I want to know whether giving up a disk slot for a WAL/DB device is worth it versus adding more OSDs.

Unfortunately I cannot find benchmarks for these kinds of workloads. Has anyone ever done this benchmark?

For everything except CephFS, fio looks like the best tool for benchmarking. It can benchmark Ceph at all levels: rados, rbd, and http/S3. Moreover, it has excellent configuration options and detailed metrics, and it can run multi-server workloads (one fio client driving many fio servers to do the benchmarking). fio itself can push about 15M IOPS per fio server (null engine), and it scales horizontally.
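
For illustration, the multi-host mode only needs one fio process in server mode per load generator and a single client invocation that fans a job file out to all of them (the host names and job file name are just an example):

    # on every load-generator node
    fio --server

    # on the coordinating node
    fio --client=node1 --client=node2 --client=node3 ceph.fio

where ceph.fio can use ioengine=rbd (with pool= and rbdname=) to hit RBD directly, ioengine=rados for raw RADOS, or the http engine for S3 through RGW.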
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
