Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

On 17/09/2020 17:37, Mark Nelson wrote:
Does fio handle S3 objects spread across many buckets well? I think bucket listing performance was missing too, but it's been a while since I looked at fio's S3 support. Maybe they have those use cases covered now. A while back I wrote a Go-based benchmark called hsbench, modeled on the wasabi-tech benchmark, that tries to cover some of those cases, but I haven't touched it in a while:


https://github.com/markhpc/hsbench

The way to spread load across many buckets is to run a farm of fio servers managed by a single client: just give each server a different bucket to torture in its job file. The iodepth=1 restriction of the http ioengine actually encourages this approach.
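For anyone who wants to reproduce this, a minimal sketch follows. The endpoint, credentials, host and bucket names are placeholders (not values from this thread), and the exact option set depends on your fio version:

    # bucket1.fio -- one such job file per fio server
    [global]
    ioengine=http
    http_mode=s3
    # placeholder RGW endpoint and credentials
    http_host=rgw.example.com
    http_s3_keyid=ACCESS_KEY
    http_s3_key=SECRET_KEY
    https=off
    # the http engine is synchronous, hence iodepth=1
    iodepth=1
    rw=randwrite
    bs=4m
    size=1g

    [bucket1]
    # path-style addressing: the bucket is the first path element
    filename=/bucket1/fio-object

Then start 'fio --server' on each load-generating host and drive them all from one client, giving each server its own job file (and thus its own bucket):

    fio --client=host1 bucket1.fio --client=host2 bucket2.fio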



FWIW fio can be used for CephFS as well, and it works reasonably well if you give it a long enough run time and only expect hero-run scenarios from it. For metadata-intensive workloads you'll need to use mdtest or smallfile. At this point I mostly just use the io500 suite, which includes both ior for hero runs and mdtest for metadata (but you need MPI to coordinate it across multiple nodes).
Yep, I was talking about metadata-intensive workloads earlier. Romping around within one or two files is not a true filesystem-specific benchmark.
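For reference, a metadata-heavy run with mdtest might look something like the sketch below; the process count, file count and mount point are illustrative, and it assumes mdtest is built against your MPI and CephFS is mounted at /mnt/cephfs:

    mpirun -np 16 mdtest -i 3 -n 1000 -F -u -d /mnt/cephfs/mdtest

That creates, stats and removes 1000 files per process (16000 per iteration) in per-task directories, which hammers the MDS rather than the OSD data path.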





