Re: ceph-osd performance on ram disk

Yeah, of course... but RBD is primarily used for KVM VMs, so the results measured from inside a VM are what real clients actually see. So they do mean something... :)
I know. I tested fio itself before testing Ceph with fio. With the null ioengine, fio can handle up to 14M IOPS (on my dusty lab R220). Against null_blk that drops to 2.4-2.8M IOPS, and on brd it drops to a sad 700k IOPS.

BTW, never run synthetic high-performance benchmarks on KVM. My old server with the 'make Linux fast again' fixes (CPU vulnerability mitigations disabled) completes one I/O request in 3.4us; in a KVM VM the same request takes 24us. Some guy reported about 8.5us on VMware. That's all on a purely software stack, without any actual hypervisor I/O.
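For anyone who wants to reproduce that kind of baseline, jobs along these lines are what I mean (a minimal sketch: the exact parameters here are illustrative, not my original runs, and the multi-million IOPS figures need more jobs and deeper queues than iodepth=1):

    # fio's own overhead only, no block device involved (the 'null' ioengine):
    fio --name=null-baseline --ioengine=null --rw=randread --bs=4k \
        --size=1G --iodepth=1 --time_based --runtime=30

    # against the null_blk kernel device:
    modprobe null_blk
    fio --name=nullblk --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --filename=/dev/nullb0 --iodepth=1 --time_based --runtime=30

    # against a brd ram disk (rd_size is in KiB, so this is 1 GiB):
    modprobe brd rd_size=1048576
    fio --name=brd --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --filename=/dev/ram0 --iodepth=1 --time_based --runtime=30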
24us sounds like a small number, but if your synthetic benchmark is doing 200k IOPS at iodepth 1, that's only 5us per request. You can't make 200k IOPS on a VM where a single syscall takes 24us.
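Spelling the arithmetic out (a back-of-envelope sketch using the latencies quoted above):

    iodepth=1 ceiling = 1 s / per-request latency
    bare metal: 1,000,000 us / 3.4 us ~ 294k IOPS
    KVM guest:  1,000,000 us / 24 us  ~  42k IOPS

So a single queue in the VM tops out around 42k IOPS, nowhere near 200k.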
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


