On 03/06/2013 07:28 AM, Wolfgang Hennerbichler wrote:
Hi, I've set up Ceph (latest bobtail) across 2 machines with 2 OSDs each. Then I started a VM backed by LVM, with some version of Windows in it, and ran the crappy ATTO Disk Benchmark to get performance numbers (my customer wants me to do it like that, sorry). With 4MB blocks being written and read, the tool claims this setup writes at 6.5 MB/s and reads at 182 MB/s.

Then I did the same with Ceph: same virtual machine, same everything (nothing else is running in this testing environment), except for this in libvirt:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rd/korfu:rbd_cache=1'>
        <host name='rd-clusternode21' port='6789'/>
        <host name='rd-clusternode22' port='6789'/>
      </source>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>

Write speed is way better than LVM (probably because of caching): 111 MB/s. But read speed is really bad: 37 MB/s (vs. 182 MB/s with LVM). The virtual machine also can no longer keep time; its clock drifts by several minutes.

The Ceph setup is pretty basic. Can anybody give me some hints on which knobs to turn? I'd rather trade some write speed for better read speed.
If you are doing sequential reads, you may benefit from increasing the read_ahead_kb value for each device in /sys/block/<device>/queue on the OSD hosts.
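Something like this on each OSD host (just a sketch; sdb is a placeholder for your actual OSD data disk, and 4096 KB is a guess sized to match the benchmark's 4MB blocks):

    # show the current readahead for the device, in KB (default is usually 128)
    cat /sys/block/sdb/queue/read_ahead_kb
    # raise it to 4 MB
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb

Note this setting does not persist across reboots, so if it helps you'd want to apply it via a udev rule or an init script.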
Wolfgang