Setting read_ahead_kb should help in the case of a sequential workload, but if you are saying it helps your random workload as well, try setting it both in the VM and on the OSD side and see whether it makes any difference.

Thanks & Regards
Somnath
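A minimal sketch of checking and bumping the value on both sides; the device names and the 4096 KB figure are only placeholders, not values recommended anywhere in this thread:

    # inside the VM -- vda is a placeholder for the guest's virtio disk
    cat /sys/block/vda/queue/read_ahead_kb
    echo 4096 > /sys/block/vda/queue/read_ahead_kb

    # on each OSD host -- sdb is a placeholder for an OSD data disk
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb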
From: Tuomas Juntunen [mailto:tuomas.juntunen@xxxxxxxxxxxxxxx]

Hi,

This is something I was thinking too, but it doesn't take the problem away. Can you share your setup and how many VMs you are running? That would give us a starting point for sizing our setup.

Thanks
Br,
Tuomas
From: Stephen Mercier [mailto:stephen.mercier@xxxxxxxxxxxx]

I ran into the same problem. What we did, and have been using since, is increase the read-ahead buffer in the VMs to 16 MB (the sweet spot we settled on after testing). This isn't a solution for all scenarios, but for our uses it was enough to get performance in line with expectations. In Ubuntu, we added the following udev config to facilitate this:

    root@ubuntu:/lib/udev/rules.d# vi /etc/udev/rules.d/99-virtio.rules
    SUBSYSTEM=="block", ATTR{queue/rotational}=="1", ACTION="" KERNEL=="vd[a-z]", ATTR{bdi/read_ahead_kb}="16384", ATTR{queue/read_ahead_kb}="16384", ATTR{queue/scheduler}="deadline"

Cheers,
--
Stephen Mercier
Senior Systems Architect
Attainia, Inc.
Phone: 866-288-2464 ext. 727
Email: stephen.mercier@xxxxxxxxxxxx
Web: www.attainia.com
Capital equipment lifecycle planning & budgeting solutions for healthcare
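The ACTION match in the rule above appears to have been truncated in the archive. A cleaned-up sketch of the same rule, assuming the usual ACTION=="add|change" match and keeping the 16384 KB read-ahead and deadline scheduler from the message, might look like this:

    # /etc/udev/rules.d/99-virtio.rules -- sketch; ACTION=="add|change" is an assumption,
    # the read-ahead value and scheduler come from the message above
    SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/rotational}=="1", ATTR{bdi/read_ahead_kb}="16384", ATTR{queue/read_ahead_kb}="16384", ATTR{queue/scheduler}="deadline"

After editing the rule, something like "udevadm control --reload-rules && udevadm trigger --subsystem-match=block" (or a reboot) should apply it to existing devices.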
On Jun 30, 2015, at 10:18 AM, Tuomas Juntunen wrote:

Hi,

It's probably not hitting the disks, but that really doesn't matter. The point is that we have very responsive VMs while writing, and that is what the users will see. The IOPS we get with sequential reads are good, but random reads are way too low. Is using SSDs as OSDs the only way to get them up, or is there some tunable that would improve them? I would assume Linux caches reads in memory and serves them from there, but at least for now we don't see that.

Br,
Tuomas
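One rough way to check whether the guest page cache is serving these reads is to run the same buffered random-read job twice and compare; this is only a sketch, and the file path, size, and job parameters are placeholders:

    # inside the VM -- buffered I/O (direct=0) so the page cache can participate
    sync && echo 3 > /proc/sys/vm/drop_caches     # drop the guest page cache first
    fio --name=cold --filename=/root/testfile --size=4G --rw=randread --bs=4k --direct=0 --runtime=60 --time_based
    fio --name=warm --filename=/root/testfile --size=4G --rw=randread --bs=4k --direct=0 --runtime=60 --time_based
    # if the working set fits in guest RAM, the second run should show far higher IOPS
    free -m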
From: Somnath Roy [mailto:Somnath.Roy@xxxxxxxxxxx]

Break it down: try fio with the rbd engine (fio-rbd) to see what performance you are getting. But I am really surprised you are getting > 100k IOPS for writes; did you check that it is hitting the disks?

Thanks & Regards
Somnath
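A minimal sketch of such a test, run directly against an RBD image from a client node, bypassing the VM entirely; the pool, image, and client names are placeholders and the job parameters are only examples:

    # requires fio built with rbd support; pool/image/client names are placeholders
    fio --name=rbd-4k-randread --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based --group_reporting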
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Tuomas Juntunen

Hi,

I have been trying to figure out why our 4k random reads in VMs are so bad. I am using fio to test this:

Write: 170k IOPS
Random write: 109k IOPS
Read: 64k IOPS
Random read: 1k IOPS

Our setup is:
- 3 nodes with 36 OSDs and 18 SSDs (one SSD for every two OSDs); each node has 64 GB of memory and 2 x 6-core CPUs
- 4 monitors running on other servers
- 40 Gbit InfiniBand with IPoIB
- OpenStack, with qemu-kvm for the virtual machines

Any help would be appreciated. Thank you in advance.

Br,
Tuomas
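For reference, a 4k random-read job of the kind described above, run inside a VM against its virtio disk, might look roughly like this; the device name and job parameters are only examples and would have to match the actual test:

    # inside the VM -- /dev/vdb is a placeholder for the virtio disk under test
    fio --name=vm-4k-randread --filename=/dev/vdb --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
        --direct=1 --runtime=60 --time_based --group_reporting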
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com