I second that comment!
Boy, I was chasing ghosts until very recently regarding disk configuration (physical RAID).
I had 4 x 1 TB SAS disks configured as RAID5 on a Dell PERC 6, and thought that would be good enough.
My network throughput would not go over 280 Mbps and I was blaming Linux.
Then I finally decided to test disk speed with:
dd if=/dev/zero of=/localvolume bs=512k count=17000
Much to my surprise, it reported something like 35-40 MB/s.
I then changed my layout to RAID 10 and the same test jumped to around 400 MB/s. Average network throughput is now around 600 Mbps, with spikes of 750 Mbps.
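One caveat about that dd test: writing zeros to a fresh file can report page-cache speed rather than disk speed unless the data is forced out. A variant that includes the flush in the timing (the /localvolume/ddtest path here is just a placeholder for a file on the volume under test):

dd if=/dev/zero of=/localvolume/ddtest bs=512k count=17000 conv=fdatasync
# or bypass the page cache entirely:
dd if=/dev/zero of=/localvolume/ddtest bs=512k count=17000 oflag=direct
rm -f /localvolume/ddtest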
Ah... You might want to make sure your network is set to use jumbo frames.
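If you try jumbo frames, a rough sketch of checking and setting the MTU on Linux (eth0 is a placeholder; every NIC and switch port along the path must agree, and the change does not persist across reboots):

ip link show eth0                # check the current MTU
ip link set dev eth0 mtu 9000    # raise it to 9000
# verify end to end: 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000
ping -M do -s 8972 <remote-host>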
Cheers,
Carlos
On Thu, Apr 3, 2014 at 3:52 PM, Josh Boon <gluster@xxxxxxxxxxxx> wrote:
Hey David,

Can you provide the qemu command to run each of them? What does your gluster/disk/network layout look like? Depending on your disk and network setup you may be hitting a bottleneck there that would prevent gfapi from performing at capacity. Lots of options here that could impact things.
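For context, the two invocations being compared would look roughly like this (a hypothetical sketch; the memory, CPU, and image settings below are assumptions, not David's actual command):

# VM booted from the FUSE mount (volume mounted at /var/lib/libvirt/images)
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=/var/lib/libvirt/images/tester1.img,if=virtio
# VM booted via libgfapi (qemu speaks to the gluster volume directly)
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=gluster://gfs-00/gfsvol/tester1.img,if=virtio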
From: "Dave Christianson" <davidchristianson3@xxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Thursday, April 3, 2014 6:05:51 AM
Subject: No performance difference using libgfapi?

Good Morning,

In my earlier experience invoking a VM using qemu/libgfapi, I reported that it was noticeably faster than the same VM invoked from libvirt using a FUSE mount; however, this was erroneous, as the qemu/libgfapi-invoked image was created with 2x the RAM and CPUs...

So, invoking the image with both methods using consistent specs of 2 GB RAM and 2 CPUs, I attempted to check drive performance using the following commands:

(For the regular FUSE mount I have the gluster volume mounted at /var/lib/libvirt/images.)
(For libgfapi I specify -disk file=gluster://gfs-00/gfsvol/tester1/img.)

Using libvirt/FUSE mount:

[root@tester1 ~]# hdparm -Tt /dev/vda1
/dev/vda1:
 Timing cached reads:   11346 MB in 2.00 seconds = 5681.63 MB/sec
 Timing buffered disk reads:  36 MB in 3.05 seconds = 11.80 MB/sec
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
41943040 bytes (42 MB) copied, 0.0646241 s, 649 MB/sec

Using qemu/libgfapi:

[root@tester1 ~]# hdparm -Tt /dev/vda1
/dev/vda1:
 Timing cached reads:   11998 MB in 2.00 seconds = 6008.57 MB/sec
 Timing buffered disk reads:  36 MB in 3.03 seconds = 11.89 MB/sec
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
41943040 bytes (42 MB) copied, 0.0621412 s, 675 MB/sec

Should I be seeing a bigger difference, or am I doing something wrong? I'm also curious whether people have found that the performance difference is greater as the size of the gluster cluster scales up.

Thanks,
David
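Worth noting: a 42 MB dd into /tmp completes entirely in the guest's page cache, which is likely why both runs report ~650 MB/s regardless of the storage path underneath. A variant that would actually exercise the virtual disk (same file, just a larger count and a forced flush included in the timing):

dd if=/dev/zero of=/tmp/output bs=8k count=128k conv=fdatasync
rm -f /tmp/output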
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users