Re: No performance difference using libgfapi?


 



A possible reason you do not see a performance difference is the buffer cache (page cache) on the client. That cache is not available to libgfapi, but it can be used when you go through a FUSE mount.

Take a look at the mount option fopen-keep-cache. If you have the source, you can find that option in fuse-bridge.c. By default it is enabled, and Gluster will invalidate cache entries (through the FUSE interfaces) when it detects changes via stat().

You could try disabling that mount option and see what difference it makes.
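
For example, something along these lines, using the server and volume names from your gluster:// URL (a rough sketch only; whether your mount.glusterfs version accepts fopen-keep-cache as a mount option, and the exact on/off spelling, are assumptions you should verify for your GlusterFS release):

# Remount the volume with fopen-keep-cache disabled, then rerun the same tests.
umount /var/lib/libvirt/images
mount -t glusterfs -o fopen-keep-cache=off gfs-00:/gfsvol /var/lib/libvirt/images

# Roughly equivalent direct client invocation (flag value format is an assumption):
# glusterfs --fopen-keep-cache=off --volfile-server=gfs-00 --volfile-id=gfsvol /var/lib/libvirt/images

If the FUSE numbers drop toward the libgfapi numbers after that, the client-side cache was most likely what you were measuring.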

----- Original Message -----
From: "Humble Devassy Chirammal" <humble.devassy@xxxxxxxxx>
To: "Dave Christianson" <davidchristianson3@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Friday, April 4, 2014 3:05:20 AM
Subject: Re:  No performance difference using libgfapi?

Hi David, 

Regarding hdparm: 

'hdparm' is meant to be used against SATA/IDE devices. 

--snip-- 
hdparm - get/set SATA/IDE device parameters 

hdparm provides a command line interface to various kernel interfaces supported by the Linux SATA/PATA/SAS "libata" subsystem and the older IDE driver subsystem. Many newer (2008 and later) USB drive enclosures now also support "SAT" (SCSI-ATA Command Translation) and therefore may also work with hdparm, e.g. recent WD "Passport" models and recent NexStar-3 enclosures. Some options may work correctly only with the latest kernels. 

--/snip-- 

Here in your guest it is a 'virtio' disk (/dev/vd{a,b,c...}), which sits on the 'virtio' bus; virtio-blk is not ATA, so this looks like an incorrect use of 'hdparm'. 
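
If you want to double-check from inside the guest what kind of disk you are dealing with, something like this should do (assuming a typical Linux guest; exactly what the TRAN column shows depends on your util-linux version):

# The sysfs symlink target contains "virtio" for a virtio-blk disk.
ls -l /sys/block/vda
# TRAN (transport) is "sata"/"ata" for real ATA disks; it is empty or "virtio" for virtio-blk.
lsblk -d -o NAME,TRAN,MODEL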

Also, most virtualization software now lets you use "virtio-scsi" (the disk shown inside the guest will be sd{a,b,...}), where most of the feature set is respected from the SCSI protocol point of view; you may want to look into that as well. 
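
For illustration, a rough qemu command-line sketch of attaching the same gluster-backed image through virtio-scsi instead of virtio-blk (the device/drive IDs, memory/CPU values and the rest of the command line are assumptions; adapt to your actual invocation and qemu version):

qemu-system-x86_64 \
    -m 2048 -smp 2 \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=gluster://gfs-00/gfsvol/tester1/img,if=none,id=drive0 \
    -device scsi-hd,drive=drive0,bus=scsi0.0

Inside the guest the disk then shows up as /dev/sda rather than /dev/vda.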

--Humble 





On Thu, Apr 3, 2014 at 3:35 PM, Dave Christianson <davidchristianson3@xxxxxxxxx> wrote: 



Good Morning, 

In my earlier experience invoking a VM using qemu/libgfapi, I reported that it was noticeably faster than the same VM invoked from libvirt using a FUSE mount; however, this was erroneous, as the qemu/libgfapi-invoked image had been created with 2x the RAM and CPUs... 

So, invoking the image with both methods using consistent specs of 2 GB RAM and 2 CPUs, I attempted to check drive performance with the following commands: 

(For the regular FUSE mount I have the gluster volume mounted at /var/lib/libvirt/images.) 

(For libgfapi I specify -drive file=gluster://gfs-00/gfsvol/tester1/img.) 

Using libvirt/FUSE mount: 
[root@tester1 ~]# hdparm -Tt /dev/vda1 
/dev/vda1: 
Timing cached reads: 11346 MB in 2.00 seconds = 5681.63 MB/sec 
Timing buffered disk reads: 36 MB in 3.05 seconds = 11.80 MB/sec 
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output 
10240+0 records in 
10240+0 records out 
41943040 bytes (42MB) copied, 0.0646241 s, 649 MB/sec 

Using qemu/libgfapi: 
[root@tester1 ~]# hdparm -Tt /dev/vda1 
/dev/vda1: 
Timing cached reads: 11998 MB in 2.00 seconds = 6008.57 MB/sec 
Timing buffered disk reads: 36 MB in 3.03 seconds = 11.89 MB/sec 
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output 
10240+0 records in 
10240+0 records out 
41943040 bytes (42MB) copied, 0.0621412 s, 675 MB/sec 

Should I be seeing a bigger difference, or am I doing something wrong? 

I'm also curious whether people have found that the performance difference is greater as the size of the gluster cluster scales up. 

Thanks, 
David 




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users




