On Fri, May 22, 2015 at 06:50:40PM +0800, Paul Guo wrote:
Hello,
I wrote two simple single-process sequential read test cases to compare libgfapi and
fuse. The logic looks like this:
char buf[32768];
ssize_t cnt;
long long total = 0;

while (1) {
        cnt = read(fd, buf, sizeof(buf));
        if (cnt == 0)
                break;                  /* EOF */
        else if (cnt > 0)
                total += cnt;
        /* No "cnt < 0" was seen during testing. */
}
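For reference, a minimal sketch of what the libgfapi side of the same loop could
look like (the volume name "gs", the server host and the file path are placeholders,
and the header path may differ between installs; most error handling is omitted):

/* Build with: gcc gfapi_read.c -lgfapi */
#include <glusterfs/api/glfs.h>   /* may be <api/glfs.h> on some installs */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
        glfs_t *fs = glfs_new("gs");                      /* volume name (placeholder) */
        glfs_set_volfile_server(fs, "tcp", "server-host", 24007);
        if (glfs_init(fs) != 0) {
                fprintf(stderr, "glfs_init failed\n");
                return 1;
        }

        glfs_fd_t *glfd = glfs_open(fs, "/largefile", O_RDONLY);
        char buf[32768];
        ssize_t cnt;
        long long total = 0;

        while ((cnt = glfs_read(glfd, buf, sizeof(buf), 0)) > 0)
                total += cnt;

        printf("read %lld bytes\n", total);
        glfs_close(glfd);
        glfs_fini(fs);
        return 0;
}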
Following are the times needed to finish reading a large file:
                fuse    libgfapi
direct io:      40s     51s
non-direct io:  40s     47s
The version is 3.6.3 on CentOS 6.5. The results show that libgfapi is
clearly slower than the FUSE interface, although libgfapi used far fewer CPU
cycles during testing. Before each test, all kernel page cache, dentry, and
inode caches were dropped, and glusterd and gluster were stopped and
restarted (to clear the Gluster-side caches).
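For what it's worth, a small sketch of that cache-dropping step, equivalent to
running "sync; echo 3 > /proc/sys/vm/drop_caches" as root (restarting
glusterd/gluster to clear the Gluster-side caches is a separate step):

#include <stdio.h>
#include <unistd.h>

int drop_kernel_caches(void)
{
        sync();                                   /* flush dirty pages first */
        FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
        if (!f)
                return -1;                        /* needs root */
        fputs("3\n", f);                          /* 3 = pagecache + dentries + inodes */
        fclose(f);
        return 0;
}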
I tested direct I/O because I suspected that FUSE kernel readahead
helped more than the read-optimization features in Gluster. I searched
a lot but did not find much about comparisons between FUSE and
libgfapi. Has anyone seen this before, and does anyone know why?
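For context, a sketch of how the direct-I/O variant of the open might look
(O_DIRECT bypasses the kernel page cache and readahead, and requires a
block-aligned buffer, which the plain stack buffer above does not guarantee;
the gfapi test would presumably pass the same flag to glfs_open()):

#define _GNU_SOURCE                /* O_DIRECT is a Linux/GNU extension */
#include <fcntl.h>
#include <stdlib.h>

int open_direct(const char *path, void **bufp, size_t bufsz)
{
        if (posix_memalign(bufp, 4096, bufsz) != 0)   /* 4096-byte aligned buffer */
                return -1;
        return open(path, O_RDONLY | O_DIRECT);
}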
Does your testing include the mount/unmount and/or libgfapi glfs_init()
parts? Maybe you can share your test programs so that others can try and
check them too?
Yes, but the mount and gfapi init steps really do not take much time or many
CPU cycles (~100 ms in my environment).
https://github.com/axboe/fio supports Gluster natively too. That tool
has been developed to test and compare I/O performance results. Does it
give you similar differences?
No surprise when comparing with fio either:
# ../fio-master/fio -directory=/mnt -direct=0 -bs=32k -rw=read
-ioengine=sync -numjobs=1 -group_reporting -size=1G -name=mysync
-runtime=30 --exitall
READ: io=1024.0MB, aggrb=40054KB/s, minb=40054KB/s, maxb=40054KB/s,
mint=26179msec, maxt=26179msec
# ../fio-master/fio -direct=0 -bs=32k -rw=read -ioengine=gfapi
-numjobs=1 -group_reporting -size=1G -name=mysync -runtime=30 --exitall
-volume=gs -brick=$VOL_IP
READ: io=815008KB, aggrb=27164KB/s, minb=27164KB/s, maxb=27164KB/s,
mint=30003msec, maxt=30003msec
Thanks.