[Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi

Hi,

I am running a performance comparison between FUSE and libgfapi. I have a single node; the client and server run on the same machine, with an NVMe SSD as the storage device.

My volume info:

[root@sys04 ~]# gluster vol info
Volume Name: vol1
Type: Distribute
Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.16.71.19:/mnt_nvme/brick1
Options Reconfigured:
performance.cache-size: 0
performance.write-behind: off
performance.read-ahead: off
performance.io-cache: off
performance.strict-o-direct: on
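
(For reference, these options were applied with the standard gluster CLI, roughly as follows; "vol1" is the volume shown above:)

gluster volume set vol1 performance.write-behind off
gluster volume set vol1 performance.read-ahead off
gluster volume set vol1 performance.io-cache off
gluster volume set vol1 performance.cache-size 0
gluster volume set vol1 performance.strict-o-direct on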


fio job file:

[global]
direct=1
runtime=20
time_based
ioengine=gfapi
iodepth=1
volume=vol1
brick=172.16.71.19
rw=read
size=128g
bs=32k
group_reporting
numjobs=1
filename=128g.bar
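
(For reference, the job is run while watching the NVMe device from a second terminal, roughly like this; the job file name here is just an example:)

fio seqread.fio    # the job file above, saved as seqread.fio
iostat -xm 1       # in another terminal: extended per-device stats, 1-second interval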

While running the sequential read test, I do not see any data transfer on the device with the iostat tool. It looks like the gfapi engine is reading from a cache, because I am reading the same file with different block sizes.

But I have disabled io-cache on my volume. Can someone help me figure out where fio is reading the data from?
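
In case it is relevant: since the client and the brick are on the same node, one way to rule out the brick-side kernel page cache would be to drop it between runs and re-check (standard Linux commands, run as root on the server node):

free -m                                   # note the buff/cache figure before the run
sync; echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
free -m                                   # buff/cache should now be much smaller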


Sateesh

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
