Hi,
How was the profile data collected? Was it the cumulative profile output
or the incremental (per-interval) output?
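For reference, the two differ as follows, assuming the standard gluster
CLI (the volume name below is a placeholder):

    # counters accumulate from the moment profiling is started
    gluster volume profile testvol start
    # 'info' prints a "Cumulative Stats" section (everything since start)
    # and an "Interval N Stats" section (only since the previous 'info' call)
    gluster volume profile testvol info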
How did the initial data get written to the 20 bricks before the read
workload was started?
What I suspect here is that you are collecting cumulative output, which
would still include the initial writes that were performed to populate
the volume.
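If that is the case, one way to verify it (again with a placeholder
volume name):

    # call 'info' once to close out the current interval
    gluster volume profile testvol info > /dev/null
    # ... run the read-only benchmark ...
    # then read only the "Interval Stats" section, which covers
    # just the benchmark window
    gluster volume profile testvol info

If the interval write counters stay at zero across a fresh read-only
run, the writes you saw were the population writes carried in the
cumulative totals.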
From the code, I do not see seeks tracked in io-stats, so it is not
*seek* that is inflating these numbers.
Regards,
Shyam
On 08/08/2016 04:47 PM, Jackie Tung wrote:
Hi,
I'm doing some benchmarking against our trial GlusterFS setup (distributed-replicated, 20 bricks configured as 10 pairs), currently running 3.6.9. The benchmark load involves a large number of concurrent readers that continuously pick a random file and offset to read. No writes are ever issued.
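For concreteness, each reader iteration does roughly the following (a simplified sketch of the access pattern, not our actual harness; the mount point and layout are placeholders, and every file is assumed to be at least 128KB):

    # pick a random file and a random 128KB-aligned offset, read one block
    # note: $RANDOM caps at 32767, so files beyond 4GB would need a wider RNG
    FILES=(/mnt/gluster/data/*)
    F=${FILES[$RANDOM % ${#FILES[@]}]}
    BLOCKS=$(( $(stat -c %s "$F") / 131072 ))
    dd if="$F" of=/dev/null bs=128K skip=$((RANDOM % BLOCKS)) count=1 2>/dev/null

However, I'm seeing the following gluster profiling output: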
Block Size:          32768b+        65536b+       131072b+
No. of Reads:              0              0        6217035
No. of Writes:         87967          25443        4372800
Why are there writes at all? Are seeks being shown here as writes?
Thanks,
Jackie