Hi all,
The documentation[1] says that volume profiling, which is based on the
io-stats translator, can affect system performance while the profile
information is being collected.
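For reference, this is how I enabled and read the profile data (the
volume name "testvol" is a placeholder for my test volume):

```shell
# Start collecting io-stats based profile data for the volume
gluster volume profile testvol start

# Show cumulative and interval statistics per brick
gluster volume profile testvol info

# Stop profiling once measurements are done
gluster volume profile testvol stop
```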
I ran some tests on a replica volume across my two Linux VMware virtual
machines (limited resources), and the results show no difference to me.
The test cases were:
#dd if=/dev/zero of=test bs=4k count=524288
#fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 \
    --rw=read --bs=1m --size=2g --numjobs=4 --runtime=10 \
    --group_reporting --name=test-read
#fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 \
    --rw=write --bs=1m --size=2g --numjobs=4 --runtime=20 \
    --group_reporting --name=test-write
#fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=2g --numjobs=64 --runtime=20 \
    --group_reporting --name=test-rand-read
#fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --size=2g --numjobs=64 --runtime=20 \
    --group_reporting --name=test-rand-write
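The comparison I ran was simply the same fio workload with profiling
off and then on; a sketch of one such pair (volume name and mount path
are placeholders for my setup):

```shell
# Baseline: profiling disabled
gluster volume profile testvol stop
fio --filename=/mnt/testvol/test --iodepth=64 --ioengine=libaio \
    --direct=1 --rw=read --bs=1m --size=2g --numjobs=4 --runtime=10 \
    --group_reporting --name=baseline-read

# Same workload with profiling enabled
gluster volume profile testvol start
fio --filename=/mnt/testvol/test --iodepth=64 --ioengine=libaio \
    --direct=1 --rw=read --bs=1m --size=2g --numjobs=4 --runtime=10 \
    --group_reporting --name=profiled-read
```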
It's said that fio is meant for large files, and I also suspect that my
test infrastructure is too small. My question is: do you have detailed
data on how profiling affects performance?
Furthermore, we want to obtain detailed read/write IOPS and bandwidth
figures for each brick. It seems that only profiling provides the raw
data from which these can be calculated? If I'm wrong, please correct me.
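To illustrate the kind of calculation I mean, here is a small sketch
that derives per-brick bandwidth from profile output. The embedded
SAMPLE text is a simplified, hypothetical excerpt of what
"gluster volume profile <vol> info" prints (Duration / Data Read /
Data Written lines per brick); the real layout varies by version:

```python
import re

# Hypothetical, simplified sample of `gluster volume profile <vol> info`
# output; the actual format depends on the GlusterFS version.
SAMPLE = """\
Brick: server1:/bricks/brick1
-----------------------------
Cumulative Stats:
    Duration: 300 seconds
   Data Read: 1073741824 bytes
Data Written: 536870912 bytes

Brick: server2:/bricks/brick1
-----------------------------
Cumulative Stats:
    Duration: 300 seconds
   Data Read: 2147483648 bytes
Data Written: 1073741824 bytes
"""

def per_brick_bandwidth(text):
    """Return {brick: (read_MBps, write_MBps)} parsed from profile output."""
    stats = {}
    brick = None
    duration = read_bytes = written_bytes = None
    for raw in text.splitlines():
        line = raw.strip()
        m = re.match(r"Brick:\s*(\S+)", line)
        if m:
            # New brick section: reset the counters we are looking for
            brick = m.group(1)
            duration = read_bytes = written_bytes = None
            continue
        m = re.match(r"Duration:\s*(\d+)", line)
        if m:
            duration = int(m.group(1))
        m = re.match(r"Data Read:\s*(\d+)", line)
        if m:
            read_bytes = int(m.group(1))
        m = re.match(r"Data Written:\s*(\d+)", line)
        if m:
            written_bytes = int(m.group(1))
        if brick and None not in (duration, read_bytes, written_bytes):
            mb = 1024 * 1024
            stats[brick] = (read_bytes / duration / mb,
                            written_bytes / duration / mb)
            brick = None
    return stats

if __name__ == "__main__":
    for name, (r, w) in per_brick_bandwidth(SAMPLE).items():
        print(f"{name}: read {r:.1f} MB/s, write {w:.1f} MB/s")
```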
If profiling really affects performance that much, would you consider a
new command such as "gluster volume io [nfs]" to obtain per-brick
read/write fops/data? Or could you help us review such a change?
[1]
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-monitoring_red_hat_storage_workload#sect-Running_the_Volume_Profile_Command
--
Thanks
-Xie
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel