Hi,
I've gone through all the Gluster and NFS-Ganesha documentation, but could not find a reference configuration specific to NFS-Ganesha on Gluster for VM workloads. There is an oVirt profile, which sounds like it optimizes for FUSE/libgfapi, and there are quite a few bugs reported about cache problems leading to corruption. I'm using the settings below, but I'm not satisfied with their performance. According to the volume profile, most of the latency is in WRITE FOPs. Compared with a plain NFS mount and local-disk performance, it is quite slow.
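(For reference, the oVirt-oriented defaults I mean are the "virt" group profile that ships with glusterfs; as far as I understand, it is applied with:

gluster volume set vol1 group virt

and the individual options it sets can be inspected in /var/lib/glusterd/groups/virt on a server node.)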
Does anyone have suggestions on improving overall performance?
Thank you
Regards,
Levin
fio -filename=./testfile.bin -direct=1 -iodepth 16 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=8k -size=1000M -numjobs=30 -runtime=100 -group_reporting -name=mytest
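(A side note on the benchmark itself: psync is a synchronous engine, so as far as I know the iodepth=16 above is effectively ignored and each job only ever has one I/O in flight. An async run for comparison would look something like the following; "mytest-aio" is just a placeholder job name:

fio -filename=./testfile.bin -direct=1 -iodepth=16 -thread -rw=randrw -rwmixread=70 -ioengine=libaio -bs=8k -size=1000M -numjobs=30 -runtime=100 -group_reporting -name=mytest-aio
)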
[In Guest, Virt SCSI]
READ: bw=11.0MiB/s (12.6MB/s), 11.0MiB/s-11.0MiB/s (12.6MB/s-12.6MB/s), io=1198MiB (1257MB), run=100013-100013msec
WRITE: bw=5267KiB/s (5394kB/s), 5267KiB/s-5267KiB/s (5394kB/s-5394kB/s), io=514MiB (539MB), run=100013-100013msec
[NFS4.1 @10GbE]
READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=8694MiB (9116MB), run=100001-100001msec
WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=3728MiB (3909MB), run=100001-100001msec
[Local ZFS sync=always recordsize=128k, compression=on]
READ: bw=585MiB/s (613MB/s), 585MiB/s-585MiB/s (613MB/s-613MB/s), io=20.5GiB (22.0GB), run=35913-35913msec
WRITE: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=9000MiB (9438MB), run=35913-35913msec
Volume Name: vol1
Type: Replicate
Volume ID: dfdb919e-cbf2-4f57-b6f2-1035459ef8fc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: sds-2:/hdpool1/hg2/brick1
Brick2: sds-3:/hdpool1/hg2/brick1
Brick3: arb-1:/arbiter/hg2/brick1 (arbiter)
Options Reconfigured:
cluster.eager-lock: on
features.cache-invalidation-timeout: 15
features.shard: on
features.shard-block-size: 512MB
ganesha.enable: on
features.cache-invalidation: on
performance.io-cache: off
cluster.choose-local: true
performance.low-prio-threads: 32
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
user.cifs: off
client.event-threads: 16
server.event-threads: 16
network.ping-timeout: 20
server.tcp-user-timeout: 20
cluster.lookup-optimize: off
performance.write-behind: off
performance.flush-behind: off
performance.cache-size: 0
performance.io-thread-count: 64
performance.high-prio-threads: 64
performance.normal-prio-threads: 64
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
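For completeness, the Ganesha side is just the export block that the gluster CLI generated; from memory it lives under /run/gluster/shared_storage/nfs-ganesha/exports/ and looks roughly like this (a sketch, not copied verbatim from my setup):

EXPORT {
    Export_Id = 2;
    Path = "/vol1";
    Pseudo = "/vol1";
    Access_Type = RW;
    Squash = "No_root_squash";
    Disable_ACL = true;
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";
        Volume = "vol1";
    }
}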