On 11/25/21 17:53, Seyed Mohammad Fakhraie wrote:
> The output of 'gluster volume info' is as follows:
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 75946a3e-f670-4f58-a61e-c3c61e3d977d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5 x (2 + 1) = 15
> Transport-type: tcp
> Bricks:
> Brick1: storage-node0:/data/brick0/gv0
> Brick2: storage-node1:/data/brick0/gv0
> Brick3: storage-node2:/data/arbit0/gv0 (arbiter)
> Brick4: storage-node3:/data/brick0/gv0
> Brick5: storage-node4:/data/brick0/gv0
> Brick6: storage-node0:/data/arbit0/gv0 (arbiter)
> Brick7: storage-node1:/data/brick1/gv0
> Brick8: storage-node2:/data/brick0/gv0
> Brick9: storage-node3:/data/arbit0/gv0 (arbiter)
> Brick10: storage-node4:/data/brick1/gv0
> Brick11: storage-node0:/data/brick1/gv0
> Brick12: storage-node1:/data/arbit0/gv0 (arbiter)
> Brick13: storage-node2:/data/brick1/gv0
> Brick14: storage-node3:/data/brick1/gv0
> Brick15: storage-node4:/data/arbit0/gv0 (arbiter)
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
To track down an issue like this, I would recommend trying to reproduce it on a very basic replicated volume first. It is important to have all of the bricks on the same host, e.g.:

Volume Name: test0
Type: Replicate
Volume ID: a71f90a1-4136-4c87-bfc5-18b1da477864
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node0:/pool/0 ;; same host
Brick2: node0:/pool/1 ;; same host
Brick3: node0:/pool/2 ;; same host
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Also, limit your 'fio' workload to 'numjobs=1'.

Finally, you forgot to mention the GlusterFS version you are using.

Dmitry
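For reference, a minimal sketch of setting up such a single-host replica-3 test volume and running a reduced fio workload could look like the following. The hostname (node0), brick paths, mount point, and the fio job parameters are illustrative assumptions, not values from the original report:

```shell
# Create three brick directories on one host and a replica-3 test volume.
# 'force' is needed here because the example bricks sit on the root partition.
mkdir -p /pool/0 /pool/1 /pool/2
gluster volume create test0 replica 3 \
    node0:/pool/0 node0:/pool/1 node0:/pool/2 force
gluster volume start test0

# Mount the volume locally via the FUSE client.
mkdir -p /mnt/test0
mount -t glusterfs node0:/test0 /mnt/test0

# Run fio against the mount with a single job, as suggested above.
fio --name=seqwrite --directory=/mnt/test0 \
    --rw=write --bs=1M --size=256M --numjobs=1
```

Keeping all bricks on one host takes the network out of the picture, and numjobs=1 removes concurrency between fio workers, so any remaining misbehavior is easier to attribute.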