Please help me troubleshoot GlusterFS with the following setup:
Distributed volume without replication, sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1: gluster3.qencode.com:/var/storage/brick/gv0
Brick2: encoder-376cac0405f311e884700671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick3: encoder-ee6761c0091c11e891ba0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick4: encoder-ee68b8ea091c11e89c2d0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick5: encoder-ee663700091c11e8b48f0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick6: encoder-efcf113e091c11e899520671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick7: encoder-efcd5a24091c11e8963a0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick8: encoder-099f557e091d11e882f70671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick9: encoder-099bdda4091d11e881090671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick10: encoder-099dca56091d11e8b3410671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick11: encoder-09a1ba4e091d11e8a3c20671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick12: encoder-099a826a091d11e895940671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick13: encoder-0998aa8a091d11e8a8160671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick14: encoder-0b582724091d11e8b3b40671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick15: encoder-0dff527c091d11e896f20671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick16: encoder-0e0d5c14091d11e886cf0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick17: encoder-7f1bf3d4093b11e8a3580671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick18: encoder-7f70378c093b11e885260671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick19: encoder-7f19528c093b11e88f100671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick20: encoder-7f76c048093b11e8a7470671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick21: encoder-7f7fc90e093b11e8a74e0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick22: encoder-7f6bc382093b11e8b8a30671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick23: encoder-7f7b44d8093b11e8906f0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick24: encoder-7f72aa30093b11e89a8e0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick25: encoder-7f7d735c093b11e8b4650671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick26: encoder-7f1a5006093b11e89bcb0671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick27: encoder-95791076093b11e8af170671029ed6b8.qencode.com:/var/storage/brick/gv0
Options Reconfigured:
cluster.min-free-disk: 10%
performance.cache-max-file-size: 1048576
nfs.disable: on
transport.address-family: inet
features.shard: on
performance.client-io-threads: on
Each brick is 15GB in size.
After using the volume for several hours of intensive read/write operations (~300GB written and then deleted), any attempt to write to the volume results in an Input/Output error:
# wget https://speed.hetzner.de/1GB.bin
--2018-02-04 12:02:34--  https://speed.hetzner.de/1GB.bin
Resolving speed.hetzner.de... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: `1GB.bin'

38% [===========================>                ] 403,619,518 27.8M/s   in 15s

Cannot write to `1GB.bin' (Input/output error).
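One thing I am wondering about (this is my own guess, not confirmed by anything in the logs): with features.shard on and cluster.min-free-disk at 10%, a single brick crossing the free-space threshold will refuse new shards, and the distribute layer can surface that to the application as an I/O error even though the volume as a whole still has space. The per-brick numbers are small here:

```shell
# Back-of-the-envelope check using the numbers from the volume above:
# each 15GB brick stops accepting new data once free space falls
# below the cluster.min-free-disk threshold of 10%.
awk 'BEGIN {
    brick_gb = 15; bricks = 27; min_free = 0.10
    printf "total capacity  : %d GB\n", brick_gb * bricks
    printf "per-brick cutoff: %.1f GB free\n", brick_gb * min_free
}'
```

So if deleted files left orphaned shards behind on even one brick, that brick could sit under the 1.5GB cutoff while `df` on the mount still shows plenty of room.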
Nothing is written to glusterd.log, or to any other log under /var/log/glusterfs/, when this error occurs.
Deleting the partially downloaded file works without error.
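For completeness, since glusterd.log only covers the management daemon: client-side errors like this EIO should land in the FUSE mount log, which GlusterFS names after the mount path with slashes turned into dashes. The mount point below is an assumption for illustration, adjust to the actual one:

```shell
# Derive the fuse client log path from the mount point.
# GlusterFS drops the leading '/' and replaces the remaining
# '/' characters with '-' to build the log file name.
mountpoint=/mnt/uploads   # assumed mount point, not from the setup above
logfile="/var/log/glusterfs/$(echo "${mountpoint#/}" | tr / -).log"
echo "$logfile"
# then: grep ' E ' "$logfile" | tail
```

That log is where I would expect the shard or DHT translator to report the failed write, if anywhere.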
Thanks,
Nikita Yeryomin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users