Gluster 3.7.13 with shards won't heal with io-thread-count 16

I was testing this on VMware Workstation with a 3-node v3.7.13 cluster, 3 GB RAM and 2 vCPUs per node:

Volume Name: v1
Type: Replicate
Volume ID: 52451d84-4176-4ec1-96e8-7e60d02a37f5
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.3.71:/gfs/b1/v1
Brick2: 192.168.3.72:/gfs/b1/v1
Brick3: 192.168.3.73:/gfs/b1/v1
Options Reconfigured:
network.ping-timeout: 10
performance.cache-refresh-timeout: 1
cluster.server-quorum-type: server
performance.quick-read: off
performance.stat-prefetch: off
features.shard-block-size: 16MB
features.shard: on
performance.readdir-ahead: on
performance.cache-size: 128MB
performance.write-behind-window-size: 4MB
performance.io-cache: off
performance.write-behind: on
performance.flush-behind: on
performance.io-thread-count: 16
nfs.rpc-auth-allow: 192.168.3.65
cluster.server-quorum-ratio: 51%
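
For anyone who wants to reproduce the test setup, a replica 3 volume like the one above can be created roughly like this (bricks taken from the volume info; I'm sketching the commands from memory, with the option values as shown):

  gluster volume create v1 replica 3 \
    192.168.3.71:/gfs/b1/v1 \
    192.168.3.72:/gfs/b1/v1 \
    192.168.3.73:/gfs/b1/v1
  gluster volume set v1 features.shard on
  gluster volume set v1 features.shard-block-size 16MB
  gluster volume set v1 performance.io-thread-count 16
  gluster volume set all cluster.server-quorum-ratio 51%
  gluster volume start v1

(features.shard has to be on before the VM images are written; it only shards files created after it is enabled.)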

But I have one running in production (9 GB RAM, 6 vCPUs, 3 Gb NIC bond per node) with no errors, though of course with different settings:

performance.cache-size: 1GB
performance.io-thread-count: 32
features.shard-block-size: 64MB
performance.write-behind-window-size: 16MB
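
Those are just normal volume set calls, e.g. (assuming the production volume is also called v1):

  gluster volume set v1 performance.cache-size 1GB
  gluster volume set v1 performance.io-thread-count 32
  gluster volume set v1 features.shard-block-size 64MB
  gluster volume set v1 performance.write-behind-window-size 16MB

Note that changing shard-block-size only affects files created afterwards; existing files keep their original shard size.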

I figured out that performance.io-thread-count: 16 was the problem; once I set it to 32, like on my production volume, the heal completed right away.
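
Concretely, something like this is all it took on the test cluster (standard gluster CLI; heal info lists the entries still pending):

  gluster volume set v1 performance.io-thread-count 32
  gluster volume heal v1          # kick off an index heal
  gluster volume heal v1 info     # watch the pending-heal entries drain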

Anything more I need to keep in mind? lol, it's really freaking crazy to run this right away without more testing...
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
