Dear Team,
When master1 fails, the GlusterFS home volume on master2 becomes a read-only file system.
If we manually shut down master2 instead, there is no impact on the file system and all I/O operations complete without any problem.
Can you please provide some guidance on isolating the problem?
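For reference, this is roughly how we observe the failure from master2 (a sketch of our test; the /home fuse mount point is an assumption based on our setup). While master1 is powered off:

# touch /home/quorum-test
touch: cannot touch '/home/quorum-test': Read-only file system

Once master1 is back online, the same write succeeds.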
# gluster peer status
Number of Peers: 2
Hostname: master1-ib.dbt.au
Uuid: a5608d66-a3c6-450e-a239-108668083ff2
State: Peer in Cluster (Connected)
Hostname: compute01-ib.dbt.au
Uuid: d2c47fc2-f673-4790-b368-d214a58c59f4
State: Peer in Cluster (Connected)
# gluster vol info home
Volume Name: home
Type: Replicate
Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
network.remote-dio: enable
cluster.quorum-type: auto
nfs.disable: on
performance.readdir-ahead: on
cluster.server-quorum-type: server
config.transport: tcp,rdma
network.ping-timeout: 10
cluster.server-quorum-ratio: 51%
cluster.enable-shared-storage: disable
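In case it is relevant, these are the quorum-related options we think might be involved; on releases that support "gluster volume get" they can be re-checked directly:

# gluster volume get home cluster.quorum-type
# gluster volume get home cluster.server-quorum-type
# gluster volume get home cluster.server-quorum-ratio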
# gluster vol heal home info
Brick master1-ib.dbt.au:/glusterfs/home/brick1
Status: Connected
Number of entries: 0
Brick master2-ib.dbt.au:/glusterfs/home/brick2
Status: Connected
Number of entries: 0
# gluster vol heal home info heal-failed
Gathering list of heal failed entries on volume home has been unsuccessful on bricks that are down. Please check if all brick processes are running
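We can also share the fuse client log from master2 for the failure window. Assuming the volume is mounted at /home, the client log should be /var/log/glusterfs/home.log, and the quorum-related messages can be pulled out with:

# grep -i quorum /var/log/glusterfs/home.log | tail -20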
Thank You
Atul Yadav