I run GlusterFS 3.7.3 on RHEL 6, with one volume replicated on two nodes (server4 and server5).
I changed the IPs of the peers, and now server5 seems to be stuck with the old IP of server4, as shown in 'gluster peer status' (see at the bottom). Everything else seems to work fine, but the stale entry could cause confusion, and I wonder whether it may break things in the future.
I tried stopping glusterd on server5 and removing the IP line from the peer file:
$ sudo cat /var/lib/glusterd/peers/16d75a23-5174-4926-bcc8-942aabd5ff61
uuid=16d75a23-5174-4926-bcc8-942aabd5ff61
state=3
hostname1=server4
hostname2=server4.domain
hostname3=aaa.bbb.ccc.ddd
then starting glusterd again on that node, but Gluster adds the IP right back.
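For reference, this is roughly what I did on server5 (the UUID is server4's, from my setup; the exact sed pattern is my own, matching the hostname3 line shown above):

```shell
# Stop glusterd on server5 before touching its state files
sudo service glusterd stop

# Drop the stale-IP line (hostname3=...) from server4's peer file
sudo sed -i '/^hostname3=/d' \
    /var/lib/glusterd/peers/16d75a23-5174-4926-bcc8-942aabd5ff61

# Start glusterd again -- after this, the hostname3 line is back
sudo service glusterd start
```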
How can I make Gluster understand that this IP is no longer valid and that it should use only the hostnames instead?
Thanks,
Thibault.
Some information on my setup:
Volume Info:
Volume Name: home
Type: Replicate
Volume ID: 2299a204-a1dc-449d-8556-bc65197373c7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server4.domain:/gluster/home-brick-1
Brick2: server5.domain:/gluster/home-brick-1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
Peer Status:
Number of Peers: 3
Hostname: server1.domain
Uuid: ca709a01-b137-427d-a345-dcfcc7dd3539
State: Peer in Cluster (Connected)
Hostname: aaa.bbb.ccc.ddd
Uuid: 16d75a23-5174-4926-bcc8-942aabd5ff61
State: Peer in Cluster (Connected)
Other names:
server4.domain
aaa.bbb.ccc.ddd
Hostname: server2
Uuid: 989da253-acfb-418e-81ca-0440dc80df10
State: Peer in Cluster (Connected)
Other names:
server2.domain
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users