It seems I found the problem. It was in gluster_shared_storage. All nodes had gluster_shared_storage, and when I tried to remove a node from the cluster it had no effect on the shared storage: the node was still in the peer list.
I.e. this command won't work while shared storage is enabled:
gluster peer detach 192.168.0.124
So, I removed shared storage and then removed a node:
gluster volume set all cluster.enable-shared-storage disable
gluster peer detach 192.168.0.124
gluster volume set all cluster.enable-shared-storage enable
It isn't very convenient. Also, the main problem now is adding a new node. How can I add shared storage to the new node without removing the shared storage volume and creating it again?
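One possible way to extend shared storage to a newly probed node without tearing the volume down might be to add a brick for it to the gluster_shared_storage volume and mount the volume on the new node. This is only a sketch: the brick path /var/lib/glusterd/ss_brick, the mount point /run/gluster/shared_storage, the new node's IP 192.168.0.126, and the replica count are all assumptions based on how enable-shared-storage typically lays things out; check them against your own cluster first.

```shell
# Sketch only -- paths, IP, and replica count are assumptions, not verified.
# 1. Probe the new node into the trusted pool:
gluster peer probe 192.168.0.126

# 2. Grow gluster_shared_storage onto the new node. The replica count must be
#    the current count plus one; check it with:
#    gluster volume info gluster_shared_storage
gluster volume add-brick gluster_shared_storage replica 4 \
    192.168.0.126:/var/lib/glusterd/ss_brick

# 3. On the new node, mount the shared storage volume where glusterd expects it:
mount -t glusterfs 192.168.0.126:/gluster_shared_storage \
    /run/gluster/shared_storage
```

Someone from the Gluster team may know whether glusterd handles any of this automatically on a fresh probe.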
Sincerely,
Alexandr
On Sun, Nov 27, 2016 at 3:45 PM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
Bricks are not peers and vice versa.
Your peers are the nodes; bricks are the disks on the nodes. When you remove a brick from the cluster you don't remove the peer.
# gluster peer detach 192.168.0.124:/data/brick1
Incorrect syntax; that command removes a peer, not a brick. It should be:
# gluster peer detach 192.168.0.124
On 27/11/2016 8:49 PM, Alexandr Porunov wrote:
# gluster volume status gv0
Status of volume: gv0
Gluster process                         TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.123:/data/brick1/gv0    N/A       N/A        N       N/A
Brick 192.168.0.125:/data/brick1/gv0    49152     0          Y       1396
Self-heal Daemon on localhost           N/A       N/A        Y       3252
Self-heal Daemon on 192.168.0.125       N/A       N/A        Y       13339

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
It doesn't show that 192.168.0.124 is in the volume, but it is in the cluster. Here is why:
When I try to add it back to the peer list, nothing happens, because it says it is already in the peer list:
# gluster peer probe 192.168.0.124
peer probe: success. Host 192.168.0.124 port 24007 already in peer list
OK. I go to the machine 192.168.0.124 and try to show the peer list:
# gluster peer status
Number of Peers: 0
OK. I go to the machine 192.168.0.123 and try to show the peer status:
# gluster peer status
Number of Peers: 2

Hostname: 192.168.0.125
Uuid: a6ed1da8-3027-4400-afed-96429380fdc9
State: Peer in Cluster (Connected)

Hostname: 192.168.0.124
Uuid: b7d829f3-80d9-4a78-90b8-f018bc758df0
State: Peer Rejected (Connected)
As we see, the machine with IP 192.168.0.123 thinks that 192.168.0.124 is in the cluster. OK, let's remove it from the cluster:
# gluster peer detach 192.168.0.124:/data/brick1
peer detach: failed: 192.168.0.124:/data/brick1 is not part of cluster
# gluster peer detach 192.168.0.124
peer detach: failed: Brick(s) with the peer 192.168.0.124 exist in cluster
Isn't it strange? It is both in the cluster and not in the cluster. I can neither add the machine with IP 192.168.0.124 nor remove it.
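For what it's worth, the "Peer Rejected" state usually means the rejected node's volume configuration is out of sync with the rest of the pool. A commonly documented recovery is to clear the stale state on the rejected node and re-probe it; this is a hedged sketch assuming a default install (state in /var/lib/glusterd, systemd-managed glusterd), so back everything up before trying it.

```shell
# On the rejected node (192.168.0.124). Sketch of a commonly documented
# recovery for "Peer Rejected"; back up /var/lib/glusterd before trying it.
systemctl stop glusterd

# Remove everything under /var/lib/glusterd except glusterd.info,
# which holds this node's UUID:
find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info \
    -exec rm -rf {} +

systemctl start glusterd

# Re-probe a healthy node so the volume configuration syncs back,
# then restart glusterd once more:
gluster peer probe 192.168.0.123
systemctl restart glusterd
```

Afterwards `gluster peer status` on both sides should show "Peer in Cluster" again.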
Do you know what is wrong with it?
Sincerely,
Alexandr
On Sun, Nov 27, 2016 at 12:29 PM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
On 27/11/2016 7:28 PM, Alexandr Porunov wrote:
# The above command showed success, but in reality the brick is still in the cluster.
What makes you think this? What does a "gluster v gv0" show?
--
Lindsay Mathieson
-- Lindsay Mathieson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users