Hi guys,
I managed to get Gluster running, but I'm having a couple of issues with my setup: 1) my peer status is Rejected (Connected), and 2) my self-heal daemon is not running on one server and I'm getting split-brain files.
My setup is a single replicated volume (gfsvolume) across two servers (gfs1 and gfs2), each contributing one brick.
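For context, the volume would have been created along these lines (reconstructed from the brick paths in the status output below, so an approximation rather than the exact command I ran):

# replica 2 volume, one brick per server
gluster volume create gfsvolume replica 2 gfs1:/export/sda/brick gfs2:/export/sda/brick
gluster volume start gfsvolume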
1) My peer status doesn't go into Peer in Cluster. Running a peer status command gives me State: Peer Rejected (Connected). At this point, the brick on gfs2 does not come online, and I get this output:
# gluster volume status
Status of volume: gfsvolume
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/export/sda/brick                    49153   Y       15025
NFS Server on localhost                         2049    Y       15039
Self-heal Daemon on localhost                   N/A     Y       15044

Task Status of Volume gfsvolume
------------------------------------------------------------------------------
There are no active volume tasks
I have followed the method used in one of the threads and performed the following:
a) stop glusterd
b) rm all files in /var/lib/glusterd/ except for glusterd.info
c) start glusterd, probe gfs1 from gfs2, and check peer status, which gives me
# gluster peer status
Number of Peers: 1

Hostname: gfs1
Uuid: 49acc9c2-4809-4da5-a6f0-6a3d48314070
State: Sent and Received peer request (Connected)
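For completeness, steps a) to c) in actual commands were roughly the following, run as root on gfs2 (this assumes the default /var/lib/glusterd layout; "service" may be "systemctl" on newer distros):

# stop the daemon, clear everything under /var/lib/glusterd except glusterd.info,
# then restart and re-probe the other node
service glusterd stop
find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
service glusterd start
gluster peer probe gfs1
gluster peer status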
The same thread mentioned that changing the peer's state in /var/lib/glusterd/peers/{UUID} from state=5 to state=3 fixes this.
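Concretely, the edit amounts to something like this. It's a sketch, assuming the edit is done on gfs2 (where the peer file named after gfs1's UUID lives) with glusterd stopped there, and that the store file uses a state= key as the thread described:

# set the rejected peer's entry back to state 3 (befriended / Peer in Cluster)
sed -i 's/^state=5$/state=3/' /var/lib/glusterd/peers/49acc9c2-4809-4da5-a6f0-6a3d48314070
service glusterd start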
On restart of gfs1, the peer status goes to:

# gluster peer status
Number of Peers: 1

Hostname: gfs1
Uuid: 49acc9c2-4809-4da5-a6f0-6a3d48314070
State: Peer in Cluster (Connected)

This fixes the connection between the peers, and the volume status shows:
Status of volume: gfsvolume
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/export/sda/brick                    49153   Y       10852
Brick gfs2:/export/sda/brick                    49152   Y       17024
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A
NFS Server on gfs2                              N/A     N       N/A
Self-heal Daemon on gfs2                        N/A     N       N/A

Task Status of Volume gfsvolume
------------------------------------------------------------------------------
There are no active volume tasks
Which brings us to problem 2
2) My self-heal daemon is not alive
I fixed the self-heal on gfs1 by running:

# find <gluster-mount> -noleaf -print0 | xargs --null stat >/dev/null 2>/var/log/gluster/<gluster-mount>-selfheal.log

(stat-ing every file through the client mount makes the replicate translator check, and if needed heal, each file) and running a volume status command gives me:
# gluster volume status
Status of volume: gfsvolume
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/export/sda/brick                    49152   Y       16660
Brick gfs2:/export/sda/brick                    49152   Y       21582
NFS Server on localhost                         2049    Y       16674
Self-heal Daemon on localhost                   N/A     Y       16679
NFS Server on gfs2                              N/A     N       21596
Self-heal Daemon on gfs2                        N/A     N       21600

Task Status of Volume gfsvolume
------------------------------------------------------------------------------
There are no active volume tasks
However, running this on gfs2 doesn't fix the daemon there.
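For what it's worth, the split-brain files mentioned at the top show up via the standard heal commands (plain gluster CLI; listing them here in case my usage is part of the problem):

# list entries pending heal, entries in split-brain, and force a full heal crawl
gluster volume heal gfsvolume info
gluster volume heal gfsvolume info split-brain
gluster volume heal gfsvolume full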
Restarting the gfs2 server brings me back to problem 1, and the cycle continues.
Can anyone assist me with these issues? Thank you.
Thank You Kindly,
Kaamesh