Is it normal to expect very high server load, and for clients to be unable to access the mounts, while this heal is running? If so, the application running on this volume will need to be offline for hours.
From: "Ravishankar N" <ravishankar@xxxxxxxxxx>
To: "Alun James" <ajames@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Wednesday, 8 January, 2014 2:37:05 PM
Subject: Re: delete brick / format / add empty brick
On 01/08/2014 05:57 PM, Alun James wrote:
I gave this a go:

gluster volume add-brick myvol replica 2 server02:/brick1
gluster volume heal myvol full

It seems to be syncing the files, but very slowly. Also, the server load on server01 has risen to 200+ and the gluster clients are no longer able to access the mounts. Is there a way to do this that is less impactful? Could I manually rsync the bricks before adding the second node back in?

Alun.

The recommended way to heal is to use the command mentioned. The gluster self-heal daemon takes the appropriate file locks before healing. Since clients are accessing the volume, I don't think bypassing it and rsyncing the bricks directly is a good idea.
Regards,
Ravi
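One approach that is sometimes used to reduce the impact on clients during a large heal is to stop the client mounts from triggering heals themselves and leave all the work to the self-heal daemon; a hedged sketch, assuming these volume options are available in this release (check gluster volume set help):

# Let only the self-heal daemon heal files, not the client mounts
gluster volume set myvol cluster.data-self-heal off
gluster volume set myvol cluster.metadata-self-heal off
gluster volume set myvol cluster.entry-self-heal off

# Turn client-side self-heal back on once "gluster volume heal myvol info" shows no pending entries
gluster volume set myvol cluster.data-self-heal on
gluster volume set myvol cluster.metadata-self-heal on
gluster volume set myvol cluster.entry-self-heal on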
From: "Ravishankar N" <ravishankar@xxxxxxxxxx>
To: "Alun James" <ajames@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Wednesday, 8 January, 2014 4:00:44 AM
Subject: Re: delete brick / format / add empty brick
On 01/07/2014 09:40 PM, Alun James wrote:
Hi folks,
I had a 2-node (1 brick each) replica. Some network meltdown issues seemed to cause problems with the second node (server02): the glusterfsd process was reaching 200-300% CPU, and there were errors relating to possible split-brain and failed self-heals.
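A quick way to see whether files are actually pending heal or flagged as split-brain (a sketch, assuming the volume name myvol used below):

# Entries the self-heal daemon still needs to heal, listed per brick
gluster volume heal myvol info

# Entries gluster itself considers split-brain
gluster volume heal myvol info split-brain

# The self-heal daemon's log (path may differ by distribution)
less /var/log/glusterfs/glustershd.log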
Original volume info:
Volume Name: myvol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server01:/brick1
Brick2: server02:/brick1
I removed the second brick (the one on the problem server).
gluster volume remove-brick myvol replica 1 server02:/brick1
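Depending on the GlusterFS version, reducing the replica count may require the force keyword; a hedged sketch of the same step for 3.4-era releases:

# Drop server02's brick and shrink the replica count from 2 to 1
gluster volume remove-brick myvol replica 1 server02:/brick1 force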
Now the volume status is:
Volume Name: tsfsvol0
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server01:/brick1
All is fine and the data on the working server is sound.
The xfs partition for server02:/brick1 has been formatted, so the data on it is gone. All other gluster configuration data has remained untouched. Can I re-add the second server to the volume with an empty brick, and will the data automatically replicate over from the working server?
gluster volume add-brick myvol replica 2 server02:/brick1 ??
Yes, this should work fine. You will need to run `gluster volume heal myvol full` to manually trigger the replication.
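A minimal sketch of the full re-add sequence, assuming the replacement filesystem is already mounted at /brick1 on server02, server02 is still in the trusted pool, and the volume is named myvol as in the commands above:

# On server02: make sure the empty brick directory exists on the new filesystem
mkdir -p /brick1

# On either node: confirm server02 is still a connected peer
gluster peer status

# Re-add the empty brick and restore the replica count to 2
gluster volume add-brick myvol replica 2 server02:/brick1

# Manually trigger a full self-heal so the data is copied over from server01
gluster volume heal myvol full

# Watch progress; the list of entries pending heal should shrink over time
gluster volume heal myvol info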
ALUN JAMES
Senior Systems Engineer
Tibus
T: +44 (0)28 9033 1122
E: ajames@xxxxxxxxx
W: www.tibus.com
Follow us on Twitter @tibus
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users