On 01/08/2014 08:10 PM, Alun James wrote:
Is it normal to expect very high server load and clients being unable to access the mounts during this process? It means the application running on this will need to be offline for hours.
No, I have never seen glusterfsd reach 200% CPU. However, it also depends on the server hardware configuration. What is the RAM size of your servers? I would also suggest checking the logs to see if anything unusual is happening.
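For example, something along these lines should show whether the self-heal daemon is the culprit (a minimal sketch; the log file names assume a default install with logs under /var/log/glusterfs and a brick at /brick1):

    # check memory and the overall volume / heal state
    free -m
    gluster volume status myvol
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain
    # look for repeated errors in the self-heal daemon and brick logs
    tail -n 100 /var/log/glusterfs/glustershd.log
    tail -n 100 /var/log/glusterfs/bricks/brick1.log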
From: "Ravishankar N" <ravishankar@xxxxxxxxxx>
To: "Alun James" <ajames@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Wednesday, 8 January, 2014 2:37:05 PM
Subject: Re: delete brick / format / add empty brick
On 01/08/2014 05:57 PM, Alun James wrote:
I gave this a go.
gluster volume add-brick myvol replica 2 server02:/brick1
gluster volume heal myvol full
It seems to be syncing the files, but very slowly. Also, the server load on server01 has risen to 200+ and the gluster clients are no longer able to access the mounts. Is there a way to do this that is not as impactful? Could I manually rsync the bricks before adding the second node back in?
The recommended way to heal is using the command mentioned. The gluster self-heal daemon takes appropriate file locks before healing. Since clients are accessing the volume, I don't think bypassing that and rsyncing the bricks is a good idea.
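If the load is a concern, you can at least watch how far the heal has progressed and, if needed, look at the AFR self-heal tuning options. A rough sketch (the option name below is taken from the 3.x documentation and is an assumption on my part; please verify it against your version before setting anything):

    # see how many entries are still pending heal on each brick
    gluster volume heal myvol info
    # assumed tuning knob: limit how many background self-heals run in
    # parallel (check that the option exists in your release first)
    gluster volume set myvol cluster.background-self-heal-count 4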
Regards,
Ravi
Alun.
From: "Ravishankar N" <ravishankar@xxxxxxxxxx>
To: "Alun James" <ajames@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Wednesday, 8 January, 2014 4:00:44 AM
Subject: Re: delete brick / format / add empty brick
On 01/07/2014 09:40 PM, Alun James wrote:
Hi folks,
I had a 2-node (1 brick each) replica. Some network meltdown issues seemed to cause problems with the second node (server02): the glusterfsd process was reaching 200-300% CPU, with errors relating to possible split-brain and self-heal errors.
Original volume info:

Volume Name: myvol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server01:/brick1
Brick2: server02:/brick1
I removed the second brick (the one that was showing problems):

gluster volume remove-brick myvol replica 1 server02:/brick1
Now the volume status is:

Volume Name: tsfsvol0
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server01:/brick1
All is fine and the data on the working server is sound. The XFS partition for server02:/brick1 has been formatted, so that copy of the data is gone. All other gluster config data has remained untouched. Can I re-add the second server to the volume with an empty brick, and will the data automatically replicate over from the working server?
gluster volume add-brick myvol replica 2 server02:/brick1 ??
Yes, this should work fine. You will need to run a `gluster volume heal myvol full` to manually trigger the replication.
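Putting it together, the sequence would look roughly like this (a sketch using the volume and brick names from this thread):

    # re-add the emptied brick as the second replica
    gluster volume add-brick myvol replica 2 server02:/brick1
    # trigger a full self-heal so the data is copied to the new brick
    gluster volume heal myvol full
    # check progress; the count of unhealed entries should fall over time
    gluster volume heal myvol info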
ALUN JAMES
Senior Systems Engineer
Tibus
T: +44 (0)28 9033 1122
E: ajames@xxxxxxxxx
W: www.tibus.com
Follow us on Twitter @tibus
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users