Re: Replica repair

On 02/20/2015 11:51 PM, Sam Giraffe wrote:
> Hi,
>
> I have a Gluster volume with 20 servers. The volume is set up with a
> replica count of 2. Each server has one brick on it, so in essence I
> have 20 bricks, 10 of which are replicas of the other 10.
>
> One of the servers had a bad hard drive and the brick on that server
> stopped responding. This caused writes to the Gluster volume to slow
> down. I am under the impression that one brick crashing should not be
> a problem, so I am not sure why writes slowed down. Any clue here?
That is right: a brick of a replica sub-volume going down certainly should not reduce write speeds. Was the file being accessed from the mount residing on that particular replica pair? The AFR translator waits for replies from both bricks (provided they are up) before returning the result of a write, so a brick that is slow or hanging (rather than cleanly down) can stall writes. Looking at the client and brick logs would help.
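In case it helps, here is a sketch of where those logs usually live, assuming the default GlusterFS log locations; the mount point `/mnt/gluster` is hypothetical, so substitute your own:

```shell
# Sketch assuming default GlusterFS log locations; /mnt/gluster is a
# hypothetical mount point -- substitute your own.
MOUNT=/mnt/gluster

# The FUSE client log is named after the mount path, with '/' -> '-':
LOG="/var/log/glusterfs/$(echo "${MOUNT#/}" | tr '/' '-').log"
echo "$LOG"   # -> /var/log/glusterfs/mnt-gluster.log

# Brick logs live under /var/log/glusterfs/bricks/ on each server, and
# 'gluster volume status <volname>' shows which bricks are online.
```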

> Secondly, in order to restore the brick, I had to remove 2 bricks or 2
> servers, since I had set up the volume with a replica of 2. For
> removing the 2nd brick, I picked a server randomly; is that OK? I was
> afraid I might have picked the server that is the replica of the bad
> server and then I would lose data.
If only one of the bricks of the replica pair had a bad drive, that is the one you need to replace, using the `gluster volume replace-brick` command with the `commit force` option; there is no need to remove its healthy partner. After that, you can run `gluster volume heal <volname> full`, which copies the data from the healthy replica brick to the one you just replaced.
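As a concrete sketch (the volume name "gv0" and the brick paths are made up; this just prints the commands so you can review them before running anything on a live cluster):

```shell
# Dry-run sketch: echoes the commands rather than running them. The
# volume name "gv0" and the brick paths are hypothetical.
VOL=gv0
OLD=server3:/data/brick1        # brick that sat on the failed drive
NEW=server3:/data/brick1-new    # fresh brick path for the replacement

# Swap the failed brick for the new one in a single step:
echo gluster volume replace-brick "$VOL" "$OLD" "$NEW" commit force

# Copy data from the healthy replica onto the new brick, then watch it:
echo gluster volume heal "$VOL" full
echo gluster volume heal "$VOL" info
```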
> Lastly, do I need to heal the volume after removing both bricks? What
> happens to the data on the bricks?
If you are removing one distribute leg completely (using the `remove-brick start/status/commit` command sequence), making it a 9x2 volume (from a 10x2), then the data on that leg is migrated into the other distribute subvolumes. The rebalance process that performs this migration automatically reads from the correct, healthy replica brick for each file, so no separate heal is needed for the removed data.
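The decommissioning sequence would look roughly like this (again a dry-run sketch with made-up names; the two bricks listed must be the two copies of the same replica pair):

```shell
# Dry-run sketch: echoes the commands. "gv0" and the brick paths are
# hypothetical; B1 and B2 must be the two bricks of one replica pair.
VOL=gv0
B1=server9:/data/brick1
B2=server10:/data/brick1

echo gluster volume remove-brick "$VOL" "$B1" "$B2" start
echo gluster volume remove-brick "$VOL" "$B1" "$B2" status
# Commit only after 'status' reports the data migration has completed:
echo gluster volume remove-brick "$VOL" "$B1" "$B2" commit
```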

Hope that helps.
Ravi
> I am using Gluster 3.6
>
> Thank you
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users




