Re: [IMPORTANT, PLEASE READ] replace-brick problem with all releases till now

On 10/03/2015 05:31 AM, Steve Dainard wrote:
On Thu, Oct 1, 2015 at 2:24 AM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
hi,
      In all releases to date, from day one of replication, there is a corner-case
bug which can wipe out all the bricks in a replica set when the
disk/brick(s) are replaced.

Here are the steps that could lead to that situation:
0) Clients are operating on the volume and are actively pumping data.
1) Execute the replace-brick command, OR take down the brick whose disk needs
to be replaced or re-formatted, and bring the brick back up.
So the better course of action would be to remove-brick <vol> replica
<n-1> start, replace the disk, and then add-brick <vol> replica <n+1>?
Perhaps it would be wise to un-peer the host before adding the brick
back?
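
(For concreteness, a minimal sketch of the sequence Steve describes, using a
hypothetical 2-way replica volume 'testvol' with a failed brick at
server2:/bricks/b1; note that the CLI generally expects 'force' rather than
'start' when only the replica count is being reduced:

    # drop the failed brick and shrink the replica set
    gluster volume remove-brick testvol replica 1 server2:/bricks/b1 force

    # replace/re-format the disk and recreate the brick directory, then
    # add it back and restore the replica count
    gluster volume add-brick testvol replica 2 server2:/bricks/b1

    # trigger a full self-heal so the new brick is repopulated from the good copy
    gluster volume heal testvol full

The commands above assume the host stays in the trusted pool throughout, so
un-peering is not strictly required by them.)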

Is there any chance that adding a 3rd replica to a 2 replica cluster
with active client writes could cause the same issue?
Yes, there is that chance. On 3.7.3 the replace-brick issue is already fixed; this mail applies mostly to versions before it, i.e. from v3.1.x through v3.7.2. We are also trying to fix the add-brick issue and to provide a new command called 'reset-brick' for the brick-reformatting use case. Let us see if we can deliver them by 3.7.6.

For older versions, these are the steps for performing replace-brick: http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick
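
(On reasonably recent releases the command shape behind that guide is roughly
the following, with hypothetical volume/brick names; the linked document
remains the authoritative per-version procedure:

    # replace the dead brick with a new, empty one in a single step
    gluster volume replace-brick testvol \
        server2:/bricks/b1 server2:/bricks/b1-new commit force

    # then heal the new brick from the surviving copy and verify it before
    # trusting it
    gluster volume heal testvol full
    gluster volume heal testvol info
)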

Pranith

On 3.7.3 I recently lost 2 of 3 bricks, all the way down to the XFS
filesystem being corrupted, but I blamed that on the disk controller,
which was doing a raid0 pass-through on 2 of the hosts but not on the new
3rd host. This occurred only after some time though, and client writes were
being blocked while the 3rd brick was being added.

2) A client creates a file/directory right at the root of the brick, which
succeeds on the new brick but fails on the bricks that have been online
(maybe because the file already existed on the bricks that hold the good copies).
3) Now, when self-heal is triggered from the client/self-heal-daemon, it
thinks the just-replaced brick holds the correct directory and deletes the
files/directories from the bricks that have the actual data.
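
(One way to see how step 3 decides, and hence goes wrong, is to inspect the
AFR changelog xattrs on each brick before letting a heal run; non-zero pending
counters indicate which copy AFR will treat as the source. A sketch with a
hypothetical volume name 'testvol' and brick paths:

    # run on the servers hosting each replica of the affected directory
    getfattr -d -m . -e hex /bricks/b1/some/dir
    getfattr -d -m . -e hex /bricks/b2/some/dir

    # the output contains entries like these (names vary with the volume);
    # all-zero counters mean "no pending changes for the peer", while non-zero
    # counters mean this copy believes the other one is stale
    #   trusted.afr.testvol-client-0=0x000000000000000000000000
    #   trusted.afr.testvol-client-1=0x000000020000000100000003
)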

I have been working on afr for almost 4 years now and never saw any user
complain about this problem. We were working on a document for an official
way to replace a brick/disk, but it never occurred to us until recently that
this could happen. I will put together a proper document by the end of this
week on replacing bricks/disks in a safe way, and will keep you guys posted
about fixes to prevent this from happening entirely.

Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



