Re: Healing completely lost file on replica 3 volume

Hi Dmitry,

Answers inline.

On Fri, Nov 29, 2019 at 6:26 PM Dmitry Antipov <dmantipov@xxxxxxxxx> wrote:
I'm trying to manually corrupt data on bricks
First of all, changing data directly on the backend is not recommended and is not supported. All operations need to be done from the client mount point.
Only a few special cases require changing anything about a file directly on the backend.
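
For reference, the per-file metadata gluster keeps on the backend lives in extended attributes, which you can inspect (read-only) with getfattr. A sketch against one of your bricks, assuming getfattr is installed (the trusted.afr.* changelog names vary with the volume name):

# getfattr -d -m . -e hex /root/data0/64K

On a healthy replica the trusted.afr.gv0-client-* changelog attributes are all zeros; gluster sets them only when an operation performed through gluster fails on some brick.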
(when the volume is
stopped) and then check whether healing is possible. For example:

Start:

# glusterd --debug

Bricks (on EXT4 mounted with 'rw,relatime'):

# mkdir /root/data0
# mkdir /root/data1
# mkdir /root/data2

Volume:

# gluster volume create gv0 replica 3 [local-ip]:/root/data0  [local-ip]:/root/data1  [local-ip]:/root/data2 force
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0
volume start: gv0: success

Mount:

# mkdir /mnt/gv0
# mount -t glusterfs [local-ip]:/gv0 /mnt/gv0
WARNING: getfattr not found, certain checks will be skipped..
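
A side note: this warning only means the getfattr binary is not installed on the client, so the mount helper skips a few xattr sanity checks. Installing the attr package (the usual package name on both Fedora/RHEL and Debian/Ubuntu) is enough, e.g.:

# dnf install attr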

Create file:

# openssl rand 65536 > /mnt/gv0/64K
# md5sum /mnt/gv0/64K
ca53c9c1b6cd78f59a91cd1b0b866ed9  /mnt/gv0/64K

Unmount and stop the volume:

# umount /mnt/gv0
# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: success

Check data on bricks:

# md5sum /root/data[012]/64K
ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data0/64K
ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data1/64K
ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data2/64K

Seems OK. Then corrupt all three copies:

# openssl rand 65536 > /root/data0/64K
# openssl rand 65536 > /root/data1/64K
# openssl rand 65536 > /root/data2/64K
# md5sum /root/data[012]/64K
c69096d15007578dab95d9940f89e167  /root/data0/64K
b85292fb60f1a1d27f1b0e3bc6bfdfae  /root/data1/64K
c2e90335cc2f600ddab5c53a992b2bb6  /root/data2/64K

Restart the volume and start full heal:

# gluster volume start gv0
volume start: gv0: success
# /usr/glusterfs/sbin/gluster volume heal gv0 full
Launching heal operation to perform full self heal on volume gv0 has been successful
Use heal info commands to check status.

Finally:

# gluster volume heal gv0 info summary

Brick [local-ip]:/root/data0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick [local-ip]:/root/data1
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick [local-ip]:/root/data2
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Since all 3 copies differ from each other, majority voting is useless,
and the data (IIUC) should at least be marked as split-brain. But I'm seeing just
zeroes everywhere above. Why is that?
Since the data was changed directly on the backend, gluster is not aware of these changes. Only changes made from the client mount that fail on some bricks are recognized and marked by gluster, so that it can heal them when possible. Since this is a replica 3 volume, if you end up in split-brain while doing operations on the mount point, that would be a bug. As far as this case is concerned, it is not a bug or issue on the gluster side.
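
If you want to see non-zero counters in heal info, create the inconsistency through gluster itself, for example by taking one brick offline while writing from the mount point. A rough sketch against the same gv0 volume (get the brick PID from the status output; the placeholder is yours to fill in):

# gluster volume status gv0
# kill <pid-of-one-brick-process>
# openssl rand 65536 > /mnt/gv0/64K
# gluster volume start gv0 force
# gluster volume heal gv0 info

Here the write succeeds on the two remaining bricks, gluster records a pending changelog against the offline one, and heal info lists the file until the self-heal daemon copies the good data back.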

HTH,
Karthik

Thanks in advance,
Dmitry
________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
