How to trigger a resync of a newly replaced empty brick in replicate config?

Hi,


My volume "home" is configured in replicate mode (GlusterFS 3.12.4) with the bricks:
server1:/data/gluster/brick1
server2:/data/gluster/brick1

server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon (glusterfsd) for that brick on server2, unmounted the brick, reformatted it, remounted it, and ran:
> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force
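
For completeness, the preceding steps on server2 were along these lines (the device name /dev/sdb1 and the XFS filesystem are placeholders for my actual setup):

> gluster volume status home    # to find the PID of the brick process
> kill <brick-pid>
> umount /data/gluster/brick1
> mkfs.xfs -f /dev/sdb1
> mount /dev/sdb1 /data/gluster/brick1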

I was expecting that the self-heal daemon would start copying data from server1:/data/gluster/brick1
(about 7.4 TB) to the empty server2:/data/gluster/brick1, but it only did so for directories, not for files.
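
If it helps, my understanding is that the entries the self-heal daemon still considers pending can be listed with something like:

> gluster volume heal home info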

For the moment, I launched the following on the FUSE mount point:
> find . -print0 | xargs -0 stat > /dev/null
but crawling the whole volume (100 TB) to trigger self-healing of a single 7.4 TB brick is inefficient.
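
I also wondered whether asking the self-heal daemon for a full crawl would behave any better, i.e. something like:

> gluster volume heal home full

but I suspect that would also walk the entire volume rather than just the replaced brick.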

Is there any trick to self-heal only a single brick, for example by setting some attributes on its top directory?


Many thanks,


Alessandro


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


