Unusable volume after brick re-attach

Hello all,

I have an 8-node, replicated (4 x 2) volume with a missing node. The node fell out of the cluster a few weeks ago, and since then I have not been able to bring it back online without killing the volume's performance.

After my initial attempts to bring the node back online failed, I disabled the self-heal daemon, a recommendation I found in an archive of this mailing list. I then attempted to rsync the two bricks directly; they are now above 95% in sync, but the system still struggles. Lastly, I tried moving the brick data to a side location on the server to emulate a brick replacement: after modifying the extended attributes and restarting glusterd, Gluster recreated the directory structure and everything appeared fine at first, but once customer requests started hitting the system, response times slowed to a crawl. Navigating the directories via a FUSE mount was not even usable. Rough sketches of the commands I used for each step are below.
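
In case it helps, this is roughly how I disabled the self-heal daemon ("myvol" stands in for the real volume name):

    # turn off the self-heal daemon for this volume
    gluster volume set myvol cluster.self-heal-daemon off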
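
The brick-to-brick rsync was along these lines (brick paths and hostname are examples). I ran it as root so that -A and -X carry the ACLs and the trusted.* xattrs Gluster depends on, and -H preserves the .glusterfs gfid hardlinks:

    # copy brick contents from the healthy replica to the recovering node
    rsync -aHAX --progress /export/brick1/ root@node8:/export/brick1/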
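
The replace-brick emulation was essentially the re-stamp-the-volume-id procedure (paths and the id value are examples):

    # on a healthy node, read the volume id off an intact brick
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1

    # on the recovering node, stamp the emptied brick path with the same id
    setfattr -n trusted.glusterfs.volume-id -v 0x<id-from-above> /export/brick1

    # restart glusterd so it starts the brick process again
    systemctl restart glusterd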

Anyone have any other recommendations for getting this node back online?

Other specs: Gluster version 3.5.2, CentOS 7.1, XFS for the bricks, 1 brick per node, 20 TB / brick. 

Thanks in advance, Jon
