Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore are much faster than rsync, since they handle small files much better. I don't have extra space to store the dumps, but I was able to figure out how to pipe the dump and restore over ssh. For anyone else who's interested:
On the source machine, run:
xfsdump -J - /dev/mapper/[vg]-[brick] | ssh root@[destination fqdn] xfsrestore -J - [/path/to/brick]
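As a concrete example (the volume group vg0, brick LV brick1, host server2.example.com, and brick path here are made-up placeholders, not from my setup), that looks like:

# dump the brick filesystem to stdout (-J skips the dump/restore
# inventory bookkeeping, which isn't needed for a one-off copy) and
# restore it straight into the destination brick path over ssh
xfsdump -J - /dev/mapper/vg0-brick1 | ssh root@server2.example.com "xfsrestore -J - /data/brick1"

If the link between the hosts is slow, ssh's -C compression option may help; over a fast LAN it usually just adds CPU overhead.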
-Tom
On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah <pgurusid@xxxxxxxxxx> wrote:
You could also try xfsdump and xfsrestore if your brick filesystem is xfs and the destination disk can be attached locally? This will be much faster.

Regards,
Poornima

On Tue, Apr 2, 2019, 12:05 AM Tom Fite <tomfite@xxxxxxxxx> wrote:

Hi all,

I have a very large (65 TB) brick in a replica 2 volume that needs to be re-copied from scratch. A heal will take a very long time with performance degradation on the volume, so I investigated using rsync to do the brunt of the work.

The command:

rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/

Running with -H assures that the hard links in .glusterfs are preserved, and -X preserves all of gluster's extended attributes.

I've tested this on my test environment as follows:

1. Stop glusterd and kill procs
2. Move brick volume to backup dir
3. Run rsync
4. Start glusterd
5. Observe gluster status

All appears to be working correctly. Gluster status reports all bricks online, all data is accessible in the volume, and I don't see any errors in the logs.

Anybody else have experience trying this?

Thanks
-Tom
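For anyone who does want to stick with the rsync route, here is a rough sketch of those five steps as shell commands. The brick path, volume name, and rsync flags come from the message above; the systemd service name and the backup directory are assumptions about a typical setup:

# 1. stop gluster and make sure no brick or self-heal processes are left
#    (assumes a systemd host)
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd

# 2. move the old brick contents aside (hypothetical backup location)
mv /data/brick1/gv0 /data/brick1/gv0.bak

# 3. copy the brick from the healthy replica; -H preserves the hard links
#    under .glusterfs and -X preserves gluster's extended attributes
rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/

# 4. bring gluster back up
systemctl start glusterd

# 5. confirm all bricks show as online
gluster volume status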
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users