Rsync on the brick filesystem?

Hello,

We have some replica-2 volumes and they work fine at the moment.
For some of these volumes I need to set up daily incremental backups (to another filesystem, which does not need to be on GlusterFS).

As 'rsync' or similar tools are not very efficient on GlusterFS volumes, I tried running rsync directly between the brick filesystem and another filesystem (both on the same storage server(s)). Right now, rsync through FUSE or NFS access on the storage server is roughly 80x and 20x slower, respectively, than direct access to the brick (on a ~80 GB volume with many small files).
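Concretely, what I have in mind is something along these lines (the brick and backup paths are just examples, and I am not sure whether the ACL/xattr flags are even desirable for a backup):

    # Hypothetical paths; adjust to the real brick and backup locations.
    BRICK=/data/glusterfs/myvol/brick1
    DEST=/backup/myvol

    # -a: archive, -H: hard links, -A: ACLs, -X: xattrs
    # Skip the .glusterfs directory at the brick root (GlusterFS internal
    # gfid hard-link store); only the user data should be backed up.
    rsync -aHAX --delete --exclude='/.glusterfs' "$BRICK/" "$DEST/"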

Assuming the GlusterFS volume is "clean" (no pending heals…), are there any problems or drawbacks with doing that? Or is there another, cleaner solution? And what is the best way to check that a volume is "sane"? Parsing the output of 'gluster volume heal XX info' seems fragile if the output format evolves in the future, and I can't see dedicated return codes for that.
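For that sanity check, the best I can come up with is something like the following, which feels fragile precisely because it depends on the current text format of the CLI output (volume name is just an example):

    # Hypothetical volume name.
    VOL=myvol

    # 'heal info' currently prints a "Number of entries: N" line per brick;
    # treat any non-zero count as "not clean" and skip the backup.
    if gluster volume heal "$VOL" info | grep -q 'Number of entries: [1-9]'; then
        echo "Volume $VOL has pending heal entries, skipping backup" >&2
        exit 1
    fi

Is there a supported, script-friendly way to get this information instead?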

Thanks.

Regards,
--
Y.

