Hello,
On the gluster clients and servers, did you disable atime and related mount options?
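For reference, a minimal sketch of what I mean, assuming XFS bricks; the device, mount point and volume name are only examples, adjust them to your setup:

    # /etc/fstab entry for the brick filesystem (device/path are examples)
    /dev/sdb1  /data/brick  xfs  noatime,nodiratime,inode64  0 0

    # fuse mount on the client, also without atime updates
    mount -t glusterfs -o noatime server1:/myvol /mnt/myvol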
Did you check for a network bottleneck? You are now using the network twice (a few quick checks are sketched after this list):
- one pass to read the data through glusterfs,
- one pass to push the data to the remote destination.
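Some quick ways to check, assuming standard Linux tools (the interface name and host are examples):

    sar -n DEV 1          # per-interface throughput (sysstat package)
    iftop -i eth0         # live per-connection bandwidth
    iperf3 -c backuphost  # raw capacity between the two hosts

If the interface is already saturated at roughly half of what a single copy would need, the double traversal is your bottleneck.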
I am using rsnapshot on my side; it hardlinks unchanged files against the previous snapshot, so it may well be faster than a true full copy.
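Roughly, rsnapshot does the equivalent of the following rsync invocation (paths are examples), so an unchanged file costs only a hardlink instead of a full transfer:

    rsync -a --delete --link-dest=/backups/daily.1/ \
          /mnt/glustervol/ /backups/daily.0/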
Problems also arise with folders containing many small files in a replicated setup.
Which gluster version are you using?
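You can check with:

    glusterfs --version   # on the clients
    gluster --version     # on the servers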
I have also seen a memory leak on the server doing the rsync (a glusterfs client leak), but the developers are aware of it and a patch has been pushed for version 3.7.8.
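If you want to see whether you are hit by it, watch the resident size of the fuse client process during a long rsync; it should grow steadily if the leak is present. A simple way:

    ps -C glusterfs -o pid,rss,vsz,cmd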
2016-02-14 10:56 GMT+01:00 Nico Schottelius <nico-gluster-users@xxxxxxxxxxxxxxx>:
Hello everyone,
we have a 2 brick setup running on a raid6 with 19T storage.
We are currently facing the problem that backing up 9.1 TB of data in
48126852 files takes more than a week when done via rsync (actually,
ccollect[0]).
During the backup the rsync process is continuously in D state (expected),
but CPU load is far from 100% and the disk is only about 15-30% busy
(this is a snapshot from right now).
I have two questions, the second one more important:
a) Is there a good way to identify the bottleneck?
b) Is it "safe" to backup data directly from the underlying
filesystem instead of going via the glusterfs mount?
The reason I ask about (b) is that, *before* we switched to glusterfs,
backing up from those servers completed within about a day, so I suspect
backing up from the xfs filesystem directly should again do the job.
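Concretely, I am thinking of something like the following (the brick path is an example; as far as I understand, the .glusterfs directory holds gluster-internal metadata and should be excluded):

    rsync -aHX --exclude=/.glusterfs /data/brick/myvol/ backuphost:/backups/myvol/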
Thanks for any hints,
Nico
[0] http://www.nico.schottelius.org/software/ccollect/
--
Become part of the modern way of working in Glarnerland at www.digitalglarus.ch!
Read the news on Twitter: www.twitter.com/DigitalGlarus
Join the discussion on Facebook: www.facebook.com/digitalglarus
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users