Hi,
I'm trying to optimize write caching on my GlusterFS cluster.
Let me explain my architecture:
My company has a multi-site environment with the HQ at the center. The
different sites and the HQ are linked over WAN and VPN. The HQ's WAN
bandwidth is roughly 10 Mbps; each agency's WAN bandwidth is roughly
4 Mbps.
On each site, users work on both local and network files. Users at
site 1 can work on files from site 2, site 3, and the HQ; users at
site 2 can work on files from site 1, site 3, and the HQ, and so on.
Everybody can work on every file.
So I'm testing a replicated filesystem: GlusterFS, to replicate all the
files across all sites. The Gluster volume is shared on each site over
NFS and Samba, so each site has its own local Samba share.
Reads are perfect: each user can open a document on his network drive
very quickly. The problem is when he wants to write a document to the
volume. This takes a very long time, because the write speed is limited
by the WAN bandwidth of each site.
Now, the question: is there a way to write into a cache on the Gluster
volume and have the Gluster process replicate the file in the
background?
For the moment I'm playing with these options, but maybe there are
others that would help:
performance.write-behind: off
performance.nfs.write-behind: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.client-io-threads: on
performance.stat-prefetch: on
performance.io-cache: on
server.allow-insecure: on
nfs.disable: off
performance.flush-behind: on
performance.io-thread-count: 8
performance.write-behind-window-size: 32MB
performance.cache-size: 32MB
cluster.readdir-optimize: on
performance.cache-refresh-timeout: 30
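(For reference, this is how I apply these tunables with the gluster
CLI; "gv0" is just a placeholder for my volume name. Note that
performance.write-behind is currently off in the list above, so one
sketch of what I'm experimenting with is turning it back on together
with flush-behind:)

```shell
# Replace "gv0" with your actual volume name.
# write-behind acknowledges writes from cache and flushes to the
# bricks asynchronously; flush-behind lets flush/close return early.
gluster volume set gv0 performance.write-behind on
gluster volume set gv0 performance.flush-behind on
gluster volume set gv0 performance.write-behind-window-size 32MB

# Check the resulting option values:
gluster volume info gv0
```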
I hope my explanation is clear. Maybe I should draw a picture of the
architecture :)
Thanks in advance,
Alex
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users