Hi Harshavardhana,
Thank you for your reply. I will use the performance options for Samba.
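For reference, the Samba performance options I have in mind are along these
lines in smb.conf (only a sketch; the exact options and values depend on the
Samba version and the workload, so treat them as a starting point rather than
a tested recommendation):

[global]
# reduce socket latency for small SMB requests
socket options = TCP_NODELAY IPTOS_LOWDELAY
# let Samba use sendfile() for file reads where possible
use sendfile = yes
# hand requests larger than 16KB to asynchronous I/O (values are illustrative)
aio read size = 16384
aio write size = 16384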
To your questions:
volume web-data-replicate
type cluster/replicate
subvolumes gfs-01-01 gfs-01-02
end-volume
volume readahead
type performance/read-ahead
option page-count 16 # cache per file = (page-count x page-size)
subvolumes web-data-replicate
end-volume
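For context, the comment above works out, for example, to:

cache per file = page-count x page-size
               = 16 x 128KB = 2MB of read-ahead per open file

(128KB is only an illustrative page-size here, not necessarily the default in
use.)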
> What is the client-side and server-side TOTAL RAM? How many servers and
> clients do you have? Because having a read-ahead page-count of 16 is no
> good for an Ethernet link; you might be choking up the bandwidth
> unnecessarily.
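Based on that advice, I would lower the page-count, e.g. (4 is only an
illustrative value, not a tested recommendation):

volume readahead
type performance/read-ahead
option page-count 4 # smaller read-ahead window, to avoid choking the link
subvolumes web-data-replicate
end-volume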
We have 2 physical Xen servers and 2 physical GlusterFS servers (one
GlusterFS server with 48TB of space: 24 x 2TB SATA II drives).
We run 14 domUs on the Xen servers.
All domUs are located on GlusterFS, and we are now trying to share another
GlusterFS LUN (GlusterFS partition) via Samba from within domU1.
Each of the physical Xen servers has 2x quad-core CPUs and 12GB of RAM.
Each of the physical GlusterFS servers has 2x quad-core CPUs (with
Hyper-Threading) and 12GB of RAM.
The Xen domU1 has 1 CPU and 2GB of RAM and shares the additional GlusterFS
LUN via Samba.
> Even with this, we would need to know the backend disk performance with
> O_DIRECT to properly analyse the cost of using buffering on the server
> side to get better performance out of the system.
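If it helps, I can measure the backend disks with direct I/O using dd, e.g.
(the path and sizes below are placeholders):

# write test, bypassing the page cache
dd if=/dev/zero of=/mnt/gfs-lun/testfile bs=1M count=4096 oflag=direct
# read test
dd if=/mnt/gfs-lun/testfile of=/dev/null bs=1M iflag=direct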
24 x 2TB Western Digital WD RE4-GP drives with 64MB cache each
RAID Controller = Areca ARC-1280
Controller Name ARC-1280
Serial Number Y907CAAXAR800316
Main Processor 800MHz IOP341 C1
CPU ICache Size 32KBytes
CPU DCache Size 32KBytes/Write Back
CPU SCache Size 512KBytes/Write Back
System Memory 256MB/533MHz/ECC
Thank you
Regards,
Roland