On 16.03.2012 20:02, D. Dante Lorenso wrote:
> Meanwhile, I'm gonna keep working with Gluster and see if I can get the
> performance. Recently I converted to using Linux Raid 10 on 4 1TB drives
> and now I'm getting 310 MB/s write speed to my brick using "dd" test.
> That's looking better and getting closer to maxing out a 1GbE link. I
> need to put gluster on top of this and see if I can continue the
> throughput all the way to the client mount.
>
> As another test, I'm planning to buy some 480 GB SSD drives which can do
> 300+ MB/s each. I'm thinking if I build a Raid 10 configuration with
> those, I might be able to push upwards of 1000+ MB/s. Then, let Gluster
> and/or Samba sit on top of that and we'll see what's what.

I think you are confusing the ~100 MB/s of (single) disks with the 1 Gb/s
of a single Gigabit Ethernet link. The latter is 1 gigabit per second,
i.e. 1000 megabits per second, which works out to at most ~125 megabytes
per second, or roughly 120 MB/s in practice.

Long story short: copying from a single disk at full speed already
saturates a gigabit LAN, leaving only a little headroom for protocol
overhead and your ssh connection. There is no point in pushing disk
throughput any higher while your link speed is only 1 Gbit/s...

And your (or your users') experience of NFS/SMB over Gluster will mostly
be determined by the seek time for small files and by many users accessing
the volume concurrently, not by the transfer time of single files for
single users. When mirroring across the network, this seek time (i.e.
latency) is mostly governed by the round-trip latency of your network.
And multiplying your network bandwidth by ten sadly won't give you a
tenth of the latency.

Have fun,
Arnold
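
P.S. To put rough numbers on the bandwidth and latency points above, here
is a minimal back-of-the-envelope sketch in Python. The overhead factor,
round-trip time, round-trips-per-file and file size are assumptions chosen
for illustration, not measured Gluster values:

    #!/usr/bin/env python
    # Back-of-the-envelope estimate: what a client behind 1 GbE can see,
    # and why small files are limited by round trips, not bandwidth.

    GBE_LINE_RATE_MBIT = 1000.0   # 1 GbE = 1000 megabit/s on the wire
    PAYLOAD_FACTOR     = 0.94     # assumed loss to Ethernet/IP/TCP overhead

    link_mb_per_s = GBE_LINE_RATE_MBIT / 8 * PAYLOAD_FACTOR
    print("usable 1GbE throughput: ~%.0f MB/s" % link_mb_per_s)  # ~117 MB/s

    # A 310 MB/s or 1000 MB/s brick cannot be seen through a 1 GbE link:
    for disk_mb_per_s in (100, 310, 1000):
        print("disk %4d MB/s -> client sees at most ~%.0f MB/s" %
              (disk_mb_per_s, min(disk_mb_per_s, link_mb_per_s)))

    # Small files: per-file time is dominated by round trips.
    RTT_S        = 0.0002         # assumed 0.2 ms LAN round trip
    ROUND_TRIPS  = 5              # assumed lookups/opens per small file
    FILE_SIZE_MB = 0.01           # a 10 KB file

    per_file_s = ROUND_TRIPS * RTT_S + FILE_SIZE_MB / link_mb_per_s
    print("small file: ~%.2f ms/file -> ~%.0f files/s" %
          (per_file_s * 1000, 1 / per_file_s))
    # A 10x faster link only shrinks the transfer term; the RTT term stays.

With these assumed numbers a small file costs about 1.1 ms, of which
roughly 1 ms is round trips, so a faster link barely helps.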