> -----Original Message-----
> From: Stephan von Krawczynski [mailto:skraw at ithnet.com]
> Sent: 09 July 2009 13:50
> To: Hiren Joshi
> Cc: Liam Slusser; gluster-users at gluster.org
> Subject: Re: GlusterFS Preformance
>
> On Thu, 9 Jul 2009 09:33:59 +0100
> "Hiren Joshi" <josh at moonfruit.com> wrote:
>
> > > -----Original Message-----
> > > From: Stephan von Krawczynski [mailto:skraw at ithnet.com]
> > > Sent: 09 July 2009 09:08
> > > To: Liam Slusser
> > > Cc: Hiren Joshi; gluster-users at gluster.org
> > > Subject: Re: GlusterFS Preformance
> > >
> > > On Wed, 8 Jul 2009 10:05:58 -0700
> > > Liam Slusser <lslusser at gmail.com> wrote:
> > >
> > > > You have to remember that when you are writing with NFS you're
> > > > writing to one node, whereas your gluster setup below is copying
> > > > the same data to two nodes, so you're doubling the bandwidth.
> > > > Don't expect NFS-like performance on writes with multiple storage
> > > > bricks. Read performance, however, should be quite good.
> > > > liam
> > >
> > > Do you think this problem can be solved by using 2 storage bricks
> > > on two different network cards on the client?
> >
> > I'd be surprised if the bottleneck here was the network. I'm testing
> > on a Xen network but I've only been given one eth per slice.
>
> Do you mean your clients and servers are virtual Xen installations (on
> the same physical box)?

They are on different boxes and using different disks (don't ask). This
seemed like a good way to evaluate, since I set up an NFS server on the
same equipment to get relative timings. The plan is to roll it out onto
new physical boxes in a month or two....

> Regards,
> Stephan
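
For reference, the setup under test is a plain two-brick replicate; the
client volfile looks roughly like the sketch below (GlusterFS 2.x-style
syntax; the hostnames, subvolume name and volume names are placeholders,
not my actual config):

# client.vol -- minimal sketch of a two-brick replicated client volume
# (placeholder hostnames and subvolume names)

# first storage brick
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host storage1
  option remote-subvolume brick
end-volume

# second storage brick
volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host storage2
  option remote-subvolume brick
end-volume

# client-side replication: every write is sent to both remote bricks
volume mirror
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

With cluster/replicate on the client side, every write goes over the wire
to both bricks, which is why write throughput comes out at roughly half of
what a single NFS server sees on the same link; reads only need one brick,
so they should hold up much better.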