GlusterFS Performance

You have to remember that when you are writing with NFS you're writing to
one node, whereas your gluster setup below is copying the same data to two
nodes, so you're doubling the write bandwidth used. Don't expect NFS-like
performance on writes with multiple storage bricks. Read performance,
however, should be quite good.
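That doubling is just arithmetic; a minimal sketch, assuming the 2-way mirror from the volfile quoted below:

```shell
# Back-of-envelope: with client-side replication (cluster/replicate),
# the client sends every written byte once per replica, so a 2-way
# mirror pushes twice the payload over the client's network link.
payload=$((1024 * 1024 * 1024))      # 1 GiB written by the application
replicas=2                           # 2-way mirror, as in the volfile below
wire_bytes=$((payload * replicas))   # bytes actually leaving the client
echo "$wire_bytes"                   # 2147483648
```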
liam

On Wed, Jul 8, 2009 at 5:22 AM, Hiren Joshi <josh at moonfruit.com> wrote:

> Hi,
>
> I'm currently evaluating gluster with the intention of replacing our
> current setup and have a few questions:
>
> At the moment, we have a large SAN which is split into 10 partitions and
> served out via NFS. For gluster, I was thinking 12 nodes to make up
> about 6TB (mirrored so that's 1TB per node) and served out using
> gluster. What sort of filesystem should I be using for the nodes
> (currently on ext3) to give me the best performance and recoverability?
>
> Also, I setup a test with a simple mirrored pair with a client that
> looks like:
> volume glust3
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host glust3
>  option remote-port 6996
>  option remote-subvolume brick
> end-volume
> volume glust4
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host glust4
>  option remote-port 6996
>  option remote-subvolume brick
> end-volume
> volume mirror1
>  type cluster/replicate
>  subvolumes glust3 glust4
> end-volume
> volume writebehind
>  type performance/write-behind
>  option window-size 1MB
>  subvolumes mirror1
> end-volume
> volume cache
>  type performance/io-cache
>  option cache-size 512MB
>  subvolumes writebehind
> end-volume
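For reference, the server half of such a pair is a short stack in the same volfile syntax. This is only a sketch assuming GlusterFS 2.x translator names; the export directory, port, and auth rule are hypothetical:

```
volume posix
 type storage/posix
 option directory /data/export
end-volume
volume iothreads
 type performance/io-threads
 option thread-count 8
 subvolumes posix
end-volume
volume brick
 type protocol/server
 option transport-type tcp/server
 option listen-port 6996
 option auth.addr.brick.allow *
 subvolumes iothreads
end-volume
```

The io-threads translator only overlaps brick I/O on the server; it does not remove the client-side replication cost.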
>
>
> I ran a basic test by writing 1G to an NFS server and this gluster pair:
> [root at glust1 ~]# time dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test
> bs=65536 count=15625
> 15625+0 records in
> 15625+0 records out
> 1024000000 bytes (1.0 GB) copied, 1718.16 seconds, 596 kB/s
>
> real    28m38.278s
> user    0m0.010s
> sys     0m0.650s
> [root at glust1 ~]# time dd if=/dev/zero of=/mnt/glust/glust_test bs=65536
> count=15625
> 15625+0 records in
> 15625+0 records out
> 1024000000 bytes (1.0 GB) copied, 3572.31 seconds, 287 kB/s
>
> real    59m32.745s
> user    0m0.010s
> sys     0m0.010s
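As a quick sanity check on the two runs above (elapsed times rounded down to whole seconds):

```shell
# Recompute dd's reported rates from bytes written and elapsed seconds.
bytes=1024000000      # 1 GB written in each test
nfs_secs=1718         # NFS run: 28m38s
glust_secs=3572       # gluster run: 59m32s
nfs_rate=$((bytes / nfs_secs / 1000))       # kB/s; matches dd's 596 kB/s
glust_rate=$((bytes / glust_secs / 1000))   # kB/s; dd shows 287 from the fractional seconds
echo "NFS ${nfs_rate} kB/s vs gluster ${glust_rate} kB/s"
```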
>
>
> With it taking almost twice as long, can I expect this sort of
> performance degradation on 'real' servers? Also, what sort of setup
> would you recommend for us?
>
> Can anyone help?
> Thanks,
> Josh.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>