Slow speed with Gluster-backed Xen DomUs

Hi Sheng,

I had heavy performance issues with 3.0.4. I recommend using GlusterFS version 3.0.5.
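
Independent of the upgrade, one thing you could try: tapdisk tends to issue small synchronous writes, and if the disk image is opened with O_DIRECT the client-side caches may be bypassed entirely, so every write waits for a round trip to both bricks. Turning on flush-behind in your writebehind volume might take some of that latency out of the flush/close path. This is only a sketch against your existing volfile, not something I have verified with Xen, so test whether the early acknowledgement is acceptable for your data:

volume writebehind
   type performance/write-behind
   option window-size 1MB
   # flush-behind lets flush()/close() return without waiting
   # for pending writes to reach the bricks
   option flush-behind on
   subvolumes replicate1
end-volume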

regards
josy

> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On
> Behalf Of Sheng Yeo
> Sent: Tuesday, 20 July 2010 00:39
> To: gluster-users at gluster.org
> Subject: Slow speed with Gluster-backed Xen DomUs
> 
> Hi Everyone,
> 
> Hope your day has been going well. I tried emailing this a week or so ago, but I do not
> think it made it to the group, so I am trying again; apologies in advance if you have
> already read this.
> I am currently using GlusterFS as the distributed SAN backend for the Xen-based cloud
> platform we are developing.
> 
> We deploy Xen virtual machines on pairs of servers using GlusterFS v3 in replicate mode
> on Debian Stable (Lenny) with Xen 3.2.1 as the hypervisor. I am currently seeing an odd
> issue where virtual machines (DomUs) running on a GlusterFS mount only achieve around
> 10-18MB/s write speeds, but full-speed reads.
> 
> Our hardware for each node is dual-core Xeon processors, 8GB of RAM and 4 high-speed
> SATA drives (RAID 10, around 160MB/s reads and writes).
> 
> If I write a file to the Gluster mount in the Dom0 (host) we see around 90-100MB/s
> writes (maxing out the GigE link). If I run the virtual machine on the same disks
> without Gluster I get much higher speeds within the DomU, around 80-90MB/s.
> 
> The slowdown only appears to occur on writes. Does anyone with a better understanding
> of GlusterFS, FUSE and filesystems have an idea why writes are this slow? The
> underlying file system is ext3, and Xen uses TAP:AIO to connect to a file-based disk
> image. This is without using the Gluster FUSE client (what benefits does this give?),
> and we are on Gluster version 3.0.4.
> 
> Many Thanks
> Sheng
> 
> Here is the current configuration of the servers in replicate mode:
> 
> Server:
> 
> # export the local directory /export
> volume posix
>    type storage/posix
>    option directory /export
> end-volume
> 
> # POSIX locking support on top of the export
> volume locks
>    type features/locks
>    subvolumes posix
> end-volume
> 
> # worker threads so the brick can serve requests in parallel
> volume brick
>    type performance/io-threads
>    option thread-count 8
>    subvolumes locks
> end-volume
> 
> # serve the brick over TCP to clients matching 10.*.*.*
> volume server
>    type protocol/server
>    option transport-type tcp
>    option auth.addr.brick.allow 10.*.*.*
>    subvolumes brick
> end-volume
> 
> Client:
> 
> # connection to the brick on node01
> volume remote1
>    type protocol/client
>    option transport-type tcp
>    option remote-host node01
>    option remote-subvolume brick
> end-volume
> 
> # connection to the brick on node02
> volume remote2
>    type protocol/client
>    option transport-type tcp
>    option remote-host node02
>    option remote-subvolume brick
> end-volume
> 
> # synchronous replication across both bricks (AFR)
> volume replicate1
>    type cluster/replicate
>    subvolumes remote1 remote2
> end-volume
> 
> # buffer writes up to 1MB before sending them to the replicas
> volume writebehind
>    type performance/write-behind
>    option window-size 1MB
>    subvolumes replicate1
> end-volume
> 
> # read cache for the mount
> volume cache
>    type performance/io-cache
>    option cache-size 512MB
>    subvolumes writebehind
> end-volume
> 
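PS: one more client-side experiment that might be worth a try: loading io-threads on top of your cache volume, so that several of tapdisk's outstanding AIO requests can be serviced in parallel instead of serializing behind one another. Again just a sketch against the volfile above, not something I have benchmarked with Xen:

volume iothreads
   type performance/io-threads
   option thread-count 8
   # handle requests from the FUSE mount in parallel worker threads
   subvolumes cache
end-volume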

