tuning glusterfs?

On 21.01.2010 17:27, Jiann-Ming Su wrote:
> On Wed, Jan 20, 2010 at 5:49 PM, pkoelle <pkoelle at gmail.com> wrote:
>>
>> This looks suspicious. Is this a gfs2 mount? What kind of performance do you
>> get when running bonnie++ directly against /mnt/gluster/gfs2?
>>
>
> Bonnie++ command:  ./bonnie++ -u 99:99 -d /mnt/gluster/gfs2 -s 2000
>
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> localhost.loc 2000M    74  99 39432  12 27308  10   836  99 65104   9 351.5  11
> Latency               179ms    1207ms     429ms     109ms     109ms     215ms
> Version  1.96       ------Sequential Create------ --------Random Create--------
> localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 23991  81 +++++ +++ +++++ +++ 17007  57 +++++ +++ +++++ +++
> Latency              1276us    1189us     787us    1183us     438us    2813us
>
>
Hi Jiann-Ming,

IO for your local disk doesn't look very good either. You are CPU bound 
when using putchar() and you get only 74K/sec with 99% CPU. For 
reference, I get 68808K/sec with 86% CPU on a Xen vdev on top of cheap 
SATA. With a two-node mirror and GlusterFS I still get 31886K/sec with 
41% CPU (these are the -Per Chr- values from bonnie++).
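
If you want a quick cross-check outside of bonnie++, a dd run with a tiny 
block size against a large one shows the same effect: the small writes are 
dominated by per-call CPU overhead, not by the disk. Just a sketch (the 
file name is a placeholder), and dd's bs=1 is even heavier than stdio's 
putc() because every byte becomes its own write():

  # per-byte writes: almost pure CPU/syscall overhead (compare the MB/s dd reports)
  dd if=/dev/zero of=/mnt/gluster/gfs2/ddtest bs=1 count=1000000
  # 1MB blocks: much closer to what the disk (or network) can actually do
  dd if=/dev/zero of=/mnt/gluster/gfs2/ddtest bs=1M count=1000 conv=fdatasync
  rm /mnt/gluster/gfs2/ddtest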

>>
>> Can you leave out some of the performance/* translators and also try a
>> "real" network interface with lower MTU values to see of that helps? I had
>> terrible results with bigger MTU (6500) and bigger values for
>> net.core.(r/w)mem_max in /etc/sysctl.conf. IOPS where in the range of 5/sec.
>>
>
> GlusterFS server and client are running on the same system, so there's
> no network latency involved at all. I don't see how introducing the
> network layer into this would help.
As a matter of fact, with cluster filesystems things get much more 
complicated for a single write() or read(). I haven't investigated this 
thoroughly, but my reasoning is: data goes through network buffers, and 
when a buffer is full an interrupt is triggered and the data is copied 
over. With large buffers this kills small reads and IOPS.
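
If you want to experiment with that, something along these lines (only a 
sketch; the numbers are just examples, eth0 stands for whatever interface 
glusterfs uses, and you can make the sysctl values permanent via 
/etc/sysctl.conf if they help):

  # check the current socket buffer limits
  sysctl net.core.rmem_max net.core.wmem_max
  # try more modest limits than the jumbo values (example values only)
  sysctl -w net.core.rmem_max=262144
  sysctl -w net.core.wmem_max=262144
  # and drop the MTU back to the standard 1500
  ip link set dev eth0 mtu 1500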

> Blame it on gluster or fuse, but
> the reality is glusterfs read performance, in a simple base config, is
> 1/10th of the native file system.  In order to have this be worthwhile
> at all, I'd have to stripe the data across at least 10 servers to get
> the same read performance.  This may be a legitimate setup, but it's
> not what I need out of gluster.  I need the replication capabilities
> out of gluster and the ability for the node to read the local copy
> reasonably fast.
I can't reproduce that. I get reasonable performance given the 
hardware, network and all. Do you know any better cluster filesystems? Do 
you have experience with gfs2 or ocfs?
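
If it is mainly the read path on a replicated volume you care about, you 
could also look at the read-subvolume option of cluster/replicate so that 
reads are served from the local brick. Only a sketch from memory, the 
subvolume names below are placeholders, check the docs for your version:

  volume mirror
    type cluster/replicate
    # prefer the local brick for reads (names are placeholders)
    option read-subvolume local-brick
    subvolumes local-brick remote-brick
  end-volume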

cheers
  Paul
>
>


