tuning glusterfs?


 



On 20.01.2010 19:02, Jiann-Ming Su wrote:
> On Wed, Jan 20, 2010 at 10:23 AM, pkoelle <pkoelle at gmail.com> wrote:
>> On 19.01.2010 18:30, Jiann-Ming Su wrote:
>>>
>>> I ran a simple bonnie++ benchmark on 3.0.0.  The read performance
>>> through FUSE is horrible.  Are there some parameters that can be tuned
>>> to improve read performance?
>>
>> Hi Jiann,
>>
>> You didn't provide information for anyone to help you. You should at least
>> post the exact bonnie++ command line you used and some information about
>> your setup (config files, transport, etc.).
>>
>
> Bonnie++ command:  ./bonnie++ -u 99:99 -d /var/boot -s 2000
>
> The system I'm testing on has 1GB RAM.  /var/boot is the glusterfs
> mount via fuse.
>
> The transport is tcp but all on localhost.  Here's the glusterfsd.vol:
>
> volume posix1
>    type storage/posix
>    option directory /mnt/gluster/gfs2
This looks suspicious. Is this a gfs2 mount? What kind of performance do 
you get when running bonnie++ directly against /mnt/gluster/gfs2?
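
For example, reusing your command line but pointed at the backend 
directory (adjust user/size to your box as needed):

  ./bonnie++ -u 99:99 -d /mnt/gluster/gfs2 -s 2000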

My setup is vastly different so I can't give direct advice. It should be 
possible to watch bonnie++ and the glusterfs processes via strace to 
find out what's happening (especially for the ~8sec read latency).
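
A rough, untested sketch (the PIDs and process names will differ on your 
box):

  strace -f -tt -T -o glusterfsd.strace -p <pid of glusterfsd>
  strace -f -tt -T -o glusterfs.strace -p <pid of the glusterfs client>

-tt/-T add timestamps and per-syscall times, which should make a ~8sec 
stall easy to spot.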

Can you leave out some of the performance/* translators and also try a 
"real" network interface with lower MTU values to see if that helps? I 
had terrible results with bigger MTU (6500) and bigger values for 
net.core.(r/w)mem_max in /etc/sysctl.conf. IOPS were in the range of 
5/sec.
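
To rule out the performance/* translators you could mount with just the 
protocol/client volume from your client volfile (everything above it 
dropped) and add the others back one at a time:

volume localhost-1
     type protocol/client
     option transport-type tcp
     option remote-host localhost
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

For the MTU and socket buffer limits, something along these lines 
(eth0 is just a guess for your interface name):

  ip link set dev eth0 mtu 1500
  sysctl net.core.rmem_max net.core.wmem_max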

hth
  Paul

> end-volume
>
> volume locks1
>      type features/locks
>      subvolumes posix1
> end-volume
>
> volume brick1
>      type performance/io-threads
>      option thread-count 8
>      subvolumes locks1
> end-volume
>
> volume server-tcp
>      type protocol/server
>      option transport-type tcp
>      option auth.addr.brick1.allow 127.0.0.1
>      option transport.socket.listen-port 6996
>      option transport.socket.nodelay on
>      subvolumes brick1
> end-volume
>
>
> Here's the client config:
>
> # TRANSPORT-TYPE tcp
> volume localhost-1
>      type protocol/client
>      option transport-type tcp
>      option remote-host localhost
>      option transport.socket.nodelay on
>      option transport.remote-port 6996
>      option remote-subvolume brick1
> end-volume
>
> volume writebehind
>      type performance/write-behind
>      option cache-size 4MB
>      subvolumes localhost-1
> end-volume
>
> volume readahead
>      type performance/read-ahead
>      option page-count 4
>      subvolumes writebehind
> end-volume
>
> volume iocache
>      type performance/io-cache
>      option page-size 1MB
>      option cache-size 1GB
>      option cache-timeout 1
>      subvolumes readahead
> end-volume
>
> volume quickread
>      type performance/quick-read
>      option cache-timeout 1
>      option max-file-size 64kB
>      subvolumes iocache
> end-volume
>
> volume statprefetch
>      type performance/stat-prefetch
>      subvolumes quickread
> end-volume
>
>


