Concurrency limitation?

Hi,

On Tuesday 07 February 2012 14:09:33 Brian Candler wrote:
> I appear to be hitting a limitation in either the glusterfs FUSE client or
> the glusterfsd daemon, and I wonder if there are some knobs I can tweak.
> 
> I have a 12-disk RAID10 array. If I access it locally I get the following
> figures (#p = number of concurrent reader processes)
> 
>  #p  files/sec
>   1      35.52
>   2      66.13
>   5     137.73
>  10     215.51
>  20     291.45
>  30     337.01
> 
> If I access it as a single-brick distributed glusterfs volume over 10GE I
> get the following figures:
> 
>  #p  files/sec
>   1      39.09
>   2      70.44
>   5     135.79
>  10     157.48
>  20     179.75
>  30     206.34
> 
> The performance tracks the raw RAID10 performance very closely at 1, 2 and 5
> concurrent readers.  However, at 10+ concurrent readers it falls well below
> what the RAID10 volume is capable of.

I did some similar tests, but with a slower machine and slower disks.
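
For reference, this is roughly the kind of concurrent-reader benchmark I 
mean. A minimal sketch, assuming "files/sec" simply means files fully read 
per wall-clock second; the mount point is made up, and cache effects (drop 
caches between runs!) are ignored:

    import os
    import time
    from multiprocessing import Pool

    DATA_DIR = "/mnt/test"  # hypothetical mount point, adjust to your setup

    def read_file(path):
        # read the whole file and throw the data away; we only time the I/O
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass

    def bench(nprocs):
        files = [os.path.join(DATA_DIR, n) for n in os.listdir(DATA_DIR)]
        start = time.time()
        with Pool(nprocs) as pool:
            pool.map(read_file, files)
        return len(files) / (time.time() - start)

    if __name__ == "__main__":
        for p in (1, 2, 5, 10, 20, 30):
            print("%3d %10.2f files/sec" % (p, bench(p)))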
The "problem" with distributed filesystems is the distributed locking. And even 
though your test volume is on one system only, access locking is not only done 
by the fs in kernel but additionally by the fuse-client and/or the glusterd. 
That imposes a limit. And when the volume stretches across several machines, 
even though the reading might be done from the local disk, the locking has to 
be synchronized across all brick-machines. Another limit.

And with gluster it's the client that does all the synchronization. That's 
why there is a single FUSE thread on the client that will max out your CPU 
when running dbench and the like, no matter whether the volume is local, 
distributed/replicated, or remote.
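
You can watch that thread yourself. A minimal sketch that samples per-thread 
CPU time of the glusterfs client through /proc (Linux only; pass the 
client's PID on the command line):

    import os
    import sys
    import time

    def thread_cpu_ticks(pid):
        # utime + stime are fields 14 and 15 of /proc/<pid>/task/<tid>/stat
        ticks = {}
        for tid in os.listdir("/proc/%s/task" % pid):
            with open("/proc/%s/task/%s/stat" % (pid, tid)) as f:
                # the comm field may contain spaces, so split after the ")"
                fields = f.read().rsplit(") ", 1)[1].split()
            ticks[tid] = int(fields[11]) + int(fields[12])
        return ticks

    if __name__ == "__main__":
        pid = sys.argv[1]
        before = thread_cpu_ticks(pid)
        time.sleep(5)
        after = thread_cpu_ticks(pid)
        for tid in sorted(after, key=lambda t: after[t] - before.get(t, 0),
                          reverse=True):
            print(tid, after[tid] - before.get(tid, 0), "ticks in 5s")

If one thread accounts for nearly all the ticks while the readers are 
running, you are looking at the serialization point.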

My conclusion from my tests so far is that the most rewarding target for 
optimization is the FUSE client of glusterfs, and maybe the way it talks to 
its companions and to the bricks.

But I am not finished with my tests yet, and I still hope that glusterfs 
proves usable for distributed VM image storage.

Have fun,

Arnold