Re: GlusterFS vs. NFS performance

>
> Right off the bat, it seems that GlusterFS cannot make use of standard
> filesystem caching on the client-side.


That is because GlusterFS runs as a userspace (FUSE) application on the client side, so its data does not sit in the kernel's page cache the way a kernel NFS client's does.

> Instead, one needs to use the
> iocache performance translator. That works, but according to IOzone
> GlusterFS is just no match for NFS as far as cache performance is
> concerned... ~550MB/s vs. 3GB/s+ in some cases (with the GlusterFS
> FUSE patch). Is this known, planned-for-fix, or am I doing it wrong?
> :)


The io-cache performance is bottlenecked by context-switching overhead: every read served from io-cache still crosses the kernel/userspace boundary through FUSE. A faster CPU therefore gives better client-side io-cache performance. NFS cache reads are served straight from the kernel's page cache and do not pay a context switch per system call.



> The GlusterFS iocache in general seems to be more picky about what's
> cached, or perhaps I just don't know how to work it. Could someone
> explain the options to me?


By default it has an LRU cache replacement policy over all the files which
are read. This can be customized to a weighted LRU with filename patterns as
the criteria. io-cache is no more picky than this.
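For reference, a minimal client-side volume-file sketch showing how the weighted LRU is configured. The option names follow the io-cache translator of this era, but treat the exact values and syntax as assumptions to verify against your version's documentation:

```
volume ioc
  type performance/io-cache
  subvolumes client              # the protocol/client volume beneath it
  option cache-size 64MB         # total cache budget across all files
  option page-size 128KB         # granularity of cached pages
  option priority *.html:2,*:1   # weighted LRU: keep *.html pages longer
end-volume
```

With no `priority` option set, all files share the same weight and plain LRU replacement applies.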


> For instance...
> It doesn't take a genius to figure out how cache-size works (although
> as a side note it would be very nice if it could actually use the
> standard FS cache- all these questions and performance differences
> might disappear). For the record, I understand it is not per-thread.


There are no provisions in the Linux kernel for userspace applications to populate the page cache. FUSE likewise does not provide an interface to use the page cache "correctly" (i.e., with mechanisms for the filesystem to mark cached pages stale, etc.).

> What about page-size? For example, if I set page-size to 1MB, and I
> have a 512KB file to cache, obviously it fits in 1 page. What happens
> to the other 512KB of space? Wasted presumably? How about if I have a
> 2MB file... can it consume 2 pages and be fine, or is it not cachable?
> Sorry, I don't have a really good grasp of paging and VM subsystems...


A 512KB file accounts for just 512KB even in a 1MB page-size configuration.
Larger files use multiple pages, but are accounted only for the size of the
data actually in the cache (not page-size multiplied by the number of pages used).
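To make the accounting concrete, here is a small Python sketch. The function name and the exact page-allocation behavior are assumptions based on the description above, not taken from GlusterFS source:

```python
import math

def io_cache_accounting(file_size, page_size):
    """Sketch of io-cache accounting (hypothetical helper, not GlusterFS code).

    A file occupies ceil(file_size / page_size) pages, but only its
    actual data size is charged against the cache-size budget.
    """
    pages_used = math.ceil(file_size / page_size)  # pages allocated
    bytes_accounted = file_size                    # charged to cache-size
    return pages_used, bytes_accounted

KB = 1024
MB = 1024 * KB

# 512KB file with 1MB pages: one page allocated, only 512KB accounted
print(io_cache_accounting(512 * KB, 1 * MB))

# 2MB file with 1MB pages: two pages allocated, 2MB accounted
print(io_cache_accounting(2 * MB, 1 * MB))
```

So the "other 512KB" of the page is not charged against the cache budget.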


> What effect does force-revalidate-timeout have? I'm guessing that
> anything cached more recently than this value is automatically trusted
> to be correct/current, and anything over generates an mtime lookup?


Correct. It also validates the cache against the server's mtime "whenever
possible" (not only in the explicit revalidate call's reply).
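In volume-file terms that tuning would look something like the following (value in seconds; a sketch of the option's placement, not a tested configuration):

```
volume ioc
  type performance/io-cache
  subvolumes client
  option force-revalidate-timeout 2   # pages cached within the last 2s are
                                      # trusted; older pages trigger an
                                      # mtime check against the server
end-volume
```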


> As you might have guessed, my main concern is client-side performance.
> From my testing I'm easily able to saturate the server's gigabit link,
> so I'm trying to work on what can be done to let the client(s) to hit
> that link less often.


io-cache is the answer for now.

avati

