On Wed, 28 Nov 2007, Kevan Benson wrote:
OK, better. It does work on the client side, though. It doesn't seem
to be too great on the server side for some reason.
I just tried my simple test with read-ahead on the client side.
No difference. Here's what I used.
volume readahead
type performance/read-ahead
option page-size 128kb ### in bytes
option page-count 2 ### memory cache size is page-count x page-size per file
subvolumes client
end-volume
Maybe the page size or count needs to be bigger?
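Something like this would be the next thing to try (the numbers are only guesses to illustrate the two knobs, not tuned values, and the allowed ranges may differ by GlusterFS version):

volume readahead
type performance/read-ahead
option page-size 256kb ### larger read-ahead chunk per request
option page-count 16 ### 16 x 256 KB = 4 MB of read-ahead per open file
subvolumes client
end-volume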
Krishna Srinivas wrote:
On Nov 28, 2007 11:11 PM, Chris Johnson <johnson@xxxxxxxxxxxxxxxxxxx> wrote:
On Wed, 28 Nov 2007, Kevan Benson wrote:
Chris Johnson wrote:
I also tried the io-cache on the client side. MAN does that
work. I had a 256 MB cache defined. A reread of my 24 MB file took 72
ms. I don't think it even bothered the server much. I need to
try that on the server. It might help if a bunch of compute nodes
hammer on the same file at the same time.
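A client-side io-cache volume along these lines would give that 256 MB cache (the option names mirror the read-ahead block above; the page-count is just picked to reach roughly 256 MB and may need adjusting for your GlusterFS version):

volume iocache
type performance/io-cache
option page-size 128kb ### size of each cache page
option page-count 2048 ### 2048 x 128 KB = ~256 MB of cache
subvolumes readahead ### or whatever client-side volume sits below it
end-volume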
Careful with io-cache and io-threads together; depending on where you
define it (I think), the cache is per-thread. So if you have 8 threads
and a 256 MB cache defined, be prepared for 2 GB of cache use...
No. If you define one io-cache translator, there is only one cache. All the
threads will refer to the same io-cache translator with which they are
associated.
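For example, a stack like this (thread-count and the cache sizing are only illustrative values) still has exactly one cache of roughly 256 MB that all eight threads share:

volume iocache
type performance/io-cache
option page-size 128kb
option page-count 2048 ### one cache of roughly 256 MB total
subvolumes client
end-volume

volume iothreads
type performance/io-threads
option thread-count 8 ### all 8 threads go through the single iocache above
subvolumes iocache
end-volume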
Ah. Is this newer? I thought I tried this a few months ago and saw a lot of
memory usage. Maybe I just ASSumed. ;)
--
-Kevan Benson
-A-1 Networks
-------------------------------------------------------------------------------
Chris Johnson |Internet: johnson@xxxxxxxxxxxxxxxxxxx
Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson
NMR Center |Voice: 617.726.0949
Mass. General Hospital |FAX: 617.726.7422
149 (2301) 13th Street |What the country needs is dirtier fingernails and
Charlestown, MA., 02129 USA |cleaner minds. Will Rogers
-------------------------------------------------------------------------------