memory leak in performance/quick-read?

Dear Gluster devs,

The translator performance/quick-read does not have a cache-size option
the way performance/io-cache does, and therefore has nothing like
io-cache's ioc_prune() functionality.
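
Roughly the kind of size-bounded pruning I mean - this is purely
illustrative C with made-up names, not the actual io-cache or
quick-read code:

#include <stdlib.h>
#include <stddef.h>

/* Hypothetical cache bookkeeping, for illustration only. */
struct qr_entry {
        struct qr_entry *lru_prev;  /* towards most recently used */
        struct qr_entry *lru_next;  /* towards least recently used */
        char            *content;   /* cached file data */
        size_t           size;      /* bytes held by this entry */
};

struct qr_cache {
        struct qr_entry *lru_head;  /* most recently used */
        struct qr_entry *lru_tail;  /* least recently used */
        size_t           used;      /* total bytes cached */
        size_t           max_size;  /* would come from a cache-size option */
};

/* Evict least-recently-used entries until the cache fits its limit.
 * io-cache does something of this sort in ioc_prune(); quick-read has
 * no such pass, so its footprint only ever grows. */
static void
qr_prune (struct qr_cache *cache)
{
        while (cache->used > cache->max_size && cache->lru_tail) {
                struct qr_entry *victim = cache->lru_tail;

                cache->lru_tail = victim->lru_prev;
                if (cache->lru_tail)
                        cache->lru_tail->lru_next = NULL;
                else
                        cache->lru_head = NULL;

                cache->used -= victim->size;
                free (victim->content);
                free (victim);
        }
}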

I've discovered this can cause what is effectively a memory leak.

I have a volume of up to 40GB configured with quick-read. When I do
something that reads the contents of every file (e.g. take a tar of it),
the glusterfs mount process just keeps growing until it has used all the
memory on the client, which of course then becomes unresponsive. Setting
the timeout value doesn't help, because without a maximum cache size and
pruning, old files are only flushed out of the cache when they are read
or written again - which never happens during a single sequential pass
over all the files...
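
To be concrete about why the timeout doesn't save you: an expiry check
can only fire when an entry is looked up again, roughly like this
(again purely illustrative, made-up names):

#include <stdlib.h>
#include <stddef.h>
#include <time.h>

/* Trimmed-down version of the hypothetical entry above. */
struct qr_entry {
        char   *content;    /* cached file data */
        size_t  size;
        time_t  cached_at;
};

/* Expiry is only checked when the same file is read again.  A single
 * sequential pass (e.g. tar) reads each file exactly once, so this
 * path never runs a second time for any entry and nothing is ever
 * released. */
static char *
qr_lookup (struct qr_entry *entry, time_t timeout)
{
        if (entry->content &&
            time (NULL) - entry->cached_at > timeout) {
                free (entry->content);  /* stale: drop the cached data */
                entry->content = NULL;
                entry->size    = 0;
        }
        return entry->content;          /* NULL => go re-read the file */
}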

This seems like a bug to me %-}  Any comments?

[aside: where is the gluster bug list held/managed?]

Cheers,

Ian

-- 
www.ContactClean.com
Making changing email address as easy as clicking a mouse.
Helping you keep in touch.



