Thanks for the explanation. I will create a ticket on the tracker then.

Cheers,
Nick

On Tuesday, May 16, 2017 08:16:33 AM Jason Dillaman wrote:
> Sorry, I haven't had a chance to attempt to reproduce.
>
> I do know that the librbd in-memory cache does not restrict incoming
> IO to the cache size while in-flight. Therefore, if you are performing
> 4MB writes with a queue depth of 256, you might see up to 1GB of
> memory allocated from the heap for handling the cache.
>
> QEMU would also duplicate the IO memory for a bounce buffer
> (eliminated in the latest version of QEMU and librbd) and librbd
> copies the IO memory again to ensure ownership (known issue we would
> like to solve) -- that would account for an additional 2GB of memory
> allocations under this scenario.
>
> These would just be a transient spike of heap usage while the IO is
> in-flight, but since I'm pretty sure the default behavior of the glibc
> allocator does not return slabs to the OS, I would expect high memory
> overhead to remain for the life of the process.
>
> Please feel free to open a tracker ticket here [1] and I can look into
> it when I get some time.
>
> [1] http://tracker.ceph.com/projects/rbd/issues

--
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch
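For readers following the arithmetic in the quoted explanation, the transient heap spike can be estimated as write size times queue depth times the number of in-flight buffer copies. This is just a back-of-the-envelope sketch of that calculation, not Ceph code; the function name and the assumption of three total copies (cache buffer, QEMU bounce buffer, librbd ownership copy) are taken from the scenario described above.

```python
# Back-of-the-envelope estimate of transient heap usage for in-flight
# librbd writes, per the quoted explanation (hypothetical helper, not Ceph code).

def inflight_heap_bytes(write_size_mb: int, queue_depth: int, copies: int) -> int:
    """Peak transient heap = write size * queue depth * number of buffer copies."""
    return write_size_mb * 1024 * 1024 * queue_depth * copies

# 4MB writes at queue depth 256, cache buffers only:
cache_only = inflight_heap_bytes(4, 256, 1)
print(cache_only // 2**30, "GB")   # prints: 1 GB

# Adding the QEMU bounce buffer and the librbd ownership copy (3 copies total):
with_copies = inflight_heap_bytes(4, 256, 3)
print(with_copies // 2**30, "GB")  # prints: 3 GB (1 GB cache + 2 GB extra copies)
```

This matches the figures in the mail: 1GB for the cache path plus an additional 2GB for the two extra copies, which the glibc allocator may then hold onto for the life of the process.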
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com