Re: pipewire memory usage

Wim Taymans wrote on Tue, Dec 14, 2021 at 09:09:30AM +0100:
> I can get it as high as that too but then it stays there and doesn't really
> grow anymore so it does not seem like
> it's leaking. Maybe it's the way things are done, there is a lot of ldopen
> and memfd/mmap.

Right, I've had a look with massif and it looks like the memory is
reused properly -- when the next batch of clients comes in, all previously
used memory is freed and promptly reallocated for the new clients.

The problem is more that there is no sign of the memory being
released even after some time: I've left pipewire-pulse running for a
while and it stays at ~300 MB of RSS the whole time.
Connecting a single new client at that point does increase memory
(+8-9 MB), so it doesn't look like it's reusing the old memory, but in
massif the numbers all fall back close to 0, so everything -is- freed
successfully... which is a bit weird.


FWIW, here's a massif output file if you're curious.
I ran 100 clients, then 100 clients, then 1 client for a while, then
100 clients again:
https://gaia.codewreck.org/local/massif.out.pipewire


I've double-checked with traces in load_spa_handle/unref_handle and
everything is free()d as soon as the client disconnects, so there's no
reason the memory would still be in use... I think we're just looking
at a malloc optimisation that doesn't release freed memory back to the OS.
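
Something like this is what I mean (a rough sketch, assuming glibc
malloc; the wrapper function is made up for illustration, only
malloc_trim() and malloc_stats() are real glibc calls):

#include <malloc.h>
#include <stdio.h>

/* Hypothetical debug hook: call this after all clients have disconnected
 * to see whether the high RSS is just free chunks cached by glibc. */
static void check_retained_memory(void)
{
    /* Ask glibc to return trimmed heap pages to the kernel;
     * returns 1 if any memory was actually released. */
    int released = malloc_trim(0);
    fprintf(stderr, "malloc_trim released memory: %s\n",
            released ? "yes" : "no");

    /* Print per-arena statistics to stderr for comparison with RSS. */
    malloc_stats();
}

If RSS drops a lot after malloc_trim(0), it's allocator caching rather
than a leak.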

To confirm, I tried starting pipewire-pulse with jemalloc loaded
(LD_PRELOAD=/usr/lib64/libjemalloc.so), and interestingly, after the 100
clients exit the memory stays at ~300-400 MB, but as soon as a single new
client connects it drops back down to 20 MB, so that seems to confirm it.
(With tcmalloc it stays all the way up at 700+ MB...)

So I guess we're just chasing artifacts of the allocator, and it'll be
hard to tell whether it's a real leak or just this kind of caching when
I happen to see pipewire-pulse with high memory later on...
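
If I do see it high later, one way to tell the two apart (a rough idea,
not something pipewire-pulse does today; the dump helper is hypothetical,
malloc_info() itself is a real glibc call) would be to dump the allocator
state on demand and compare it with RSS:

#include <malloc.h>
#include <stdio.h>

/* Hypothetical debug helper: write glibc's view of its arenas to a file,
 * e.g. triggered from a signal handler or a debug interface. */
static void dump_heap_state(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return;
    /* Writes an XML description of all arenas, including how much memory
     * sits on free lists versus what was obtained from the system; a large
     * free total points at allocator caching rather than a leak. */
    malloc_info(0, f);
    fclose(f);
}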



That all being said, I agree with Zbigniew that the allocated amount per
client looks big.

From what I can see, the big allocations are (I didn't look at the
lifetime of each alloc):
 - load_spa_handle for audioconvert/libspa-audioconvert allocates 3.7 MB
 - pw_proxy_new allocates 590 kB
 - reply_create_playback_stream allocates 4 MB
 - spa_buffer_alloc_array allocates 1 MB from negotiate_buffers
 - spa_buffer_alloc_array allocates 256 kB x2 + 128 kB
   from negotiate_link_buffers

Maybe some of these buffers, which stick around for the duration of the
connection, could be pooled and shared?
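
Very rough sketch of what I mean (made-up names, a single size class, no
locking; not an actual PipeWire API, just the general freelist idea of
keeping per-connection buffers around instead of reallocating them for
every client):

#include <stdlib.h>

struct pooled_buf {
    struct pooled_buf *next;
    size_t size;
    /* payload follows the header */
};

static struct pooled_buf *pool_head;

/* Hand out a buffer from the pool, or allocate a new one if none fits. */
static void *pool_get(size_t size)
{
    if (pool_head && pool_head->size >= size) {
        struct pooled_buf *b = pool_head;
        pool_head = b->next;
        return b + 1;
    }
    struct pooled_buf *b = malloc(sizeof(*b) + size);
    if (!b)
        return NULL;
    b->size = size;
    return b + 1;
}

/* Return a buffer to the pool instead of free()ing it, so the next
 * client connection can reuse it. */
static void pool_put(void *p)
{
    struct pooled_buf *b = (struct pooled_buf *)p - 1;
    b->next = pool_head;
    pool_head = b;
}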

-- 
Dominique
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure



