Anyway, I may be wrong; CC'ing more experts to see if we can get some
clarification from them.

> 
> 
>>> complexity, yet doesn't handle all cases (cf your comment about devmem).
>>
>> I am not sure yet whether unmapping only needs to be done using devmem's
>> own version of the DMA API, but it seems waiting might also need its own
>> version of kicking/flushing for devmem, as devmem might be held from
>> user space?
>>
>>>
>>> And even if it did handle all cases, force-releasing pages in this way
>>> really feels like it's just papering over the issue. If there are pages
>>> being leaked (or that are outstanding forever, which basically amounts
>>> to the same thing), that is something we should be fixing the root cause
>>> of, not just working around it like this series does.
>>
>> If there were a definite bound on the waiting time, I would probably agree
>> with the above. From the previous discussion, it seems the time needed for
>> the kicking/flushing is indefinite, depending on how much cache has to be
>> scanned/flushed.
>>
>> For the 'papering over' part, it seems to come down to whether we want to
>> paper over the different kicking/flushing mechanisms or paper over
>> unmapping using the different DMA APIs.
>>
>> Also, page_pool is not really an allocator; it is more like a pool layered
>> on top of a different allocator, such as the buddy allocator or the devmem
>> allocator. I am not sure it makes much sense to do the flushing when
>> page_pool_destroy() is called if the buddy allocator behind the page_pool
>> is not under memory pressure yet.
>>
> 
> I still see page_pool as an allocator like the SLUB/SLAB allocators,
> where slab caches are created (and can be destroyed again) and we
> can allocate slab objects from them.  Slab allocators also use the
> buddy allocator as their backing allocator.

I am not sure SLUB/SLAB is that similar to page_pool for the specific
problem here. At least SLUB/SLAB doesn't seem to support DMA mapping in
its core, and it doesn't seem to allow inflight objects when
kmem_cache_destroy() is called, as its alloc API doesn't take a reference
on s->refcount and doesn't do the inflight accounting that page_pool does:
https://elixir.bootlin.com/linux/v6.12-rc6/source/mm/slab_common.c#L512


> 
> The page_pool is of course evolving with the addition of the devmem
> allocator as a different "backing" allocator type.





