On 4/12/24 7:29 PM, Jianfeng Wang wrote:
>
> On 4/12/24 12:48 AM, Vlastimil Babka wrote:
>> On 4/11/24 7:02 PM, Christoph Lameter (Ampere) wrote:
>>> On Thu, 11 Apr 2024, Jianfeng Wang wrote:
>>>
>>>> So, the fix is to limit the number of slabs to scan in
>>>> count_partial(), and output an approximated result if the list is too
>>>> long. Default to 10000 which should be enough for most sane cases.
>>>
>>> That is a creative approach. The problem though is that objects on the
>>> partial lists are kind of sorted. The partial slabs with only a few
>>> objects available are at the start of the list so that allocations cause
>>> them to be removed from the partial list fast. Full slabs do not need to
>>> be tracked on any list.
>>>
>>> The partial slabs with few objects are put at the end of the partial list
>>> in the hope that the few objects remaining will also be freed which would
>>> allow the freeing of the slab folio.
>>>
>>> So the object density may be higher at the beginning of the list.
>>>
>>> kmem_cache_shrink() will explicitly sort the partial lists to put the
>>> partial pages in that order.
>>>
>>> Can you run some tests showing the difference between the estimation and
>>> the real count?
>
> Yes.
> On a server with one NUMA node, I create a case that uses many dentry objects.

Could you describe in more detail how you make the dentry cache grow such a
large partial slab list? Thanks.
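
For reference, my reading of the proposed change is roughly the sketch
below. This is not the actual diff; it assumes the usual mm/slub.c context
(struct kmem_cache_node, struct slab, mult_frac()), and MAX_PARTIAL_TO_SCAN
stands in for the proposed 10000 cap:

/*
 * Sketch only, not the actual patch: walk at most MAX_PARTIAL_TO_SCAN
 * slabs on the per-node partial list, then extrapolate the running sum
 * to the full list length n->nr_partial.
 */
#define MAX_PARTIAL_TO_SCAN	10000

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct slab *))
{
	unsigned long flags;
	unsigned long x = 0;
	unsigned long scanned = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(slab, &n->partial, slab_list) {
		x += get_count(slab);
		if (++scanned >= MAX_PARTIAL_TO_SCAN) {
			/*
			 * Approximation: assume the average count of the
			 * scanned slabs holds for the whole list.
			 */
			x = mult_frac(x, n->nr_partial, scanned);
			break;
		}
	}
	spin_unlock_irqrestore(&n->list_lock, flags);

	return x;
}

Given the ordering described above, a head-only scan samples one end of a
sorted list, which would bias the extrapolation; scanning some slabs from
both the head and the tail of the list might reduce that bias.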