On 4/11/24 7:02 PM, Christoph Lameter (Ampere) wrote:
> On Thu, 11 Apr 2024, Jianfeng Wang wrote:
>
>> So, the fix is to limit the number of slabs to scan in
>> count_partial(), and output an approximated result if the list is too
>> long. Default to 10000 which should be enough for most sane cases.
>
> That is a creative approach. The problem though is that objects on the
> partial lists are kind of sorted. The partial slabs with only a few
> free objects available are at the start of the list so that allocations
> cause them to be filled up and removed from the partial list fast. Full
> slabs do not need to be tracked on any list.
>
> The partial slabs with only a few objects in use are put at the end of
> the partial list in the hope that the few remaining objects will also
> be freed, which would allow the freeing of the slab folio.
>
> So the object density may be higher at the beginning of the list.
>
> kmem_cache_shrink() will explicitly sort the partial lists to put the
> partial slabs in that order.
>
> Can you run some tests showing the difference between the estimation
> and the real count?

Maybe we could also get a more accurate picture by counting N slabs from
the head and N from the tail and approximating from both. Also not
perfect, but it might be able to answer the question of whether the
kmem_cache is significantly fragmented, which is probably the only
information we can get from slabinfo's <active_objs> vs <num_objs>
anyway. IIRC the latter is always accurate, but the former never is,
because of cpu slabs, so we never know exactly how many objects are in
use. By comparing the two we can get an idea of the fragmentation, and
if this change doesn't make that estimate significantly worse, it should
be acceptable.
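
Roughly something like the below, in the context of mm/slub.c. Just an
untested sketch to illustrate the idea: MAX_PARTIAL_TO_SCAN stands for
the scan limit proposed in the patch (the name here is made up), the
get_count callback is the one count_partial() already takes, and a real
version would have to fall back to a plain full walk when nr_partial is
below the limit, otherwise the two loops would count some slabs twice:

static unsigned long count_partial_approx(struct kmem_cache_node *n,
                                          int (*get_count)(struct slab *))
{
        unsigned long flags;
        unsigned long counted = 0, scanned = 0;
        struct slab *slab;

        spin_lock_irqsave(&n->list_lock, flags);

        /* Sample the dense end: up to half the budget from the head. */
        list_for_each_entry(slab, &n->partial, slab_list) {
                if (scanned >= MAX_PARTIAL_TO_SCAN / 2)
                        break;
                counted += get_count(slab);
                scanned++;
        }

        /* Sample the sparse end: the rest of the budget from the tail. */
        list_for_each_entry_reverse(slab, &n->partial, slab_list) {
                if (scanned >= MAX_PARTIAL_TO_SCAN)
                        break;
                counted += get_count(slab);
                scanned++;
        }

        /* Extrapolate the sampled average to all nr_partial slabs. */
        if (scanned)
                counted = mult_frac(counted, n->nr_partial, scanned);

        spin_unlock_irqrestore(&n->list_lock, flags);

        return counted;
}

Since the head holds the dense slabs and the tail the sparse ones,
averaging both ends should bracket the true density instead of being
biased the way a head-only scan would be.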