Hi all,

recently I've been looking at inconsistent frame times in one of our graphics workloads, and the culprit appears to be in the MM subsystem. During workload execution, some graphics buffers, typically single-digit megabytes in size, are sporadically freed. The pages are freed via __folio_batch_release from drm_gem_put_pages, which means they are put on the pcp and drained back into the zone via free_pcppages_bulk.

As the buffers are quite large, even a single buffer free triggers the batching optimization added in 3b12e7e97938 ("mm/page_alloc: scale the number of pages that are batch freed"), since a large number of pages is freed without any intervening allocations. The pcp for the normal zone on this system has a high watermark of 614 pages and a batch size of 63, which means the batching optimization drives the number of pages freed per batch up to 551. As the cost per page free (including tracing overhead, which isn't negligible on this small ARM system) is around 0.7µs, and the batch free is done with the zone spinlock held and IRQs disabled, this leads to IRQ-disabled times upwards of 250µs, even on the production system without tracing. Those long IRQ-disabled sections interfere with the system's workload.

As the larger free batching was added on purpose, I don't want to rip it out altogether. But there are also no tunables aside from the pcp high watermark, which may have other unintended side effects. I'm hoping to get some ideas on how to proceed here. Should we consider a more conservative maximum number of pages for the batching optimization? Should another tunable be added? Are there other clever ideas to minimize those critical sections?

Regards,
Lucas
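
P.S. In case it helps frame the discussion, below is a small standalone model of the batch scaling as I read it from 3b12e7e97938 and the nr_pcp_free() logic in mm/page_alloc.c. The doubling/clamping is paraphrased rather than copied, and the 0.7µs per-page cost is my own measurement (including tracing overhead), so treat the printed times as ballpark figures for this particular system only.

/*
 * Standalone model of the pcp free batch scaling: on each free
 * without an intervening allocation the effective batch doubles,
 * clamped to high - batch. Values below are from the system
 * described above.
 */
#include <stdio.h>

int main(void)
{
	const int high = 614;        /* pcp high watermark, normal zone */
	const int batch = 63;        /* pcp batch */
	const int max_nr_free = high - batch;   /* 551 */
	const double cost_us = 0.7;  /* measured per-page cost, incl. tracing */
	int free_factor = 0;
	int nr;

	/* consecutive frees without allocations keep doubling the batch */
	for (int event = 0; event < 6; event++) {
		nr = batch << free_factor;
		if (nr < max_nr_free)
			free_factor++;
		if (nr > max_nr_free)
			nr = max_nr_free;
		printf("free event %d: %d pages, ~%.0f us with zone lock held, IRQs off\n",
		       event, nr, nr * cost_us);
	}
	return 0;
}

With these numbers the batch ramps 63 -> 126 -> 252 -> 504 and then saturates at 551 pages, which is where the long IRQ-disabled sections come from.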