On 10.09.20 16:41, Zi Yan wrote:
> On 10 Sep 2020, at 10:34, David Hildenbrand wrote:
> 
>>>> As long as we stay within safe zone boundaries you get a benefit in
>>>> most scenarios. As soon as we have a (temporary) workload that
>>>> requires more unmovable allocations, we fall back to polluting some
>>>> pageblocks only.
>>>
>>> The idea would work well until unmoveable pages begin to overflow
>>> into ZONE_PREFER_MOVABLE, or until we move the boundary of
>>> ZONE_PREFER_MOVABLE to avoid unmoveable page overflow. The issue
>>> comes from the lifetime of the unmoveable pages. Since some
>>> long-lived ones can sit around the boundary, there is no guarantee
>>> that ZONE_PREFER_MOVABLE can grow back, even if other unmoveable
>>> pages are deallocated. Ultimately, ZONE_PREFER_MOVABLE would shrink
>>> to a small size and we would be back to the situation we have now.
>>
>> As discussed, this would not happen in the usual case if we size it
>> reasonably. Of course, if you push it to the extreme (which was never
>> suggested!), you would create a mess. There is always a way to create
>> a mess if you abuse such a mechanism. Also see Rik's reply regarding
>> reclaim.
>>
>>> OK. I have a stupid question here. Why not just grow the pageblock
>>> to a larger size, like 1GB? The fragmentation caused by unmoveable
>>> pages would then be at a larger granularity, and it would be less
>>> likely that unmoveable pages get allocated from a movable pageblock,
>>> since the kernel would have a 1GB pageblock for them after a
>>> pageblock is stolen. If other kinds of pageblocks run out, moveable
>>> and reclaimable pages can fall back to unmoveable pageblocks.
>>> What am I missing here?
>>
>> Oh no. For example, pageblocks have to fit completely into a single
>> section (that's where the metadata is maintained). Please refrain
>> from suggesting to increase the section size ;)
> 
> Thank you for the explanation. I had no idea about the restrictions on
> pageblocks and sections. Out of curiosity, what prevents the growth of
> the section size?

The section size (and, based on that, the Linux memory block size)
defines
- the minimum size in which we can add_memory()
- the alignment requirement in which we can add_memory()

This is applicable
- in physical environments, where the BIOS decides the placement of
  DIMMs/NVDIMMs. The coarser the granularity, the less memory we might
  be able to make use of in corner cases.
- in virtualized environments, where we want to add memory in fairly
  small granularity. The coarser the granularity, the less flexibility
  we have.

arm64 has a section size of 1GB (and a THP/MAX_ORDER - 1 size of 512MB
with 64k base pages :/ ). That already turned out to be a problem - see
[1] regarding thoughts on how to shrink the section size.

I once read about thoughts of switching to 2MB THP on arm64 with any
base page size; I am not sure whether that will become real at some
point (and we might be able to reduce the pageblock size there as
well ...).

[1] https://lkml.kernel.org/r/AM6PR08MB40690714A2E77A7128B2B2ADF7700@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

> 
> —
> Best Regards,
> Yan Zi

--
Thanks,

David / dhildenb
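
As a rough userspace illustration of the add_memory() granularity
constraint described above (not kernel code, and only a sketch): the
kernel exports the memory block size via
/sys/devices/system/memory/block_size_bytes as a hex value, and a
hotplug range whose start or size is not a multiple of that block size
is rejected. The example range values below are made up.

/*
 * Illustrative sketch, not kernel code: read the memory block size the
 * kernel exports via sysfs and check whether a made-up hotplug range
 * would satisfy the add_memory() size/alignment requirements.
 */
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	uint64_t block_size;
	uint64_t start = 0x140000000ULL;	/* made-up DIMM start (5 GiB) */
	uint64_t size  = 0x10000000ULL;		/* made-up DIMM size (256 MiB) */
	FILE *f = fopen("/sys/devices/system/memory/block_size_bytes", "r");

	if (!f || fscanf(f, "%" SCNx64, &block_size) != 1) {
		perror("block_size_bytes");
		return 1;
	}
	fclose(f);

	/* Both the start and the size must be multiples of the block size. */
	if (start % block_size || size % block_size)
		printf("range would be rejected (block size %#" PRIx64 ")\n",
		       block_size);
	else
		printf("range is hotpluggable in %" PRIu64 " MiB blocks\n",
		       block_size >> 20);
	return 0;
}

On arm64, where the memory block size matches the 1GB section size, the
256 MiB example range above would already fail the check, which is
exactly the flexibility problem a coarser section size creates.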