On Fri, Mar 05, 2021 at 05:06:17PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 12:23:22, Minchan Kim wrote:
> > On Wed, Mar 03, 2021 at 01:49:36PM +0100, Michal Hocko wrote:
> > > On Tue 02-03-21 13:09:48, Minchan Kim wrote:
> > > > An LRU pagevec holds a refcount on its pages until the pagevec is drained.
> > > > That can prevent migration, since the refcount of the page is then greater
> > > > than what the migration logic expects. To mitigate the issue,
> > > > callers of migrate_pages drain the LRU pagevec via migrate_prep or
> > > > lru_add_drain_all before the migrate_pages call.
> > > >
> > > > However, that is not enough, because pages coming into the pagevec after
> > > > the draining call can still stay in the pagevec and keep
> > > > preventing page migration. Since some callers of migrate_pages have
> > > > retry logic with LRU draining, the page would migrate on the next trial,
> > > > but it is still fragile in that it doesn't close the fundamental race
> > > > between LRU pages entering the pagevec and migration, so the migration
> > > > failure could cause a contiguous memory allocation failure in the end.
> > > >
> > > > To close the race, this patch disables the LRU caches (i.e., pagevec)
> > > > during an ongoing migration until the migration is done.
> > > >
> > > > Since it's really hard to reproduce, I measured how many times
> > > > migrate_pages retried with force mode using the debug code below.
> > > >
> > > > int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > > > 	..
> > > > 	..
> > > >
> > > > 	if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> > > > 		printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> > > > 		dump_page(page, "fail to migrate");
> > > > 	}
> > > >
> > > > The test was repeatedly launching android apps with cma allocation
> > > > running in the background every five seconds. The total cma allocation
> > > > count was about 500 during the testing. With this patch, the dump_page
> > > > count was reduced from 400 to 30.
> > >
> > > Have you seen any improvement on the CMA allocation success rate?
> >
> > Unfortunately, the cma alloc failure rate is really hard to reproduce
> > with a reasonable margin of error under a real workload.
> > That's why I measured the soft metric instead of direct cma failures
> > under the real workload (I don't want to make some ad-hoc artificial
> > benchmark and keep tuning system knobs until it shows an extremely
> > exaggerated result just to make the patch's effect look convincing).
> >
> > Please say so if you believe this work is pointless unless there is
> > stable data under a reproducible scenario. I am happy to drop it.
>
> Well, I am not saying that this is pointless. In the end the resulting
> change is relatively small and it provides useful functionality for
> other users (e.g. hotplug). That should be a sufficient justification.

Yup, that was my impression as well: it is worth upstreaming rather than
keeping it in a downstream tree and letting it diverge.

> I was asking about the CMA allocation success rate because that is a much
> more reasonable metric than how many times something has retried, because
> retries can help to increase the success rate and the patch doesn't really
> remove those. If you want to use the number of retries as a metric then the
> average allocation latency would be more meaningful.

I believe the allocation latency would be pretty big and the retry part
would be marginal, so I doubt it's meaningful.

Let me send the next revision with the description as-is once I fix the
places you pointed out.

Thanks.
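
For context, a minimal sketch of the approach discussed above: a migrate_pages()
caller brackets the whole migration attempt so that pages faulted in concurrently
cannot sit in a per-CPU pagevec holding an extra reference. The
lru_cache_disable()/lru_cache_enable() pair is the interface this series adds;
do_migrate_range_example() is a hypothetical stand-in for a real caller (e.g. the
CMA/alloc_contig_range path), not code from the actual patch.

#include <linux/swap.h>
#include <linux/migrate.h>

/* Hypothetical helper: isolate LRU pages in [start_pfn, end_pfn) and
 * feed them to migrate_pages(); details elided for brevity. */
static int do_migrate_range_example(unsigned long start_pfn,
				    unsigned long end_pfn)
{
	return 0;
}

static int migrate_contig_range_example(unsigned long start_pfn,
					unsigned long end_pfn)
{
	int ret;

	/* Drain current pagevec contents and keep the caches disabled,
	 * closing the race with pages entering a pagevec afterwards. */
	lru_cache_disable();

	/* Isolate and migrate pages; retries no longer race with pagevecs. */
	ret = do_migrate_range_example(start_pfn, end_pfn);

	/* Restore normal per-CPU LRU batching. */
	lru_cache_enable();

	return ret;
}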