On 03/08/2017 03:16 AM, Yisheng Xie wrote:
> Hi Vlastimil,
>
> On 2017/2/11 1:23, Vlastimil Babka wrote:
>> @@ -1977,7 +1978,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>  	unsigned int current_order = page_order(page);
>>  	struct free_area *area;
>>  	int free_pages, good_pages;
>> -	int old_block_type;
>> +	int old_block_type, new_block_type;
>>
>>  	/* Take ownership for orders >= pageblock_order */
>>  	if (current_order >= pageblock_order) {
>> @@ -1991,11 +1992,27 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>  	if (!whole_block) {
>>  		area = &zone->free_area[current_order];
>>  		list_move(&page->lru, &area->free_list[start_type]);
>> -		return;
>> +		free_pages = 1 << current_order;
>> +		/* TODO: We didn't scan the block, so be pessimistic */
>> +		good_pages = 0;
>> +	} else {
>> +		free_pages = move_freepages_block(zone, page, start_type,
>> +							&good_pages);
>> +		/*
>> +		 * good_pages is now the number of movable pages, but if we
>> +		 * want UNMOVABLE or RECLAIMABLE, we consider all non-movable
>> +		 * as good (but we can't fully distinguish them)
>> +		 */
>> +		if (start_type != MIGRATE_MOVABLE)
>> +			good_pages = pageblock_nr_pages - free_pages -
>> +						good_pages;
>> 	}
>>
>>  	free_pages = move_freepages_block(zone, page, start_type,
>>  						&good_pages);
> It seems this move_freepages_block() should be removed: if we can steal
> the whole block, then just do it; if not, we can check whether to set it
> as a mixed mt, right? Please let me know if I missed something.

Right. My results suggested this patch was buggy, so this might be the bug
(or one of the bugs); thanks for pointing it out. I've reposted v3 without
the RFC patches 9 and 10 and will return to them later.
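
For illustration, below is a rough userspace model of the control flow
Yisheng is suggesting: scan the pageblock at most once and drop the second
move_freepages_block() call. The struct block, scan_block() helper and the
half-block threshold are simplified stand-ins, not the real mm/page_alloc.c
code.

/*
 * Hypothetical sketch, not the real kernel code: steal either a single
 * buddy page (pessimistic, no scan) or the whole block (one scan).
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

struct block {
	int nr_free;		/* free pages in the block */
	int nr_movable;		/* allocated but movable pages */
	enum migratetype mt;
};

/* Stand-in for move_freepages_block(): returns free pages, fills good_pages */
static int scan_block(struct block *b, int *good_pages)
{
	*good_pages = b->nr_movable;
	return b->nr_free;
}

static void steal_suitable_fallback(struct block *b,
				    enum migratetype start_type,
				    bool whole_block, int order)
{
	int free_pages, good_pages;

	if (!whole_block) {
		/* Steal only the buddy we found; the block was not scanned. */
		free_pages = 1 << order;
		good_pages = 0;	/* be pessimistic */
	} else {
		/* Single scan of the block; no second call afterwards. */
		free_pages = scan_block(b, &good_pages);
		/* For unmovable/reclaimable requests, non-movable pages count as good. */
		if (start_type != MIGRATE_MOVABLE)
			good_pages = PAGEBLOCK_NR_PAGES - free_pages - good_pages;
	}

	/* Claim the whole block only if enough of it is free or "good". */
	if (free_pages + good_pages >= PAGEBLOCK_NR_PAGES / 2)
		b->mt = start_type;
}

int main(void)
{
	struct block b = { .nr_free = 300, .nr_movable = 10, .mt = MIGRATE_MOVABLE };

	steal_suitable_fallback(&b, MIGRATE_UNMOVABLE, true, 3);
	printf("block migratetype after steal: %d\n", b.mt);
	return 0;
}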