On 10/20/2023 10:30 AM, Yin, Fengwei wrote:
On 10/20/2023 10:09 AM, Baolin Wang wrote:
On 10/19/2023 8:07 PM, Yin, Fengwei wrote:
On 10/19/2023 4:51 PM, Baolin Wang wrote:
On 10/19/2023 4:22 PM, Yin Fengwei wrote:
Hi Baolin,
On 10/19/23 15:25, Baolin Wang wrote:
On 10/19/2023 2:09 PM, Huang, Ying wrote:
Zi Yan <ziy@xxxxxxxxxx> writes:
On 18 Oct 2023, at 9:04, Baolin Wang wrote:
While doing compaction, I found that lru_add_drain() is an obvious hotspot
when migrating pages. The distribution of this hotspot is as follows:
   - 18.75% compact_zone
      - 17.39% migrate_pages
         - 13.79% migrate_pages_batch
            - 11.66% migrate_folio_move
               - 7.02% lru_add_drain
                  + 7.02% lru_add_drain_cpu
               + 3.00% move_to_new_folio
                 1.23% rmap_walk
            + 1.92% migrate_folio_unmap
         + 3.20% migrate_pages_sync
      + 0.90% isolate_migratepages
The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
__unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
immediately, to help build up the correct newpage->mlock_count in
remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
being migrated, we can avoid this lru drain operation, especially in heavily
concurrent scenarios.
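The idea, roughly (a simplified sketch of the approach, not the exact patch;
the unmap phase, locking and error paths are all elided):

    /*
     * Condensed from the shape of migrate_folio_move(): sample the mlock
     * state before the unmap phase (try_to_migrate_one() will clear it),
     * then drain the per-CPU LRU caches only when the folio was mlocked,
     * i.e. only when remove_migration_ptes() must rebuild mlock_count.
     */
    bool page_was_mlocked = folio_test_mlocked(src);  /* before unmapping */

    /* ... unmap src, move its contents into dst ... */

    folio_add_lru(dst);
    if (page_was_mlocked)
            lru_add_drain();        /* only mlocked folios need the drain */
    if (page_was_mapped)
            remove_migration_ptes(src, dst, false);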
lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
have an additional pin that prevents migration; see the folio_get(folio) call in folio_add_lru().
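For reference, the pin comes from folio_add_lru(), roughly (paraphrased from
mm/swap.c, with the LRU-gen handling and debug checks elided):

    void folio_add_lru(struct folio *folio)
    {
            struct folio_batch *fbatch;

            /* The extra reference: held while the folio sits in the
             * per-CPU lru_add folio_batch, and released when the batch
             * is drained onto the LRU lists, e.g. by lru_add_drain(). */
            folio_get(folio);
            local_lock(&cpu_fbatches.lock);
            fbatch = this_cpu_ptr(&cpu_fbatches.lru_add);
            folio_batch_add_and_move(fbatch, folio, lru_add_fn);
            local_unlock(&cpu_fbatches.lock);
    }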
lru_add_drain() is called after the page reference count checking in
move_to_new_folio(). So, I don't think this is an issue.
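The check in question, roughly (paraphrased from folio_migrate_mapping(),
which move_to_new_folio() reaches via migrate_folio()):

    /* Migration bails out if anything else, e.g. a folio_batch entry,
     * still holds a reference beyond what is expected for this folio. */
    int expected_count = folio_expected_refs(mapping, folio);

    if (folio_ref_count(folio) != expected_count)
            return -EAGAIN;

Since migrate_folio_move() only calls lru_add_drain() after move_to_new_folio()
has already succeeded, the drain cannot influence this check.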
Agree. The purpose of adding lru_add_drain() is to address the 'mlock_count' issue for mlocked pages; please see commit c3096e6782b7 and the related comments. Moreover, I haven't seen an increase in page migration failures due to the page reference count check after this patch.
I agree with you. My understanding is also that lru_add_drain() is only needed
for mlocked folios, to correct mlock_count. I'd like to hear confirmation from Hugh.
But I have a question: why do we need to use page_was_mlocked instead of checking
folio_test_mlocked(src)? Does page migration clear the mlock flag? Thanks.
Yes, please see the call trace: try_to_migrate_one() ---> page_remove_rmap() ---> munlock_vma_folio().
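Roughly, at the tail of page_remove_rmap() (paraphrased from mm/rmap.c and
mm/internal.h):

    /* Unmapping a page from a VM_LOCKED vma munlocks it, which clears
     * the folio's mlock flag and fixes up the mlock accounting. */
    munlock_vma_folio(folio, vma, compound);

    static inline void munlock_vma_folio(struct folio *folio,
                    struct vm_area_struct *vma, bool compound)
    {
            if (unlikely(vma->vm_flags & VM_LOCKED) &&
                (compound || !folio_test_large(folio)))
                    munlock_folio(folio);
    }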
Yes. This will clear the mlock bit.
What about setting the dst folio mlocked, if the source is, before
try_to_migrate_one()? And then checking whether the dst folio is mlocked
afterwards? We would also need to clear the mlocked flag if migration fails.
I suppose the change is minor. Just a thought. Thanks.
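Something along these lines (a hypothetical, untested sketch of the idea;
the failure handling is only indicated):

    /* Hypothetical sketch (untested): carry the mlock state on dst itself. */
    if (folio_test_mlocked(src))
            folio_set_mlocked(dst);   /* before try_to_migrate_one() clears src */

    /* ... unmap, move, remove migration ptes ... */

    if (rc != MIGRATEPAGE_SUCCESS)
            folio_clear_mlocked(dst); /* undo if the migration fails */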
IMO, this will break the mlock-related statistics in mlock_folio() when remove_migration_pte() rebuilds the mlock status and mlock count.
Another concern I can see is that, during page migration, a concurrent munlock() can be called to clear the VM_LOCKED flags on the VMAs, in which case remove_migration_pte() should not rebuild the mlock status and mlock count. But the dst folio's mlocked status would still remain, which is wrong.
So your suggested approach does not seem easy, and I think my patch stays simple by re-using the existing __migrate_folio_record() and __migrate_folio_extract() :)
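i.e., along these lines (simplified; the exact bit names are approximate):

    /* Pack a new "was mlocked" bit into dst->private next to the existing
     * "was mapped" bit; the anon_vma pointer is aligned, so its low bits
     * are free to carry this state across the unmap phase. */
    #define PAGE_WAS_MAPPED         BIT(0)
    #define PAGE_WAS_MLOCKED        BIT(1)

    static void __migrate_folio_record(struct folio *dst, int old_page_state,
                                       struct anon_vma *anon_vma)
    {
            dst->private = (void *)anon_vma + old_page_state;
    }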
Can these concerns be addressed by clearing the dst mlocked flag after
lru_add_drain() but before remove_migration_pte()?
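i.e. (hypothetical ordering, untested):

    folio_add_lru(dst);
    lru_add_drain();
    folio_clear_mlocked(dst);       /* drop any carried-over flag before the
                                     * rmap walk rebuilds the real state */
    remove_migration_ptes(src, dst, false);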
IMHO, that seems too hacky to me. I still prefer to rely on the normal
migration process for mlocked pages.