Huang Ying <ying.huang@xxxxxxxxx> writes:

> From: "Huang, Ying" <ying.huang@xxxxxxxxx>
>
> Now, migrate_pages() migrates pages one by one, as in the following
> pseudo code,
>
>   for each page
>     unmap
>     flush TLB
>     copy
>     restore map
>
> If multiple pages are passed to migrate_pages(), there are
> opportunities to batch the TLB flushing and the copying.  That is, we
> can change the code to something like the following,
>
>   for each page
>     unmap
>   for each page
>     flush TLB
>   for each page
>     copy
>   for each page
>     restore map

We use a very similar sequence for the migrate_vma_*() set of calls.
It would be good if we could one day consolidate the two.  I believe
the biggest hindrance to that is that migrate_vma_*() operates on
arrays of pfns rather than on a list of pages.  The reason for that is
that it needs to migrate non-lru pages and hence can't use page->lru
to create a list of pages to migrate.

So from my perspective I think this direction is good, as it would
help with that.  One thing to watch out for is deadlocking when
locking multiple pages, though.

> The total number of TLB flushing IPIs can be reduced considerably.
> And we may use a hardware accelerator such as DSA to accelerate the
> page copying.
>
> So in this patchset, we refactor the migrate_pages() implementation
> and implement batched TLB flushing.  Based on this, hardware
> accelerated page copying can be implemented.
>
> If too many pages are passed to migrate_pages(), a naive batched
> implementation may unmap too many pages at the same time.  That
> increases the possibility for a task to wait for the migrated pages
> to be mapped again, so the latency may be hurt.  To deal with this
> issue, the maximum number of pages to be unmapped in a batch is
> restricted to no more than HPAGE_PMD_NR.  That is, the influence is
> at the same level as THP migration.
>
> We use the following test to measure the performance impact of the
> patchset.
>
> On a 2-socket Intel server,
>
>  - Run the pmbench memory accessing benchmark.
>
>  - Run `migratepages` to migrate the pages of pmbench between node 0
>    and node 1 back and forth.
>
> With the patchset, the number of TLB flushing IPIs is reduced by
> 99.1% during the test, and the number of pages migrated successfully
> per second increases by 291.7%.
>
> This patchset is based on v6.0-rc5 and the following patchset,
>
>   [PATCH -V3 0/8] migrate_pages(): fix several bugs in error path
>   https://lore.kernel.org/lkml/20220817081408.513338-1-ying.huang@xxxxxxxxx/
>
> The migrate_pages() related code is being converted to folios now, so
> this patchset cannot be applied on top of the recent akpm/mm-unstable
> branch.  This patchset is meant to check the basic idea.  If it is
> OK, I will rebase the patchset on top of the folio changes.
>
> Best Regards,
> Huang, Ying
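
To make the batched flow above a bit more concrete, here is a rough
sketch of what processing a single batch could look like.  This is
illustrative only and is not code from the series: unmap_and_queue(),
copy_contents() and restore_mapping() are hypothetical stand-ins for
the four phases in the cover letter, and the caller is assumed to cap
each batch at HPAGE_PMD_NR pages; try_to_unmap_flush() is the existing
hook that flushes the deferred TLB batch.

  /*
   * Illustrative sketch only, not code from the patchset.  The helpers
   * unmap_and_queue(), copy_contents() and restore_mapping() are
   * hypothetical; the caller is assumed to limit each batch to at most
   * HPAGE_PMD_NR pages.
   */
  static void migrate_one_batch(struct list_head *batch)
  {
          struct page *page;

          /* Phase 1: unmap every page, deferring the TLB flush. */
          list_for_each_entry(page, batch, lru)
                  unmap_and_queue(page);

          /* Phase 2: a single TLB flush covers the whole batch. */
          try_to_unmap_flush();

          /*
           * Phase 3: copy the contents; this is the step a hardware
           * accelerator such as DSA could take over.
           */
          list_for_each_entry(page, batch, lru)
                  copy_contents(page);

          /* Phase 4: restore the mappings to point at the new pages. */
          list_for_each_entry(page, batch, lru)
                  restore_mapping(page);
  }

Phase 1 is also where multiple pages end up locked at the same time,
which is where the deadlock concern I mentioned above would need care.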