Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:

> On Thu, Feb 15, 2024 at 09:38:42AM +1100, Alistair Popple wrote:
>> > +++ b/mm/migrate_device.c
>> > @@ -377,33 +377,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>> > 			continue;
>> > 		}
>> >
>> > +		folio = page_folio(page);
>>
>> Instead of open coding the migrate pfn to folio conversion I think we
>> should define a migrate_pfn_to_folio() and get rid of the intermediate
>> local variable. This would also allow a minor clean up to the final for
>> loop in migrate_device_unmap().
>
> I think we should stop passing pfns into migrate_device_unmap().
> Passing an array of folios would make more sense to every function
> involved, afaict. Maybe I overlooked something ...

Note these are migration pfns. The main reason we do this is that we need
to track, and possibly modify, some per-pfn state between all of these
functions during the migration process.

> Also, have you had any thoughts on whether device memory is a type of
> folio like anon/file memory, or is it its own type?

I don't quite follow what the precise distinction is there, but I think of
them as normal pages/folios like anon/file memory folios, because we rely
on the same kernel paths and rules to manage them (ie. they get refcounted
the same as normal pages, CoWed, etc.).

Currently we only allow these to be mapped into private/anon VMAs, but I
have an experimental series to allow them to be mapped into shared or
file-backed VMAs, which basically involves putting them into the page
cache. Most drivers also have a 1:1 mapping of struct page to a physical
page of device memory, and thanks to all the folio work it's fairly easy
to extend this to support higher-order folios.

I will try to post the first half of my changes that convert all the
page-based handling to folios. I got caught up trying to figure out a sane
API for splitting/merging during migration, but maybe I should just post
the folio conversion as a simpler first step.
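
For concreteness, here is a rough, untested sketch of what the
migrate_pfn_to_folio() helper suggested above could look like.
migrate_pfn_to_page() and page_folio() are the existing helpers; the new
name is only the one proposed in the quoted text, not an existing API.

/*
 * Untested sketch of the proposed helper; it could sit next to
 * migrate_pfn_to_page() in include/linux/migrate.h.
 */
static inline struct folio *migrate_pfn_to_folio(unsigned long mpfn)
{
	struct page *page = migrate_pfn_to_page(mpfn);

	/* Invalid migration entries have no backing page, hence no folio. */
	return page ? page_folio(page) : NULL;
}

Callers such as the loops in migrate_device_unmap() could then do
folio = migrate_pfn_to_folio(src_pfns[i]) directly and drop the
intermediate struct page local, which is the clean up mentioned above.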