On 12/4/18 5:44 PM, Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 05:15:19PM -0800, Matthew Wilcox wrote:
>> On Tue, Dec 04, 2018 at 04:58:01PM -0800, John Hubbard wrote:
>>> On 12/4/18 3:03 PM, Dan Williams wrote:
>>>> Except the LRU fields are already in use for ZONE_DEVICE pages... how
>>>> does this proposal interact with those?
>>>
>>> Very badly: page->pgmap and page->hmm_data both get corrupted. Is there an entire
>>> use case I'm missing: calling get_user_pages() on ZONE_DEVICE pages? Said another
>>> way: is it reasonable to disallow calling get_user_pages() on ZONE_DEVICE pages?
>>>
>>> If we have to support get_user_pages() on ZONE_DEVICE pages, then the whole
>>> LRU field approach is unusable.
>>
>> We just need to rearrange ZONE_DEVICE pages. Please excuse the whitespace
>> damage:
>>
>> +++ b/include/linux/mm_types.h
>> @@ -151,10 +151,12 @@ struct page {
>>  #endif
>>  		};
>>  		struct {	/* ZONE_DEVICE pages */
>> +			unsigned long _zd_pad_2;	/* LRU */
>> +			unsigned long _zd_pad_3;	/* LRU */
>> +			unsigned long _zd_pad_1;	/* uses mapping */
>>  			/** @pgmap: Points to the hosting device page map. */
>>  			struct dev_pagemap *pgmap;
>>  			unsigned long hmm_data;
>> -			unsigned long _zd_pad_1;	/* uses mapping */
>>  		};
>>
>> You don't use page->private or page->index, do you Dan?
>
> page->private and page->index are used by HMM DEVICE pages.
>

OK, so for the ZONE_DEVICE + HMM case, that leaves just one field remaining for
dma-pinned information. Which might work. To recap, we need:

-- 1 bit for PageDmaPinned
-- 1 bit, if using LRU field(s), for PageDmaPinnedWasLru
-- N bits for a reference count

Those *could* be packed into a single 64-bit field, if really necessary.

thanks,
-- 
John Hubbard
NVIDIA