On Thu, Dec 17, 2020 at 6:52 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
> On 12/16/20 2:25 PM, Oscar Salvador wrote:
> > On Wed, Dec 16, 2020 at 02:08:30PM -0800, Mike Kravetz wrote:
> >>> + * vmemmap_rmap_walk - walk vmemmap page table
> >>> +
> >>> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> >>> +			       unsigned long end, struct vmemmap_rmap_walk *walk)
> >>> +{
> >>> +	pte_t *pte;
> >>> +
> >>> +	pte = pte_offset_kernel(pmd, addr);
> >>> +	do {
> >>> +		BUG_ON(pte_none(*pte));
> >>> +
> >>> +		if (!walk->reuse)
> >>> +			walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> >>
> >> It may be just me, but I don't like the pte[-1] here. It certainly does work
> >> as designed because we want to remap all pages in the range to the page before
> >> the range (at offset -1). But, we do not really validate this 'reuse' page.
> >> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> >> for pte[-1]. Based on the usage for HugeTLB pages, we can be confident that
> >> pte[-1] is actually a pte. In discussions with Oscar, you mentioned another
> >> possible use for these routines.
> >
> > Without giving it much of a thought, I guess we could duplicate the
> > BUG_ON for the pte outside the loop, and add a new one for pte[-1].
> > Also, since walk->reuse seems to not change once it is set, we can take
> > it outside the loop? e.g:
> >
> > pte *pte;
> >
> > pte = pte_offset_kernel(pmd, addr);
> > BUG_ON(pte_none(*pte));
> > BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
> > walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> > do {
> >  ....
> > } while...
> >
> > Or I am not sure whether we want to keep it inside the loop in case
> > future cases change walk->reuse during the operation.
> > But to be honest, I do not think it is realistic of all future possible
> > uses of this, so I would rather keep it simple for now.
>
> I was thinking about possibly passing the 'reuse' address as another parameter
> to vmemmap_remap_reuse(). We could add this addr to the vmemmap_rmap_walk
> struct and set walk->reuse when we get to the pte for that address. Of
> course this would imply that the addr would need to be part of the range.

Maybe adding another parameter is unnecessary. How about doing this
inside vmemmap_remap_reuse()? The 'reuse' address is just start +
PAGE_SIZE.

void vmemmap_remap_free(unsigned long start, unsigned long size)
{
	unsigned long end = start + size;
	unsigned long reuse_addr = start + PAGE_SIZE;
	LIST_HEAD(vmemmap_pages);

	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_remap_pte,
		.vmemmap_pages	= &vmemmap_pages,
		.reuse_addr	= reuse_addr,
	};
}

>
> Ideally, we would walk the page table to get to the reuse page. My concern
> was not explicitly about adding the BUG_ON. In more general use, *pte could
> be the first entry on a pte page. And, then pte[-1] may not even be a pte.
>
> Again, I don't think this matters for the current HugeTLB use case. Just a
> little concerned if code is put to use for other purposes.
> --
> Mike Kravetz

--
Yours,
Muchun
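
For illustration, here is a minimal sketch of how the pte walk might
consume a reuse_addr carried in the walk structure, so the reuse page is
taken from the walk itself rather than by peeking at pte[-1]. The struct
layout and the remap_pte callback signature below are assumptions based
on the snippets quoted above, not the actual patch:

struct vmemmap_remap_walk {
	void (*remap_pte)(pte_t *pte, unsigned long addr,
			  struct vmemmap_remap_walk *walk);
	unsigned long reuse_addr;	/* address of the page to be reused */
	struct page *reuse;		/* filled in when the walk hits reuse_addr */
	struct list_head *vmemmap_pages;
};

static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
			      unsigned long end,
			      struct vmemmap_remap_walk *walk)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);

	do {
		BUG_ON(pte_none(*pte));

		if (addr == walk->reuse_addr) {
			/*
			 * The reuse pte was just checked by the BUG_ON
			 * above, so there is no need to look at pte[-1].
			 */
			walk->reuse = pte_page(*pte);
			continue;
		}

		/*
		 * The reuse page must already be known here, i.e. the
		 * walk has to start at reuse_addr.
		 */
		BUG_ON(!walk->reuse);
		walk->remap_pte(pte, addr, walk);
	} while (pte++, addr += PAGE_SIZE, addr != end);
}

This assumes the reuse page is the first page of the walked range, so
walk->reuse is populated before remap_pte() runs for any other page; the
unvalidated pte[-1] access Mike is concerned about goes away because the
reuse pte is covered by the same BUG_ON as every other pte in the range.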