RE: [PATCH v5 3/5] mm/gup: Introduce pin_user_pages_fd() for pinning shmem/hugetlbfs file pages (v5)

Hi Christoph,

> 
> > +static struct page *alloc_file_page(struct file *file, pgoff_t idx)
> 
> alloc_file_pages seems like a weird name for something that assumes
> it is called either on a hugetlbfs or shmemfs file (without any
> asserts that this is true).
I see your concern. The word "file" does make it look like this API works
with all kinds of files, although it is meant to work specifically with
files that belong to shmemfs or hugetlbfs. Since it is intended to work
with memfds in particular, I'll rename this helper to alloc_memfd_page().
I think it also makes sense to do s/file/memfd/ across the whole patch.
Does this sound ok?

> gup.c also seems like a very odd place for such a helper.
I only created this helper to cleanly separate lookup and creation and to
reduce the level of indentation in pin_user_pages_fd(). Anyway, would
mm/memfd.c be a more appropriate location?

> 
> > + * Attempt to pin pages associated with a file that belongs to either
> > + * shmem or hugetlb.
> 
> Why do we need a special case for hugetlb or shmemfs?
As mentioned above, this API is mainly intended for memfds, and from what
I can see, memfds are backed by files from either shmemfs or hugetlbfs.

> 
> > +	if (!file)
> > +	    return -EINVAL;
> > +
> > +	if (!shmem_file(file) && !is_file_hugepages(file))
> > +	    return -EINVAL;
> 
> Indentation is messed up here.
Ok, will fix it in the next version.

> 
> > +		for (i = 0; i < nr_pages; i++) {
> > +			/*
> > + 			 * In most cases, we should be able to find the page
> > +			 * in the page cache. If we cannot find it, we try to
> > +			 * allocate one and add it to the page cache.
> > +			 */
> > +retry:
> > +			folio = __filemap_get_folio(file->f_mapping,
> > +						    start + i,
> > +						    FGP_ACCESSED, 0);
> 
> __filemap_get_folio is a very inefficient way to find a
> contiguous range of folios, I'd suggest to look into something that
> batches instead.
Ok, I will try to explore using filemap_get_folios_contig() or other
related APIs to make the lookup more efficient.
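Something along these lines is what I have in mind (completely untested
sketch; it glosses over large folios, where filemap_get_folios_contig()
can advance the index by more than one page per folio, and it borrows
pages[], nr_pages, start, ret, and the err label from the surrounding
pin_user_pages_fd() code in this patch):

```c
/* Untested sketch: batch page-cache lookups with
 * filemap_get_folios_contig() instead of calling
 * __filemap_get_folio() once per page. Holes in the
 * page cache still fall back to alloc_file_page().
 */
struct folio_batch fbatch;
pgoff_t pg_idx = start;
long nr_found = 0;

folio_batch_init(&fbatch);
while (nr_found < nr_pages) {
	unsigned int f, nr;

	nr = filemap_get_folios_contig(file->f_mapping, &pg_idx,
				       start + nr_pages - 1, &fbatch);
	if (!nr) {
		/* Hole in the page cache: allocate the missing
		 * page, then resume the batched lookup.
		 */
		page = alloc_file_page(file, pg_idx);
		if (IS_ERR(page)) {
			ret = PTR_ERR(page);
			if (ret == -EEXIST)
				continue;
			goto err;
		}
		pages[nr_found++] = page;
		pg_idx++;
		continue;
	}
	for (f = 0; f < nr && nr_found < nr_pages; f++)
		pages[nr_found++] = folio_page(fbatch.folios[f], 0);
	folio_batch_release(&fbatch);
}
```

If that direction looks reasonable, I'll flesh it out (including proper
large-folio handling) in the next version.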

> 
> > +			page = IS_ERR(folio) ? NULL: &folio->page;
> > +			if (!page) {
> > +				page = alloc_file_page(file, start + i);
> > +				if (IS_ERR(page)) {
> > +					ret = PTR_ERR(page);
> > +					if (ret == -EEXIST)
> > +						goto retry;
> > +					goto err;
> > +				}
> 
> This mix of folios and pages is odd.  Especially as hugetlbfs by
> definitions uses large folios.
Yeah, it does look odd, but I ultimately need an array of pages to call
check_and_migrate_movable_pages() and also to populate a scatterlist.
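That said, the per-folio expansion could at least stay folio-aware until
the very end, instead of taking only &folio->page. A hypothetical sketch
(assuming the folio, pages[], i, and nr_pages variables from the loop in
this patch):

```c
/* Sketch: expand a (possibly large) folio into the pages[]
 * array, one entry per constituent page.
 */
unsigned long j, fnr = folio_nr_pages(folio);

for (j = 0; j < fnr && i < nr_pages; j++)
	pages[i++] = folio_page(folio, j);
```

That still produces the page array that check_and_migrate_movable_pages()
and the scatterlist need, while handling hugetlb's large folios correctly.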

Thanks,
Vivek




