Re: [PATCH v6 5/8] mm: Device exclusive memory access

> +Not all devices support atomic access to system memory. To support atomic
> +operations to a shared virtual memory page such a device needs access to that
> +page which is exclusive of any userspace access from the CPU. The
> +``make_device_exclusive_range()`` function can be used to make a memory range
> +inaccessible from userspace.

s/Not all devices/Some devices/ ?

>  static inline int mm_has_notifiers(struct mm_struct *mm)
> @@ -528,7 +534,17 @@ static inline void mmu_notifier_range_init_migrate(
>  {
>  	mmu_notifier_range_init(range, MMU_NOTIFY_MIGRATE, flags, vma, mm,
>  				start, end);
> -	range->migrate_pgmap_owner = pgmap;
> +	range->owner = pgmap;
> +}
> +
> +static inline void mmu_notifier_range_init_exclusive(
> +			struct mmu_notifier_range *range, unsigned int flags,
> +			struct vm_area_struct *vma, struct mm_struct *mm,
> +			unsigned long start, unsigned long end, void *owner)
> +{
> +	mmu_notifier_range_init(range, MMU_NOTIFY_EXCLUSIVE, flags, vma, mm,
> +				start, end);
> +	range->owner = owner;

Maybe just replace mmu_notifier_range_init_migrate with a
mmu_notifier_range_init_owner helper that takes the owner but does
not hard-code the event type?
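
Something along these lines, purely as a sketch (helper name and exact
signature are only illustrative, not a concrete request):

	static inline void mmu_notifier_range_init_owner(
				struct mmu_notifier_range *range,
				enum mmu_notifier_event event, unsigned int flags,
				struct vm_area_struct *vma, struct mm_struct *mm,
				unsigned long start, unsigned long end, void *owner)
	{
		/* same as today, just without a hard-coded event */
		mmu_notifier_range_init(range, event, flags, vma, mm, start, end);
		range->owner = owner;
	}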

>  		}
> +	} else if (is_device_exclusive_entry(entry)) {
> +		page = pfn_swap_entry_to_page(entry);
> +
> +		get_page(page);
> +		rss[mm_counter(page)]++;
> +
> +		if (is_writable_device_exclusive_entry(entry) &&
> +		    is_cow_mapping(vm_flags)) {
> +			/*
> +			 * COW mappings require pages in both
> +			 * parent and child to be set to read.
> +			 */
> +			entry = make_readable_device_exclusive_entry(
> +							swp_offset(entry));
> +			pte = swp_entry_to_pte(entry);
> +			if (pte_swp_soft_dirty(*src_pte))
> +				pte = pte_swp_mksoft_dirty(pte);
> +			if (pte_swp_uffd_wp(*src_pte))
> +				pte = pte_swp_mkuffd_wp(pte);
> +			set_pte_at(src_mm, addr, src_pte, pte);
> +		}

Just cosmetic, but I wonder if we should factor this code block into
a little helper.
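
E.g. something like this (sketch only, helper name made up): it would
downgrade the writable device exclusive entry for the COW case and
preserve the soft-dirty/uffd-wp bits, with the caller doing the
set_pte_at():

	static pte_t make_readable_exclusive_pte(swp_entry_t entry, pte_t *src_pte)
	{
		pte_t pte;

		entry = make_readable_device_exclusive_entry(swp_offset(entry));
		pte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(*src_pte))
			pte = pte_swp_mksoft_dirty(pte);
		if (pte_swp_uffd_wp(*src_pte))
			pte = pte_swp_mkuffd_wp(pte);
		return pte;
	}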

> +
> +static bool try_to_protect_one(struct page *page, struct vm_area_struct *vma,
> +			unsigned long address, void *arg)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +	struct page_vma_mapped_walk pvmw = {
> +		.page = page,
> +		.vma = vma,
> +		.address = address,
> +	};
> +	struct ttp_args *ttp = (struct ttp_args *) arg;

This cast should not be needed.
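
(void * converts implicitly in C, so plain

	struct ttp_args *ttp = arg;

does the job.)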

> +	return ttp.valid && (!page_mapcount(page) ? true : false);

This can be simplified to:

	return ttp.valid && !page_mapcount(page);

> +	npages = get_user_pages_remote(mm, start, npages,
> +				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
> +				       pages, NULL, NULL);
> +	for (i = 0; i < npages; i++, start += PAGE_SIZE) {
> +		if (!trylock_page(pages[i])) {
> +			put_page(pages[i]);
> +			pages[i] = NULL;
> +			continue;
> +		}
> +
> +		if (!try_to_protect(pages[i], mm, start, arg)) {
> +			unlock_page(pages[i]);
> +			put_page(pages[i]);
> +			pages[i] = NULL;
> +		}

Should the trylock_page go into try_to_protect to simplify the loop
a little?  Also I wonder if we need make_device_exclusive_range at all,
or whether we should just open code the get_user_pages_remote +
try_to_protect loop in the callers, as that might allow them to also
deduce other information about the found pages.
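
With the trylock folded into try_to_protect (and assuming it also drops
the page reference on failure - just a sketch, not tested), the loop
would shrink to something like:

	for (i = 0; i < npages; i++, start += PAGE_SIZE) {
		/* try_to_protect() locks the page and puts it on failure */
		if (!try_to_protect(pages[i], mm, start, arg))
			pages[i] = NULL;
	}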

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@xxxxxx>


