Re: [PATCH v26 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs

On Thu, Jul 27, 2023 at 2:37 AM Muhammad Usama Anjum
<usama.anjum@xxxxxxxxxxxxx> wrote:
>

<snip>

> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
> +{
> +       unsigned long walk_start, walk_end;
> +       struct mmu_notifier_range range;
> +       struct pagemap_scan_private p;
> +       size_t n_ranges_out = 0;
> +       int ret;
> +
> +       memset(&p, 0, sizeof(p));
> +       ret = pagemap_scan_get_args(&p.arg, uarg);
> +       if (ret)
> +               return ret;
> +
> +       ret = pagemap_scan_init_bounce_buffer(&p);
> +       if (ret)
> +               return ret;
> +
> +       /* Protection change for the range is going to happen. */
> +       if (p.arg.flags & PM_SCAN_WP_MATCHING) {
> +               mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> +                                       mm, p.arg.start, p.arg.end);
> +               mmu_notifier_invalidate_range_start(&range);
> +       }
> +
> +       walk_start = walk_end = p.arg.start;
> +       for (; walk_end != p.arg.end; walk_start = walk_end) {
> +               int n_out;
> +
> +               walk_end = min_t(unsigned long,
> +                                (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK,
> +                                p.arg.end);

This approach has performance implications. A basic program that scans
its own address space takes around 20-30 seconds, even though it has
just a few small mappings. The first optimization that comes to mind is
to remove the PAGEMAP_WALK_SIZE limit and instead stop walk_page_range()
when the bounce buffer is full. After the buffer has been drained,
walk_page_range() can be restarted from the address where it stopped
(see the sketch after the link below).

The test program and perf data can be found here:
https://gist.github.com/avagin/c5a22f3c78f8cb34281602dfe9c43d10
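
Roughly, the loop in do_pagemap_scan() could then look like the sketch
below. This is only an illustration of the idea, not a tested patch:
it relies on the fact that walk_page_range() aborts and propagates a
positive return value from a callback, and PM_SCAN_BUFFER_FULL and
p.walk_end are made-up names, assuming the pagemap_scan_ops callbacks
return such a code when the bounce buffer fills up and record the
address they stopped at in the private struct.

	walk_start = p.arg.start;
	while (walk_start < p.arg.end) {
		int n_out;

		ret = mmap_read_lock_killable(mm);
		if (ret)
			break;
		/* Walk the whole remaining range in one go... */
		ret = walk_page_range(mm, walk_start, p.arg.end,
				      &pagemap_scan_ops, &p);
		mmap_read_unlock(mm);

		/*
		 * ...and let the ops return a positive PM_SCAN_BUFFER_FULL
		 * (hypothetical) to pause the walk once the buffer is full.
		 */
		if (ret == PM_SCAN_BUFFER_FULL)
			ret = 0;
		else if (ret)
			break;

		n_out = pagemap_scan_flush_buffer(&p);
		if (n_out < 0) {
			ret = n_out;
			break;
		}
		n_ranges_out += n_out;

		/* p.walk_end (hypothetical) = where the ops stopped. */
		if (p.walk_end >= p.arg.end)
			break;
		walk_start = p.walk_end;
	}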

> +
> +               ret = mmap_read_lock_killable(mm);
> +               if (ret)
> +                       break;
> +               ret = walk_page_range(mm, walk_start, walk_end,
> +                                     &pagemap_scan_ops, &p);
> +               mmap_read_unlock(mm);
> +
> +               n_out = pagemap_scan_flush_buffer(&p);
> +               if (n_out < 0)
> +                       ret = n_out;
> +               else
> +                       n_ranges_out += n_out;
> +
> +               if (ret)
> +                       break;
> +       }
> +

Thanks,
Andrei



