Re: [RFC PATCH 1/6] mm: huge_memory: add new debugfs interface to trigger split huge page on any page range.

On 16 Nov 2020, at 11:06, Kirill A. Shutemov wrote:

> On Wed, Nov 11, 2020 at 03:40:03PM -0500, Zi Yan wrote:
>> From: Zi Yan <ziy@xxxxxxxxxx>
>>
>> Huge pages in the process with the given pid and virtual address range
>> are split. It is used to test split huge page function. In addition,
>> a testing program is added to tools/testing/selftests/vm to utilize the
>> interface by splitting PMD THPs.
>>
>> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
>> ---
>>  mm/huge_memory.c                              |  98 +++++++++++
>>  mm/internal.h                                 |   1 +
>>  mm/migrate.c                                  |   2 +-
>>  tools/testing/selftests/vm/Makefile           |   1 +
>>  .../selftests/vm/split_huge_page_test.c       | 161 ++++++++++++++++++
>>  5 files changed, 262 insertions(+), 1 deletion(-)
>>  create mode 100644 tools/testing/selftests/vm/split_huge_page_test.c
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 207ebca8c654..c4fead5ead31 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -7,6 +7,7 @@
>>
>>  #include <linux/mm.h>
>>  #include <linux/sched.h>
>> +#include <linux/sched/mm.h>
>>  #include <linux/sched/coredump.h>
>>  #include <linux/sched/numa_balancing.h>
>>  #include <linux/highmem.h>
>> @@ -2935,10 +2936,107 @@ static int split_huge_pages_set(void *data, u64 val)
>>  DEFINE_DEBUGFS_ATTRIBUTE(split_huge_pages_fops, NULL, split_huge_pages_set,
>>  		"%llu\n");
>>
>> +static ssize_t split_huge_pages_in_range_pid_write(struct file *file,
>> +		const char __user *buf, size_t count, loff_t *ppops)
>> +{
>> +	static DEFINE_MUTEX(mutex);
>> +	ssize_t ret;
>> +	char input_buf[80]; /* hold pid, start_vaddr, end_vaddr */
>> +	int pid;
>> +	unsigned long vaddr_start, vaddr_end, addr;
>> +	nodemask_t task_nodes;
>> +	struct mm_struct *mm;
>> +
>> +	ret = mutex_lock_interruptible(&mutex);
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = -EFAULT;
>> +
>> +	memset(input_buf, 0, 80);
>> +	if (copy_from_user(input_buf, buf, min_t(size_t, count, 80)))
>> +		goto out;
>> +
>> +	input_buf[80] = '\0';
>
> Hm. Out-of-buffer access?

Sorry. Will fix it.
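
Something like this, just a sketch (keeping the 80-byte buffer and reserving the
last byte for the terminating NUL, so the index stays in bounds):

	char input_buf[80]; /* hold pid, start_vaddr, end_vaddr */
	size_t buf_len = min_t(size_t, count, sizeof(input_buf) - 1);

	memset(input_buf, 0, sizeof(input_buf));
	if (copy_from_user(input_buf, buf, buf_len))
		goto out;
	/* terminate inside the buffer instead of one past the end */
	input_buf[buf_len] = '\0';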

>
>> +	ret = sscanf(input_buf, "%d,%lx,%lx", &pid, &vaddr_start, &vaddr_end);
>
> Why hex without 0x prefix?

No particular reason. Let me add the prefix.
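
That is, something like this, requiring an explicit 0x prefix in the input:

	ret = sscanf(input_buf, "%d,0x%lx,0x%lx", &pid, &vaddr_start, &vaddr_end);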

>
>> +	if (ret != 3) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +	vaddr_start &= PAGE_MASK;
>> +	vaddr_end &= PAGE_MASK;
>> +
>> +	ret = strlen(input_buf);
>> +	pr_debug("split huge pages in pid: %d, vaddr: [%lx - %lx]\n",
>> +		 pid, vaddr_start, vaddr_end);
>> +
>> +	mm = find_mm_struct(pid, &task_nodes);
>
> I don't follow why you need nodemask.

I don’t need it. I just reuse the find_mm_struct function from
mm/migrate.c.
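
For reference, the mm/internal.h and mm/migrate.c lines in the diffstat above
are only there to make that reuse possible, along the lines of:

	/* mm/migrate.c: drop the static so the helper can be shared */
	struct mm_struct *find_mm_struct(pid_t pid, nodemask_t *mem_nodes)

	/* mm/internal.h: matching declaration */
	struct mm_struct *find_mm_struct(pid_t pid, nodemask_t *mem_nodes);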

>
>> +	if (IS_ERR(mm)) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	mmap_read_lock(mm);
>> +	for (addr = vaddr_start; addr < vaddr_end;) {
>> +		struct vm_area_struct *vma = find_vma(mm, addr);
>> +		unsigned int follflags;
>> +		struct page *page;
>> +
>> +		if (!vma || addr < vma->vm_start || !vma_migratable(vma))
>> +			break;
>> +
>> +		/* FOLL_DUMP to ignore special (like zero) pages */
>> +		follflags = FOLL_GET | FOLL_DUMP;
>> +		page = follow_page(vma, addr, follflags);
>> +
>> +		if (IS_ERR(page))
>> +			break;
>> +		if (!page)
>> +			break;
>> +
>> +		if (!is_transparent_hugepage(page))
>> +			goto next;
>> +
>> +		if (!can_split_huge_page(page, NULL))
>> +			goto next;
>> +
>> +		if (!trylock_page(page))
>> +			goto next;
>> +
>> +		addr += page_size(page) - PAGE_SIZE;
>
> Who said it was mapped as huge? mremap() allows constructing a PTE page
> table filled with PTE-mapped THPs, each of them distinct.

I forgot about this. I was trying to be clever and skip the remaining
subpages once a THP was split. I will always increase addr by PAGE_SIZE
to handle this situation (see the revised loop below).

>> +
>> +		/* reset addr if split fails */
>> +		if (split_huge_page(page))
>> +			addr -= (page_size(page) - PAGE_SIZE);
>> +
>> +		unlock_page(page);
>> +next:
>> +		/* next page */
>> +		addr += page_size(page);
>
> Isn't it the second time if split_huge_page() succeeds?

If split_huge_page() succeeds, page_size(page) becomes PAGE_SIZE, and addr
was already increased by THP size - PAGE_SIZE above, so the second increment
puts addr right at the end of the original THP (for a 2MB THP:
(2MB - 4KB) + 4KB = 2MB past the THP's start).

Anyway, I will change the code to something like:

        /*
         * always increase addr by PAGE_SIZE, since we could have a PTE page
         * table filled with PTE-mapped THPs, each of which is distinct.
         */
        for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {

                ...

                if (!trylock_page(page)) {
                        /* drop the reference taken by FOLL_GET above */
                        put_page(page);
                        continue;
                }

                split_huge_page(page);

                unlock_page(page);
                put_page(page);
        }
        mmap_read_unlock(mm);
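
And for completeness, the userspace side in split_huge_page_test.c drives the
interface roughly like the sketch below (the debugfs path and the helper name
here are just placeholders for illustration, using the 0x prefix from the
format change above):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/* path assumed from the handler name; not copied from the patch */
	#define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages_in_range_pid"

	/* ask the kernel to split THPs of @pid mapped in [@start, @end) */
	int write_split_debugfs(int pid, unsigned long start, unsigned long end)
	{
		char input[80];
		ssize_t len;
		int fd;

		len = snprintf(input, sizeof(input), "%d,0x%lx,0x%lx", pid, start, end);
		if (len < 0 || len >= (ssize_t)sizeof(input))
			return -1;

		fd = open(SPLIT_DEBUGFS, O_WRONLY);
		if (fd < 0)
			return -1;

		if (write(fd, input, len) != len) {
			close(fd);
			return -1;
		}
		close(fd);
		return 0;
	}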


Thanks for reviewing the patch.

—
Best Regards,
Yan Zi

