Hi Dave,

Thank you very much for merging the infrastructure.  I rewrote the
patch based on it and tested it with some dumpfiles.

---
Changes from v1:
- rewrite based on the per-architecture function call Dave provided
- remove the part which used page.flags for non-VMEMMAP kernels
- add the address range/position check first
- remove/optimize the calculations of mem_map and phys address
- modify the patch description

The "kmem -[sS]" commands can take several minutes to complete under
the following conditions:
  - The system has a lot of memory sections with CONFIG_SPARSEMEM, and
  - The kernel uses SLUB and has very long partial slab lists.

  crash> kmem -s dentry
  CACHE            NAME                 OBJSIZE  ALLOCATED     TOTAL  SLABS  SSIZE
  ffff88017fc78a00 dentry                   192    9038949  10045728 239184     8k
  crash> kmem -s dentry | bash -c 'cat >/dev/null ; echo $SECONDS'
  133
  crash> kmem -S dentry | bash -c 'cat >/dev/null ; echo $SECONDS'
  656

One of the causes is that is_page_ptr(), called from count_partial(),
determines whether a given slub page address is a page struct by
searching all available mem_sections for the one that includes it.

With CONFIG_SPARSEMEM_VMEMMAP on x86_64, we can do that check just by
verifying that the address is in the vmemmap range and that its
calculated mem_section is valid.  With this patch, the amount of
computation is significantly reduced in that case.

  crash> kmem -s dentry | bash -c 'cat >/dev/null ; echo $SECONDS'
  1
  crash> kmem -S dentry | bash -c 'cat >/dev/null ; echo $SECONDS'
  1
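For reference, below is a minimal standalone sketch of the arithmetic
behind the new check.  The constants are assumptions for illustration
only (the usual x86_64 values: vmemmap base 0xffffea0000000000, 64-byte
struct page, 4KB pages, 128MB sections); the patch itself uses the real
values obtained from the dumpfile, such as VMEMMAP_VADDR, SIZE(page)
and pfn_to_section_nr().

  #include <stdio.h>

  /* Assumed values for illustration; crash reads the real ones
   * (VMEMMAP_VADDR, SIZE(page), SECTION_SIZE_BITS, PAGE_SHIFT)
   * from the kernel being analyzed. */
  #define VMEMMAP_BASE      0xffffea0000000000UL
  #define STRUCT_PAGE_SIZE  64UL              /* sizeof(struct page) */
  #define PAGE_SHIFT        12
  #define SECTION_SIZE_BITS 27                /* 128MB sections */
  #define PFN_SECTION_SHIFT (SECTION_SIZE_BITS - PAGE_SHIFT)

  int main(void)
  {
          unsigned long addr = 0xffffea0005ff1e40UL;  /* candidate page pointer */
          unsigned long off = addr - VMEMMAP_BASE;

          if (off % STRUCT_PAGE_SIZE) {
                  /* not on a struct page boundary -> cannot be a page pointer */
                  printf("rejected: not struct page aligned\n");
                  return 1;
          }

          unsigned long pfn = off / STRUCT_PAGE_SIZE;    /* which struct page */
          unsigned long nr = pfn >> PFN_SECTION_SHIFT;   /* pfn_to_section_nr() */
          unsigned long phys = pfn << PAGE_SHIFT;        /* PTOB(pfn) */

          printf("pfn %#lx, section nr %lu, phys %#lx\n", pfn, nr, phys);

          /* the only remaining work is checking that this section number
           * refers to a present mem_section, instead of scanning them all */
          return 0;
  }

Since the section number falls straight out of the shift, the per-call
cost left in the patched path is a single valid_section_nr() lookup
rather than a scan of every mem_section.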
Signed-off-by: Kazuhito Hagio <k-hagio@xxxxxxxxxxxxx>
---
 defs.h   |  1 +
 x86_64.c | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/defs.h b/defs.h
index 9663bd8..7998ebf 100644
--- a/defs.h
+++ b/defs.h
@@ -5133,6 +5133,7 @@ int vaddr_type(ulong, struct task_context *);
 char *format_stack_entry(struct bt_info *bt, char *, ulong, ulong);
 int in_user_stack(ulong, ulong);
 int dump_inode_page(ulong);
+ulong valid_section_nr(ulong);
 
 /*
diff --git a/x86_64.c b/x86_64.c
index 7449571..67cc528 100644
--- a/x86_64.c
+++ b/x86_64.c
@@ -77,6 +77,7 @@ static void x86_64_calc_phys_base(void);
 static int x86_64_is_module_addr(ulong);
 static int x86_64_is_kvaddr(ulong);
 static int x86_64_is_uvaddr(ulong, struct task_context *);
+static int x86_64_is_page_ptr(ulong, physaddr_t *);
 static ulong *x86_64_kpgd_offset(ulong, int, int);
 static ulong x86_64_upgd_offset(struct task_context *, ulong, int, int);
 static ulong x86_64_upgd_offset_legacy(struct task_context *, ulong, int, int);
@@ -624,6 +625,7 @@ x86_64_init(int when)
 				_MAX_PHYSMEM_BITS_2_6_26;
 		}
 	}
+	machdep->is_page_ptr = x86_64_is_page_ptr;
 
         if (XEN()) {
                 if (kt->xen_flags & WRITABLE_PAGE_TABLES) {
@@ -802,6 +804,7 @@ x86_64_dump_machdep_table(ulong arg)
 	fprintf(fp, "       get_smp_cpus: x86_64_get_smp_cpus()\n");
 	fprintf(fp, "          is_kvaddr: x86_64_is_kvaddr()\n");
 	fprintf(fp, "          is_uvaddr: x86_64_is_uvaddr()\n");
+	fprintf(fp, "        is_page_ptr: x86_64_is_page_ptr()\n");
 	fprintf(fp, "       verify_paddr: x86_64_verify_paddr()\n");
 	fprintf(fp, "  get_kvaddr_ranges: x86_64_get_kvaddr_ranges()\n");
 	fprintf(fp, "    init_kernel_pgd: x86_64_init_kernel_pgd()\n");
@@ -1594,6 +1597,26 @@ x86_64_is_uvaddr(ulong addr, struct task_context *tc)
 	return (addr < USERSPACE_TOP);
 }
 
+static int
+x86_64_is_page_ptr(ulong addr, physaddr_t *phys)
+{
+	ulong pfn, nr;
+
+	if (IS_SPARSEMEM() && (machdep->flags & VMEMMAP) &&
+	    (addr >= VMEMMAP_VADDR && addr <= VMEMMAP_END) &&
+	    !((addr - VMEMMAP_VADDR) % SIZE(page))) {
+
+		pfn = (addr - VMEMMAP_VADDR) / SIZE(page);
+		nr = pfn_to_section_nr(pfn);
+		if (valid_section_nr(nr)) {
+			if (phys)
+				*phys = PTOB(pfn);
+			return TRUE;
+		}
+	}
+	return FALSE;
+}
+
 /*
  *  Find the kernel pgd entry..
  *  pgd = pgd_offset_k(addr);
-- 
1.8.3.1

--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility