On Tue, Nov 24, 2020 at 6:54 PM Song Bao Hua (Barry Song)
<song.bao.hua@xxxxxxxxxxxxx> wrote:
>
>
>
> > -----Original Message-----
> > From: Muchun Song [mailto:songmuchun@xxxxxxxxxxxxx]
> > Sent: Tuesday, November 24, 2020 10:53 PM
> > To: corbet@xxxxxxx; mike.kravetz@xxxxxxxxxx; tglx@xxxxxxxxxxxxx;
> > mingo@xxxxxxxxxx; bp@xxxxxxxxx; x86@xxxxxxxxxx; hpa@xxxxxxxxx;
> > dave.hansen@xxxxxxxxxxxxxxx; luto@xxxxxxxxxx; peterz@xxxxxxxxxxxxx;
> > viro@xxxxxxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; paulmck@xxxxxxxxxx;
> > mchehab+huawei@xxxxxxxxxx; pawan.kumar.gupta@xxxxxxxxxxxxxxx;
> > rdunlap@xxxxxxxxxxxxx; oneukum@xxxxxxxx; anshuman.khandual@xxxxxxx;
> > jroedel@xxxxxxx; almasrymina@xxxxxxxxxx; rientjes@xxxxxxxxxx;
> > willy@xxxxxxxxxxxxx; osalvador@xxxxxxx; mhocko@xxxxxxxx; Song Bao Hua
> > (Barry Song) <song.bao.hua@xxxxxxxxxxxxx>
> > Cc: duanxiongchun@xxxxxxxxxxxxx; linux-doc@xxxxxxxxxxxxxxx;
> > linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > linux-fsdevel@xxxxxxxxxxxxxxx; Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > Subject: [PATCH v6 14/16] mm/hugetlb: Add a kernel parameter
> > hugetlb_free_vmemmap
> >
> > Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> > freeing unused vmemmap pages associated with each hugetlb page on boot.
> >
> > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
> >  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> >  mm/hugetlb_vmemmap.c                            | 19 ++++++++++++++++++-
> >  3 files changed, 30 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index 5debfe238027..d28c3acde965 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -1551,6 +1551,15 @@
> > 			Documentation/admin-guide/mm/hugetlbpage.rst.
> > 			Format: size[KMG]
> >
> > +	hugetlb_free_vmemmap=
> > +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> > +			this controls freeing unused vmemmap pages associated
> > +			with each HugeTLB page.
> > +			Format: { on | off (default) }
> > +
> > +			on:  enable the feature
> > +			off: disable the feature
> > +
>
> We've a parameter here, but wouldn't it also apply to "x86/mm/64: disable
> PMD page mapping of vmemmap"?
> If (hugetlb_free_vmemmap_enabled)
>         Do Basepage mapping?

Oh, yeah, we can. Thanks.
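Something along the lines of the following untested sketch for
vmemmap_populate() in arch/x86/mm/init_64.c is what I have in mind. It
assumes hugetlb_free_vmemmap_enabled is no longer static __initdata and
is declared in a header visible to arch code (vmemmap_populate() also
runs at memory hotplug time, so the flag cannot stay in initdata); the
error handling for the altmap-without-PSE case is omitted for brevity:

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
			       struct vmem_altmap *altmap)
{
	int err;

	if (end - start < PAGES_PER_SECTION * sizeof(struct page)) {
		err = vmemmap_populate_basepages(start, end, node, NULL);
	} else if (boot_cpu_has(X86_FEATURE_PSE) &&
		   !hugetlb_free_vmemmap_enabled) {
		/* Feature off: keep the PMD (2MB) mapping of the vmemmap. */
		err = vmemmap_populate_hugepages(start, end, node, altmap);
	} else {
		/*
		 * Feature on (or no PSE): use base pages so the unused
		 * vmemmap pages of each HugeTLB page can be freed later.
		 */
		err = vmemmap_populate_basepages(start, end, node, NULL);
	}

	if (!err)
		sync_global_pgds(start, end - 1);

	return err;
}

With that, booting with hugetlb_free_vmemmap=on switches the vmemmap to
base pages, and with it off (the default) the PMD mapping is kept, so
the single boot parameter covers both patches.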
>
> > 	hung_task_panic=
> > 			[KNL] Should the hung task detector generate panics.
> > 			Format: 0 | 1
> > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > index f7b1c7462991..6a8b57f6d3b7 100644
> > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > @@ -145,6 +145,9 @@ default_hugepagesz
> >
> > 	will all result in 256 2M huge pages being allocated.  Valid default
> > 	huge page size is architecture dependent.
> > +hugetlb_free_vmemmap
> > +	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> > +	unused vmemmap pages associated each HugeTLB page.
> >
> >  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
> >  indicates the current number of pre-allocated huge pages of the default size.
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 509ca451e232..b2222f8d1245 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -131,6 +131,22 @@ typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
> > 					  unsigned long start, unsigned long end,
> > 					  void *priv);
> >
> > +static bool hugetlb_free_vmemmap_enabled __initdata;
> > +
> > +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > +{
> > +	if (!buf)
> > +		return -EINVAL;
> > +
> > +	if (!strcmp(buf, "on"))
> > +		hugetlb_free_vmemmap_enabled = true;
> > +	else if (strcmp(buf, "off"))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> > +
> >  static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> >  	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
> > @@ -322,7 +338,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >  	unsigned int order = huge_page_order(h);
> >  	unsigned int vmemmap_pages;
> >
> > -	if (!is_power_of_2(sizeof(struct page))) {
> > +	if (!is_power_of_2(sizeof(struct page)) ||
> > +	    !hugetlb_free_vmemmap_enabled) {
> >  		pr_info("disable freeing vmemmap pages for %s\n", h->name);
> >  		return;
> >  	}
> > --
> > 2.11.0
>
> Thanks
> Barry
>

--
Yours,
Muchun