Restructure the code comment inside flush_dcache_page() to make it
clearer.

Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
---
This is based on next-20220407.

Hi Andrew,

Would you mind helping me squash this patch into "arm64: mm: hugetlb:
Enable HUGETLB_PAGE_FREE_VMEMMAP for arm64"? If I resend the original
patch, there will be some conflicts with the hugetlb_vmemmap-related
cleanup patchset when you merge it.

 arch/arm64/mm/flush.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 9e39598bbc21..fc4f710e9820 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -81,9 +81,10 @@ void flush_dcache_page(struct page *page)
 	 * read-only when CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is enabled (more
 	 * details can refer to vmemmap_remap_pte()). Although
 	 * __sync_icache_dcache() only set PG_dcache_clean flag on the head
-	 * page struct, some tail page structs still can be seen the flag is
-	 * set since the head vmemmap page frame is reused (more details can
-	 * refer to the comments above page_fixed_fake_head()).
+	 * page struct, there is more than one page struct with PG_dcache_clean
+	 * associated with the HugeTLB page since the head vmemmap page frame
+	 * is reused (see the comments above page_fixed_fake_head() for more
+	 * details).
 	 */
 	if (hugetlb_optimize_vmemmap_enabled() && PageHuge(page))
 		page = compound_head(page);
-- 
2.11.0
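
For reference, a minimal standalone sketch of the pattern the restructured
comment describes (illustrative only, not part of the patch;
hugetlb_dcache_clean() is a hypothetical helper, the other calls are the ones
already used in flush_dcache_page() above):

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <asm/cacheflush.h>	/* PG_dcache_clean on arm64 */

/* Hypothetical helper: query PG_dcache_clean for a possibly-HugeTLB page. */
static bool hugetlb_dcache_clean(struct page *page)
{
	/*
	 * With CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP, tail page structs can
	 * alias the reused head vmemmap page frame, so test the flag on the
	 * compound head, mirroring what flush_dcache_page() does above.
	 */
	if (hugetlb_optimize_vmemmap_enabled() && PageHuge(page))
		page = compound_head(page);

	return test_bit(PG_dcache_clean, &page->flags);
}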