On 6/6/22 16:49, Qian Cai wrote:
> On Fri, Jun 03, 2022 at 07:19:43AM +0300, Vasily Averin wrote:
> This triggers a few boot warnings like those.
>
> virt_to_phys used for non-linear address: ffffd8efe2d2fe00 (init_net)
> WARNING: CPU: 87 PID: 3170 at arch/arm64/mm/physaddr.c:12 __virt_to_phys
...
> Call trace:
>  __virt_to_phys
>  mem_cgroup_from_obj
>  __register_pernet_operations

@@ -1143,7 +1144,13 @@ static int __register_pernet_operations(struct list_head *list,
		 * setup_net() and cleanup_net() are not possible.
		 */
		for_each_net(net) {
+			struct mem_cgroup *old, *memcg;
+
+			memcg = mem_cgroup_or_root(get_mem_cgroup_from_obj(net));	<<<< Here
+			old = set_active_memcg(memcg);
			error = ops_init(ops, net);
+			set_active_memcg(old);
+			mem_cgroup_put(memcg);
...
+static inline struct mem_cgroup *get_mem_cgroup_from_obj(void *p)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+	do {
+		memcg = mem_cgroup_from_obj(p);				<<<<
+	} while (memcg && !css_tryget(&memcg->css));
...
struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
	struct folio *folio;

	if (mem_cgroup_disabled())
		return NULL;

	folio = virt_to_folio(p);					<<<< here
...
static inline struct folio *virt_to_folio(const void *x)
{
	struct page *page = virt_to_page(x);				<<< here
...
(arm64)
#define virt_to_page(x)		pfn_to_page(virt_to_pfn(x))
...
#define virt_to_pfn(x)		__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
...
phys_addr_t __virt_to_phys(unsigned long x)
{
	WARN(!__is_lm_address(__tag_reset(x)),
	     "virt_to_phys used for non-linear address: %pK (%pS)\n",

From arch/x86/include/asm/page.h:
 * virt_to_page(kaddr) returns a valid pointer if and only if
 * virt_addr_valid(kaddr) returns true.

As far as I understand, this report means that 'init_net' does not have a
linear-map virtual address on arm64, so virt_to_page() must not be used
on it.

Roman, Shakeel, I need your help.
Should we perhaps verify kaddr via virt_addr_valid() before using
virt_to_page()? If so, where should it be checked?

Thank you,
	Vasily Averin
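
P.S. Just to illustrate the idea (untested, and only one possible
placement, in mem_cgroup_from_obj() itself rather than in its callers):

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
 struct mem_cgroup *mem_cgroup_from_obj(void *p)
 {
 	struct folio *folio;

 	if (mem_cgroup_disabled())
 		return NULL;

+	/*
+	 * Untested sketch: statically allocated objects such as init_net
+	 * are not in the linear map, so virt_to_folio() must not be
+	 * called on them.  Returning NULL would make callers like
+	 * __register_pernet_operations() fall back to the root memcg
+	 * via mem_cgroup_or_root().
+	 */
+	if (!virt_addr_valid(p))
+		return NULL;
+
 	folio = virt_to_folio(p);

Whether NULL (and thus the root memcg) is the right answer for such
objects, or the check belongs somewhere else, is exactly what I am
asking about.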