On 4/23/24 14:08, Yosry Ahmed wrote:
On Tue, Apr 23, 2024 at 12:00 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
On Tue, Apr 23, 2024, Yosry Ahmed wrote:
On Tue, Apr 23, 2024 at 11:43 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
But snp_safe_alloc_page() should again follow alloc_pages() and pass numa_node_id(),
not NUMA_NO_NODE.
alloc_pages_node() will use numa_node_id() if you pass in NUMA_NO_NODE.
The code uses numa_mem_id():

	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

which is presumably the exact same thing as numa_node_id() on x86. But I don't
want to have to think that hard :-)
Uh yes, I missed numa_node_id() vs numa_mem_id(). Anyway, using
NUMA_NO_NODE with alloc_pages_node() is intended as an abstraction
such that you don't need to think about it :P
In other words, all I'm saying is that if we want to mirror alloc_pages() and
alloc_pages_node(), then we should mirror them exactly.
It's different though: these functions are core MM APIs used by the
entire kernel. snp_safe_alloc_page() is just a helper here wrapping
those core MM APIs rather than mirroring them.
In this case snp_safe_alloc_page() would wrap
snp_safe_alloc_page_node() which would wrap alloc_pages_node(). So it
should use alloc_pages_node() as intended: pass in a node id or
NUMA_NO_NODE if you just want the closest node.
Just my 2c, not that it will make a difference in practice.
Pretty much agree with everything everyone has said. The per-CPU
allocation shouldn't be GFP_KERNEL_ACCOUNT, and having a snp_safe_alloc_page()
that wraps snp_safe_alloc_page_node(), which then calls alloc_pages_node(),
seems the best way to me.
Thanks,
Tom