Re: [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted interleaving

Gregory Price <gregory.price@xxxxxxxxxxxx> writes:

> On Mon, Oct 30, 2023 at 10:20:14AM +0800, Huang, Ying wrote:
>> Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
>> 
>> Extending it adds complexity to the kernel code and changes the kernel
>> ABI.  So, IMHO, we need some real-life use case to prove the added
>> complexity is necessary.
>> 
>> For example, in [1], Johannes showed a use case that supports adding a
>> per-memory-tier interleave weight.
>> 
>> [1] https://lore.kernel.org/all/20220607171949.85796-1-hannes@xxxxxxxxxxx/
>> 
>> --
>> Best Regards,
>> Huang, Ying
>
> Sorry, I misunderstood your question.
>
> The use case is the same as the N:M interleave strategy between tiers,
> and in fact the proposal for weights was directly inspired by the patch
> you posted. We're searching for the best way to implement weights.
>
> We've discussed placing these weights in:
>
> 1) mempolicy :
>    https://lore.kernel.org/linux-cxl/20230914235457.482710-1-gregory.price@xxxxxxxxxxxx/
>
> 2) tiers
>    https://lore.kernel.org/linux-cxl/20231009204259.875232-1-gregory.price@xxxxxxxxxxxx/
>
> and now
> 3) the nodes themselves
>    RFC not posted yet
>
> The use case is exactly the same as in the patch you posted: enabling
> optimal distribution of memory to maximize memory bandwidth usage.
>
> The use case is straightforward - consider a machine with the following
> NUMA nodes:
>
> 1) Socket 0 - DRAM - ~400GB/s bandwidth local, less cross-socket
> 2) Socket 1 - DRAM - ~400GB/s bandwidth local, less cross-socket
> 3) CXL Memory Attached to Socket 0 with ~64GB/s per link.
> 4) CXL Memory Attached to Socket 1 with ~64GB/s per link.
>
> The goal is to enable mempolicy to implement weighted interleave such
> that a thread running on socket 0 can effectively spread its memory
> across each NUMA node (or some subset thereof) in a way that maximizes
> its bandwidth usage across the various devices.
>
> For example, let's consider a system with only nodes 1 & 2 (2 sockets w/ DRAM).
>
> On an Intel system with UPI, the "effective" bandwidth available for a
> task on Socket 0 is not 800GB/s; it's about 450-500GB/s, split roughly
> 300/200 between the sockets (you never get the full amount, and UPI limits
> cross-socket bandwidth).
>
> Today `numactl --interleave` will split your memory 50:50 between
> sockets, which is just blatantly suboptimal.  In this case you would
> prefer a 3:2 distribution (literally weights of 3 and 2 respectively).
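
As a rough illustration of the 3:2 split described above (a sketch only,
not code from any of the posted RFCs; the two-node weight table and the
round-robin walk are assumptions), a weighted interleave would hand out
pages roughly like this:

    /* Illustrative 3:2 weighted round-robin over two nodes.  Out of
     * every 5 allocations, 3 land on node 0 (~300GB/s local) and 2 on
     * node 1 (~200GB/s cross-socket), matching the measured bandwidth
     * ratio rather than the 50:50 split of plain interleave.
     */
    static const unsigned int weight[2] = { 3, 2 };  /* assumed weights */

    static unsigned int next_interleave_node(void)
    {
            static unsigned int cur;   /* node currently being filled */
            static unsigned int used;  /* pages already placed on cur */

            if (used >= weight[cur]) {
                    used = 0;
                    cur = (cur + 1) % 2;
            }
            used++;
            return cur;
    }

Over five consecutive allocations this yields nodes 0,0,0,1,1 - i.e.
roughly 60% of the traffic stays on the ~300GB/s local path and 40%
crosses UPI.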
>
> The extension to CXL then becomes obvious, as each individual node,
> relative to its CPU placement, has a different optimal weight.
>
>
> Of course, the question becomes "what if a task uses more threads than a
> single socket has to offer", and the answer there is essentially the
> same as the answer today: that process must become "NUMA-aware" to
> make the best use of the available resources.
>
> However, for software capable of exhausting bandwidth from a single
> socket (which on Intel takes about 16-20 threads with certain access
> patterns), a weighted-interleave system provided via some interface
> like `numactl --weighted-interleave`, with weights set either in NUMA
> nodes or in mempolicy, is sufficient.
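
To make the shape of that interface concrete: neither a
`numactl --weighted-interleave` option nor an MPOL_WEIGHTED_INTERLEAVE
mempolicy mode exists today, so the snippet below is only a sketch of
how a task might request such a policy if the ABI grew one; the mode
value and option name are assumptions.

    #include <numaif.h>   /* set_mempolicy(); from the libnuma headers */
    #include <stdio.h>

    /* Hypothetical mode value - not in any released kernel header. */
    #define MPOL_WEIGHTED_INTERLEAVE_HYPOTHETICAL 6

    int main(void)
    {
            /* Interleave across nodes 0-3 (both DRAM nodes and both CXL
             * nodes from the topology above).  Where the per-node
             * weights themselves live - mempolicy, tier, or node - is
             * exactly the open question in this thread.
             */
            unsigned long nodemask = 0x0f;

            if (set_mempolicy(MPOL_WEIGHTED_INTERLEAVE_HYPOTHETICAL,
                              &nodemask, 8 * sizeof(nodemask)))
                    perror("set_mempolicy");
            return 0;
    }

A command-line wrapper would then map onto this the same way
`numactl --interleave` maps onto MPOL_INTERLEAVE today.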

I think that these are all possible in theory.  Thanks for the detailed
explanation!

Now the question is whether these issues are relevant in practice.
Are all workloads with extremely high memory bandwidth requirements
NUMA-aware?  Or multi-process instead of multi-threaded?  Is
cross-socket traffic avoided as much as possible in practice?  I have
no answer to these questions.  Do you?  Or can someone else answer
them?

--
Best Regards,
Huang, Ying



