On 2019/9/24 19:28, Peter Zijlstra wrote:
> On Tue, Sep 24, 2019 at 07:07:36PM +0800, Yunsheng Lin wrote:
>> On 2019/9/24 17:25, Peter Zijlstra wrote:
>>> On Tue, Sep 24, 2019 at 09:29:50AM +0800, Yunsheng Lin wrote:
>>>> On 2019/9/24 4:34, Peter Zijlstra wrote:
>>>
>>>>> I'm saying the ACPI standard is wrong. Explain to me how it is
>>>>> physically possible to have a device without NUMA affinity in a NUMA
>>>>> system?
>>>>>
>>>>> 1) The fundamental interconnect is not uniform.
>>>>> 2) The device needs to actually be somewhere.
>>>>>
>>>>
>>>> From what I can see, NUMA_NO_NODE may make sense in the cases below:
>>>>
>>>> 1) Theoretically, there could be a device that can access all the
>>>> memory uniformly and can be accessed by all cpus uniformly even in a
>>>> NUMA system. Suppose we have two nodes, and the device just sits in
>>>> the middle of the interconnect between the two nodes.
>>>>
>>>> Even if we define a third node solely for the device, we may need to
>>>> look at the node distance to decide whether the device can be
>>>> accessed uniformly.
>>>>
>>>> Or we can decide that the device can be accessed uniformly by setting
>>>> its node to NUMA_NO_NODE.
>>>
>>> This is indeed a theoretical case; it doesn't scale. The moment you're
>>> adding multiple sockets or even board interconnects this all goes out
>>> the window.
>>>
>>> And in this case, forcing the device to either node is fine.
>>
>> Not really.
>> For packet sending and receiving, the buffer memory may be allocated
>> dynamically. The node of the tx buffer memory is mainly based on the
>> cpu that is doing the sending, the node of the rx buffer memory is
>> mainly based on the cpu the interrupt handler of the device is running
>> on, and the device's interrupt affinity is mainly based on the node id
>> of the device.
>>
>> We can bind the processes that are using the device to both nodes
>> in order to utilize memory on both nodes for packet sending.
>>
>> But for packet receiving, node 1 may not be used, because the node id
>> of the device is forced to node 0 and the default is to bind the
>> interrupt to a cpu on the same node.
>>
>> If node_to_cpumask_map() returns all usable cpus when the device's node
>> id is NUMA_NO_NODE, then the interrupt can be binded to the cpus on
>> both nodes.
>
> s/binded/bound/
>
> Sure; the data can be allocated wherever, but the control structures are
> not dynamically allocated every time. They are persistent, and they will
> be local to some node.
>
> Anyway, are you saying this stupid corner case is actually relevant?
> Because how does it scale out? What if you have 8 sockets, with each
> socket having 2 nodes and 1 such magic device? Then returning all CPUs
> is just plain wrong.

Yes, the hardware may not scale out, but what about the virtual device?

>
>>>> 2) For many virtual devices, such as tun or loopback netdevice, they
>>>> are also accessed uniformly by all cpus.
>>>
>>> Not true; the virtual device will sit in memory local to some node.
>>>
>>> And as with physical devices, you probably want at least one (virtual)
>>> queue per node.
>>
>> There may be similar handling as above for virtual devices too.
>
> And it'd be similarly broken.

From [1], there are a lot of devices with a node id of NUMA_NO_NODE,
reported with the FW_BUG warning.

[1] https://lore.kernel.org/lkml/5a188e2b-6c07-a9db-fbaa-561e9362d3ba@xxxxxxxxxx/
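
To make the node_to_cpumask_map() suggestion concrete, below is a rough
sketch of what I have in mind, modelled on the x86 cpumask_of_node()
(untested; the NUMA_NO_NODE branch is the new part, the rest follows the
existing DEBUG_PER_CPU_MAPS variant):

const struct cpumask *cpumask_of_node(int node)
{
	/*
	 * The device has no affinity to any particular node, so no
	 * single node is "closer": fall back to all online cpus.
	 */
	if (node == NUMA_NO_NODE)
		return cpu_online_mask;

	if (node >= nr_node_ids) {
		pr_warn("cpumask_of_node(%d): node > nr_node_ids(%u)\n",
			node, nr_node_ids);
		dump_stack();
		return cpu_none_mask;
	}

	return node_to_cpumask_map[node];
}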
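
The reason the device's node id matters for rx is that drivers usually
derive the irq affinity hint from it, along these lines (a hypothetical
driver fragment just to illustrate the flow; "pdev" and "irq" are
placeholders):

	int node = dev_to_node(&pdev->dev);

	/*
	 * With the fallback above, node == NUMA_NO_NODE yields a mask
	 * covering all online cpus, so the irqs can be spread over
	 * both nodes instead of being pinned to node 0.
	 */
	irq_set_affinity_hint(irq, cpumask_of_node(node));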
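
And the rx buffer memory follows the irq placement, because allocations
done in the interrupt path land on the node of the executing cpu,
roughly (again just an illustration, not real driver code):

	/* numa_mem_id(): nearest node with memory to the running cpu. */
	buf = kmalloc_node(len, GFP_ATOMIC, numa_mem_id());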