On Fri, Feb 05, 2016 at 10:23:53AM +0100, Andrew Jones wrote:
> On Thu, Feb 04, 2016 at 06:51:06PM +0000, Marc Zyngier wrote:
> > What would the benefit of defining a "socket"?
>
> That's a good lead in for my next question. While I don't believe
> there needs to be any relationship between socket and numa node, I
> suspect on real machines there is, and quite possibly socket == node.
> Shannon is adding numa support to QEMU right now. Without special
> configuration there's no gain other than illusion, but with pinning,
> etc. the guest numa nodes will map to host nodes, and thus passing
> that information on to the guest's kernel is useful. Populating a
> socket/node affinity field seems to me like a needed step. But,
> question time, is it? Maybe not.

I don't think it's necessary. When using ACPI, NUMA info comes from
SRAT+SLIT, and the MPIDR.Aff* fields do not provide NUMA topology
info. I expect the same to be true with DT using something like
numa-distance-map [1], sketched below.

> Also, the way Linux currently handles non-thread using MPIDRs
> (Aff1:socket, Aff0:core) throws a wrench at the Aff2:socket,
> Aff1:"cluster", Aff0:core(max 16) plan. Either the plan or Linux
> would need to be changed.

The topology can be explicitly overridden in DT using cpu-map [2]
(see the second sketch below). I don't know what the story for ACPI
is.

Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-February/404057.html
[2] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/arm/topology.txt?h=v4.5-rc2&id=36f90b0a2ddd60823fe193a85e60ff1906c2a9b3
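
For concreteness, here is a minimal sketch of what the
numa-distance-map binding proposed in [1] could look like. This is a
hypothetical fragment: the binding was still under review at the time,
so the node and property names may differ from whatever eventually
gets merged. Distances follow the ACPI SLIT convention of 10 for
node-local access.

    cpus {
            #address-cells = <2>;
            #size-cells = <0>;

            cpu@0 {
                    device_type = "cpu";
                    compatible = "arm,cortex-a57";
                    reg = <0x0 0x0>;
                    /* proposed property tying a CPU to a NUMA node */
                    numa-node-id = <0>;
            };

            cpu@100 {
                    device_type = "cpu";
                    compatible = "arm,cortex-a57";
                    reg = <0x0 0x100>;
                    numa-node-id = <1>;
            };
    };

    distance-map {
            compatible = "numa-distance-map-v1";
            /* <node-a node-b distance> triplets, as in a SLIT */
            distance-matrix = <0 0 10>,
                              <0 1 20>,
                              <1 0 20>,
                              <1 1 10>;
    };

The point being: none of this is derived from MPIDR.Aff* values; the
NUMA description is carried entirely by explicit properties.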
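
Likewise, a minimal sketch of a cpu-map node per the binding in [2],
describing two clusters of two cores each. The CPU node labels, reg
values, and compatible strings are placeholders. When /cpus/cpu-map is
present, Linux builds its topology from it rather than by decoding
MPIDR.Aff* fields, which is how the Aff1:socket vs Aff2:socket
mismatch described above can be papered over.

    cpus {
            #address-cells = <1>;
            #size-cells = <0>;

            cpu-map {
                    cluster0 {
                            core0 {
                                    cpu = <&CPU0>;
                            };
                            core1 {
                                    cpu = <&CPU1>;
                            };
                    };

                    cluster1 {
                            core0 {
                                    cpu = <&CPU2>;
                            };
                            core1 {
                                    cpu = <&CPU3>;
                            };
                    };
            };

            CPU0: cpu@0 {
                    device_type = "cpu";
                    compatible = "arm,cortex-a53";
                    reg = <0x0>;
            };

            /* CPU1-CPU3 defined similarly, with reg = 0x1, 0x100,
             * and 0x101 respectively */
    };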