On Thu, Nov 22, 2018 at 11:11 PM Anshuman Khandual <anshuman.khandual@xxxxxxx> wrote:
>
>
>
> On 11/22/2018 11:38 PM, Dan Williams wrote:
> > On Thu, Nov 22, 2018 at 3:52 AM Anshuman Khandual
> > <anshuman.khandual@xxxxxxx> wrote:
> >>
> >>
> >>
> >> On 11/19/2018 11:07 PM, Dave Hansen wrote:
> >>> On 11/18/18 9:44 PM, Anshuman Khandual wrote:
> >>>> IIUC NUMA re-work in principle involves these functional changes
> >>>>
> >>>> 1. Enumerating compute and memory nodes in heterogeneous environment (short/medium term)
> >>>
> >>> This patch set _does_ that, though.
> >>>
> >>>> 2. Enumerating memory node attributes as seen from the compute nodes (short/medium term)
> >>>
> >>> It does that as well (a subset at least).
> >>>
> >>> It sounds like the subset that's being exposed is insufficient for you.
> >>> We did that because we think doing anything but a subset in sysfs will
> >>> just blow up sysfs: MAX_NUMNODES is as high as 1024, so if we have 4
> >>> attributes, that's at _least_ 1024*1024*4 files if we expose *all*
> >>> combinations.
> >>
> >> Each permutation need not be a separate file inside all possible NODE X
> >> (/sys/devices/system/node/nodeX) directories. It can be a top level file
> >> enumerating various attribute values for a given (X, Y) node pair based
> >> on an offset, something like /proc/pid/pagemap.
> >>
> >>>
> >>> Do we agree that sysfs is unsuitable for exposing attributes in this manner?
> >>>
> >>
> >> Yes, for individual files. But this can be worked around with an offset
> >> based access from a top level global attributes file as mentioned above.
> >> Is there any particular advantage to using individual files for each
> >> given attribute? I was wondering that a single unsigned long (u64) would
> >> be able to pack 8 different attributes, where each individual attribute
> >> value can be abstracted out in 8 bits.
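For concreteness, the offset-based single-file scheme sketched above could be consumed from userspace roughly like this. This is only an illustration of the proposed layout, not an existing interface: the record layout (one little-endian u64 per (X, Y) node pair, indexed like /proc/pid/pagemap) and the 8-bits-per-attribute encoding are both hypothetical, taken from the suggestion in the quoted text.

```python
import struct

MAX_NUMNODES = 1024          # upper bound cited earlier in the thread
RECORD_SIZE = 8              # one hypothetical u64 record per (X, Y) pair

def record_offset(x, y):
    """Byte offset of the (X, Y) node-pair record in the global file."""
    return (x * MAX_NUMNODES + y) * RECORD_SIZE

def pack_attrs(attrs):
    """Pack 8 one-byte attribute values into a u64, lowest byte first."""
    word = 0
    for i, a in enumerate(attrs):
        word |= (a & 0xFF) << (8 * i)
    return word

def unpack_attrs(word):
    """Inverse of pack_attrs: split a u64 into 8 one-byte values."""
    return [(word >> (8 * i)) & 0xFF for i in range(8)]

def read_pair(f, x, y):
    """Read the 8 attribute values for node pair (X, Y) from an open,
    seekable file laid out as described above."""
    f.seek(record_offset(x, y))
    (word,) = struct.unpack("<Q", f.read(RECORD_SIZE))
    return unpack_attrs(word)
```

One consequence of this layout is worth noting: a full table at MAX_NUMNODES=1024 is 1024*1024*8 bytes (8 MiB) of sparse data, which is part of why a pagemap-style pread interface rather than per-file sysfs attributes would be needed.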
> >
> > sysfs has a 4K limit, and in general I don't think there is much
> > incremental value to go describe the entirety of the system from sysfs
> > or anywhere else in the kernel for that matter. It's simply too much
> > information to reasonably consume. Instead the kernel can describe the
>
> I agree that it may be some amount of information to parse but it is
> crucial for any task on a heterogeneous system to evaluate (probably
> re-evaluate if the task moves around) its memory and CPU binding at
> runtime to make sure it has got the right one.

Can you provide some more evidence for this statement? It seems that not
many applications even care about basic NUMA, let alone specific memory
targeting, at least judging by the consumers of libnumactl:

    dnf repoquery --whatrequires numactl-libs

The kernel is the arbiter of memory; something is broken if applications
*need* to take on this responsibility. Yes, there will be applications
that want to tune and override the default kernel behavior, but this is
the exception, not the rule. The applications that tend to care about
specific memories also tend to be purpose-built for a given platform,
and that lessens their reliance on the kernel to enumerate all
properties.

> > coarse boundaries and some semblance of "best" access initiator for a
> > given target. That should cover the "80%" case of what applications
>
> The current proposal just assumes that the best one is the nearest one.
> This may be true for bandwidth and latency but may not be true for some
> other properties. This assumption should not be there while defining a
> new ABI.

In fact, I tend to agree with you, but in my opinion that's an argument
to expose even less, not more. If we start with something minimal that
can be extended over time, we lessen the risk of over-exposing details
that don't matter in practice.
We're in the middle of a bit of a disaster with the VmFlags export in
/proc/$pid/smaps precisely because the implementation was too
comprehensive and applications started depending on details that the
kernel does not want to guarantee going forward. So there is a real risk
of being too descriptive in an interface design.

> > want to discover, for the other "20%" we likely need some userspace
> > library that can go parse these platform specific information sources
> > and supplement the kernel view. I also think a simpler kernel starting
> > point gives us room to go pull in more commonly used attributes if it
> > turns out they are useful, and avoid going down the path of exporting
> > attributes that have questionable value in practice.
>
> Applications can just query platform information right now and just use
> them for mbind() without requiring this new interface.

No, they can't today, at least not for the topology details that HMAT is
describing. The platform-firmware to numa-node translation is currently
not complete. At a minimum we need a listing of initiator ids and target
ids; for an ACPI platform that is the proximity-domain to numa-node-id
translation information. Once that translation is in place, a userspace
library can consult the platform-specific information sources to
translate the platform-firmware view into the Linux handles for those
memories. Am I missing the library that does this today?

> We are not even changing any core MM yet. So if it's just about
> identifying the node's memory properties it can be scanned from the
> platform itself. But I agree we would like the kernel to start adding
> interfaces for multi-attribute memory; all I am saying is that it has
> to be comprehensive. Some of the attributes have more usefulness now
> and some have less, but the new ABI interface has to accommodate
> exporting all of these.
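To make the "translation" gap concrete: what is missing is, in effect, a table from firmware handles (ACPI proximity domains) to kernel handles (numa node ids). A userspace library sitting on top of such a table might look like the minimal sketch below. Everything here is invented for illustration: the `PXM_TO_NODE` table, the `nodes_for_target` helper, and the (initiator, target, latency) tuple format are stand-ins for what a real SRAT/HMAT parse plus kernel node enumeration would provide.

```python
# Hypothetical proximity-domain -> numa-node-id table. Exporting this
# mapping is the minimum the kernel would need to provide before a
# userspace library could interpret platform tables like the ACPI HMAT.
PXM_TO_NODE = {
    0: 0,   # initiator domain 0 -> node 0 (e.g. CPUs + DRAM)
    1: 1,   # target domain 1   -> node 1 (e.g. slower, larger memory)
}

def nodes_for_target(initiator_pxm, hmat_entries):
    """Translate HMAT-style (initiator_pxm, target_pxm, latency) rows
    into (node_id, latency) pairs for one initiator, nearest first."""
    out = []
    for init, tgt, latency in hmat_entries:
        if init == initiator_pxm and tgt in PXM_TO_NODE:
            out.append((PXM_TO_NODE[tgt], latency))
    # Sort by latency so a caller can build an mbind() nodemask starting
    # from the "nearest" target memory.
    return sorted(out, key=lambda t: t[1])
```

The point of the sketch is only that without the `PXM_TO_NODE` step, the raw firmware tables name memories the application has no kernel handle for, which is the gap described above.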
I get the sense we are talking past each other; can you give the next level of detail on that "has to be comprehensive" statement?