On Fri 13-01-17 14:08:34, Vlastimil Babka wrote:
> On 01/12/2017 02:16 PM, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@xxxxxxxx>
> >
> > show_mem() allows filtering out node specific data which is irrelevant
> > to the allocation request via SHOW_MEM_FILTER_NODES. The filtering
> > is done in skip_free_areas_node, which skips all nodes which are not
> > in the mems_allowed of the current process. This works as expected most
> > of the time because the nodemask shouldn't be outside of the allocating
> > task, but there are some exceptions. E.g. memory hotplug might want to
> > request allocations from outside of the allowed nodes (see
> > new_node_page).
>
> Hm, AFAICS memory hotplug's new_node_page() is restricted both by cpusets
> (by using GFP_USER) and by the nodemask it constructs. That's probably a
> bug in itself, as it shouldn't matter which task is triggering the offline?

Yes, that is true. A task bound to a node which is being offlined would be
funny...

> Which probably means that if show_mem() wants to be really precise, it
> would have to start from the nodemask and intersect it with the cpuset
> when the allocation in question cannot escape it. But if we accept that
> it's ok to print too many nodes (because we can filter them out when
> reading the output by having the nodemask and mems_allowed printed as
> well), and strive only not to miss any nodes, then this patch could really
> fix cases where we do miss some (although new_node_page() currently isn't
> such an example).

I guess it should be sufficient to add cpuset_print_current_mems_allowed()
in warn_alloc. That should give us the full picture without too much
twiddling. What do you think?
--
Michal Hocko
SUSE Labs
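
P.S. Just to illustrate what I mean, something along these lines (completely
untested sketch; the warn_alloc() body below is trimmed and approximate
rather than the exact mm/page_alloc.c code, only the placement of the
cpuset_print_current_mems_allowed() call is the point):

	/*
	 * Sketch only: warn_alloc() body trimmed and approximate.
	 */
	void warn_alloc(gfp_t gfp_mask, const char *fmt, ...)
	{
		struct va_format vaf;
		va_list args;

		if (gfp_mask & __GFP_NOWARN)
			return;

		/* ... ratelimiting etc. left out ... */

		va_start(args, fmt);
		vaf.fmt = fmt;
		vaf.va = &args;
		pr_warn("%s: %pV, mode:%#x(%pGg)\n", current->comm, &vaf,
			gfp_mask, &gfp_mask);
		va_end(args);

		/*
		 * Print the cpuset name and current->mems_allowed so that
		 * the SHOW_MEM_FILTER_NODES filtered output below can be
		 * interpreted even when the allocation nodemask is not a
		 * subset of the cpuset.
		 */
		cpuset_print_current_mems_allowed();

		dump_stack();
		show_mem(SHOW_MEM_FILTER_NODES);
	}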