On Wed, 19 Oct 2022, 黄杰 wrote:
> Hugh Dickins <hughd@xxxxxxxxxx> wrote on Thu, 13 Oct 2022 at 03:45:
> > On Wed, 12 Oct 2022, Albert Huang wrote:
> > > From: "huangjie.albert" <huangjie.albert@xxxxxxxxxxxxx>
> > >
> > > Implement these two functions so that we can set the mempolicy on
> > > the inode of a hugetlb file. This ensures that the mempolicy of
> > > all processes sharing this huge page file is consistent.
> > >
> > > In some scenarios where huge pages are shared:
> > > if we need to limit the memory usage of a VM to node0, we set qemu's
> > > mempolicy to bind to node0; but if another process (such as virtiofsd)
> > > shares memory with the VM, and the page fault is triggered by
> > > virtiofsd, the allocated memory may go to node1, depending on
> > > virtiofsd's own mempolicy.
> > >
> > > Signed-off-by: huangjie.albert <huangjie.albert@xxxxxxxxxxxxx>
> >
> > Aha! Congratulations for noticing, after all this time. hugetlbfs
> > contains various little pieces of code that pretend to be supporting
> > shared NUMA mempolicy, but in fact there was nothing connecting it up.
> >
> > It will be for Mike to decide, but personally I oppose adding
> > shared NUMA mempolicy support to hugetlbfs, after eighteen years.
> >
> > The thing is, it will change the behaviour of NUMA on hugetlbfs:
> > in ways that would have been sensible way back then, yes; but surely
> > those who have invested in NUMA and hugetlbfs have developed other
> > ways of administering it successfully, without shared NUMA mempolicy.
> >
> > At the least, I would expect some tests to break (I could easily be
> > wrong), and there's a chance that some app or tool would break too.
>
> Hi Hugh,
>
> Can you share some issues here?

Sorry, I don't think I can: precisely because it's been such a relief
to know that hugetlbfs is not in the shared NUMA mempolicy game, I've
given no thought to what issues it might have if it joined the game.
Not much memory is wasted on the unused fields in hugetlbfs_inode_info,
just a few bytes per inode, so that aspect doesn't concern me much.

Reference counting of shared mempolicy has certainly been a recurrent
problem in the past (see mpol_needs_cond_ref() etc): stable nowadays, I
believe; whether supporting hugetlbfs would cause new problems to
surface there, I don't know; but whatever, those would just be bugs to
be fixed.

/proc/pid/numa_maps does not represent shared NUMA mempolicies
correctly: not for tmpfs, and would not for hugetlbfs. I did have old
patches to fix that, but not patches that I'm ever likely to have time
to resurrect and present and push.

My main difficulties in tmpfs were with how to deal correctly and
consistently with non-hugepage-aligned mempolicies when hugepages are
in use. In the case of hugetlbfs, it would be simpler, since you're
always dealing in hugepages of a known size: I recommend being as
strict as possible, demanding a correctly aligned mempolicy or else
EINVAL. (That may already be enforced, I've not looked.)

But my main concern in extending shared NUMA mempolicy to hugetlbfs is
exactly what I already said earlier:

The thing is, it will change the behaviour of NUMA on hugetlbfs:
in ways that would have been sensible way back then, yes; but surely
those who have invested in NUMA and hugetlbfs have developed other
ways of administering it successfully, without shared NUMA mempolicy.

At the least, I would expect some tests to break (I could easily be
wrong), and there's a chance that some app or tool would break too.

It's a risk, and a body of complication, that I would keep away from
myself.

The shared mempolicy rbtree: it makes sense, but no madvise() since has
implemented such a tree, to attach its advice to ranges of the shared
object rather than to the vma.

Hugh