Hi,
is there any reason why we enforce the overcommit limit during hugetlb page migration? The check sits in the alloc_huge_page_node->__alloc_buddy_huge_page path, and I am wondering whether this is really intentional behavior. Page migration allocates a page only temporarily, so we should be able to go over the overcommit limit for the duration of the migration.

The reason I am asking is that hugetlb pages tend to be fully utilized (otherwise the memory would just be wasted and the pool shrunk), so the migration simply fails, which breaks memory hotplug and other migration-dependent functionality. That is quite suboptimal. You can work around it by increasing the overcommit limit, but why don't we simply migrate as long as we are able to allocate the target hugetlb page?

I have a half-baked patch to remove this restriction; would there be any opposition to doing something like that?

--
Michal Hocko
SUSE Labs