On Thu, Mar 28, 2019 at 03:05:33PM -0700, Mike Kravetz wrote:
> The number of node specific huge pages can be set via a file such as:
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> When a node specific value is specified, the global number of huge
> pages must also be adjusted. This adjustment is calculated as the
> specified node specific value + (global value - current node value).
> If the node specific value provided by the user is large enough, this
> calculation could overflow an unsigned long, leading to a smaller
> than expected number of huge pages.
>
> To fix, check the calculation for overflow. If overflow is detected,
> use ULONG_MAX as the requested value. This is in line with the user
> request to allocate as many huge pages as possible.
>
> It was also noticed that the above calculation was done outside the
> hugetlb_lock. Therefore, the values could be inconsistent and result
> in underflow. To fix, the calculation is moved within the routine
> set_max_huge_pages() where the lock is held.
>
> In addition, the code in __nr_hugepages_store_common() which tries to
> handle the case of not being able to allocate a node mask would likely
> result in incorrect behavior. Luckily, it is very unlikely we will
> ever take this path. If we do, simply return ENOMEM.
>
> Reported-by: Jing Xiangfeng <jingxiangfeng@xxxxxxxxxx>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>

-- 
Oscar Salvador
SUSE L3
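
For illustration only, a minimal userspace sketch of the overflow check the
patch description talks about: the global adjustment is computed as
node value + (global - current node value), and an unsigned wrap of that
addition is detected and clamped to ULONG_MAX. The function and parameter
names (adjust_global_count, global_pages, node_pages) are placeholders, not
the actual hugetlb fields or the exact kernel code:

#include <limits.h>
#include <stdio.h>

/*
 * Compute the new global huge page count for a node specific request,
 * clamping to ULONG_MAX if the unsigned addition wraps around.
 */
static unsigned long adjust_global_count(unsigned long requested_node_count,
					 unsigned long global_pages,
					 unsigned long node_pages)
{
	unsigned long count = requested_node_count;
	unsigned long old_count = count;

	/* node value + (global value - current node value) */
	count += global_pages - node_pages;

	/*
	 * If the result is smaller than what we started with, the addition
	 * wrapped: treat it as "allocate as many huge pages as possible".
	 */
	if (count < old_count)
		count = ULONG_MAX;

	return count;
}

int main(void)
{
	/* A huge node specific request wraps and is clamped to ULONG_MAX. */
	printf("%lu\n", adjust_global_count(ULONG_MAX - 10, 100, 5));
	return 0;
}

In the real fix the equivalent check sits inside set_max_huge_pages() with
hugetlb_lock held, so the global and per-node counters it reads cannot change
underneath the calculation.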