On 4/2/21 2:32 AM, Miaohe Lin wrote:
> A rare out of memory error would prevent removal of the reserve map region
> for a page. hugetlb_fix_reserve_counts() handles this rare case to avoid
> dangling with incorrect counts. Unfortunately, hugepage_subpool_get_pages
> and hugetlb_acct_memory could possibly fail too. We should correctly handle
> these cases.

Yes, this is a potential issue.  The 'good news' is that
hugetlb_fix_reserve_counts() is unlikely to ever be called.  To do so
would imply we could not allocate a region entry, which is only 6 words
in size.  We also keep a 'cache' of entries, so we may not even need to
allocate.

But, as mentioned, it is a potential issue.

> Fixes: b5cec28d36f5 ("hugetlbfs: truncate_hugepages() takes a range of pages")

This is likely going to get picked up by stable releases.  That is
unfortunate because, as mentioned above, this is mostly a theoretical
issue.

> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> ---
>  mm/hugetlb.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bdff8d23803f..ca5464ed04b7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -745,13 +745,20 @@ void hugetlb_fix_reserve_counts(struct inode *inode)
>  {
>  	struct hugepage_subpool *spool = subpool_inode(inode);
>  	long rsv_adjust;
> +	bool reserved = false;
>
>  	rsv_adjust = hugepage_subpool_get_pages(spool, 1);
> -	if (rsv_adjust) {
> +	if (rsv_adjust > 0) {
>  		struct hstate *h = hstate_inode(inode);
>
> -		hugetlb_acct_memory(h, 1);
> +		if (!hugetlb_acct_memory(h, 1))
> +			reserved = true;
> +	} else if (!rsv_adjust) {
> +		reserved = true;
>  	}
> +
> +	if (!reserved)
> +		pr_warn("hugetlb: fix reserve count failed\n");

We should expand this warning message a bit to indicate what this may
mean to the user.  Add something like:
"Huge Page Reserved count may go negative".

-- 
Mike Kravetz
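
As a rough sketch of that suggestion, the warning line could look something
like the following; the exact wording here is only an assumption based on
the phrase quoted above, not the final patch text:

	/* illustrative only: message wording is left to the patch author */
	if (!reserved)
		pr_warn("hugetlb: Huge Page Reserved count may go negative\n");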