Our DB team noticed negative hugetlb reserved page counts during
development testing.  The related meminfo fields were as follows on one
system (the HugePages_Rsvd value is the unsigned 64-bit representation
of -3):

HugePages_Total:   47143
HugePages_Free:    45610
HugePages_Rsvd:    18446744073709551613
HugePages_Surp:        0
Hugepagesize:       2048 kB

Code inspection revealed that the most likely cause was races between
truncation and page faults.  In fact, I was able to write a
not-too-complicated program that causes the races and recreates the
issue.

Way back in 2006, Hugh Dickins created a patch (ebed4bfc8da8) with this
message:

    [PATCH] hugetlb: fix absurd HugePages_Rsvd

    If you truncated an mmap'ed hugetlbfs file, then faulted on the
    truncated area, /proc/meminfo's HugePages_Rsvd wrapped hugely
    "negative".  Reinstate my preliminary i_size check before
    attempting to allocate the page (though this only fixes the most
    obvious case: more work will be needed here).

Looks like we need to do more work.  Examining the code showed many
places where racing updates would have to be detected and partially
completed changes backed out.  Instead, why not just introduce a rw
mutex to prevent the races?  Page faults would take the mutex in read
mode, allowing multiple faults in parallel as they work today.  The
truncate code would take the mutex in write mode, preventing faults for
the duration of truncate processing.  (A minimal userspace sketch of
this locking pattern appears at the end of this mail.)  This seems
almost too obvious.  Something must be wrong with the approach, or
others would have employed it earlier.

The following patch describes the current race in detail and adds the
mutex to prevent truncate/fault races.

Mike Kravetz (1):
  hugetlbfs: introduce truncation/fault mutex to avoid races

 fs/hugetlbfs/inode.c    | 24 ++++++++++++++++++++----
 include/linux/hugetlb.h |  1 +
 mm/hugetlb.c            | 25 +++++++++++++++++++------
 mm/userfaultfd.c        |  8 +++++++-
 4 files changed, 47 insertions(+), 11 deletions(-)

-- 
2.17.1
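
For illustration only, here is a minimal userspace sketch of the
read/write locking pattern described above, using a POSIX rwlock in
place of a kernel rw semaphore.  The names fault_path, truncate_path,
and fault_trunc_lock are hypothetical and are not the symbols touched
by the actual patch:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t fault_trunc_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Fault path: read mode, so multiple faults may proceed in parallel. */
static void *fault_path(void *arg)
{
	pthread_rwlock_rdlock(&fault_trunc_lock);
	/* ... allocate the huge page and adjust reserve counts ... */
	printf("fault %ld holds the lock in read mode\n", (long)arg);
	pthread_rwlock_unlock(&fault_trunc_lock);
	return NULL;
}

/* Truncate path: write mode, so all faults are excluded. */
static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&fault_trunc_lock);
	/* ... update i_size, remove pages, release reserves ... */
	printf("truncate holds the lock exclusively\n");
	pthread_rwlock_unlock(&fault_trunc_lock);
	return NULL;
}

int main(void)
{
	pthread_t faults[2], trunc;
	long i;

	for (i = 0; i < 2; i++)
		pthread_create(&faults[i], NULL, fault_path, (void *)i);
	pthread_create(&trunc, NULL, truncate_path, NULL);

	for (i = 0; i < 2; i++)
		pthread_join(faults[i], NULL);
	pthread_join(trunc, NULL);
	return 0;
}

Compile with something like "cc -pthread sketch.c".  In the kernel the
analogous primitive would be an rw semaphore taken with down_read() in
the fault path and down_write() in the truncate path, but treat that
mapping as a gloss on the idea rather than a quote from the patch.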