I was debugging an issue with a malloc implementation when I noticed some unintuitive behavior that happens when someone attempts to overwrite part of a hugepage-backed PROT_NONE mapping with another mapping. I've isolated the issue and reproduced it with the following program:

[root@localhost ~]# cat test.c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MMAP_FLAGS_COMMON (MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)

int main() {
    size_t len = 2ULL << 30;
    void *a = mmap((void *)0x7c8000000000, len, PROT_NONE,
                   MMAP_FLAGS_COMMON | MAP_FIXED_NOREPLACE | MAP_NORESERVE,
                   -1, 0);
    printf("a=%p errno %d %m\n", a, errno);
    errno = 0;

    char buf[128];
    sprintf(buf, "cp /proc/%d/smaps smaps1", getpid());
    assert(system(buf) == 0);

    len = 4096;
    void *b = mmap(a, len, PROT_READ | PROT_WRITE,
                   MMAP_FLAGS_COMMON | MAP_POPULATE | MAP_FIXED,
                   -1, 0);
    printf("b=%p errno %d %m\n", b, errno);
    errno = 0;

    sprintf(buf, "cp /proc/%d/smaps smaps2", getpid());
    assert(system(buf) == 0);

    return 0;
}
[root@localhost ~]# gcc -o test test.c && ./test
a=0x7c8000000000 errno 0 Success
b=0xffffffffffffffff errno 12 Cannot allocate memory
[root@localhost ~]# diff smaps1 smaps2
157,158c157,158
< 7c8000000000-7c8080000000 ---p 00000000 00:10 7332 /anon_hugepage (deleted)
< Size: 2097152 kB
---
> 7c8000200000-7c8080000000 ---p 00200000 00:10 7332 /anon_hugepage (deleted)
> Size: 2095104 kB

First, we map a 2G PROT_NONE region using hugepages. This succeeds. Then we try to map a 4096-byte PROT_READ | PROT_WRITE region at the beginning of the PROT_NONE region, still using hugepages. This fails, as expected, because 4096 is much smaller than the hugepage size configured on the system (this is x86 with a default hugepage size of 2M).

The surprising thing is the difference in /proc/pid/smaps before and after the failed mmap. Even though the mmap failed, the mapping reported in /proc/pid/smaps changed, with a 2M-sized bite taken out of the front of it. This feels unintuitive to me, as I'd expect a failed mmap to have no effect on the virtual memory mappings of the calling process whatsoever.

I initially saw this on an ancient Red Hat kernel, but I was able to reproduce it on 6.13 as well, so I assume this behavior still exists and has been around forever.
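
As a side note, here is a minimal sketch of how the same observation can be made in-process instead of diffing smaps copies, by looking up the mapping that covers a given address in /proc/self/maps (find_vma_start is just an illustrative helper, not part of the reproducer above):

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative helper: return the start address of the mapping in
 * /proc/self/maps that covers `addr`, or 0 if no mapping covers it.
 */
static uintptr_t find_vma_start(uintptr_t addr)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[512];
    uintptr_t found = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        unsigned long start, end;

        /* each line begins with "start-end", both in hex */
        if (sscanf(line, "%lx-%lx", &start, &end) != 2)
            continue;
        if (addr >= start && addr < end) {
            found = start;
            break;
        }
    }
    fclose(f);
    return found;
}

Calling this right after each mmap in the test program shows the same thing as the smaps diff: after the first mmap, the mapping covering 0x7c8000000000 starts at 0x7c8000000000; after the failed second mmap, no mapping covers that address at all, because the first 2M of the original mapping has been unmapped.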