On Thu, 2 Jun 2011 23:55:57 -0300 Rafael Aquini <aquini@xxxxxxxxx> wrote:

> When 1GB hugepages are allocated on a system, free(1) reports
> less available memory than what is really installed in the box.
> Also, if the total size of hugepages allocated on a system is
> over half of the total memory size, CommitLimit becomes
> a negative number.
>
> The problem is that gigantic hugepages (order > MAX_ORDER)
> can only be allocated at boot with bootmem, thus their frames
> are not accounted to 'totalram_pages'. However, they are
> accounted to hugetlb_total_pages().
>
> What happens to turn CommitLimit into a negative number
> is this calculation, in fs/proc/meminfo.c:
>
>         allowed = ((totalram_pages - hugetlb_total_pages())
>                 * sysctl_overcommit_ratio / 100) + total_swap_pages;
>
> A similar calculation occurs in __vm_enough_memory() in mm/mmap.c.
>
> Also, every vm statistic which depends on 'totalram_pages' will render
> confusing values, as if the system were 'missing' some part of its
> memory.

Is this bug serious enough to justify backporting the fix into -stable
kernels?
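
For anyone who wants to see the wraparound concretely, here is a minimal
userspace sketch of the arithmetic. This is not kernel code: the figures
(a 4 GiB box with 3 GiB reserved at boot as gigantic hugepages) are made
up, and the variables merely mimic the kernel symbols of the same names.

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical figures, in 4 KiB base pages: a 4 GiB box
		 * with 3 GiB of it in boot-allocated 1GB hugepages that
		 * were never added to totalram_pages. */
		unsigned long totalram_pages   = 262144;  /* 1 GiB accounted    */
		unsigned long hugetlb_pages    = 786432;  /* 3 GiB in hugepages */
		unsigned long total_swap_pages = 0;
		unsigned long sysctl_overcommit_ratio = 50;

		/* The unsigned subtraction wraps around; viewed as a
		 * signed quantity it is negative. */
		unsigned long diff = totalram_pages - hugetlb_pages;
		printf("totalram - hugetlb: %lu (as signed: %ld)\n",
		       diff, (long)diff);

		/* Same shape as the fs/proc/meminfo.c calculation quoted
		 * above; the wrapped input yields an absurd commit limit. */
		unsigned long allowed = (diff * sysctl_overcommit_ratio / 100)
					+ total_swap_pages;
		printf("allowed: %lu pages\n", allowed);
		return 0;
	}

On a 64-bit machine this prints -524288 for the signed view of the
difference, and an 'allowed' figure on the order of 1.8e17 pages, which
is the kind of nonsense value the changelog above is describing.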