Re: HugePage by default

On Wed, Jul 30, 2014 at 5:22 PM, <Valdis.Kletnieks@xxxxxx> wrote:
On Wed, 30 Jul 2014 15:06:39 -0500, Xin Tong said:


> 2. modify the kernel (maybe extensively) to allocate 2MB page by default.

How fast do you run out of memory if you do that every time you actually
only need a few 4K pages?  (In other words - think about why that isn't the
default behavior already :)

I am planning to use this only for workloads with very large memory footprints, e.g. Hadoop, TPC-C, etc.

BTW, I see the Linux kernel uses hugetlbfs to manage huge pages. Every API call - mmap, shmget, etc. - requires a hugetlbfs mount before the huge pages can be allocated. Why can't huge pages be allocated the same way as 4K pages? What's the point of having hugetlbfs?

Xin

_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@xxxxxxxxxxxxxxxxx
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
