(2012/04/04 10:56), David Rientjes wrote:
> On COW, a new hugepage is allocated and charged to the memcg. If the
> memcg is oom, however, this charge will fail and will return VM_FAULT_OOM
> to the page fault handler which results in an oom kill.
>
> Instead, it's possible to fall back to splitting the hugepage so that the
> COW results only in an order-0 page being charged to the memcg which has
> a higher likelihood to succeed. This is expensive because the hugepage
> must be split in the page fault handler, but it is much better than
> unnecessarily oom killing a process.
>
> Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
> ---
>  mm/huge_memory.c |    1 +
>  mm/memory.c      |   18 +++++++++++++++---
>  2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -959,6 +959,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
>
>  	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
>  		put_page(new_page);
> +		split_huge_page(page);
>  		put_page(page);
>  		ret |= VM_FAULT_OOM;
>  		goto out;

??

how about
==
	if (transparent_hugepage_enabled(vma) &&
	    !transparent_hugepage_debug_cow())
		new_page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
					      vma, haddr, numa_node_id(), 0);
	else
		new_page = NULL;

	/* new_page may be NULL if THP is disabled or the allocation failed */
	if (unlikely(new_page &&
		     mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
		put_page(new_page);
		new_page = NULL;	/* never OOM, just cause fallback */
	}

	if (unlikely(!new_page)) {
		count_vm_event(THP_FAULT_FALLBACK);
		ret = do_huge_pmd_wp_page_fallback(mm, vma, address,
						   pmd, orig_pmd, page, haddr);
		put_page(page);
		goto out;
	}
==
?

Thanks,
-Kame
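
The size gap behind "a higher likelihood to succeed" can be made concrete with
a minimal userspace sketch, assuming the usual x86-64 defaults of 4 KB base
pages and order-9 (2 MB) transparent hugepages (these sizes are an assumption
for illustration, not something stated in the thread): a memcg near its limit
frequently cannot absorb a 2 MB charge for the copied hugepage, but can still
take the single 4 KB charge needed for an order-0 copy.
==
/* Toy userspace illustration of the charge sizes involved, not kernel code. */
#include <stdio.h>

int main(void)
{
	const unsigned long base_page = 4096;	/* order-0 page: 4 KB */
	const unsigned int hpage_order = 9;	/* HPAGE_PMD_ORDER on x86-64 */
	const unsigned long hugepage = base_page << hpage_order;

	/* memcg charge if the COW copies a whole hugepage */
	printf("THP COW charge:     %lu KB\n", hugepage / 1024);
	/* memcg charge for a single base page after a split */
	printf("order-0 COW charge: %lu KB\n", base_page / 1024);
	return 0;
}
==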