(2012/04/10 9:23), David Rientjes wrote:
> On Mon, 9 Apr 2012, KAMEZAWA Hiroyuki wrote:
>
>>	if (transparent_hugepage_enabled(vma) &&
>>	    !transparent_hugepage_debug_cow())
>>		new_page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
>>					      vma, haddr, numa_node_id(), 0);
>>	else
>>		new_page = NULL;
>>
>>	if (new_page &&
>>	    unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
>>		put_page(new_page);
>>		new_page = NULL;	/* never OOM, just cause fallback */
>>	}
>>
>>	if (unlikely(!new_page)) {
>>		count_vm_event(THP_FAULT_FALLBACK);
>>		ret = do_huge_pmd_wp_page_fallback(mm, vma, address,
>>						   pmd, orig_pmd, page, haddr);
>>		put_page(page);
>>		goto out;
>>	}
>
> This would result in the same error, since do_huge_pmd_wp_page_fallback()
> would fail to charge the necessary memory to the memcg.
>

Ah, I see. This will charge 1024 pages anyway.
But... hmm, memcg easily returns failure when that many pages are
requested at once.

And... I misunderstood your patch. You split the hugepage and allocate
one page at fault time. OK, that seems reasonable; I'm sorry.

Thanks,
-Kame

> Are you still including my change to handle_mm_fault() to retry if this
> returns VM_FAULT_OOM?
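
For readers following along, here is a minimal userspace sketch (not
kernel code) of the failure mode under discussion: near a memcg-style
hard limit, an all-or-nothing charge for a whole huge page fails easily,
while a single base-page charge still fits, which is why splitting the
hugepage and charging one page at fault time makes forward progress.
The names charge_bulk() and HPAGE_NR, and the limit and usage values,
are illustrative assumptions, not the real memcg API.

/*
 * Userspace model only: "charged" stands in for a memcg's page counter,
 * and charge_bulk() for an all-or-nothing charge against a hard limit.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_NR 512			/* base pages per 2MiB THP on x86-64 */

static long charged = 200;		/* pages charged by earlier faults (assumed) */
static const long limit = 600;		/* hard limit, in pages (assumed) */

/* All-or-nothing charge of nr pages; fails instead of triggering OOM. */
static bool charge_bulk(long nr)
{
	if (charged + nr > limit)
		return false;
	charged += nr;
	return true;
}

int main(void)
{
	if (charge_bulk(HPAGE_NR)) {
		printf("huge page charged (%ld/%ld)\n", charged, limit);
	} else {
		/* The bulk charge for a whole THP fails near the limit... */
		printf("THP charge failed, falling back\n");
		/*
		 * ...but a single base page still fits, so the
		 * split-and-fault-one-page fallback succeeds.
		 */
		if (charge_bulk(1))
			printf("one base page charged (%ld/%ld)\n",
			       charged, limit);
	}
	return 0;
}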