On Fri, 8 Oct 2010 13:37:12 +0900 KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> On Thu, 7 Oct 2010 16:14:54 -0700
> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> > On Thu, 7 Oct 2010 17:04:05 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > > Currently, at task migration between cgroups, the memory cgroup scans the
> > > page table and moves the accounting if the flags are properly set.
> > >
> > > The core code, mem_cgroup_move_charge_pte_range(), does:
> > >
> > >   pte_offset_map_lock();
> > >   for all ptes in a page table:
> > >     1. look into the page table, find_and_get a page
> > >     2. remove it from the LRU
> > >     3. move the charge
> > >     4. put it back on the LRU, put_page()
> > >   pte_offset_map_unlock();
> > >
> > > for the pte entries of one 3rd-level(?) page table.
> > >
> > > This pte_offset_map_lock section seems a bit long. This patch modifies the
> > > routine to:
> > >
> > >   for every 32 pages: pte_offset_map_lock()
> > >     find_and_get a page
> > >     record it
> > >   pte_offset_map_unlock()
> > >   for all recorded pages:
> > >     isolate it from the LRU
> > >     move the charge
> > >     put it back on the LRU
> > >   for all recorded pages:
> > >     put_page()
> >
> > The patch makes the code larger, more complex and slower!
>
> Slower?

Sure. It walks the same data three times, potentially causing thrashing in the L1 cache. It takes and releases locks at a higher frequency. It increases the text size.