The patch titled
     swap_prefetch: zoned vmstats fixes
has been removed from the -mm tree.  Its filename is
     swap_prefetch-vs-zoned-vm-stats.patch

This patch was dropped because it had testing failures

------------------------------------------------------
Subject: swap_prefetch: zoned vmstats fixes
From: Christoph Lameter <clameter@xxxxxxx>

Update zone_prefetch for zoned vm stats.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/swap_prefetch.c |   17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff -puN mm/swap_prefetch.c~swap_prefetch-vs-zoned-vm-stats mm/swap_prefetch.c
--- devel/mm/swap_prefetch.c~swap_prefetch-vs-zoned-vm-stats	2006-06-10 09:29:32.000000000 -0700
+++ devel-akpm/mm/swap_prefetch.c	2006-06-10 09:29:32.000000000 -0700
@@ -357,7 +357,6 @@ static int prefetch_suitable(void)
 	 */
 	for_each_node_mask(node, sp_stat.prefetch_nodes) {
 		struct node_stats *ns = &sp_stat.node[node];
-		struct page_state ps;
 
 		/*
 		 * We check to see that pages are not being allocated
@@ -378,10 +377,8 @@ static int prefetch_suitable(void)
 		if (!test_pagestate)
 			continue;
 
-		get_page_state_node(&ps, node);
-
 		/* We shouldn't prefetch when we are doing writeback */
-		if (ps.nr_writeback) {
+		if (global_page_state(NR_WRITEBACK)) {
 			node_clear(node, sp_stat.prefetch_nodes);
 			continue;
 		}
@@ -389,13 +386,13 @@ static int prefetch_suitable(void)
 		/*
 		 * >2/3 of the ram on this node is mapped, slab, swapcache or
 		 * dirty, we need to leave some free for pagecache.
-		 * Note that currently nr_slab is innacurate on numa because
-		 * nr_slab is incremented on the node doing the accounting
-		 * even if the slab is being allocated on a remote node. This
-		 * would be expensive to fix and not of great significance.
 		 */
-		limit = ps.nr_mapped + ps.nr_slab + ps.nr_dirty +
-			ps.nr_unstable + total_swapcache_pages;
+		limit = global_page_state(NR_MAPPED) +
+			global_page_state(NR_ANON) +
+			global_page_state(NR_SLAB) +
+			global_page_state(NR_DIRTY) +
+			global_page_state(NR_UNSTABLE) +
+			total_swapcache_pages;
 		if (limit > ns->prefetch_watermark) {
 			node_clear(node, sp_stat.prefetch_nodes);
 			continue;
_

Patches currently in -mm which might be from clameter@xxxxxxx are

page-migration-make-do_swap_page-redo-the-fault.patch
slab-extract-cache_free_alien-from-__cache_free.patch
migration-remove-unnecessary-pageswapcache-checks.patch
page-migration-cleanup-rename-ignrefs-to-migration.patch
page-migration-cleanup-group-functions.patch
page-migration-cleanup-remove-useless-definitions.patch
page-migration-cleanup-drop-nr_refs-in-remove_references.patch
page-migration-cleanup-extract-try_to_unmap-from-migration-functions.patch
page-migration-cleanup-pass-mapping-to-migration-functions.patch
page-migration-cleanup-move-fallback-handling-into-special-function.patch
swapless-pm-add-r-w-migration-entries.patch
swapless-page-migration-rip-out-swap-based-logic.patch
swapless-page-migration-modify-core-logic.patch
more-page-migration-do-not-inc-dec-rss-counters.patch
more-page-migration-use-migration-entries-for-file-pages.patch
page-migration-update-documentation.patch
mm-remove-vm_locked-before-remap_pfn_range-and-drop-vm_shm.patch
page-migration-simplify-migrate_pages.patch
page-migration-simplify-migrate_pages-tweaks.patch
page-migration-handle-freeing-of-pages-in-migrate_pages.patch
page-migration-use-allocator-function-for-migrate_pages.patch
page-migration-support-moving-of-individual-pages.patch
page-migration-detailed-status-for-moving-of-individual-pages.patch
page-migration-support-moving-of-individual-pages-fixes.patch
page-migration-support-moving-of-individual-pages-x86_64-support.patch
page-migration-support-moving-of-individual-pages-x86-support.patch
page-migration-support-a-vma-migration-function.patch
allow-migration-of-mlocked-pages.patch
cpuset-remove-extra-cpuset_zone_allowed-check-in-__alloc_pages.patch

-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html