On Mon, 02 May 2011 16:44:30 PDT, Andrew Morton said:

> hm, me too.  After boot, hald has a get_mm_counter(mm, MM_ANONPAGES) of
> 0xffffffffffff3c27.  Bisected to Peter's
> mm-extended-batches-for-generic-mmu_gather.patch, can't see how it did
> that.

Looking at it:

@@ -177,15 +205,24 @@ tlb_finish_mmu(struct mmu_gather *tlb, u
  */
 static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 {
+	struct mmu_gather_batch *batch;
+
 	tlb->need_flush = 1;
+
 	if (tlb_fast_mode(tlb)) {
 		free_page_and_swap_cache(page);
 		return 1; /* avoid calling tlb_flush_mmu() */
 	}
-	tlb->pages[tlb->nr++] = page;
-	VM_BUG_ON(tlb->nr > tlb->max);
-	return tlb->max - tlb->nr;
+	batch = tlb->active;
+	batch->pages[batch->nr++] = page;
+	VM_BUG_ON(batch->nr > batch->max);
+	if (batch->nr == batch->max) {
+		if (!tlb_next_batch(tlb))
+			return 0;
+	}
+
+	return batch->max - batch->nr;
 }

Who's initializing/setting batch->max?  Perhaps whoever set up tlb->active
failed to do so?
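
For what it's worth, this is the sort of thing I'd expect to see somewhere:
both the statically embedded first batch and every batch allocated later need
their ->max filled in, otherwise the batch->nr == batch->max check above never
fires and batch->pages[] overruns into whatever follows it.  Sketch only, from
my reading of the hunk above; the names MAX_GATHER_BATCH, tlb->local and
tlb->__pages are my guesses, not quotes from the mmotm patch:

/*
 * Hypothetical sketch, not the actual patch.  A newly allocated batch
 * must get ->max set before tlb->active is pointed at it.
 */
static inline int tlb_next_batch(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
	if (!batch)
		return 0;

	batch->next = NULL;
	batch->nr   = 0;
	batch->max  = MAX_GATHER_BATCH;	/* easy to forget */

	tlb->active->next = batch;
	tlb->active = batch;
	return 1;
}

/*
 * ...and the embedded first batch needs the same treatment wherever the
 * gather is set up (tlb_gather_mmu() or equivalent), e.g.:
 *
 *	tlb->local.nr  = 0;
 *	tlb->local.max = ARRAY_SIZE(tlb->__pages);
 *	tlb->active    = &tlb->local;
 */

If either of those assignments is missing, a corrupted MM_ANONPAGES counter
like the one above wouldn't be surprising.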