> 2) find out how long the tlb batches actually are and make them smaller

PAGE_SIZE is 16k rather than 4k in my kernel configuration, and the maximum
batch count is 2015. Currently MAX_GATHER_BATCH depends on PAGE_SIZE:

    #define MAX_GATHER_BATCH ((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(void *))

I will cap the batch size at a smaller value that does not depend on PAGE_SIZE.

On 03/10/2022 05:11 PM, Vlastimil Babka wrote:
> On 3/10/22 04:29, Andrew Morton wrote:
>> On Thu, 10 Mar 2022 10:48:41 +0800 wangjianxing <wangjianxing@xxxxxxxxxxx> wrote:
>>> spin_lock() calls preempt_disable(), and interrupt context goes through
>>> __irq_enter()/local_bh_disable(), which also raise the preempt count by an
>>> offset. cond_resched() first checks whether preempt_count == 0, so it won't
>>> schedule in those contexts. Is this right?
>>>
>>> Alternatively, could we add a condition to avoid calling cond_resched() in
>>> interrupt context or under spin_lock()?
>>>
>>> +	if (preemptible())
>>> +		cond_resched();
>>
>> None of this works with CONFIG_PREEMPTION=n.
>
> Yeah, I think we have at least two options.
>
> 1) check all callers, maybe realize all have enabled interrupts anyway,
>    rewrite the locking to only assume those
>
> 2) find out how long the tlb batches actually are and make them smaller