On Wed, Jul 5, 2017 at 5:25 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Thu, Jun 29, 2017 at 08:53:22AM -0700, Andy Lutomirski wrote:
>> +static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
>> +			    u16 *new_asid, bool *need_flush)
>> +{
>> +	u16 asid;
>> +
>> +	if (!static_cpu_has(X86_FEATURE_PCID)) {
>> +		*new_asid = 0;
>> +		*need_flush = true;
>> +		return;
>> +	}
>> +
>> +	for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++) {
>> +		if (this_cpu_read(cpu_tlbstate.ctxs[asid].ctx_id) !=
>> +		    next->context.ctx_id)
>> +			continue;
>> +
>> +		*new_asid = asid;
>> +		*need_flush = (this_cpu_read(cpu_tlbstate.ctxs[asid].tlb_gen) <
>> +			       next_tlb_gen);
>> +		return;
>> +	}
>> +
>> +	/*
>> +	 * We don't currently own an ASID slot on this CPU.
>> +	 * Allocate a slot.
>> +	 */
>> +	*new_asid = this_cpu_add_return(cpu_tlbstate.next_asid, 1) - 1;
>
> So this basically RR the ASID slots. Have you tried slightly more
> complex replacement policies like CLOCK ?

No, mainly because I'm lazy and because CLOCK requires scavenging a bit.
(Which we can certainly do, but it will further complicate the code.)
It could be worth playing with better replacement algorithms as a
followup, though.

I've also considered a slight elaboration of RR in which we make sure
not to reuse the most recent ASID slot, which would guarantee that, if
we switch from task A to B and back to A, we don't flush on the way
back to A.  (Currently, if B is not in the cache, there's a 1/6 chance
we'll flush on the way back.)
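For reference, the CLOCK policy Peter suggests could be sketched roughly as
below.  This is a user-space illustration only, not the kernel code: the
names (NR_DYN_SLOTS, touch_slot, clock_pick_victim) are invented, and the
per-slot "referenced" flag is exactly the bit that would have to be
scavenged from somewhere in cpu_tlbstate.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of CLOCK replacement over the dynamic ASID
 * slots.  All names are made up for illustration.
 */
#define NR_DYN_SLOTS 6

static bool referenced[NR_DYN_SLOTS];	/* the bit CLOCK needs */
static unsigned int clock_hand;

/* Mark a slot as recently used, e.g. on a context-switch hit. */
static void touch_slot(unsigned int slot)
{
	referenced[slot] = true;
}

/* Sweep the hand; referenced slots get a second chance. */
static unsigned int clock_pick_victim(void)
{
	for (;;) {
		unsigned int slot = clock_hand;

		clock_hand = (clock_hand + 1) % NR_DYN_SLOTS;
		if (!referenced[slot])
			return slot;
		referenced[slot] = false;
	}
}
```

The extra state (one bit per slot plus the hand) and the unbounded-looking
sweep are what "will further complicate the code" refers to, compared with
the single next_asid counter the round-robin scheme needs.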
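The round-robin elaboration described above, where the slot currently in
use is never handed out as the victim, could look something like the
following user-space sketch.  Again, NR_DYN_SLOTS and pick_new_slot() are
invented names for illustration, not the kernel's choose_new_asid():

```c
#include <assert.h>

/*
 * Hypothetical sketch of round-robin ASID allocation that skips the
 * most recently used slot, so switching A -> B -> A never evicts A.
 * Names are made up; this is not the kernel implementation.
 */
#define NR_DYN_SLOTS 6

static unsigned int next_slot;

static unsigned int pick_new_slot(unsigned int cur_slot)
{
	unsigned int slot = next_slot % NR_DYN_SLOTS;

	/* Never evict the slot we are switching away from. */
	if (slot == cur_slot)
		slot = (slot + 1) % NR_DYN_SLOTS;

	next_slot = slot + 1;
	return slot;
}
```

With plain round-robin the hand can land on A's slot while B is being
installed (the 1/6 chance mentioned above for six slots); the one extra
compare removes that case at the cost of slightly faster wraparound.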