On 10/31/2011 07:50 AM, Avi Kivity wrote:
> On 10/31/2011 09:53 AM, Alexander Graf wrote:
>> +/* sesel is index into the set, not the whole array */
>> +static void write_stlbe(struct kvmppc_vcpu_e500 *vcpu_e500,
>> +			struct tlbe *gtlbe,
>> +			struct tlbe *stlbe,
>> +			int stlbsel, int sesel)
>> +{
>> +	int stid;
>> +
>> +	preempt_disable();
>> +	stid = kvmppc_e500_get_sid(vcpu_e500, get_tlb_ts(gtlbe),
>> +				   get_tlb_tid(gtlbe),
>> +				   get_cur_pr(&vcpu_e500->vcpu), 0);
>> +
>> +	stlbe->mas1 |= MAS1_TID(stid);
>> +	write_host_tlbe(vcpu_e500, stlbsel, sesel, stlbe);
>> +	preempt_enable();
>> +}
>> +
>
> This naked preempt_disable() is fishy.  What happens if we're migrated
> immediately afterwards?  We fault again and redo?

Yes, we'll fault again.  We just want to make sure that the sid is still
valid when we write the TLB entry.  If we migrate, we'll get a new sid and
the old TLB entry will be irrelevant, even if we migrate back to the same
CPU: the entire TLB is flushed before a sid is ever reused.

Without the preempt_disable(), we could look up the sid on one CPU, get
migrated, and then write the entry on another CPU where that sid is (or
will be) valid for a different context.  Or we could run out of sids while
preempted, in which case the sid allocated beforehand may already be valid
for a different context by the time we use it.

-Scott
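The sid-exhaustion hazard described above can be sketched in plain C.  This is a hypothetical model, not the actual KVM e500 allocator: the names (`sid_pool`, `alloc_sid`, `demo_stale_sid_alias`, `MAX_SIDS`) and the tiny pool size are illustrative.  It only shows the invariant the `preempt_disable()` section protects: a sid captured before a pool flush can numerically collide with a sid handed out afterwards for a different context.

```c
#define MAX_SIDS 4  /* tiny pool so recycling is easy to trigger */

/*
 * Hypothetical model of a per-CPU sid allocator (names are illustrative,
 * not the real KVM e500 code).  When the pool runs out, the whole shadow
 * TLB is flushed and the sid space restarts from zero.
 */
struct sid_pool {
	int next_sid;
	int flush_epoch;  /* bumped on every full TLB flush */
};

static int alloc_sid(struct sid_pool *p)
{
	if (p->next_sid >= MAX_SIDS) {
		p->next_sid = 0;
		p->flush_epoch++;  /* every sid handed out earlier is now stale */
	}
	return p->next_sid++;
}

/*
 * Returns 1 if a sid captured before a flush numerically collides with a
 * sid allocated for a different context after the flush -- the window
 * that preempt_disable() closes in write_stlbe().
 */
static int demo_stale_sid_alias(void)
{
	struct sid_pool pool = {0, 0};
	int sid = alloc_sid(&pool);         /* sid looked up for context A */
	int epoch = pool.flush_epoch;
	int reused, i;

	/* While "preempted", other contexts exhaust the pool... */
	for (i = 0; i < MAX_SIDS - 1; i++)
		alloc_sid(&pool);
	reused = alloc_sid(&pool);          /* ...and this alloc flushes and recycles */

	/* Same numeric sid, different flush epoch => different context. */
	return sid == reused && epoch != pool.flush_epoch;
}
```

In the real code the equivalent of `flush_epoch` advancing is the full TLB flush before sid reuse, which is why a stale entry written with an old sid would silently match a new context unless the lookup and the TLB write happen without a preemption point in between.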