On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
> On 11/28/2013 11:19 AM, Gleb Natapov wrote:
> >On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
> >>On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
> >>>>>Without synchronize_rcu you could have
> >>>>>
> >>>>>    VCPU writes to routing table
> >>>>>    e = entry from IRQ routing table
> >>>>>    kvm_irq_routing_update(kvm, new);
> >>>>>    VCPU resumes execution
> >>>>>    kvm_set_msi_irq(e, &irq);
> >>>>>    kvm_irq_delivery_to_apic_fast();
> >>>>>
> >>>>>where the entry is stale but the VCPU has already resumed execution.
> >>>>>
> >>>If we use call_rcu() instead of synchronize_rcu() (setting aside, for the moment, the problem that Gleb pointed out), do we still have to ensure this?
> >>The problem is that we do have to ensure this, so using call_rcu is not
> >>possible (even without considering the memory allocation problem).
> >>
> >Not changing the current behaviour is certainly safer, but I am still not 100%
> >convinced we have to ensure this.
> >
> >Suppose the guest does:
> >
> >1: change the MSI interrupt by writing to a PCI register
> >2: read the PCI register back to flush the write
> >3: zero the IDT
> >
> >I am pretty certain that this code can get an interrupt after step 2 on real HW,
> >but I cannot tell whether the guest can rely on it being delivered exactly after
> >the read instruction, or whether it can be delayed by a couple of instructions. It seems
> >to me it would be fragile for an OS to depend on this behaviour. AFAIK Linux does not.
> >
>
> Linux is safe: it does interrupt migration from within the interrupt
> handler. If you do that before the device-specific EOI, you won't
> get another interrupt until programming the MSI is complete.
>
> Is virtio safe? IIRC it can post multiple interrupts without guest acks.
>
> Using call_rcu() is a better solution than srcu IMO. Fewer code
> changes, and consistently faster.

Why not fix userspace to use KVM_SIGNAL_MSI instead?

--
			Gleb.
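As a companion to the RCU discussion above, here is a minimal kernel-style sketch, not the actual KVM routing code; my_table, active_table, update_sync and update_async are hypothetical names. It contrasts the two update strategies: with synchronize_rcu() the updater blocks until every reader that might still see the old table has finished, so no stale entry can be used once the update returns; with call_rcu() the updater returns immediately and the old table is freed later from a callback, which is exactly the window in which a VCPU could still deliver an interrupt through a stale entry.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_table {
	struct rcu_head rcu;
	/* ... routing entries ... */
};

static struct my_table __rcu *active_table;

static void my_table_free(struct rcu_head *head)
{
	kfree(container_of(head, struct my_table, rcu));
}

/* synchronize_rcu() variant: publish the new table, then wait for all
 * pre-existing readers to finish before freeing the old one.  After this
 * returns, no reader can still be using a stale entry. */
static void update_sync(struct my_table *new)
{
	struct my_table *old = rcu_dereference_protected(active_table, 1);

	rcu_assign_pointer(active_table, new);
	synchronize_rcu();
	kfree(old);
}

/* call_rcu() variant: publish the new table and return immediately; the old
 * table is freed later from an RCU callback.  A reader that started before
 * the switch may still deliver an interrupt through the old entry after this
 * function has returned, which is the ordering question raised in the thread. */
static void update_async(struct my_table *new)
{
	struct my_table *old = rcu_dereference_protected(active_table, 1);

	rcu_assign_pointer(active_table, new);
	call_rcu(&old->rcu, my_table_free);
}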
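And a rough userspace sketch of the KVM_SIGNAL_MSI route suggested at the end, assuming the VMM already holds the VM file descriptor; vm_fd, inject_msi and the address/data values are placeholders. Because the MSI address/data pair is passed on every injection, there is no cached routing-table entry that can go stale between an update and a delivery.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

static int inject_msi(int vm_fd, unsigned long long addr, unsigned int data)
{
	struct kvm_msi msi = {
		.address_lo = (unsigned int)(addr & 0xffffffffULL),
		.address_hi = (unsigned int)(addr >> 32),
		.data       = data,
	};

	/* The interrupt is delivered with exactly the values passed here,
	 * so the caller always injects through the device's current MSI
	 * configuration rather than through a cached routing entry. */
	if (ioctl(vm_fd, KVM_SIGNAL_MSI, &msi) < 0) {
		perror("KVM_SIGNAL_MSI");
		return -1;
	}
	return 0;
}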