On Tue, 1 May 2012, Jeremy Fitzhardinge wrote:
> On 05/01/2012 03:59 AM, Peter Zijlstra wrote:
> > On Tue, 2012-05-01 at 12:57 +0200, Peter Zijlstra wrote:
> >> Anyway, I don't have any idea about the costs involved with
> >> HAVE_RCU_TABLE_FREE, but I don't think its much.. otherwise these other
> >> platforms (PPC,SPARC) wouldn't have used it, gup_fast() is a very
> >> specific case, whereas mmu-gather is something affecting pretty much all
> >> tasks.
> >
> > Which reminds me, I thought Xen needed this too, but a git grep on
> > HAVE_RCU_TABLE_FREE shows its still only ppc and sparc.
> >
> > Jeremy?
>
> Yeah, I was thinking that too, but I can't remember what we did to
> resolve it. For pure PV guests, gupf simply isn't used, so the problem
> is moot. But for dom0 or PCI-passthrough it could be.

Yes, dom0 can use gupf, for example when a userspace block backend is
involved.

Reading the code, it seems to me that xen_flush_tlb_others returns
immediately and successfully, regardless of whether the remote vcpus are
currently running and regardless of whether any of them have interrupts
disabled. Therefore I think that we should be using HAVE_RCU_TABLE_FREE.
I am going to submit a patch for that.
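To make the point concrete, below is a rough sketch of what routing one of the
x86 page-table free helpers through the RCU-based path could look like. This is
only an illustration under the assumption that x86 would select
HAVE_RCU_TABLE_FREE and call the generic tlb_remove_table() from its
*_free_tlb() helpers; it is not the patch being referred to, and the exact file
placement and omitted paravirt hooks are my guesses.

/*
 * Illustrative sketch only -- not the actual submission.  Assumes x86
 * selects HAVE_RCU_TABLE_FREE so the generic tlb_remove_table() is
 * available; signatures follow arch/x86/mm/pgtable.c as I understand it.
 */
#include <asm/pgalloc.h>
#include <asm/tlb.h>

void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
{
	pgtable_page_dtor(pte);
	/*
	 * Instead of handing the page-table page straight back to the
	 * allocator, queue it on the mmu_gather batch.  With
	 * HAVE_RCU_TABLE_FREE the batch is freed from an RCU-sched
	 * callback, i.e. only after every CPU has left any
	 * interrupts-disabled region such as the lockless gup_fast()
	 * walk.  That restores the exclusion that IPI-based
	 * flush_tlb_others() provides on native x86 but that
	 * xen_flush_tlb_others() (a hypercall returning immediately)
	 * does not.
	 */
	tlb_remove_table(tlb, pte);
}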