On Mon, Oct 24, 2022, Sean Christopherson wrote:
> On Sat, Oct 22, 2022, Marc Zyngier wrote:
> > On Fri, 21 Oct 2022 17:05:26 +0100, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Oct 21, 2022, Marc Zyngier wrote:
> > > > Because dirtying memory outside of a vcpu context makes it
> > > > incredibly awkward to handle a "ring full" condition?
> > >
> > > Kicking all vCPUs with the soft-full request isn't _that_ awkward.
> > > It's certainly sub-optimal, but if inserting into the per-VM ring is
> > > relatively rare, then in practice it's unlikely to impact guest
> > > performance.
> >
> > But there is *nothing* to kick here. The kernel is dirtying pages,
> > devices are dirtying pages (DMA), and there is no context associated
> > with that. Which is why a finite ring is the wrong abstraction.
>
> I don't follow. If there's a VM, KVM can always kick all vCPUs. Again, might
> be far from optimal, but it's an option. If there's literally no VM, then KVM
> isn't involved at all and there's no "ring vs. bitmap" decision.

Finally caught up on the other part of the thread that calls out that the
devices can't be stalled.

https://lore.kernel.org/all/87czakgmc0.wl-maz@xxxxxxxxxx

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm