On Mon, Oct 22, 2012 at 02:55:14PM +0200, Jan Kiszka wrote:
> On 2012-10-22 14:53, Gleb Natapov wrote:
> > On Mon, Oct 22, 2012 at 02:45:37PM +0200, Jan Kiszka wrote:
> >> On 2012-10-22 14:18, Avi Kivity wrote:
> >>> On 10/22/2012 01:45 PM, Jan Kiszka wrote:
> >>>
> >>>>> Indeed. git pull, recheck, and the call for kvm_flush_coalesced_mmio_buffer()
> >>>>> is gone. So this will break new userspace, not old. By global you mean
> >>>>> shared between devices (or memory regions)?
> >>>>
> >>>> Yes. We only have a single ring per VM, so we cannot flush multi-second
> >>>> VGA access separately from other devices. In theory solvable by
> >>>> introducing per-region rings that can be driven separately.
> >>>
> >>> But in practice unneeded. Real-time VMs can disable coalescing and not
> >>> use planar VGA modes.
> >>
> >> A) At least right now, we do not differentiate between the VGA modes and
> >> whether flushing is needed. So that device is generally taboo for RT cores
> >> of the VM.
> >> B) We need to disable coalescing in E1000 as well - if we want to use
> >> that model.
> >> C) Gleb seems to propose using coalescing far beyond those two use cases.
> >>
> > Since the userspace change is needed, the idea is dead, but if we could
> > implement it I do not see how it could hurt latency if it were the only
> > mechanism using the coalesced mmio buffer. Checking that the ring buffer
> > is empty is cheap, and if it is not empty it means the kernel just saved
> > you a lot of exits, so even after iterating over all the entries there
> > you still saved a lot of time.
>
> When taking an exit for A, I'm not interested in flushing stuff for B
> unless I have a dependency. Thus, buffers would have to be per device
> before extending their use.
>
But this is not what will happen (in the absence of other users of
coalesced mmio). What will happen is that instead of taking 200 exits
for B you will take 1 exit for B.

--
			Gleb.
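
For concreteness, here is a minimal sketch of the userspace flush loop under
discussion. Assumptions: the struct kvm_coalesced_mmio_ring layout from
<linux/kvm.h>, a 4 KiB ring page mmap'ed at KVM_COALESCED_MMIO_PAGE_OFFSET,
and a hypothetical handle_mmio_write() standing in for device-emulation
dispatch (QEMU's real consumer is kvm_flush_coalesced_mmio_buffer()). It
illustrates Gleb's point: the emptiness test is a single compare, while each
drained entry replaces what would otherwise have been a full MMIO exit.

#include <linux/kvm.h>
#include <stdint.h>

/* Entries that fit in the ring page after the header; assumes 4 KiB pages. */
#define COALESCED_RING_MAX \
    ((4096 - sizeof(struct kvm_coalesced_mmio_ring)) / \
     sizeof(struct kvm_coalesced_mmio))

/* Hypothetical dispatch into device emulation. */
void handle_mmio_write(uint64_t addr, const void *data, uint32_t len);

static void flush_coalesced_mmio(struct kvm_coalesced_mmio_ring *ring)
{
    /* The cheap emptiness check: one load and one compare per vmexit. */
    while (ring->first != ring->last) {
        struct kvm_coalesced_mmio *ent = &ring->coalesced_mmio[ring->first];

        /* Replay the buffered write; each entry is one avoided exit. */
        handle_mmio_write(ent->phys_addr, ent->data, ent->len);

        /* Order the replay before handing the slot back to the kernel. */
        __sync_synchronize();
        ring->first = (ring->first + 1) % COALESCED_RING_MAX;
    }
}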