From: Wei Hu <weh@xxxxxxxxxxxxx>
Sent: Tuesday, September 17, 2019 11:03 PM
>
> Without deferred IO support, the hyperv_fb driver tells the host to
> refresh the entire guest frame buffer at a fixed rate, e.g. 20Hz,
> whether or not the screen has been updated. This patch adds deferred IO
> support for screens in graphics mode and enables on-demand refresh of
> the frame buffer. The maximum refresh rate is still capped at 20Hz.
>
> Currently Hyper-V only takes a physical address from the guest as the
> starting address of the frame buffer, which implies the guest must
> allocate contiguous physical memory for it. In addition, Hyper-V Gen 2
> VMs only accept an address from the MMIO region as the frame buffer
> address. Due to these limitations on the Hyper-V host, we keep a shadow
> copy of the frame buffer in the guest. This means one extra copy of the
> dirty rectangle inside the guest on every on-demand refresh. This can
> be optimized in the future with help from the host; for now, the
> host-side performance gain from deferred IO outweighs the cost of the
> shadow copy in the guest.
>
> Signed-off-by: Wei Hu <weh@xxxxxxxxxxxxx>
> ---
> v2: Incorporated review comments from Michael Kelley
>     - Increased the dirty rectangle by one row in the deferred IO case
>       when sending it to Hyper-V.
>     - Corrected the dirty rectangle size in text mode.
>     - Added more comments.
>     - Other minor code cleanups.
>
> v3: Incorporated more review comments
>     - Removed a few unnecessary variable tests.
>
> v4: Incorporated test and review feedback from Dexuan Cui
>     - Do not disable interrupts while acquiring docopy_lock in
>       hvfb_update_work(). This avoids a significant boot delay in VMs
>       with large vCPU counts.
>
> v5: Completely removed the unnecessary docopy_lock after discussion
>     with Dexuan Cui.
>
> v6: Do not request a host refresh when the guest screen is closed or
>     minimized.
>
>  drivers/video/fbdev/Kconfig     |   1 +
>  drivers/video/fbdev/hyperv_fb.c | 210 ++++++++++++++++++++++++++++----
>  2 files changed, 190 insertions(+), 21 deletions(-)
>

Reviewed-by: Michael Kelley <mikelley@xxxxxxxxxxxxx>
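
For reference, the sketch below shows the general shape of the fbdev
deferred IO hookup this patch enables. It is a minimal illustration of
the standard <linux/fb.h> API under kernels of this era (where the
callback receives a list of dirty pages), not the code from the patch
itself: hvfb_dio_callback and hvfb_notify_host are hypothetical names
standing in for the driver's real identifiers.

	#include <linux/fb.h>
	#include <linux/kernel.h>
	#include <linux/list.h>
	#include <linux/mm_types.h>

	/* Hypothetical stand-in for the driver's host-notification path:
	 * copy rows [y1, y2] from the fbdev buffer into the contiguous
	 * shadow frame buffer the host reads from, then send the dirty
	 * rectangle to Hyper-V.
	 */
	static void hvfb_notify_host(struct fb_info *info, int y1, int y2);

	/* fbdev invokes this after writes to the mmap'ed frame buffer
	 * have accumulated for .delay jiffies; each pagelist entry is a
	 * dirty page of the frame buffer.
	 */
	static void hvfb_dio_callback(struct fb_info *info,
				      struct list_head *pagelist)
	{
		int y1, y2, miny = INT_MAX, maxy = 0;
		struct page *page;

		list_for_each_entry(page, pagelist, lru) {
			unsigned long start = page->index << PAGE_SHIFT;
			unsigned long end = start + PAGE_SIZE - 1;

			/* Convert byte offsets into scan-line numbers
			 * and grow the dirty band to cover this page.
			 */
			y1 = start / info->fix.line_length;
			y2 = end / info->fix.line_length;
			miny = min(miny, y1);
			maxy = max(maxy, y2);
		}

		/* Extend by one row before sending, per the v2 note. */
		hvfb_notify_host(info, miny, maxy + 1);
	}

	static struct fb_deferred_io hvfb_defio = {
		.delay		= HZ / 20,	/* cap refresh at 20Hz */
		.deferred_io	= hvfb_dio_callback,
	};

	/* In the probe path: */
		info->fbdefio = &hvfb_defio;
		fb_deferred_io_init(info);

The one-line Kconfig change is presumably the matching
"select FB_DEFERRED_IO" for the hyperv_fb entry, which pulls in the
deferred IO core that schedules the callback above.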