On Sat, Mar 08, 2025 at 08:50:12PM -0800, Saurabh Singh Sengar wrote:
> On Mon, Feb 10, 2025 at 09:01:14PM -0800, mhkelley58@xxxxxxxxx wrote:
> > From: Michael Kelley <mhklinux@xxxxxxxxxxx>
> >
> > The VMBus driver manages the MMIO space it owns via the hyperv_mmio
> > resource tree. Because the synthetic video framebuffer portion of the
> > MMIO space is initially set up by the Hyper-V host for each guest, the
> > VMBus driver does an early reserve of that portion of MMIO space in the
> > hyperv_mmio resource tree. It saves a pointer to that resource in
> > fb_mmio. When a VMBus driver requests MMIO space and passes "true"
> > for the "fb_overlap_ok" argument, the reserved framebuffer space is
> > used if possible. In that case it's not necessary to do another request
> > against the "shadow" hyperv_mmio resource tree because that resource
> > was already requested in the early reserve steps.
> >
> > However, the vmbus_free_mmio() function currently does no special
> > handling for the fb_mmio resource. When a framebuffer device is
> > removed, or the driver is unbound, the current code for
> > vmbus_free_mmio() releases the reserved resource, leaving fb_mmio
> > pointing to memory that has been freed. If the same or another
> > driver is subsequently bound to the device, vmbus_allocate_mmio()
> > checks against fb_mmio, and potentially gets garbage. Furthermore,
> > a second unbind operation produces this "nonexistent resource" error
> > because of the unbalanced behavior between vmbus_allocate_mmio() and
> > vmbus_free_mmio():
> >
> > [   55.499643] resource: Trying to free nonexistent
> >                resource <0x00000000f0000000-0x00000000f07fffff>
> >
> > Fix this by adding logic to vmbus_free_mmio() to recognize when
> > MMIO space in the fb_mmio reserved area would be released, and don't
> > release it. This filtering ensures the fb_mmio resource always exists,
> > and makes vmbus_free_mmio() more parallel with vmbus_allocate_mmio().
> >
> > Fixes: be000f93e5d7 ("drivers:hv: Track allocations of children of hv_vmbus in private resource tree")
> > Signed-off-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
> > ---
> >  drivers/hv/vmbus_drv.c | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> >
> > diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
> > index 2892b8da20a5..7507b3641ebd 100644
> > --- a/drivers/hv/vmbus_drv.c
> > +++ b/drivers/hv/vmbus_drv.c
> > @@ -2262,12 +2262,25 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size)
> >  	struct resource *iter;
> >
> >  	mutex_lock(&hyperv_mmio_lock);
> > +
> > +	/*
> > +	 * If all bytes of the MMIO range to be released are within the
> > +	 * special case fb_mmio shadow region, skip releasing the shadow
> > +	 * region since no corresponding __request_region() was done
> > +	 * in vmbus_allocate_mmio().
> > +	 */
> > +	if (fb_mmio && (start >= fb_mmio->start) &&
> > +	    (start + size - 1 <= fb_mmio->end))
> > +		goto skip_shadow_release;
> > +
> >  	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
> >  		if ((iter->start >= start + size) || (iter->end <= start))
> >  			continue;
> >
> >  		__release_region(iter, start, size);
> >  	}
> > +
> > +skip_shadow_release:
> >  	release_mem_region(start, size);
> >  	mutex_unlock(&hyperv_mmio_lock);
> >
> > --
> > 2.25.1
> >
>
> Thanks for the fix.
> There are a couple of checkpatch.pl --strict CHECKs; after fixing those:
>
> Tested-by: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>
> Reviewed-by: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>

I will wait for a new version with the checkpatch.pl issues fixed.

Wei.