On Tue, Feb 11, 2025 at 03:46:51AM +0000, Michael Kelley wrote:
> From: Saurabh Singh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx> Sent: Monday, February 10, 2025 7:33 PM
> >
> > On Mon, Feb 10, 2025 at 11:34:41AM -0800, mhkelley58@xxxxxxxxx wrote:
> > > From: Michael Kelley <mhklinux@xxxxxxxxxxx>
> > >
> > > When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> > > the vram, and maps it cacheable. If the device is removed, or in the error
> > > path for device probing, the MMIO space is released but no unmap is done.
> > > Consequently the kernel address space for the mapping is leaked.
> > >
> > > Fix this by adding iounmap() calls in the device removal path, and in the
> > > error path during device probing.
> > >
> > > Fixes: f1f63cbb705d ("drm/hyperv: Fix an error handling path in hyperv_vmbus_probe()")
> > > Fixes: a0ab5abced55 ("drm/hyperv : Removing the restruction of VRAM allocation with PCI bar size")
> > > Signed-off-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/hyperv/hyperv_drm_drv.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > index e0953777a206..b491827941f1 100644
> > > --- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > +++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > @@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
> > >  	return 0;
> > >
> > >  err_free_mmio:
> > > +	iounmap(hv->vram);
> > >  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > >  err_vmbus_close:
> > >  	vmbus_close(hdev->channel);
> > > @@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
> > >  	vmbus_close(hdev->channel);
> > >  	hv_set_drvdata(hdev, NULL);
> > >
> > > +	iounmap(hv->vram);
> > >  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > >  }
> > >
> > > --
> > > 2.25.1
> >
> > Thanks for the fix. May I know how you found such issues?
>
> I think it was that I was looking at the Hyper-V FB driver for the
> vmbus_free_mmio() call sites, and realizing that such call sites
> should probably also have an associated iounmap(). Then I was
> looking at the same thing in the Hyper-V DRM driver, and
> realizing there were no calls to iounmap()!
>
> To confirm, the contents of /proc/vmallocinfo can be filtered
> for ioremap calls with size 8 MiB (which actually show up as
> 8 MiB + 4 KiB because the address space allocator adds a guard
> page to each allocation). When doing repeated unbind/bind
> sequences on the DRM driver, those 8 MiB entries in
> /proc/vmallocinfo kept accumulating and were never freed.
>
> Michael

Thank you!

Regards,
Saurabh
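
As a rough illustration of the /proc/vmallocinfo check Michael describes, here is
a minimal userspace C sketch (not part of the original thread). The 8392704-byte
size (the 8 MiB framebuffer mapping plus the 4 KiB guard page) and the program
itself are assumptions for illustration only, not code from the driver or the mail:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* 8 MiB mapping + 4 KiB guard page, per the explanation above (assumed size) */
	const unsigned long leaked_size = 8UL * 1024 * 1024 + 4096;
	FILE *f = fopen("/proc/vmallocinfo", "r");
	char line[512];
	unsigned long count = 0;

	if (!f) {
		perror("fopen /proc/vmallocinfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		char range[64];
		unsigned long size;

		/* Each line is "<start>-<end> <size in bytes> <caller> ... <flags>" */
		if (sscanf(line, "%63s %lu", range, &size) != 2)
			continue;
		if (size == leaked_size && strstr(line, "ioremap"))
			count++;
	}
	fclose(f);

	printf("ioremap mappings of 8 MiB + 4 KiB: %lu\n", count);
	return 0;
}

Running it (as root, since /proc/vmallocinfo is root-readable) before and after a
few unbind/bind cycles of the DRM device should show the count growing on an
unpatched kernel and staying constant once the iounmap() calls are in place.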