Re: [PATCH v4 06/33] drm/gpusvm: Add support for GPU Shared Virtual Memory

On Thu, Jan 30, 2025 at 03:13:22PM +0200, Gwan-gyeong Mun wrote:
> 
> 
> On 1/30/25 1:17 PM, Matthew Auld wrote:
> > On 29/01/2025 19:51, Matthew Brost wrote:
> > > This patch introduces support for GPU Shared Virtual Memory (SVM) in the
> > > Direct Rendering Manager (DRM) subsystem. SVM allows for seamless
> > > sharing of memory between the CPU and GPU, enhancing performance and
> > > flexibility in GPU computing tasks.
> > > 
> > > The patch adds the necessary infrastructure for SVM, including data
> > > structures and functions for managing SVM ranges and notifiers. It also
> > > provides mechanisms for allocating, deallocating, and migrating memory
> > > regions between system RAM and GPU VRAM.
> > > 
> > > This is largely inspired by GPUVM.
> > > 
> > > v2:
> > >   - Take order into account in check pages
> > >   - Clear range->pages in get pages error
> > >   - Drop setting dirty or accessed bit in get pages (Vetter)
> > >   - Remove mmap assert for cpu faults
> > >   - Drop mmap write lock abuse (Vetter, Christian)
> > >   - Decouple zdd from range (Vetter, Oak)
> > >   - Add drm_gpusvm_range_evict, make it work with coherent pages
> > >   - Export drm_gpusvm_evict_to_sram, only use in BO evict path (Vetter)
> > >   - mmget/put in drm_gpusvm_evict_to_sram
> > >   - Drop range->vram_allocation variable
> > >   - Don't return in drm_gpusvm_evict_to_sram until all pages detached
> > >   - Don't warn on mixing sram and device pages
> > >   - Update kernel doc
> > >   - Add coherent page support to get pages
> > >   - Use DMA_FROM_DEVICE rather than DMA_BIDIRECTIONAL
> > >   - Add struct drm_gpusvm_vram and ops (Thomas)
> > >   - Update the range's seqno if the range is valid (Thomas)
> > >   - Remove the is_unmapped check before hmm_range_fault (Thomas)
> > >   - Use drm_pagemap (Thomas)
> > >   - Drop kfree_mapping (Thomas)
> > >   - dma map pages under notifier lock (Thomas)
> > >   - Remove ctx.prefault
> > >   - Remove ctx.mmap_locked
> > >   - Add ctx.check_pages
> > >   - s/vram/devmem (Thomas)
> > > v3:
> > >   - Fix memory leak in drm_gpusvm_range_get_pages
> > >   - Only migrate pages with same zdd on CPU fault
> > >   - Loop over all VMAs in drm_gpusvm_range_evict
> > >   - Make GPUSVM a drm level module
> > >   - GPL or MIT license
> > >   - Update main kernel doc (Thomas)
> > >   - Prefer foo() vs foo for functions in kernel doc (Thomas)
> > >   - Prefer functions over macros (Thomas)
> > >   - Use unsigned long vs u64 for addresses (Thomas)
> > >   - Use standard interval_tree (Thomas)
> > >   - s/drm_gpusvm_migration_put_page/drm_gpusvm_migration_unlock_put_page (Thomas)
> > >   - Drop err_out label in drm_gpusvm_range_find_or_insert (Thomas)
> > >   - Fix kernel doc in drm_gpusvm_range_free_pages (Thomas)
> > >   - Newlines between functions defs in header file (Thomas)
> > >   - Drop shall language in driver vfunc kernel doc (Thomas)
> > >   - Move some static inlines from header to C file (Thomas)
> > >   - Don't allocate pages under page lock in drm_gpusvm_migrate_populate_ram_pfn (Thomas)
> > >   - Change check_pages to a threshold
> > > v4:
> > >   - Fix NULL ptr deref in drm_gpusvm_migrate_populate_ram_pfn (Thomas, Himal)
> > >   - Fix check pages threshold
> > >   - Check for range being unmapped under notifier lock in get pages (Testing)
> > >   - Fix characters per line
> > >   - Drop WRITE_ONCE for zdd->devmem_allocation assignment (Thomas)
> > >   - Use completion for devmem_allocation->detached (Thomas)
> > >   - Make GPU SVM depend on ZONE_DEVICE (CI)
> > >   - Use hmm_range_fault for eviction (Thomas)
> > >   - Drop zdd worker (Thomas)
> > > 
> > > Cc: Simona Vetter <simona.vetter@xxxxxxxx>
> > > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > > Cc: Christian König <christian.koenig@xxxxxxx>
> > > Cc: <dri-devel@xxxxxxxxxxxxxxxxxxxxx>
> > > Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
> > > Signed-off-by: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
> > > ---
> > 
> > <snip>
> > 
> > > +/**
> > > + * __drm_gpusvm_migrate_to_ram() - Migrate GPU SVM range to RAM (internal)
> > > + * @vas: Pointer to the VM area structure
> > > + * @device_private_page_owner: Device private pages owner
> > > + * @page: Pointer to the page for fault handling (can be NULL)
> > > + * @fault_addr: Fault address
> > > + * @size: Size of migration
> > > + *
> > > + * This internal function performs the migration of the specified GPU SVM range
> > > + * to RAM. It sets up the migration, populates + dma maps RAM PFNs, and
> > > + * invokes the driver-specific operations for migration to RAM.
> > > + *
> > > + * Returns:
> > > + * 0 on success, negative error code on failure.
> > > + */
> > > +static int __drm_gpusvm_migrate_to_ram(struct vm_area_struct *vas,
> > > +                       void *device_private_page_owner,
> > > +                       struct page *page,
> > > +                       unsigned long fault_addr,
> > > +                       unsigned long size)
> > > +{
> > > +    struct migrate_vma migrate = {
> > > +        .vma        = vas,
> > > +        .pgmap_owner    = device_private_page_owner,
> > > +        .flags        = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
> > > +            MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > > +        .fault_page    = page,
> > > +    };
> > > +    struct drm_gpusvm_zdd *zdd;
> > > +    const struct drm_gpusvm_devmem_ops *ops;
> > > +    struct device *dev;
> > > +    unsigned long npages, mpages = 0;
> > > +    struct page **pages;
> > > +    dma_addr_t *dma_addr;
> > > +    unsigned long start, end;
> > > +    void *buf;
> > > +    int i, err = 0;
> > > +
> > > +    start = ALIGN_DOWN(fault_addr, size);
> > > +    end = ALIGN(fault_addr + 1, size);
> > > +
> > > +    /* Corner case where the VMA has been partially unmapped */
> > > +    if (start < vas->vm_start)
> > > +        start = vas->vm_start;
> > > +    if (end > vas->vm_end)
> > > +        end = vas->vm_end;
> > > +
> > > +    migrate.start = start;
> > > +    migrate.end = end;
> > > +    npages = npages_in_range(start, end);
> > > +
> > > +    buf = kvcalloc(npages, 2 * sizeof(*migrate.src) + sizeof(*dma_addr) +
> > > +               sizeof(*pages), GFP_KERNEL);
> > > +    if (!buf) {
> > > +        err = -ENOMEM;
> > > +        goto err_out;
> > > +    }
> > > +    dma_addr = buf + (2 * sizeof(*migrate.src) * npages);
> > > +    pages = buf + (2 * sizeof(*migrate.src) + sizeof(*dma_addr)) * npages;
> > > +
> > > +    migrate.vma = vas;
> > > +    migrate.src = buf;
> > > +    migrate.dst = migrate.src + npages;
> > > +
> > > +    err = migrate_vma_setup(&migrate);
> > > +    if (err)
> > > +        goto err_free;
> > > +
> > > +    /* Raced with another CPU fault, nothing to do */
> > > +    if (!migrate.cpages)
> > > +        goto err_free;
> > > +
> > > +    if (!page) {
> > > +        for (i = 0; i < npages; ++i) {
> > > +            if (!(migrate.src[i] & MIGRATE_PFN_MIGRATE))
> > > +                continue;
> > > +
> > > +            page = migrate_pfn_to_page(migrate.src[i]);
> > > +            break;
> > > +        }
> > > +
> > > +        if (!page)
> > > +            goto err_finalize;
> > > +    }
> > > +    zdd = page->zone_device_data;
> > > +    ops = zdd->devmem_allocation->ops;
> > > +    dev = zdd->devmem_allocation->dev;
> > > +
> > > +    err = drm_gpusvm_migrate_populate_ram_pfn(vas, page, npages, &mpages,
> > > +                          migrate.src, migrate.dst,
> > > +                          start);
> > > +    if (err)
> > > +        goto err_finalize;
> > > +
> > > +    err = drm_gpusvm_migrate_map_pages(dev, dma_addr, migrate.dst, npages,
> > > +                       DMA_FROM_DEVICE);
> > > +    if (err)
> > > +        goto err_finalize;
> > > +
> > > +    for (i = 0; i < npages; ++i)
> > > +        pages[i] = migrate_pfn_to_page(migrate.src[i]);
> > > +
> > > +    err = ops->copy_to_ram(pages, dma_addr, npages);
> > > +    if (err)
> > > +        goto err_finalize;
> > > +
> > > +err_finalize:
> > > +    if (err)
> > > +        drm_gpusvm_migration_unlock_put_pages(npages, migrate.dst);
> > > +    migrate_vma_pages(&migrate);
> > > +    migrate_vma_finalize(&migrate);
> > > +    drm_gpusvm_migrate_unmap_pages(dev, dma_addr, npages,
> > > +                       DMA_FROM_DEVICE);
> > 
> > clang for me is throwing:
> > 
> > drivers/gpu/drm/drm_gpusvm.c:2017:7: error: variable 'dev' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
> >   2017 |                 if (!page)
> >        |                     ^~~~~
> > drivers/gpu/drm/drm_gpusvm.c:2047:33: note: uninitialized use occurs here
> >   2047 |         drm_gpusvm_migrate_unmap_pages(dev, dma_addr, npages,
> >        |                                        ^~~
> > drivers/gpu/drm/drm_gpusvm.c:2017:3: note: remove the 'if' if its condition is always false
> >   2017 |                 if (!page)
> >        |                 ^~~~~~~~~~
> >   2018 |                         goto err_finalize;
> >        |                         ~~~~~~~~~~~~~~~~~
> > drivers/gpu/drm/drm_gpusvm.c:1966:20: note: initialize the variable 'dev' to silence this warning
> >   1966 |         struct device *dev;
> >        |                           ^
> >        |                            = NULL
> > 1 error generated.
> 
> I also reported this issue in the v3 patch, but it doesn't seem to have been
> fixed in v4 yet.
> 
> https://lore.kernel.org/dri-devel/0416fa97-1734-4565-a352-f045a6c0a15a@xxxxxxxxx/
> 

Yea. The uninitialized use is in fact harmless: if we take the goto
before 'dev' is assigned, no pages have been dma-mapped yet, so
drm_gpusvm_migrate_unmap_pages() is a NOP in that path, as I stated. I
was unaware a tool was complaining about this, though. I'll get this
fixed then.
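
Untested, but a minimal sketch of the fix I have in mind, on top of the
hunk above (initialize 'dev' and skip the unmap when it was never
assigned; treat this as a sketch, not the final patch):

-	struct device *dev;
+	struct device *dev = NULL;	/* set only once a migrating page is found */
@@
 	migrate_vma_pages(&migrate);
 	migrate_vma_finalize(&migrate);
-	drm_gpusvm_migrate_unmap_pages(dev, dma_addr, npages,
-				       DMA_FROM_DEVICE);
+	/* 'dev' is still NULL if we bailed out before dma-mapping any pages */
+	if (dev)
+		drm_gpusvm_migrate_unmap_pages(dev, dma_addr, npages,
+					       DMA_FROM_DEVICE);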

Thanks,
Matt

> Br,
> 
> G.G.
> 


