Re: [Nouveau] CUDA fixed VA allocations and sparse mappings

> 2. Use the GPUVM inside the GPU to access system memory. The
> limitation here is that the GPUVM uses 40-bit addresses, so the
> virtual address range must lie within the lower 40 bits of the CPU's
> address space. Access speed is limited by PCIe bandwidth. Another
> limitation is that the system memory pages must be pinned, as the GPU
> doesn't support page faults.
> 
> Reserving address space and creating the BO is one IOCTL, while
> mapping the BO into the address space is a second IOCTL. There are of
> course unmap and free IOCTLs as well. The separation is done for a
> couple of reasons:
> 
> 1. If the application knows that it wants to use only part of the
> memory area it allocated, then there is no point in pinning all the
> BOs. So, the application can map/unmap just part of the allocation.
> 
> 2. If the application knows that it has finished using the BOs for
> now but will use them again later, it can unmap them (so they become
> unpinned) without freeing them, so the memory stays reserved and its
> contents remain intact.
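
To make that split concrete, here is a minimal user-space sketch of the
reserve-and-create / map / unmap / free flow described above. The ioctl
names and argument structs (HYP_IOCTL_ALLOC, struct hyp_alloc_args, and
so on) are hypothetical placeholders, not any existing driver UAPI.

/*
 * Hypothetical sketch of the two-IOCTL split described above.
 * None of the names below are a real UAPI.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct hyp_alloc_args {
    uint64_t va;      /* requested fixed GPU VA (must fit in 40 bits) */
    uint64_t size;    /* size of the reserved range / BO */
    uint32_t handle;  /* out: BO handle */
};

struct hyp_map_args {
    uint32_t handle;  /* BO to (un)map */
    uint64_t offset;  /* offset into the BO */
    uint64_t size;    /* length; may cover only part of the BO */
};

#define HYP_IOCTL_ALLOC _IOWR('H', 0x00, struct hyp_alloc_args)
#define HYP_IOCTL_MAP   _IOW('H', 0x01, struct hyp_map_args)
#define HYP_IOCTL_UNMAP _IOW('H', 0x02, struct hyp_map_args)
#define HYP_IOCTL_FREE  _IOW('H', 0x03, struct hyp_alloc_args)

static int example(int fd)
{
    /* 1. Reserve the VA range and create the BO in one ioctl. */
    struct hyp_alloc_args alloc = {
        .va = 0x10000000ULL, .size = 16 << 20,
    };
    if (ioctl(fd, HYP_IOCTL_ALLOC, &alloc))
        return -1;

    /* 2. Map (and thereby pin) only the part that will be used. */
    struct hyp_map_args map = {
        .handle = alloc.handle, .offset = 0, .size = 4 << 20,
    };
    if (ioctl(fd, HYP_IOCTL_MAP, &map))
        return -1;

    /* 3. Done for now: unmap so the pages are no longer pinned, but
     *    keep the BO (and its contents) around for later reuse. */
    ioctl(fd, HYP_IOCTL_UNMAP, &map);

    /* 4. Finally release the BO and its VA reservation entirely. */
    ioctl(fd, HYP_IOCTL_FREE, &alloc);
    return 0;
}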

Yes, thanks Oded.  I think this is pretty much exactly how I imagine things
to work.

I'll post my code soon and see what you guys think.  There was some
misunderstanding on my part about how BOs work, so I need to rework some
stuff.