On Thu, Aug 8, 2024 at 4:16 AM Dmitry Osipenko
<dmitry.osipenko@xxxxxxxxxxxxx> wrote:
>
> On 8/5/24 19:24, Rob Clark wrote:
> > On Wed, Jul 24, 2024 at 12:00 PM Dmitry Osipenko
> > <dmitry.osipenko@xxxxxxxxxxxxx> wrote:
> >>
> >> On 7/23/24 14:49, Sergio Lopez wrote:
> >>> There's an increasing number of machines supporting multiple page sizes
> >>> and on these machines the host and a guest can be running, each one,
> >>> with a different page size.
> >>>
> >>> For what pertains to virtio-gpu, this is not a problem if the page size
> >>> of the guest happens to be bigger or equal than the host, but will
> >>> potentially lead to failures in memory allocations and/or mappings
> >>> otherwise.
> >>
> >> Please describe concrete problem you're trying to solve. Guest memory
> >> allocation consists of guest pages, I don't see how knowledge of host
> >> page size helps anything in userspace.
> >>
> >> I suspect you want this for host blobs, but then it should be
> >> virtio_gpu_vram_create() that should use max(host_page_sz,
> >> guest_page_size), AFAICT. It's kernel who is responsible for memory
> >> management, userspace can't be trusted for doing that.
> >
> > fwiw virtgpu native context would require this as well, mesa would
> > need to know the host page size to correctly align GPU VA allocations
> > (which must be a multiple of the host page size).
> >
> > So a-b for adding this and exposing it to userspace.
>
> In general, GPU page size has no connection to the CPU page size. It
> happens that MSM driver uses same page size for both GPU and CPU. Likely
> you could configure a different GPU page size if you wanted. dGPUs would
> often use 64k pages.

The smmu actually supports various different page sizes (4k, 64k,
etc., I think up to 2g), and will try to map larger contiguous sets of
pages using larger page sizes to reduce TLB pressure.

This restriction about aligning to the host page size is because the
kernel expects allocations, and therefore (currently, pre-sparse) GPU
mappings, to be a multiple of the host page size.

As far as whether this should be something outside of virtio-gpu, this
does feel a bit specific to how GEM buffer allocations and host blob
resources work. Maybe other subsystems like media end up with similar
constraints for similar reasons, idk. But it at least feels like
something applicable to all/most virtgpu context types.

BR,
-R
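
For illustration, a minimal sketch of the alignment constraint described
above, assuming the guest userspace driver learns the host page size via
the proposed capability (the helper and parameter names here are
hypothetical, not taken from the patch): since mappings have to cover
whole host pages, both the allocation size and the GPU VA are rounded up
to the larger of the guest and host page sizes.

/*
 * Hypothetical sketch: align a GPU VA allocation once the host page
 * size is known to the guest.  A 4k buffer on a 4k guest with a 16k
 * host page size would end up occupying a 16k-aligned, 16k-sized span.
 */
#include <stdint.h>

static inline uint64_t
align_up(uint64_t val, uint64_t align)
{
	return (val + align - 1) & ~(align - 1);
}

static uint64_t
gpu_va_alloc_size(uint64_t size, uint64_t guest_page_size,
		  uint64_t host_page_size)
{
	uint64_t align = host_page_size > guest_page_size ?
			 host_page_size : guest_page_size;

	return align_up(size, align);
}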