On Mon, 2024-04-29 at 16:03 +1000, Dave Airlie wrote:
> > Currently, this can result in runtime PM issues on systems where
> > memory
> > 
> > Luckily, we don't actually need to allocate coherent memory for the
> > page table thanks to being able to pass the GPU a radix3 page table
> > for suspend/resume data. So, let's rewrite nvkm_gsp_radix3_sg() to
> > use the sg allocator for level 2. We continue using coherent
> > allocations for lvl0 and 1, since they only take a single page.
> > 
> > Signed-off-by: Lyude Paul <lyude@xxxxxxxxxx>
> > Cc: stable@xxxxxxxxxxxxxxx
> > ---
> >  .../gpu/drm/nouveau/include/nvkm/subdev/gsp.h |  4 +-
> >  .../gpu/drm/nouveau/nvkm/subdev/gsp/r535.c    | 71 ++++++++++++-------
> >  2 files changed, 47 insertions(+), 28 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
> > index 6f5d376d8fcc1..a11d16a16c3b2 100644
> > --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
> > +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
> > @@ -15,7 +15,9 @@ struct nvkm_gsp_mem {
> >  };
> >  
> >  struct nvkm_gsp_radix3 {
> > -	struct nvkm_gsp_mem mem[3];
> > +	struct nvkm_gsp_mem lvl0;
> > +	struct nvkm_gsp_mem lvl1;
> > +	struct sg_table lvl2;
> 
> This looks great, could we go a step further and combine lvl0 and
> lvl1 into a 2 page allocation, I thought we could combine lvl0/lvl1
> into a 2 page alloc, but that actually might be a bad idea under
> memory pressure.

I'm not sure I understand :P - do we want to go for that or not?

TBH - I'm not sure there's any hardware reason we couldn't do the whole
radix3 table as a single sg allocation with two additional memory pages
added on for levels 0 and 1. Since both of those can only ever be a
single page anyway, it probably wouldn't make much of a difference.

The main reason I didn't end up doing that is that it would make the
codepath in nvkm_gsp_radix3_sg() a lot uglier. We need the virtual
addresses of the first/only pages of levels 0-2 in order to populate
them, and we also need the DMA addresses of levels 1-2. As far as I can
tell, there isn't an iterator that lets you step through both the DMA
and virtual addresses at the same time - and even if there were, we'd
have to keep track of when we reach the end of a page within the loop,
and make sure that on the first iteration we always set pte to the
address of the third sg page. IMO, scatterlist could definitely benefit
from having an iterator that does both, and that can be stepped through
both inside and outside of the for_each_* macros (like Iterator in
Rust).

So - it's definitely possible, but considering:

 * nvkm_gsp_mem isn't a very big struct.
 * We're only allocating a single page each for levels 0 and 1, so at
   least according to the advice I got from Sima, that should be a safe
   amount to allocate coherently even under memory pressure.
 * Code-wise, it's just a lot easier to have direct access to the
   DMA/virtual addresses for the first two levels.

I decided to stay with nvkm_gsp_mem_ctor() for the first two pages and
just use nvkm_gsp_sg() for the rest. I can definitely convert the whole
thing to nvkm_gsp_sg() if we really want, but I don't think it would
give us much benefit.

I'll send out a new version of this patch without these changes, along
with a fix for one of the issues with it that I already mentioned to
Timur. Just let me know what you end up deciding, and I can revise the
patch if you want.

> 
> Dave.
> 

-- 
Cheers,
 Lyude Paul (she/her)
 Software Engineer at Red Hat
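
P.S. In case it helps make the "uglier codepath" concern concrete,
below is a rough sketch of the lockstep iteration we'd need if the
whole table came from a single sg allocation - i.e. manually advancing
a CPU-side page iterator and a DMA-side page iterator together, since
as far as I can tell no single scatterlist iterator yields both
addresses. To be clear, radix3_fill_from_sgt() and its body are
hypothetical; only the scatterlist iterator helpers are real:

#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Sketch only: walk each PAGE_SIZE chunk of an already-mapped sg_table
 * and recover both its kernel virtual address and its DMA address by
 * stepping two page-granular iterators in lockstep.
 */
static void radix3_fill_from_sgt(struct sg_table *sgt)
{
	struct sg_page_iter viter;	/* CPU pages, walks orig_nents */
	struct sg_dma_page_iter diter;	/* DMA pages, walks nents */

	__sg_page_iter_start(&viter, sgt->sgl, sgt->orig_nents, 0);
	__sg_page_iter_start(&diter.base, sgt->sgl, sgt->nents, 0);

	while (__sg_page_iter_next(&viter) &&
	       __sg_page_iter_dma_next(&diter)) {
		void *virt = page_address(sg_page_iter_page(&viter));
		dma_addr_t dma = sg_page_iter_dma_address(&diter);

		/* ...point the previous level's PTE at dma, then fill
		 * this page's entries through virt - while also
		 * special-casing the first two pages, since those
		 * would be lvl0/lvl1 rather than lvl2...
		 */
	}
}

Even this stripped-down version needs two cursors plus the lvl0/lvl1
special case, which is exactly the ugliness I wanted to keep out of
nvkm_gsp_radix3_sg().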