Quoting Matthew Auld (2020-04-29 17:11:38)
> On 28/04/2020 22:55, Chris Wilson wrote:
> > We need to keep the default context state around to instantiate new
> > contexts (aka golden rendercontext), and we also keep it pinned while
> > the engine is active so that we can quickly reset a hanging context.
> > However, the default contexts are large enough to merit keeping in
> > swappable memory as opposed to kernel memory, so we store them inside
> > shmemfs. Currently, we use the normal GEM objects to create the default
> > context image, but we can throw away all but the shmemfs file.
> >
> > This greatly simplifies the tricky power management code which wants to
> > run underneath the normal GT locking, and we definitely do not want to
> > use any high level objects that may appear to recurse back into the GT.
> > Though perhaps the primary advantage of the complex GEM object is that
> > we aggressively cache the mapping, but here we are recreating the
> > vm_area every time we unpark. At the worst, we add a lightweight
> > cache, but first find a microbenchmark that is impacted.
> >
> > Having started to create some utility functions to make working with
> > shmemfs objects easier, we can start putting them to wider use, where
> > GEM objects are overkill, such as storing persistent error state.
>
> Is there any point in having the default state in device local-memory,
> and if so does this change the story at all? I'm guessing not...

We want it in CPU memory, unless you plan on blitting the default context
image? Otherwise we have to do UC reads for each new context. And since
the default state is not required for running the context, we'd prefer
not to waste precious device memory on it, if we can avoid doing so.

At worst, we could reuse the kernel_context image for the default state.
Hmm. Well, we'd have to give up poisoning it, and do some recovery after
[runtime] suspend.
I'd rather keep a copy of the default state in swap, and push it to lmem
on demand :)
-Chris
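
For anyone unfamiliar with the pattern being discussed, a user-space
analogue of "store the golden state once in a swappable anonymous file,
recreate a mapping on demand" can be sketched with memfd_create/mmap.
This is only an illustration of the shmemfs idea, not the kernel-internal
i915 helpers, and the function names here are made up for the sketch:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create an anonymous shmem-backed file holding a copy of @data.
 * Unlike a kmalloc/vmalloc copy, these pages are pageable and can
 * be pushed out to swap while the engine is parked. */
int make_default_state(const void *data, size_t len)
{
	int fd = memfd_create("default-state", MFD_CLOEXEC);

	if (fd < 0)
		return -1;
	if (ftruncate(fd, len) < 0 ||
	    write(fd, data, len) != (ssize_t)len) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Recreate a read-only mapping of the stored image on demand;
 * the cost of redoing this on every unpark is the trade-off
 * discussed above. Returns NULL on failure. */
void *map_default_state(int fd, size_t len)
{
	void *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

	return map == MAP_FAILED ? NULL : map;
}
```

The mapping is torn down again with munmap() when no longer needed,
while the fd keeps the (swappable) backing store alive.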