On Wed, Dec 09, 2015 at 05:25:19PM +0000, Tvrtko Ursulin wrote:
> 
> Hi,
> 
> On 09/12/15 12:46, ankitprasad.r.sharma@xxxxxxxxx wrote:
> > From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >
> > Ville reminded us that stolen memory is not preserved across
> > hibernation, and a result of this was that context objects now being
> > allocated from stolen were being corrupted on S4 and promptly hanging
> > the GPU on resume.
> >
> > We want to utilise stolen for as much as possible (nothing else will
> > use that wasted memory otherwise), so we need a strategy for handling
> > general objects allocated from stolen across hibernation. A simple
> > solution is to do a CPU copy through the GTT of the stolen object
> > into a fresh shmemfs backing store and thenceforth treat it as a
> > normal object. This can be refined in future either to use a GPU copy
> > to avoid the slow uncached reads (though it's hibernation!) or to
> > recreate stolen objects upon resume/first-use. For now, a simple
> > approach should suffice for testing the object migration.
> >
> > v2:
> > Swap the PTEs for pinned bindings over to the shmemfs. This adds a
> > complicated dance, but is required as many stolen objects are likely
> > to be pinned for use by the hardware. Swapping the PTEs should not
> > result in externally visible behaviour, as each PTE update should be
> > atomic and the two pages identical. (danvet)
> >
> > Safe-by-default, or the principle of least surprise: we need a new
> > flag to mark objects that we can wilfully discard and recreate
> > across hibernation. (danvet)
> >
> > Just use the global_list rather than invent a new stolen_list. This
> > is the slow-path hibernate, so adding a new list and the associated
> > complexity isn't worth it.
> >
> > v3: Rebased on drm-intel-nightly (Ankit)
> >
> > v4: Use insert_page to map stolen memory backed pages for migration
> > to shmem (Chris)
> >
> > v5: Acquire mutex lock while copying stolen buffer objects to shmem
> > (Chris)
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.c         |  17 ++-
> >  drivers/gpu/drm/i915/i915_drv.h         |   7 +
> >  drivers/gpu/drm/i915/i915_gem.c         | 232 ++++++++++++++++++++++++++++++--
> >  drivers/gpu/drm/i915/intel_display.c    |   3 +
> >  drivers/gpu/drm/i915/intel_fbdev.c      |   6 +
> >  drivers/gpu/drm/i915/intel_pm.c         |   2 +
> >  drivers/gpu/drm/i915/intel_ringbuffer.c |   6 +
> >  7 files changed, 261 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> > index 9f55209..2bb9e9e 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.c
> > +++ b/drivers/gpu/drm/i915/i915_drv.c
> > @@ -1036,6 +1036,21 @@ static int i915_pm_suspend(struct device *dev)
> >  	return i915_drm_suspend(drm_dev);
> >  }
> >
> > +static int i915_pm_freeze(struct device *dev)
> > +{
> > +	int ret;
> > +
> > +	ret = i915_gem_freeze(pci_get_drvdata(to_pci_dev(dev)));
> > +	if (ret)
> > +		return ret;
> 
> Can we distinguish between S3 and S4 if the stolen corruption only
> happens in S4? Not to spend all the extra effort for nothing in S3? Or
> maybe this is not even called for S3?

The hook is only for hibernation, as explained in the nice comment Imre
added next to the function pointer assignments. It actually gets called
for both the freeze and quiesce transitions; we should only need it for
freeze. I'm not sure if the PMSG_ thing gets stored anywhere that we
could look it up and skip this for quiesce. And not sure if anyone
really cares that much. I don't, since I don't even load i915 for the
loader kernel.
https://bugs.freedesktop.org/show_bug.cgi?id=91295 actually says we
might need this for S3 too if Rapid Start is enabled. I have a laptop
that supports it, but I don't have a clue what kind of partition it
would need. Not that I would be willing to repartition the disk anyway.

Judging by drivers/platform/x86/intel-rst.c, maybe we could just look
for the INT3392 ACPI device, or something?

-- 
Ville Syrjälä
Intel OTC
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx