It seems that killing an application while faults are occurring (particularly
with a GPU in FPGA at a whopping 40MHz) can lead to handling a lingering page
fault after all the address space contexts have already been freed. In this
situation, the LRU list is empty, so addr_to_drm_mm_node() ends up
dereferencing the list head as if it were a struct panfrost_mmu entry; the
subsequent "mmu->as" read thus lands in the pfdev->alloc_mask bitmap, which
is also empty and so reads back as 0. Given that the fault has a high
likelihood of being in AS0, the bogus entry appears to match and hilarity
ensues.

Sadly, the cleanest solution seems to involve another goto. Oh well, at
least it's robust...

Signed-off-by: Robin Murphy <robin.murphy@xxxxxxx>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index e61984e26e0a..508765f80cfe 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -413,11 +413,11 @@ addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
 	spin_lock(&pfdev->as_lock);
 	list_for_each_entry(mmu, &pfdev->as_lru_list, list) {
 		if (as == mmu->as)
-			break;
+			goto found_mmu;
 	}
-	if (as != mmu->as)
-		goto out;
+	goto out;
 
+found_mmu:
 	priv = container_of(mmu, struct panfrost_file_priv, mmu);
 
 	spin_lock(&priv->mm_lock);
--
2.21.0.dirty
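
For anyone curious why the empty-list case goes so wrong: list_for_each_entry()
never leaves its cursor NULL. After a complete (or empty) traversal, the cursor
is container_of() of the list head itself, i.e. a pointer back into whatever
structure contains the head. The following userspace sketch reproduces the
failure mode; the list macros are minimal reimplementations and the structs
are hypothetical stand-ins, not the real panfrost definitions.

/*
 * Sketch of the failure mode fixed above. Builds with GCC or Clang
 * (uses the typeof extension, as the kernel macros do).
 */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)				\
	for (pos = container_of((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.next, typeof(*pos), member))

struct fake_mmu {			/* stand-in for struct panfrost_mmu */
	int as;
	struct list_head list;
};

struct fake_device {			/* stand-in for struct panfrost_device */
	unsigned long alloc_mask;	/* empty bitmap, reads back as 0 */
	struct list_head as_lru_list;	/* empty LRU list */
};

int main(void)
{
	struct fake_device pfdev = {
		.alloc_mask = 0,
		.as_lru_list = { &pfdev.as_lru_list, &pfdev.as_lru_list },
	};
	struct fake_mmu *mmu;
	int as = 0;			/* lingering fault reported in AS0 */

	list_for_each_entry(mmu, &pfdev.as_lru_list, list)
		;			/* empty list: body never executes */

	/*
	 * mmu is now container_of(&pfdev.as_lru_list, ...), a bogus
	 * pointer back into fake_device, so mmu->as reads bits of
	 * alloc_mask. With the bitmap empty that value is 0, so the
	 * old "if (as != mmu->as) goto out;" check wrongly matches.
	 */
	if (as != mmu->as)
		return 0;		/* never taken here */

	printf("bogus mmu %p passed the check, mmu->as = %d\n",
	       (void *)mmu, mmu->as);
	return 0;
}

Since the cursor can never be tested for validity after the loop, the only
reliable "found" signal is transferring control out of the loop body itself,
hence the extra goto in the patch.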