On Wed, 2025-02-26 at 17:47 +0000, Matthew Auld wrote:
> Currently we just leave it uninitialised, which at first looks harmless;
> however, we also don't zero out the pfn array, and with pfn_flags_mask
> the idea is to be able to set individual flags for a given range of pfns
> or completely ignore them, outside of default_flags. So here we end up
> with pfn[i] & pfn_flags_mask, and if both are uninitialised we might get
> back an unexpected flags value, like asking for read-only with
> default_flags, but getting back write on top, leading to potentially
> bogus behaviour.
> 
> To fix this, ensure we zero the pfn_flags_mask, such that hmm only
> considers the default_flags and not also the initial pfn[i] value.
> 
> v2 (Thomas):
>   - Prefer proper initializer.
> 
> Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr")
> Signed-off-by: Matthew Auld <matthew.auld@xxxxxxxxx>
> Cc: Matthew Brost <matthew.brost@xxxxxxxxx>
> Cc: Thomas Hellström <thomas.hellstrom@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> # v6.10+
> ---
>  drivers/gpu/drm/xe/xe_hmm.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
> index 089834467880..2e4ae61567d8 100644
> --- a/drivers/gpu/drm/xe/xe_hmm.c
> +++ b/drivers/gpu/drm/xe/xe_hmm.c
> @@ -166,13 +166,20 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
>  {
>  	unsigned long timeout =
>  		jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
> -	unsigned long *pfns, flags = HMM_PFN_REQ_FAULT;
> +	unsigned long *pfns;
>  	struct xe_userptr *userptr;
>  	struct xe_vma *vma = &uvma->vma;
>  	u64 userptr_start = xe_vma_userptr(vma);
>  	u64 userptr_end = userptr_start + xe_vma_size(vma);
>  	struct xe_vm *vm = xe_vma_vm(vma);
> -	struct hmm_range hmm_range;
> +	struct hmm_range hmm_range = {
> +		.pfn_flags_mask = 0, /* ignore pfns */
> +		.default_flags = HMM_PFN_REQ_FAULT,
> +		.start = userptr_start,
> +		.end = userptr_end,
> +		.notifier = &uvma->userptr.notifier,
> +		.dev_private_owner = vm->xe,
> +	};
>  	bool write = !xe_vma_read_only(vma);
>  	unsigned long notifier_seq;
>  	u64 npages;
> @@ -199,19 +206,14 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
>  		return -ENOMEM;
>  
>  	if (write)
> -		flags |= HMM_PFN_REQ_WRITE;
> +		hmm_range.default_flags |= HMM_PFN_REQ_WRITE;
>  
>  	if (!mmget_not_zero(userptr->notifier.mm)) {
>  		ret = -EFAULT;
>  		goto free_pfns;
>  	}
>  
> -	hmm_range.default_flags = flags;
>  	hmm_range.hmm_pfns = pfns;
> -	hmm_range.notifier = &userptr->notifier;
> -	hmm_range.start = userptr_start;
> -	hmm_range.end = userptr_end;
> -	hmm_range.dev_private_owner = vm->xe;
>  
>  	while (true) {
>  		hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier);

Reviewed-by: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
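
For anyone wondering why an uninitialised mask can turn a read-only request into a write request: hmm_range_fault() effectively computes the per-pfn request flags as (hmm_pfns[i] & pfn_flags_mask) | default_flags (see hmm_pte_need_fault() in mm/hmm.c). The standalone userspace sketch below only mimics that combination to show the failure mode described in the commit message; the flag values and the helper name are made up for illustration and are not the kernel's real HMM_PFN_* encodings.

#include <stdio.h>

/* Illustrative stand-ins for the kernel's HMM request flags. The real
 * ones live in include/linux/hmm.h and use the top bits of unsigned
 * long; smaller bits are used here to keep the sketch portable. */
#define REQ_FAULT (1UL << 30)
#define REQ_WRITE (1UL << 29)

/* Roughly what hmm_pte_need_fault() does with the caller's inputs:
 * the per-pfn entry is masked by pfn_flags_mask and then OR'ed with
 * default_flags. */
static unsigned long effective_req_flags(unsigned long pfn_entry,
					 unsigned long pfn_flags_mask,
					 unsigned long default_flags)
{
	return (pfn_entry & pfn_flags_mask) | default_flags;
}

int main(void)
{
	unsigned long default_flags = REQ_FAULT;       /* read-only request */
	unsigned long stale_pfn = REQ_WRITE | 0xabc;   /* garbage left in pfns[i] */

	/* Stand-in for an uninitialised mask: whatever was on the stack. */
	unsigned long uninit_mask = ~0UL;
	/* Zeroed mask, as in the fix: only default_flags are considered. */
	unsigned long zero_mask = 0;

	printf("uninitialised mask -> write requested: %s\n",
	       (effective_req_flags(stale_pfn, uninit_mask, default_flags) & REQ_WRITE)
	       ? "yes (bogus)" : "no");
	printf("zeroed mask        -> write requested: %s\n",
	       (effective_req_flags(stale_pfn, zero_mask, default_flags) & REQ_WRITE)
	       ? "yes" : "no");
	return 0;
}

With the mask zeroed only default_flags survive, which is exactly what the .pfn_flags_mask = 0 initializer in the patch guarantees.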