On Thu, Jan 24, 2019 at 02:09:12PM +0200, Joonas Lahtinen wrote:
> Hi Jerome,
> 
> This patch seems to have plenty of Cc:s, but none of the right ones :)

So sorry, I am bad with git commands.

> For further iterations, I guess you could use the git option --cc to make
> sure everyone gets the whole series, and still keep the Cc:s in the
> patches themselves relevant to subsystems.

Will do.

> This doesn't seem to be on top of drm-tip, but on top of your previous
> patches(?) that I had some comments about. Could you take a moment to
> first address the couple of questions I had, before proceeding to discuss
> what is built on top of that base.

It is on top of Linus's tree, so roughly ~rc3; it does not depend on any
of the previous patches I posted. I still intend to propose removing GUP
from i915 once I get around to implementing the equivalent of GUP_fast
for HMM, along with the other bonus cookies that come with it.

The plan is that once I have all the mm bits properly upstream, I can
propose patches to individual drivers against the proper driver trees,
ie following the rules of each individual device driver sub-system, and
Cc only the people there to avoid spamming the mm folks :)

> 
> My reply's Message-ID is:
> 154289518994.19402.3481838548028068213@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> 
> Regards, Joonas
> 
> PS. Please keep me Cc:d in the following patches, I'm keen on
> understanding the motive and benefits.
> 
> Quoting jglisse@xxxxxxxxxx (2019-01-24 00:23:14)
> > From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > 
> > When a range of virtual addresses is updated to read only and the
> > corresponding userptr object is already read only, there is nothing
> > to do. Optimize this case out.
> > 
> > Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > Cc: Christian König <christian.koenig@xxxxxxx>
> > Cc: Jan Kara <jack@xxxxxxx>
> > Cc: Felix Kuehling <Felix.Kuehling@xxxxxxx>
> > Cc: Jason Gunthorpe <jgg@xxxxxxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Cc: Matthew Wilcox <mawilcox@xxxxxxxxxxxxx>
> > Cc: Ross Zwisler <zwisler@xxxxxxxxxx>
> > Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> > Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
> > Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> > Cc: kvm@xxxxxxxxxxxxxxx
> > Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx
> > Cc: linux-rdma@xxxxxxxxxxxxxxx
> > Cc: linux-fsdevel@xxxxxxxxxxxxxxx
> > Cc: Arnd Bergmann <arnd@xxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_gem_userptr.c | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > index 9558582c105e..23330ac3d7ea 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > @@ -59,6 +59,7 @@ struct i915_mmu_object {
> >  	struct interval_tree_node it;
> >  	struct list_head link;
> >  	struct work_struct work;
> > +	bool read_only;
> >  	bool attached;
> >  };
> >  
> > @@ -119,6 +120,7 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> >  		container_of(_mn, struct i915_mmu_notifier, mn);
> >  	struct i915_mmu_object *mo;
> >  	struct interval_tree_node *it;
> > +	bool update_to_read_only;
> >  	LIST_HEAD(cancelled);
> >  	unsigned long end;
> >  
> > @@ -128,6 +130,8 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> >  	/* interval ranges are inclusive, but invalidate range is exclusive */
> >  	end = range->end - 1;
> >  
> > +	update_to_read_only = mmu_notifier_range_update_to_read_only(range);
> > +
> >  	spin_lock(&mn->lock);
> >  	it = interval_tree_iter_first(&mn->objects, range->start, end);
> >  	while (it) {
> > @@ -145,6 +149,17 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> >  		 * object if it is not in the process of being destroyed.
> >  		 */
> >  		mo = container_of(it, struct i915_mmu_object, it);
> > +
> > +		/*
> > +		 * If it is already read only and we are updating to
> > +		 * read only then we do not need to change anything.
> > +		 * So save time and skip this one.
> > +		 */
> > +		if (update_to_read_only && mo->read_only) {
> > +			it = interval_tree_iter_next(it, range->start, end);
> > +			continue;
> > +		}
> > +
> >  		if (kref_get_unless_zero(&mo->obj->base.refcount))
> >  			queue_work(mn->wq, &mo->work);
> >  
> > @@ -270,6 +285,7 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
> >  	mo->mn = mn;
> >  	mo->obj = obj;
> >  	mo->it.start = obj->userptr.ptr;
> > +	mo->read_only = i915_gem_object_is_readonly(obj);
> >  	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
> >  	INIT_WORK(&mo->work, cancel_userptr);
> >  
> > -- 
> > 2.17.2
> > 
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
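For context, the skip added above keys off mmu_notifier_range_update_to_read_only(),
which comes from the prerequisite mmu notifier series rather than from this patch.
Below is a minimal sketch of what such a helper could look like. The vma and event
fields of struct mmu_notifier_range, the MMU_NOTIFY_PROTECTION_VMA event, and the
exact vm_flags check are assumptions based on that series, not the final upstream
code.

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Sketch only: report whether an invalidation merely downgrades the
 * range to read only (e.g. mprotect() dropping write permission on a
 * VMA), so callers like the i915 userptr notifier can skip objects
 * that are already read only.
 */
bool
mmu_notifier_range_update_to_read_only(const struct mmu_notifier_range *range)
{
	/* Only a VMA protection change can be a pure write -> read downgrade. */
	if (!range->vma || range->event != MMU_NOTIFY_PROTECTION_VMA)
		return false;

	/*
	 * Assumes vma->vm_flags already reflects the new protection when
	 * the notifier fires: read access kept, write access dropped.
	 */
	return (range->vma->vm_flags & (VM_READ | VM_WRITE)) == VM_READ;
}

With a helper along those lines returning true, userptr objects that are already
read only keep their pages pinned across the write-protect instead of going through
the cancel_userptr() and re-pin cycle.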