On Wed, Apr 06, 2016 at 10:30:15AM +0100, Tvrtko Ursulin wrote:
> 
> On 05/04/16 13:57, Chris Wilson wrote:
> >We now have two implementations for vmapping a whole object, one for
> >dma-buf and one for the ringbuffer. If we couple the vmapping into the
> >obj->pages lifetime, then we can reuse an obj->vmapping for both and at
> >the same time couple it into the shrinker.
> 
> I suppose Dave could respin the "vmap_range" helper on top of this
> series to further consolidate cmd parser and i915_gem_object_pin_vmap.

Why though? The plan (see the earlier patches) is to completely replace
the cmdparser's current pessimism with this.

> >@@ -2400,6 +2405,46 @@ i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
> > 	return 0;
> > }
> >
> >+void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
> 
> Kerneldoc would be cool, if for nothing then for the return value.
> 
> >+{
> >+	int ret;
> >+
> 
> lockdep_assert_held maybe?

Urm, see long term plans ;) Yes, it applies now. I have in mind going
through adding the lockdep_assert_held() precisely because I have some
patches to remove the locking requirements around some core functions.

> >+	ret = i915_gem_object_get_pages(obj);
> >+	if (ret)
> >+		return ERR_PTR(ret);
> >+
> >+	i915_gem_object_pin_pages(obj);
> >+
> >+	if (obj->vmapping == NULL) {
> >+		struct sg_page_iter sg_iter;
> >+		struct page **pages;
> >+		int n;
> >+
> >+		n = obj->base.size >> PAGE_SHIFT;
> >+		pages = kmalloc(n*sizeof(*pages), GFP_TEMPORARY | __GFP_NOWARN);
> >+		if (pages == NULL)
> >+			pages = drm_malloc_ab(n, sizeof(*pages));
> >+		if (pages != NULL) {
> >+			n = 0;
> >+			for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
> >+				pages[n++] = sg_page_iter_page(&sg_iter);
> >+
> >+			obj->vmapping = vmap(pages, n, 0, PAGE_KERNEL);
> >+			if (obj->vmapping == NULL) {
> >+				i915_gem_shrink_all(to_i915(obj->base.dev));
> 
> Won't the shrinker already run via the new notifier? Why call it
> again and for all objects this time?

Left over, forgot to update this chunk. The shrinker will be run both
for any kmalloc failures inside vmap and via the arena notifier, so we
can indeed remove this manual shrinking.

If we wanted to, we could do something like get_pages_gtt where we
shrink ourselves before we allow the notifier to run. I'm going to say
that is overkill here...

> Also, act on the return value before retrying vmap?

No. The return value doesn't give a good indication as to whether the
next attempt will succeed or not (and we care more about the failure to
allocate than the transitory failure to shrink).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
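
For reference, a minimal sketch of how a caller such as the ringbuffer
or dma-buf path mentioned in the cover text might consume the proposed
helper. The example_write_object() wrapper and the
i915_gem_object_unpin_vmap() counterpart are illustrative assumptions
and are not taken from the quoted hunk; the locking comment only
reflects the struct_mutex discussion above.

/*
 * Illustrative caller, assuming the pin/unpin pairing implied by the
 * series (relies on i915_drv.h and the usual drm/gem headers).
 */
static int example_write_object(struct drm_i915_gem_object *obj,
				const void *src, size_t len)
{
	void *vaddr;

	/* Per the discussion above, struct_mutex is currently expected
	 * to be held around get_pages()/pin_pages(). */
	vaddr = i915_gem_object_pin_vmap(obj);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);	/* ERR_PTR() from get_pages or the vmap failure */

	memcpy(vaddr, src, len);

	/* Assumed counterpart: drops the pages pin; the cached
	 * obj->vmapping is kept for the obj->pages lifetime. */
	i915_gem_object_unpin_vmap(obj);
	return 0;
}

The point of coupling the vmapping to obj->pages is that the vmap() is
built once and cached in obj->vmapping, so repeated pin/unpin cycles
only adjust the pages pin count until the object's pages are released.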