Hi,

On 20/04/16 12:17, ankitprasad.r.sharma@xxxxxxxxx wrote:
From: Ankitprasad Sharma <ankitprasad.r.sharma@xxxxxxxxx>
Patch is mostly Rodrigo's, right? So I assumed he approved the authorship transfer.
When constructing a batchbuffer, it is sometimes crucial to know the largest hole into which we can fit a fenceable buffer (for example when handling very large objects on gen2 and gen3). This depends on the fragmentation of pinned buffers inside the aperture, a question only the kernel can easily answer.

This patch extends the current DRM_I915_GEM_GET_APERTURE ioctl to include a couple of new fields in its reply to userspace - the total amount of space available in the mappable region of the aperture and also the single largest block available.

This is not quite enough for userspace to answer the question of whether this batch will fit, as fences are also required to meet severe alignment constraints within the batch. For this purpose, a third, conservative estimate of the largest fence available is also provided. For when userspace needs more than one batch, we also provide the cumulative space available for fences, so that it has some additional guidance on how much space it could allocate to fences. Conservatism still wins.

The patch also adds a debugfs file for convenient testing and reporting.

v2: The first object cannot end at offset 0, so we can use last==0 to detect the empty list.

v3: Expand all values to 64bit, just in case. Report total mappable aperture size for userspace that cannot easily determine it by inspecting the PCI device.

v4: (Rodrigo) Fixed rebase conflicts.

v5: Kept the limits in the get_aperture ioctl, and moved the changing numbers to debugfs. Addressed comments (Chris/Tvrtko).
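To make the "largest fence" conservatism concrete: on gen2/gen3 a fence must cover a power-of-two size at natural alignment, so a hole's usable fence size can be much smaller than the hole itself. A standalone sketch of that computation follows; min_size/max_size here are illustrative stand-ins for the per-gen bounds the patch derives from I830_FENCE_MAX_SIZE_VAL, and the real code is __fence_size() in the diff below.

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to a multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/* Largest power-of-two fence that fits naturally aligned in [start, end).
 * min_size/max_size model the per-gen fence limits (e.g. a 1 MiB minimum
 * on gen3 in the patch); illustrative only. */
static uint64_t fence_size(uint64_t start, uint64_t end,
			   uint64_t min_size, uint64_t max_size)
{
	uint64_t size = end - start;
	uint64_t best = 0;
	uint64_t next = min_size;

	if (max_size > size)
		max_size = size;

	/* Grow the candidate size while an aligned placement still fits. */
	while (next <= max_size) {
		uint64_t base = ALIGN_UP(start, next);
		if (base + next > end)
			break;
		best = next;
		next <<= 1;
	}
	return best;
}
```

Note how a 3 MiB hole starting at the 1 MiB mark only yields a 2 MiB fence: alignment eats the rest.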
It is confusing that you already posted a different v5 on 1st of July 2015.
Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 133 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem.c     |   1 +
 include/uapi/drm/i915_drm.h         |   5 ++
 3 files changed, 139 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 7f94c6a..d8d7994 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -524,6 +524,138 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
 	return 0;
 }
 
+static int vma_rank_by_ggtt(void *priv,
+			    struct list_head *A,
+			    struct list_head *B)
+{
+	struct i915_vma *a = list_entry(A, typeof(*a), exec_list);
+	struct i915_vma *b = list_entry(B, typeof(*b), exec_list);
+
+	return a->node.start - b->node.start;
+}
+
+static u32 __fence_size(struct drm_i915_private *dev_priv, u32 start, u32 end)
+{
+	u32 size = end - start;
+	u32 fence_size;
+
+	if (INTEL_INFO(dev_priv)->gen < 4) {
+		u32 fence_max;
+		u32 fence_next;
+
+		if (IS_GEN3(dev_priv)) {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 20;
+			fence_next = 1024*1024;
+		} else {
+			fence_max = I830_FENCE_MAX_SIZE_VAL << 19;
+			fence_next = 512*1024;
+		}
+
+		fence_max = min(fence_max, size);
+		fence_size = 0;
+		/* Find fence_size less than fence_max and power of 2 */
+		while (fence_next <= fence_max) {
+			u32 base = ALIGN(start, fence_next);
+			if (base + fence_next > end)
+				break;
+
+			fence_size = fence_next;
+			fence_next <<= 1;
+		}
+	} else {
+		fence_size = size;
+	}
+
+	return fence_size;
+}
+
+static int i915_gem_aperture_info(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = m->private;
+	struct drm_device *dev = node->minor->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_ggtt *ggtt = &dev_priv->ggtt;
+	struct drm_i915_gem_get_aperture arg;
+	struct i915_vma *vma;
+	struct list_head map_list;
+	const uint64_t map_limit = ggtt->mappable_end;
+	uint64_t map_space, map_largest, fence_space, fence_largest;
+	uint64_t last, size;
+	int ret;
+
+	INIT_LIST_HEAD(&map_list);
+
+	map_space = map_largest = 0;
+	fence_space = fence_largest = 0;
+
+	ret = i915_gem_get_aperture_ioctl(node->minor->dev, &arg, NULL);
+	if (ret)
+		return ret;
+
+	mutex_lock(&dev->struct_mutex);
+	list_for_each_entry(vma, &ggtt->base.active_list, vm_link)
+		if (vma->pin_count &&
+		    (vma->node.start + vma->node.size) <= map_limit)
+			list_add(&vma->exec_list, &map_list);
+	list_for_each_entry(vma, &ggtt->base.inactive_list, vm_link)
+		if (vma->pin_count &&
+		    (vma->node.start + vma->node.size) <= map_limit)
+			list_add(&vma->exec_list, &map_list);
You could squash the two with an outer loop over pointers to the active and inactive lists, like the pattern in the shrinker or similar. It wouldn't save many lines of code though, so I am not sure.
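Something like the following, modelled standalone with plain arrays instead of kernel list_heads just to show the shape of the squashed loop; all names here are illustrative, not from the patch.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the parts of i915_vma this filter looks at. */
struct vma { unsigned long start, size; int pinned; };

/* One outer loop walks an array of list pointers, so the filter body
 * (pinned and entirely below map_limit) is written only once. */
static size_t collect_mappable(const struct vma *const lists[],
			       const size_t counts[], size_t nlists,
			       unsigned long map_limit,
			       const struct vma *out[], size_t out_cap)
{
	size_t n = 0;

	for (size_t l = 0; l < nlists; l++)		/* outer loop over the lists */
		for (size_t i = 0; i < counts[l]; i++) {
			const struct vma *v = &lists[l][i];

			if (v->pinned &&
			    v->start + v->size <= map_limit &&
			    n < out_cap)
				out[n++] = v;
		}
	return n;
}
```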
+
+	last = 0;
+	list_sort(NULL, &map_list, vma_rank_by_ggtt);
+	while (!list_empty(&map_list)) {
+		vma = list_first_entry(&map_list, typeof(*vma), exec_list);
+		list_del_init(&vma->exec_list);
+
+		if (last == 0)
+			goto skip_first;
+
+		size = vma->node.start - last;
Maybe call it hole_size for readability? (It took me some time to figure it out.)
+		if (size > map_largest)
+			map_largest = size;
+		map_space += size;
+
+		size = __fence_size(dev_priv, last, vma->node.start);
+		if (size > fence_largest)
+			fence_largest = size;
+		fence_space += size;
+
+skip_first:
+		last = vma->node.start + vma->node.size;
+	}
+	if (last < map_limit) {
+		size = map_limit - last;
+		if (size > map_largest)
+			map_largest = size;
+		map_space += size;
+
+		size = __fence_size(dev_priv, last, map_limit);
+		if (size > fence_largest)
+			fence_largest = size;
+		fence_space += size;
+	}
+
+	mutex_unlock(&dev->struct_mutex);
+
+	seq_printf(m, "Total size of the GTT: %llu bytes\n",
+		   arg.aper_size);
+	seq_printf(m, "Available space in the GTT: %llu bytes\n",
+		   arg.aper_available_size);
+	seq_printf(m, "Total space in the mappable aperture: %llu bytes\n",
+		   arg.map_total_size);
+	seq_printf(m, "Available space in the mappable aperture: %llu bytes\n",
+		   map_space);
+	seq_printf(m, "Single largest space in the mappable aperture: %llu bytes\n",
+		   map_largest);
+	seq_printf(m, "Available space for fences: %llu bytes\n",
+		   fence_space);
+	seq_printf(m, "Single largest fence available: %llu bytes\n",
+		   fence_largest);
+
+	return 0;
+}
+
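For what it is worth, the hole accounting above boils down to a single scan over sorted ranges. A standalone model of just that scan (mirroring the last == 0 skip of the first entry, and with illustrative names):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A pinned allocation inside [0, limit); the array must be sorted by
 * start, as the patch guarantees via list_sort(). */
struct range { uint64_t start, size; };

/* Sum every gap between consecutive pinned ranges plus the tail gap,
 * remembering the largest. An empty array reports the whole limit,
 * matching the patch's last == 0 handling of an empty list. */
static void scan_holes(const struct range *r, size_t n, uint64_t limit,
		       uint64_t *space, uint64_t *largest)
{
	uint64_t last = 0;

	*space = *largest = 0;
	for (size_t i = 0; i < n; i++) {
		if (i > 0) {		/* mirrors the last == 0 skip */
			uint64_t hole = r[i].start - last;

			*space += hole;
			if (hole > *largest)
				*largest = hole;
		}
		last = r[i].start + r[i].size;
	}
	if (last < limit) {
		uint64_t hole = limit - last;

		*space += hole;
		if (hole > *largest)
			*largest = hole;
	}
}
```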
In general I find this a lot of code for a feature of questionable utility. As such I would prefer that someone clearly stated the need for this and explained how it really is useful - even though whatever number they get from this may be completely irrelevant by the time it is acted upon.
 static int i915_gem_gtt_info(struct seq_file *m, void *data)
 {
 	struct drm_info_node *node = m->private;
@@ -5354,6 +5486,7 @@ static int i915_debugfs_create(struct dentry *root,
 static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_capabilities", i915_capabilities, 0},
 	{"i915_gem_objects", i915_gem_object_info, 0},
+	{"i915_gem_aperture", i915_gem_aperture_info, 0},
 	{"i915_gem_gtt", i915_gem_gtt_info, 0},
 	{"i915_gem_pinned", i915_gem_gtt_info, 0, (void *) PINNED_LIST},
 	{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 45fd049..a1be35f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -165,6 +165,7 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 	args->aper_size = ggtt->base.total;
 	args->aper_available_size = args->aper_size - pinned;
+	args->map_total_size = ggtt->mappable_end;
 
 	return 0;
 }
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index dae02d8..8f38407 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1007,6 +1007,11 @@ struct drm_i915_gem_get_aperture {
 	 * bytes
 	 */
 	__u64 aper_available_size;
+
+	/**
+	 * Total space in the mappable region of the aperture, in bytes
+	 */
+	__u64 map_total_size;
 };
 
 struct drm_i915_get_pipe_from_crtc_id {
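On the ABI point: the new field is appended after aper_available_size, so the offsets of the existing fields do not move. A userspace view of the extended struct (with __u64 written as uint64_t; a real consumer would pass this to DRM_IOCTL_I915_GEM_GET_APERTURE on an open i915 fd):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mirror of the extended uapi struct from the patch. The
 * first two fields are the pre-existing layout; only map_total_size
 * is new, appended at the end. */
struct drm_i915_gem_get_aperture {
	uint64_t aper_size;		/* total GTT size, in bytes */
	uint64_t aper_available_size;	/* estimate of unpinned space, in bytes */
	uint64_t map_total_size;	/* new: total mappable aperture, in bytes */
};
```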
And I guess it is up to Daniel to approve any ABI extensions? (CCed) Does an open source user for it exist?

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx