On Fri, 19 Oct 2012 18:03:07 +0100
Chris Wilson <chris at chris-wilson.co.uk> wrote:

> To be used later by i915 to preallocate exact blocks of space from the
> range manager.
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Dave Airlie <airlied at redhat.com>
> Cc: dri-devel at lists.freedestkop.org

With bikesheds below addressed or not:
Reviewed-by: Ben Widawsky <ben at bwidawsk.net>

> ---
>  drivers/gpu/drm/drm_mm.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++
>  include/drm/drm_mm.h     |  4 ++++
>  2 files changed, 53 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
> index 9bb82f7..5db8c20 100644
> --- a/drivers/gpu/drm/drm_mm.c
> +++ b/drivers/gpu/drm/drm_mm.c
> @@ -161,6 +161,55 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
>  	}
>  }
>
> +struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
> +					unsigned long start,
> +					unsigned long size,
> +					bool atomic)
> +{

<bikeshed>
You could add a best_fit field like some of the other interfaces which
will try to find a start == hole_start and end == hole_end. I'd guess
this interface won't be called enough to worry about fragmentation too
much though.
</bikeshed>

> +	struct drm_mm_node *hole, *node;
> +	unsigned long end = start + size;
> +
> +	list_for_each_entry(hole, &mm->hole_stack, hole_stack) {
> +		unsigned long hole_start;
> +		unsigned long hole_end;
> +
> +		BUG_ON(!hole->hole_follows);

<bikeshed>
This isn't bad, but I don't think sticking the bug here is all that
helpful in finding where the bug occurred, since it wasn't here. WARN is
perhaps more useful, but equally unhelpful IMO.
</bikeshed>

> +		hole_start = drm_mm_hole_node_start(hole);
> +		hole_end = drm_mm_hole_node_end(hole);
> +
> +		if (hole_start > start || hole_end < end)
> +			continue;
> +
> +		node = drm_mm_kmalloc(mm, atomic);
> +		if (unlikely(node == NULL))
> +			return NULL;
> +
> +		node->start = start;
> +		node->size = size;
> +		node->mm = mm;
> +		node->allocated = 1;
> +
> +		INIT_LIST_HEAD(&node->hole_stack);
> +		list_add(&node->node_list, &hole->node_list);
> +
> +		if (start == hole_start) {
> +			hole->hole_follows = 0;
> +			list_del_init(&hole->hole_stack);
> +		}
> +
> +		node->hole_follows = 0;
> +		if (end != hole_end) {
> +			list_add(&node->hole_stack, &mm->hole_stack);
> +			node->hole_follows = 1;
> +		}
> +
> +		return node;
> +	}
> +
> +	return NULL;
> +}
> +EXPORT_SYMBOL(drm_mm_create_block);
> +
>  struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node,
>  					     unsigned long size,
>  					     unsigned alignment,
> diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
> index 06d7f79..4020f96 100644
> --- a/include/drm/drm_mm.h
> +++ b/include/drm/drm_mm.h
> @@ -102,6 +102,10 @@ static inline bool drm_mm_initialized(struct drm_mm *mm)
>  /*
>   * Basic range manager support (drm_mm.c)
>   */
> +extern struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
> +					       unsigned long start,
> +					       unsigned long size,
> +					       bool atomic);
>  extern struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *node,
> 						    unsigned long size,
> 						    unsigned alignment,

-- 
Ben Widawsky, Intel Open Source Technology Center