On Mon, Mar 04, 2019 at 09:41:34AM +0000, Chris Wilson wrote:
> Quoting Andy Shevchenko (2019-03-04 09:29:08)
> > Switch to bitmap_zalloc() to show clearly what we are allocating.
> > Besides that it returns a pointer of bitmap type instead of an opaque void *.
>
> Which is confusing; since we explicitly want unsigned longs, not some
> amorphous bitmap type.

Why? You use it as a bitmap anyway, since, as you say below, you are
using bitops like set_bit()/clear_bit().

> >  	if (obj->bit_17 == NULL) {
> > -		obj->bit_17 = kcalloc(BITS_TO_LONGS(page_count),
> > -				      sizeof(long), GFP_KERNEL);
> > +		obj->bit_17 = bitmap_zalloc(page_count, GFP_KERNEL);
>
> That feels a bit more of an overreach, as we just use bitops and never
> actually use the bitmap iface.

The bitops are, luckily, part of the bitmap interface. The bitmap
interface has evolved specifically so that the existing bitops work on
it without any change.

> Simply because it kills BITS_TO_LONGS(), even though I do not see why
> the bitmap_[z]alloc and bitmap_free are not inlines...

Because of circular dependencies (hell) in the headers.

> And for this is not the overflow protection of kcalloc silly? We start
> with a large value, factorise it, then check that the two factors do not
> overflow? If it were to overflow, it would overflow in the
> BITS_TO_LONGS() itself.

This is just a simple API change without functional changes.

> Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>

Thank you.

-- 
With Best Regards,
Andy Shevchenko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx