On Fri, Oct 10 2014, Weijie Yang wrote:
> The current cma bitmap aligned mask compute way is incorrect, it could
> cause an unexpected align when using cma_alloc() if wanted align order
> is bigger than cma->order_per_bit.
>
> Take kvm for example (PAGE_SHIFT = 12), kvm_cma->order_per_bit is set to 6,
> when kvm_alloc_rma() tries to alloc kvm_rma_pages, it will input 15 as
> expected align value, after using current computing, however, we get 0 as
> cma bitmap aligned mask other than 511.
>
> This patch fixes the cma bitmap aligned mask compute way.
>
> Signed-off-by: Weijie Yang <weijie.yang@xxxxxxxxxxx>

Acked-by: Michal Nazarewicz <mina86@xxxxxxxxxx>

Should that also get:

Cc: <stable@xxxxxxxxxxxxxxx>  # v3.17

> ---
>  mm/cma.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index c17751c..f6207ef 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
>
>  static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
>  {
> -	return (1UL << (align_order >> cma->order_per_bit)) - 1;
> +	if (align_order <= cma->order_per_bit)
> +		return 0;
> +	else
> +		return (1UL << (align_order - cma->order_per_bit)) - 1;
>  }
>
>  static unsigned long cma_bitmap_maxno(struct cma *cma)
> --
> 1.7.10.4

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@xxxxxxxxxx>--<xmpp:mina86@xxxxxxxxxx>--ooO--(_)--Ooo--
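
[Editor's note: for reference, a minimal stand-alone sketch, not the kernel source itself, showing why the old expression misbehaves for the kvm case in the commit message (order_per_bit = 6, align_order = 15). The helper names mask_old/mask_new are hypothetical and exist only for this illustration.]

    #include <stdio.h>

    /* Old (incorrect): shifts align_order right by order_per_bit. */
    static unsigned long mask_old(int order_per_bit, int align_order)
    {
            return (1UL << (align_order >> order_per_bit)) - 1;
    }

    /* New (fixed): subtracts order_per_bit, clamping small orders to 0. */
    static unsigned long mask_new(int order_per_bit, int align_order)
    {
            if (align_order <= order_per_bit)
                    return 0;
            return (1UL << (align_order - order_per_bit)) - 1;
    }

    int main(void)
    {
            int order_per_bit = 6, align_order = 15;

            /* Old: (1 << (15 >> 6)) - 1 = (1 << 0) - 1 = 0 */
            printf("old mask: %lu\n", mask_old(order_per_bit, align_order));
            /* New: (1 << (15 - 6)) - 1 = 511 */
            printf("new mask: %lu\n", mask_new(order_per_bit, align_order));
            return 0;
    }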