The current cma bitmap aligned mask computation is incorrect: it can cause an
unexpected alignment when using cma_alloc() if the requested align order is
larger than cma->order_per_bit.

Take kvm as an example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to 6.
When kvm_alloc_rma() tries to allocate kvm_rma_pages, it passes 15 as the
expected align value. With the current computation, however, we get 0 as the
cma bitmap aligned mask instead of 511.

This patch fixes the cma bitmap aligned mask computation.

Signed-off-by: Weijie Yang <weijie.yang@xxxxxxxxxxx>
---
 mm/cma.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index c17751c..f6207ef 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
 
 static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 {
-	return (1UL << (align_order >> cma->order_per_bit)) - 1;
+	if (align_order <= cma->order_per_bit)
+		return 0;
+	else
+		return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
-- 
1.7.10.4
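
For reference (not part of the patch), here is a minimal userspace C sketch
that reproduces the arithmetic from the changelog with the kvm numbers
(order_per_bit = 6, align_order = 15). The helper names mask_old() and
mask_new() are made up for illustration; they mirror the removed and added
expressions respectively.

#include <stdio.h>

/* Original (broken) computation: shifts align_order right by order_per_bit. */
static unsigned long mask_old(int order_per_bit, int align_order)
{
	return (1UL << (align_order >> order_per_bit)) - 1;
}

/* Fixed computation: subtracts order_per_bit from align_order. */
static unsigned long mask_new(int order_per_bit, int align_order)
{
	if (align_order <= order_per_bit)
		return 0;
	return (1UL << (align_order - order_per_bit)) - 1;
}

int main(void)
{
	/* kvm example from the changelog: order_per_bit = 6, align_order = 15 */
	printf("old mask: %lu\n", mask_old(6, 15));	/* prints 0   */
	printf("new mask: %lu\n", mask_new(6, 15));	/* prints 511 */
	return 0;
}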