Convert the current loop-based implementation into a bit operation,
which brings two improvements:

1) Bit operations are more efficient thanks to arch-level optimizations.

2) Given that blksize_bits() is inline, _if_ @size is a compile-time
   constant, order_base_2() _may_ allow the result to be evaluated at
   compile time, depending on the calling context and compiler behavior.

v1: https://lore.kernel.org/all/TYCP286MB2323169D81A806A7C1F7FDF1CA309@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

v2: Remove the ternary operator, based on Bart's suggestion.
    However, that would break corner cases such as:
        BUILD_BUG_ON(blksize_bits(1025) != 11);
    so make a minor modification by adding (SECTOR_SIZE - 1) before shifting.

v3: Remove the rounding stuff.

base-commit: 30209debe98b6f66b13591e59e5272cb65b3945e

Signed-off-by: Dawei Li <set_pte_at@xxxxxxxxxxx>
---
 include/linux/blkdev.h | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 57ed49f20d2e..32137d85c9ad 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1349,12 +1349,7 @@ static inline int blk_rq_aligned(struct request_queue *q, unsigned long addr,
 /* assumes size > 256 */
 static inline unsigned int blksize_bits(unsigned int size)
 {
-	unsigned int bits = 8;
-	do {
-		bits++;
-		size >>= 1;
-	} while (size > 256);
-	return bits;
+	return order_base_2(size >> SECTOR_SHIFT) + SECTOR_SHIFT;
 }
 
 static inline unsigned int block_size(struct block_device *bdev)
-- 
2.25.1
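
For reference, here is a quick user-space sanity check (not part of the
patch; the order_base_2() emulation, the helper names and the size range
are my own assumptions) showing that the bit-op form returns the same
values as the removed loop for power-of-two block sizes from 512 bytes
up to 64K:

/*
 * Standalone sketch: compare the old loop-based blksize_bits() with the
 * bit-op version. SECTOR_SHIFT/SECTOR_SIZE mirror the kernel's 512-byte
 * sector definitions; order_base_2() is emulated as a round-up log2.
 */
#include <assert.h>
#include <stdio.h>

#define SECTOR_SHIFT	9
#define SECTOR_SIZE	(1u << SECTOR_SHIFT)

/* round-up log2, matching the kernel's order_base_2() for n >= 1 */
static unsigned int order_base_2(unsigned int n)
{
	unsigned int order = 0;

	while ((1u << order) < n)
		order++;
	return order;
}

/* the implementation removed by this patch */
static unsigned int blksize_bits_old(unsigned int size)
{
	unsigned int bits = 8;

	do {
		bits++;
		size >>= 1;
	} while (size > 256);
	return bits;
}

/* the implementation added by this patch */
static unsigned int blksize_bits_new(unsigned int size)
{
	return order_base_2(size >> SECTOR_SHIFT) + SECTOR_SHIFT;
}

int main(void)
{
	unsigned int size;

	for (size = SECTOR_SIZE; size <= 65536; size <<= 1) {
		assert(blksize_bits_old(size) == blksize_bits_new(size));
		printf("%6u bytes -> %u bits\n", size, blksize_bits_new(size));
	}
	return 0;
}

Both helpers agree (9..16 bits) over this range; the existing
"assumes size > 256" precondition on blksize_bits() carries over
unchanged.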