Commit 429120f3df2d started taking the segment's start DMA address into
account when computing the max segment size, and 'unsigned long' is the
data type used for that calculation. However, the segment mask may be
0xffffffff, so the calculated segment size can overflow when the physical
address is zero on a 32-bit arch.

Fix the issue by returning queue_max_segment_size() directly when that
happens.

Fixes: 429120f3df2d ("block: fix splitting segments on boundary masks")
Reported-by: Guenter Roeck <linux@xxxxxxxxxxxx>
Tested-by: Guenter Roeck <linux@xxxxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
V2:
	- don't use a 64-bit offset, so that no overhead is added on a
	  32-bit physical address space

 block/blk-merge.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 347782a24a35..1534ed736363 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -164,8 +164,13 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
 	unsigned long mask = queue_segment_boundary(q);
 
 	offset = mask & (page_to_phys(start_page) + offset);
-	return min_t(unsigned long, mask - offset + 1,
-		     queue_max_segment_size(q));
+
+	/*
+	 * overflow may be triggered in case of zero page physical address
+	 * on 32bit arch, use queue's max segment size when that happens.
+	 */
+	return min_not_zero(mask - offset + 1,
+			(unsigned long)queue_max_segment_size(q));
 }
 
 /**
-- 
2.20.1
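
For reference, below is a minimal standalone sketch of the failure mode
described above. It is not kernel code: min_not_zero_u32() is a simplified
stand-in for the kernel's min_not_zero(), a 32-bit 'unsigned long' is
modelled with uint32_t, and the mask/offset/max_seg values are illustrative.

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Simplified stand-in for the kernel's min_not_zero(): pick the
	 * smaller value, but ignore an operand that is zero.
	 */
	static uint32_t min_not_zero_u32(uint32_t a, uint32_t b)
	{
		if (a == 0)
			return b;
		if (b == 0)
			return a;
		return a < b ? a : b;
	}

	int main(void)
	{
		uint32_t mask = 0xffffffff;	/* queue_segment_boundary(q) */
		uint32_t offset = 0;		/* zero page physical address */
		uint32_t max_seg = 65536;	/* queue_max_segment_size(q)  */

		/* On a 32-bit arch, mask - offset + 1 == 0x100000000 wraps to 0. */
		uint32_t len = mask - offset + 1;

		/* The old min_t()-based code would pick the wrapped 0 here. */
		printf("wrapped length: %u\n", len);

		/* min_not_zero() falls back to the queue's max segment size. */
		printf("returned size:  %u\n", min_not_zero_u32(len, max_seg));
		return 0;
	}

Running this prints a wrapped length of 0 and a returned size of 65536,
which is the behaviour the min_not_zero() change restores.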