On 02/01/2023 18:03, Yonatan Nachum wrote:
> When registering a new DMA MR, after selecting the best-aligned page size
> for it, we iterate over the given sglist to split each entry into smaller
> DMA blocks aligned to the selected page size.
>
> In certain cases, where the sg entry and the page size have particular
> sizes and the sg entry is not aligned to the selected page size, the
> total size of the aligned pages needed to cover the sg entry is >= 4GB.
> Under these circumstances, while iterating over the page-aligned blocks,
> the counter that tracks how far we have advanced from the start of the
> sg entry overflows, because its type is u32 and we pass 4GB in size.
> This can lead to an infinite loop inside the iterator function, because
> in some cases the overflow prevents the counter from ever exceeding the
> size of the sg entry.
>
> Fix this by changing the counter type to u64.
>
> Backtrace:
> [ 192.374329] efa_reg_user_mr_dmabuf
> [ 192.376783] efa_register_mr
> [ 192.382579] pgsz_bitmap 0xfffff000 rounddown 0x80000000
> [ 192.386423] pg_sz [0x80000000] umem_length[0xc0000000]
> [ 192.392657] start 0x0 length 0xc0000000 params.page_shift 31 params.page_num 3
> [ 192.399559] hp_cnt[3], pages_in_hp[524288]
> [ 192.403690] umem->sgt_append.sgt.nents[1]
> [ 192.407905] number entries: [1], pg_bit: [31]
> [ 192.411397] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
> [ 192.415601] biter->__sg_advance [665837568] sg_dma_len[3221225472]
> [ 192.419823] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
> [ 192.423976] biter->__sg_advance [2813321216] sg_dma_len[3221225472]
> [ 192.428243] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
> [ 192.432397] biter->__sg_advance [665837568] sg_dma_len[3221225472]
>
> Fixes: a808273a495c

The Fixes: tag is missing the patch subject line; please see:
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#describe-your-changes

Also, there shouldn't be a blank line here, between the Fixes: and
Signed-off-by: tags.

>
> Signed-off-by: Yonatan Nachum <ynachum@xxxxxxxxxx>
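
As an aside, the wraparound is easy to see with the numbers from the
backtrace. Below is a minimal standalone userspace sketch (not the kernel
iterator itself; the loop structure, and the adv32/adv64 names, are
simplifications for illustration) using the backtrace values: a 3 GiB sg
entry (0xc0000000), a 2 GiB selected page size (0x80000000), and 665837568
as the first __sg_advance value seen. A u32 counter cycles between
665837568 and 2813321216 and never reaches sg_dma_len, which is the
infinite loop; a u64 counter passes sg_dma_len after two blocks and the
loop terminates:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Values taken from the backtrace above. */
    const uint64_t sg_dma_len = 0xc0000000; /* 3 GiB sg entry */
    const uint32_t pg_sz      = 0x80000000; /* 2 GiB page size */
    const uint32_t start      = 665837568;  /* first __sg_advance seen */

    /* Buggy behavior: the u32 advance counter wraps at 2^32, so it
     * never reaches sg_dma_len. Capped at 6 steps here; the real
     * iterator would spin forever. */
    uint32_t adv32 = start;
    for (int i = 0; i < 6 && adv32 < sg_dma_len; i++) {
        printf("u32 advance: %" PRIu32 "\n", adv32);
        adv32 += pg_sz; /* 2813321216 + 2^31 wraps back to 665837568 */
    }

    /* Fixed behavior: a u64 counter grows past sg_dma_len and the
     * loop exits after two blocks. */
    uint64_t adv64 = start;
    while (adv64 < sg_dma_len) {
        printf("u64 advance: %" PRIu64 "\n", adv64);
        adv64 += pg_sz;
    }
    return 0;
}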