On Thu, 17 Jun 2021 at 11:23, Magnus Karlsson <magnus.karlsson@xxxxxxxxx> wrote:
>
> From: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
>
> Fix a missing validation of a Tx descriptor when executing in skb mode
> and the umem is in unaligned mode. A descriptor could point to a
> buffer straddling the end of the umem, thus effectively tricking the
> kernel into reading outside the allowed umem region. This could lead
> to a kernel crash if that part of memory is not mapped.
>
> In zero-copy mode, the descriptor validation code rejects such
> descriptors by checking a bit in the DMA address that tells us if the
> next page is physically contiguous or not. For the last page in the
> umem, this bit is not set, therefore any descriptor pointing to a
> packet straddling this last page boundary will be rejected. However,
> the skb path does not use this bit since it copies out data and can do
> so to two different pages. (It also does not have the array of DMA
> addresses, so it cannot even store this bit.) The code just returned
> that the packet is always physically contiguous. But this is
> unfortunately also returned for the last page in the umem, which means
> that packets that cross the end of the umem are being allowed, which
> they should not be.
>
> Fix this by introducing a check for this in the skb path only, not
> penalizing the zero-copy path.
>
> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>

Nice catch!
Acked-by: Björn Töpel <bjorn@xxxxxxxxxx>

> ---
>  include/net/xsk_buff_pool.h | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> index eaa8386dbc63..7a9a23e7a604 100644
> --- a/include/net/xsk_buff_pool.h
> +++ b/include/net/xsk_buff_pool.h
> @@ -147,11 +147,16 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
>  {
>  	bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;
>
> -	if (pool->dma_pages_cnt && cross_pg) {
> +	if (likely(!cross_pg))
> +		return false;
> +
> +	if (pool->dma_pages_cnt) {
>  		return !(pool->dma_pages[addr >> PAGE_SHIFT] &
>  			 XSK_NEXT_PG_CONTIG_MASK);
>  	}
> -	return false;
> +
> +	/* skb path */
> +	return addr + len > pool->addrs_cnt;
>  }
>
>  static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
>
> base-commit: da5ac772cfe2a03058b0accfac03fad60c46c24d
> --
> 2.29.0
>