On 11/14/24 9:17 AM, Christoph Hellwig wrote:
>> @@ -313,21 +314,35 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
>>  		if (unlikely(offs & queue_dma_alignment(rq->q)))
>>  			j = 0;
>>  		else {
>> -			for (j = 0; j < npages; j++) {
>> +			for (j = 0; j < npages; j += num_pages) {
>>  				struct page *page = pages[j];
>> -				unsigned int n = PAGE_SIZE - offs;
>> +				struct folio *folio = page_folio(page);
>>  				bool same_page = false;
>>
>> -				if (n > bytes)
>> -					n = bytes;
>>
>> -				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
>> -						     max_sectors, &same_page))
>> +				folio_offset = ((size_t)folio_page_idx(folio,
>> +						page) << PAGE_SHIFT) + offs;
>
> I'm not sure if Jens wants to rush something like this in for 6.13, but if
> we're aiming for the next merge window I actually have a 3/4 done series
> that rips out bio_add_hw_page and all the passthrough special casing by
> simply running the 'do we need to split the bio' helper on the free-form
> bio and returning an error if we do. That means all this code will go away,
> and you'll automatically get all the work done for the normal path for
> passthrough as well.

I'd rather it simmer a bit first, so I'd say we have time, since 6.13 is
coming up really soon.

-- 
Jens Axboe