On 09/03/2020 23:41, Jens Axboe wrote:
> On 3/9/20 2:03 PM, Pavel Begunkov wrote:
>> On 24/02/2020 18:22, Jens Axboe wrote:
>> A problem here is that we actually have a 2D array of works because of linked
>> requests.
>
> You could either skip anything with a link, or even just ignore it and
> simply re-queue a dependent link if it isn't hashed when it's done, if
> grabbed in a batch.
>
>> We can io_wqe_enqueue() dependent works if we have hashed requests, so
>> delegating them to other threads. But if the work->list is not per-core, it
>> will hurt locality. Either that, or re-enqueue hashed ones if there is a
>> dependent work. Need to think how to do better.
>
> If we ignore links for a second, I think we can all agree that it'd be a
> big win to do the batch.

Definitely.

> With links, worst case would then be something where every other link is
> hashed.
>
> For a first patch, I'd be quite happy to just stop the batch if there's
> a link on a request. The normal case here is buffered writes, and
> that'll handle that case perfectly. Links will be no worse than before.
> Seems like a no-brainer to me.

That isn't really a problem, just pointing out that there could be
optimisations for different cases.

--
Pavel Begunkov
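
[Editor's note: a minimal, hypothetical sketch of the "stop the batch at the
first linked request" idea discussed above. The struct io_work layout, the
hash/has_link fields and grab_hashed_batch() are illustrative assumptions and
not the real io-wq structures or API.]

    /* Hypothetical sketch -- not the actual io-wq code. Pull a run of
     * consecutive works that share the same serialisation hash off the
     * work list, stopping as soon as a work carries a dependent link so
     * linked requests behave exactly as before. */
    #include <stdbool.h>
    #include <stddef.h>

    struct io_work {
        struct io_work *next;   /* next entry in the wqe work list */
        unsigned int hash;      /* serialisation hash (e.g. per inode) */
        bool has_link;          /* request has a dependent (linked) request */
    };

    /* Detach and return a batch of same-hash works from *list; the
     * remainder of the list stays queued for other workers. */
    static struct io_work *grab_hashed_batch(struct io_work **list,
                                             unsigned int hash)
    {
        struct io_work *batch = *list, *tail = NULL, *cur = *list;

        while (cur && cur->hash == hash && !cur->has_link) {
            tail = cur;
            cur = cur->next;
        }
        if (!tail)
            return NULL;        /* first entry already disqualifies a batch */

        tail->next = NULL;      /* cut the batch off the main list */
        *list = cur;
        return batch;
    }

The batch then runs on one worker without re-taking the wqe lock per work,
which is the locality/contention win the thread is after; hashed works with
links simply fall back to the current one-at-a-time path.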