On 4/20/23 08:01, Bart Van Assche wrote:
> On 4/18/23 22:07, Christoph Hellwig wrote:
>>>  deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
>>>  {
>>>  	struct rb_root *root = deadline_rb_root(per_prio, rq);
>>> +	struct request **next_rq __maybe_unused;
>>>
>>>  	elv_rb_add(root, rq);
>>> +#ifdef CONFIG_BLK_DEV_ZONED
>>> +	next_rq = &per_prio->next_rq[rq_data_dir(rq)];
>>> +	if (*next_rq == NULL || !blk_queue_is_zoned(rq->q))
>>> +		return;
>>> +	/*
>>> +	 * If a request got requeued or requests have been submitted out of
>>> +	 * order, make sure that per zone the request with the lowest LBA is
>>> +	 * submitted first.
>>> +	 */
>>> +	if (blk_rq_pos(rq) < blk_rq_pos(*next_rq) &&
>>> +	    blk_rq_zone_no(rq) == blk_rq_zone_no(*next_rq))
>>> +		*next_rq = rq;
>>> +#endif
>>
>> Please move this into a helper only called when blk_queue_is_zoned
>> is true.
>
> Hi Christoph,
>
> I'm looking into an alternative, namely to remove the next_rq member
> from struct dd_per_prio and instead to do the following:
> * Track the offset (blk_rq_pos()) of the most recently dispatched
>   request ("latest_pos").
> * Where the next_rq member is read, look up the request that comes after
>   latest_pos in the RB-tree. This should require an effort similar to
>   updating next_rq after having dispatched a request.
>
> With this approach the code quoted above that is surrounded with
> #ifdef/#endif will disappear.

This sounds much better, given that in practice there are lots of cases
where next_rq is set to NULL and we end up getting the next request from
the head of the FIFO list. At least that is what I saw the last time I
looked at this (it was when I worked on the patch to skip sequential
writes spanning two zones).

>
> Bart.
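
For reference, a minimal sketch of what the "look up the request that comes
after latest_pos in the RB-tree" step could look like. This is only an
illustration, not code from the patch under discussion: the helper name
deadline_from_pos(), the latest_pos bookkeeping and the assumption that
per_prio->sort_list[] is the per-direction RB-tree of block/mq-deadline.c
are mine.

static struct request *deadline_from_pos(struct dd_per_prio *per_prio,
					 enum dd_data_dir data_dir,
					 sector_t pos)
{
	struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
	struct request *res = NULL;

	/*
	 * Binary search in the RB-tree (sorted by blk_rq_pos()) for the
	 * leftmost request whose start sector is >= pos, i.e. the request
	 * that would come after the most recently dispatched one.
	 */
	while (node) {
		struct request *rq = rb_entry_rq(node);

		if (blk_rq_pos(rq) >= pos) {
			res = rq;
			node = node->rb_left;
		} else {
			node = node->rb_right;
		}
	}

	return res;
}

The dispatch path would then only need to record blk_rq_pos() of the request
it just dispatched, per priority and per data direction, instead of keeping
the next_rq pointer up to date, which is what would make the
#ifdef CONFIG_BLK_DEV_ZONED block quoted above unnecessary.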