On 11/4/19 11:17 AM, Kent Overstreet wrote:
> On Mon, Nov 04, 2019 at 10:15:41AM -0800, Christoph Hellwig wrote:
>> On Mon, Nov 04, 2019 at 01:14:03PM -0500, Kent Overstreet wrote:
>>> On Sat, Nov 02, 2019 at 03:29:11PM +0800, Ming Lei wrote:
>>>> __blk_queue_split() may be a bit heavy for small block size (such as
>>>> 512B or 4KB) IO, so introduce one flag that records whether this bio
>>>> includes multiple pages, and only try splitting the bio when the
>>>> multiple-page flag is set.
>>>
>>> So, back in the day I had an alternative approach in mind: get rid of
>>> blk_queue_split entirely, by pushing splitting down to the request
>>> layer - when we map the bio/request to an sgl, just have it map as
>>> much as will fit in the sgl, and if it doesn't entirely fit, bump
>>> bi_remaining and leave it on the request queue.
>>>
>>> This would mean there'd be no need for counting segments at all, and
>>> would cut a fair amount of code out of the io path.
>>
>> I thought about that too, but it will take a lot more effort, mostly
>> because md/dm heavily rely on splitting as well. I still think it is
>> worthwhile; it will just take a significant amount of time, and we
>> should have the quick improvement now.
>
> We can do it one driver at a time - the driver sets a flag to disable
> blk_queue_split(). The obvious one to do first would be nvme, since
> that's where it shows up the most.
>
> And md/dm do splitting internally, but I'm not so sure they need
> blk_queue_split().

I'm a big proponent of doing something like that instead, but it is a lot
of work. I absolutely hate the splitting we're doing now, even though the
original "let's work as hard as we can at add-page time to get things
right" approach was pretty abysmal as well.

-- 
Jens Axboe