On 2020/9/21 16:07, Christoph Hellwig wrote:
> Inherit the optimal I/O size setting just like the readahead window,
> as any reason to do larger I/O does not apply to just readahead.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  drivers/md/bcache/super.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 1bbdc410ee3c51..48113005ed86ad 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1430,6 +1430,8 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
>  	dc->disk.disk->queue->backing_dev_info->ra_pages =
>  		max(dc->disk.disk->queue->backing_dev_info->ra_pages,
>  		    q->backing_dev_info->ra_pages);
> +	blk_queue_io_opt(dc->disk.disk->queue,
> +		max(queue_io_opt(dc->disk.disk->queue), queue_io_opt(q)));
>

Hi Christoph,

I am not sure whether the virtual bcache device's optimal request size
can simply be set like this.

Most of the time, inheriting the backing device's optimal request size
is fine, but there are two exceptions,
- A read request hits the cache device.
- The user sets sequential_cutoff to 0, so all writes may go to the
  cache device first.

Under the above two conditions, all I/Os go to the cache device, so
using the backing device's optimal request size might be improper.

Just a guess: is it OK to set the optimal request size of the virtual
bcache device to the least common multiple of the cache device's and
backing device's optimal request sizes?

[snipped]

Thanks.

Coly Li
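
For illustration, the least-common-multiple idea above could reuse the
kernel's lcm_not_zero() helper from <linux/lcm.h>, which is the same
helper blk_stack_limits() uses when stacking io_opt across stacked
devices. A minimal sketch of the quoted hunk, still combining only the
bcache queue's and the backing device's values, since the cache device's
queue is not in scope in cached_dev_init():

	#include <linux/lcm.h>

	/*
	 * Combine the two optimal I/O sizes instead of taking the max.
	 * lcm_not_zero() falls back to the non-zero argument when one
	 * side has no io_opt set, so an unset value on either queue is
	 * handled gracefully.
	 */
	blk_queue_io_opt(dc->disk.disk->queue,
			 lcm_not_zero(queue_io_opt(dc->disk.disk->queue),
				      queue_io_opt(q)));

Folding in the cache device's io_opt as well would presumably have to
happen later, at attach time, once the cache set's queues are known.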