Hi experts,

May I ask a question about the block layer?

When running fio in the guest OS, I find that a 256K IO is split page by page in the bio and saved in bvecs (one page per bvec). virtio-blk then just puts the bio_vecs one by one into the available descriptor table. So if my backend device does not support iovector operations (preadv/pwritev), the IO is issued to the lower layer page by page (a small sketch of what I mean is appended at the end of this mail).

My question is: why doesn't the bio save multiple pages in one bio_vec?

/dev/vdb is a vhost-user-blk-pci device from SPDK, or a virtio-blk-pci device.

The fio config is:

[global]
name=fio-rand-read
rw=randread
ioengine=libaio
direct=1
numjobs=1
iodepth=1
bs=256K

[file1]
filename=/dev/vdb

The tracing results look like this:

/usr/share/bcc/tools/stackcount -K -T '__blockdev_direct_IO'   returns 378048
/usr/share/bcc/tools/stackcount -K -T 'bio_add_page'           returns 5878

From this I get 378048 / 5878 ≈ 64, and 256K / 4K = 64, so __blockdev_direct_IO splits the 256K into 64 parts.

The /dev/vdb queue properties are as follows:

[root@t1 00:10:42 queue]$ find . | while read f; do echo "$f = $(cat $f)"; done
./nomerges = 0
./logical_block_size = 512
./rq_affinity = 1
./discard_zeroes_data = 0
./max_segments = 126
./unpriv_sgio = 0
./max_segment_size = 4294967295
./rotational = 1
./scheduler = none
./read_ahead_kb = 128
./max_hw_sectors_kb = 2147483647
./discard_granularity = 0
./discard_max_bytes = 0
./write_same_max_bytes = 0
./max_integrity_segments = 0
./max_sectors_kb = 512
./physical_block_size = 512
./add_random = 0
./nr_requests = 128
./minimum_io_size = 512
./hw_sector_size = 512
./optimal_io_size

Sometimes the size of a part is bigger than 4K. Some logs:

id: 0 size: 4096
...
id: 57 size: 4096
id: 58 size: 24576

Why does this happen?

Kernel versions:
1. 3.10.0-1062.1.2.el7
2. 5.3

Thanks in advance.
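
To make the page-by-page point concrete, here is a minimal userspace sketch of the backend side. It assumes the guest buffers from the descriptor table have already been mapped into a struct iovec array; the function names (handle_read_pagewise, handle_read_vectored) are made up for illustration and are not taken from SPDK or QEMU.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>   /* preadv, struct iovec */
#include <unistd.h>    /* pread */

/* Without vectored IO: one pread(2) per descriptor-table segment.
 * With 64 x 4K bvecs this means 64 syscalls for a single 256K guest IO.
 * (Short reads and error recovery are ignored for brevity.) */
static ssize_t handle_read_pagewise(int fd, const struct iovec *iov, int cnt, off_t off)
{
    ssize_t total = 0;
    for (int i = 0; i < cnt; i++) {
        printf("id: %d size: %zu\n", i, iov[i].iov_len); /* like the log above */
        ssize_t n = pread(fd, iov[i].iov_base, iov[i].iov_len, off + total);
        if (n < 0)
            return -1;
        total += n;
    }
    return total;
}

/* With vectored IO: the whole segment list goes down in one preadv(2). */
static ssize_t handle_read_vectored(int fd, const struct iovec *iov, int cnt, off_t off)
{
    return preadv(fd, iov, cnt, off);
}

With the first variant every 4K bio_vec turns into its own syscall, which is why I care whether the bio could carry bigger bio_vecs in the first place.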
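
For completeness, the traces were taken while fio ran the job file shown above; the same kind of single 256K direct read can also be generated without fio with a minimal O_DIRECT reproducer like the one below (the /dev/vdb path and the 4K buffer alignment are just the values from my setup).

#define _GNU_SOURCE    /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 256 * 1024;          /* same block size as the fio job */
    void *buf;

    if (posix_memalign(&buf, 4096, len))    /* O_DIRECT wants an aligned buffer */
        return 1;

    int fd = open("/dev/vdb", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open /dev/vdb");
        return 1;
    }

    ssize_t n = pread(fd, buf, len, 0);     /* one 256K direct read */
    printf("read %zd bytes\n", n);

    close(fd);
    free(buf);
    return 0;
}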