A vdo device loads all reference count values at startup, reading entire
spans of up to a few MB at a time. Use a small number of large vios
instead of a big flurry of 4kB vios to read this data more efficiently.

On our test systems (KVM virtual machines with 6 Intel Icelake cores and
16 GB RAM, simulating ~34 TB of physical storage through some
device-mapper hackery), using the larger vios cuts the vdo startup time
by 15-30%. Installations with smaller or very fast storage may see less
benefit.

Ken Raeburn (4):
  dm vdo vio-pool: add a pool pointer to pooled_vio
  dm vdo vio-pool: support pools with multiple data blocks per vio
  dm vdo vio-pool: allow variable-sized metadata vios
  dm vdo slab-depot: read refcount blocks in large chunks at load time

 drivers/md/dm-vdo/block-map.c    | 11 ++---
 drivers/md/dm-vdo/io-submitter.c |  6 ++-
 drivers/md/dm-vdo/io-submitter.h | 18 +++++--
 drivers/md/dm-vdo/slab-depot.c   | 80 ++++++++++++++++++++++----------
 drivers/md/dm-vdo/slab-depot.h   | 13 +++++-
 drivers/md/dm-vdo/types.h        |  3 ++
 drivers/md/dm-vdo/vio.c          | 54 ++++++++++++---------
 drivers/md/dm-vdo/vio.h          | 13 ++++--
 8 files changed, 136 insertions(+), 62 deletions(-)

-- 
2.45.2