We currently increment the task/vm counts when we first attempt to queue a
bio. But this isn't necessarily correct - if the request allocation fails
with -EAGAIN, for example, and the caller retries, then we'll over-account
by as many retries as are done. This can happen for polled IO, where we
cannot wait for requests, so retries can get aggressive if we're running
out of requests. When that happens, the IO rates shown in vmstat are
incorrect, as every issue attempt is counted as successful, potentially
inflating the stats by quite a lot.

Add a bio flag that tracks whether accounting has been done. This prevents
the same bio from being accounted multiple times when it is retried.

Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>

---

diff --git a/block/blk-core.c b/block/blk-core.c
index d9d632639bd1..ff562a8cd9c9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1236,7 +1236,7 @@ blk_qc_t submit_bio(struct bio *bio)
 	 * If it's a regular read/write or a barrier with data attached,
 	 * go through the normal accounting stuff before submission.
 	 */
-	if (bio_has_data(bio)) {
+	if (bio_has_data(bio) && !bio_flagged(bio, BIO_ACCOUNTED)) {
 		unsigned int count;
 
 		if (unlikely(bio_op(bio) == REQ_OP_WRITE_SAME))
@@ -1259,6 +1259,7 @@
 				(unsigned long long)bio->bi_iter.bi_sector,
 				bio_devname(bio, b), count);
 		}
+		bio_set_flag(bio, BIO_ACCOUNTED);
 	}
 
 	/*
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 63a39e47fc60..39bcc9326c7a 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -266,6 +266,7 @@ enum {
 				 * of this bio. */
 	BIO_CGROUP_ACCT,	/* has been accounted to a cgroup */
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
+	BIO_ACCOUNTED,		/* task/vm stats have been done */
 	BIO_FLAG_LAST
 };

-- 
Jens Axboe
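
P.S. Purely as an illustration of the over-accounting problem and the
flag-based fix, and not part of the patch: a minimal userspace C sketch.
All names here (fake_bio, FAKE_ACCOUNTED, fake_submit, vm_read_sectors) are
hypothetical stand-ins for the kernel structures. Without the flag check,
every retried submission of the same request would bump the counter again;
with it, the stats are bumped exactly once per bio.

	/* Illustrative sketch only, not kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_bio {
		unsigned int flags;	/* bit flags, like bio->bi_flags */
		unsigned int sectors;	/* size of the I/O in sectors */
	};

	#define FAKE_ACCOUNTED	(1U << 0)	/* stand-in for BIO_ACCOUNTED */

	static unsigned long vm_read_sectors;	/* stand-in for task/vm counters */

	/* Simulate submit_bio(): account only on the first attempt. */
	static int fake_submit(struct fake_bio *bio, bool fail_with_eagain)
	{
		if (!(bio->flags & FAKE_ACCOUNTED)) {
			vm_read_sectors += bio->sectors;
			bio->flags |= FAKE_ACCOUNTED;
		}
		return fail_with_eagain ? -11 /* -EAGAIN */ : 0;
	}

	int main(void)
	{
		struct fake_bio bio = { .flags = 0, .sectors = 8 };

		/* Two failed polled-IO attempts followed by a successful one... */
		fake_submit(&bio, true);
		fake_submit(&bio, true);
		fake_submit(&bio, false);

		/* ...still account the 8 sectors exactly once. */
		printf("accounted sectors: %lu\n", vm_read_sectors);
		return 0;
	}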