On Fri, Aug 18, 2017 at 01:06:46PM -0600, Jens Axboe wrote:
> On 08/18/2017 10:28 AM, Shaohua Li wrote:
> > On Fri, Aug 18, 2017 at 09:35:01AM -0600, Jens Axboe wrote:
> >> On 08/18/2017 09:13 AM, Shaohua Li wrote:
> >>> A discard request is usually very big and easily uses up the whole
> >>> bandwidth budget of a cgroup. The discard request size doesn't really
> >>> reflect the amount of data written, so it doesn't make sense to account
> >>> it against the bandwidth budget. This patch ignores discard request
> >>> size. It does make sense to account a discard request against the iops
> >>> budget, though.
> >>
> >> Some (most) devices do touch media for a discard operation, but the
> >> cost tends to be fairly constant and independent of discard size.
> >> Would it make sense to just treat it as a constant cost? Zero
> >> cost seems wrong.
> >
> > It would be hard to find that cost. Would this make sense?
> >
> > min_t(unsigned int, bio->bi_iter.bi_size, queue_max_sectors(q) << 9)
>
> It's all going to be approximations, for sure; unfortunately it isn't
> an exact science. Why not just use a constant small value? If we assume
> that a 4k and an 8M discard end up writing roughly the same to media,
> then it would follow that just using a small constant value (regardless
> of the actual discard command size) would be useful.

Sounds good. What number do you suggest? queue_max_sectors or a random
number?

Thanks,
Shaohua
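
[Editor's note: for illustration, a minimal sketch of the "constant small
value" idea being discussed, as it might look in blk-throttle's bandwidth
accounting. The helper name throtl_bio_data_size() and the 512-byte
constant are assumptions for this sketch only; the thread has not settled
on a number.]

	/*
	 * Sketch: charge discards a fixed, small cost instead of their
	 * advertised size. An 8M discard does not write 8M of data to
	 * media, so charging bi_size would drain the cgroup's bandwidth
	 * budget for no good reason.
	 */
	static unsigned int throtl_bio_data_size(struct bio *bio)
	{
		/* assumed constant: treat a discard as roughly one sector */
		if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
			return 512;
		return bio->bi_iter.bi_size;
	}

The bandwidth path would then charge throtl_bio_data_size(bio) instead of
bio->bi_iter.bi_size, while the iops path keeps counting the discard as
one request.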