Ryo Tsuruta wrote:
> Rik van Riel <riel@xxxxxxxxxx> wrote:
>>>>>> Are you saying that dm-ioband is purposely unfair,
>>>>>> until a certain load level is reached?
>>>>> Not unfair; dm-ioband (weight policy) is intentionally designed to
>>>>> use bandwidth efficiently: the weight policy tries to give the spare
>>>>> bandwidth of inactive groups to active groups.
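To make the weight-policy behaviour described above concrete, here is a
minimal user-space sketch. It is not dm-ioband code; the group names,
weights, queue depths and token count are invented purely to illustrate
the "give spare bandwidth of idle groups to busy groups" idea:

#include <stdio.h>

struct group {
    const char *name;
    int weight;   /* configured weight */
    int queued;   /* I/Os currently queued by this group */
};

/* Hand out 'total_tokens' of bandwidth for the next period. */
static void grant_tokens(struct group *g, int n, int total_tokens)
{
    int active_weight = 0;

    for (int i = 0; i < n; i++)
        if (g[i].queued)
            active_weight += g[i].weight;

    for (int i = 0; i < n; i++) {
        int grant = 0;

        /* Idle groups get nothing this period; their share is
         * implicitly spread over the busy groups via active_weight. */
        if (g[i].queued && active_weight)
            grant = total_tokens * g[i].weight / active_weight;

        printf("%-6s weight %2d %-10s -> %d tokens\n",
               g[i].name, g[i].weight,
               g[i].queued ? "(active)" : "(inactive)", grant);
    }
}

int main(void)
{
    struct group groups[] = {
        { "grp-a", 40, 10 },  /* streaming writer, always busy      */
        { "grp-b", 40,  0 },  /* lone reader, idle at this instant  */
        { "grp-c", 20,  5 },
    };

    grant_tokens(groups, 3, 100);
    return 0;
}

With grp-b momentarily idle, its 40 points of weight are ignored for
this period and grp-a and grp-c split the whole device between them,
which is exactly the behaviour questioned next.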
>>>> This sounds good, except that the lack of anticipation
>>>> means that a group with just one task doing reads will
>>>> be considered "inactive" in between reads.
>>>>
>>>> This means writes can always get in between two reads,
>>>> sometimes multiple writes at a time, really disadvantaging
>>>> a group that is doing just disk reads.
>>>>
>>>> This is a problem, because reads are generally more time
>>>> sensitive than writes.
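To see why the lack of anticipation hurts, here is a toy timeline,
again purely illustrative and not CFQ or dm-ioband code: one group has
a lone reader issuing dependent reads with a short think time, another
group always has writes queued, and the dispatcher either idles briefly
after each read or immediately picks a queued write. All the
millisecond values are invented:

#include <stdio.h>

#define READ_MS   2   /* service time of one read                   */
#define WRITE_MS  2   /* service time of one write                  */
#define THINK_MS  1   /* gap before the reader sends its next read  */
#define IDLE_MS   2   /* anticipation window, when enabled          */

static void run(int anticipate)
{
    int now = 0, reads_done = 0, writes_done = 0;
    int next_read_at = 0;   /* reader submits its first read at t=0 */

    while (reads_done < 5) {
        if (next_read_at <= now ||
            (anticipate && next_read_at <= now + IDLE_MS)) {
            /* idle (if needed) until the dependent read arrives,
             * then service it */
            if (next_read_at > now)
                now = next_read_at;
            now += READ_MS;
            reads_done++;
            next_read_at = now + THINK_MS;
        } else {
            /* the reader's group looks "inactive", so a queued
             * write gets dispatched in between two reads */
            now += WRITE_MS;
            writes_done++;
        }
    }
    printf("%s anticipation: 5 reads done at t=%2dms, %d writes in between\n",
           anticipate ? "with   " : "without", now, writes_done);
}

int main(void)
{
    run(0);
    run(1);
    return 0;
}

In this toy run the five reads finish at 18ms with four writes
interleaved when there is no idle window, and at 14ms with none
interleaved when the dispatcher waits up to 2ms for the dependent
reader, which is the read-latency effect described above.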
>>> In designing dm-ioband, we prioritized reducing throughput loss over
>>> reducing duration (latency). Of course, it is possible to make a new
>>> policy which reduces duration.
>> ... while also reducing overall system throughput
>> by design?
> I think such a policy would reduce system throughput compared to the
> current implementation, because it would incur more overhead to do
> fine-grained control.
Except that the io scheduler based io controller seems
to be able to enforce fairness while not reducing
throughput.
Dm-ioband would have to address these issues to be a
serious contender, IMHO.
--
All rights reversed.