Hi,

Fabio Checconi <fchecconi@xxxxxxxxx> wrote:
> Hi,
>
> > From: Rik van Riel <riel@xxxxxxxxxx>
> > Date: Tue, Sep 08, 2009 03:24:08PM -0400
> >
> > Ryo Tsuruta wrote:
> > > Rik van Riel <riel@xxxxxxxxxx> wrote:
> > >
> > >> Are you saying that dm-ioband is purposely unfair,
> > >> until a certain load level is reached?
> > >
> > > Not unfair; dm-ioband (weight policy) is intentionally designed to
> > > use bandwidth efficiently. The weight policy tries to give the
> > > spare bandwidth of inactive groups to active groups.
> >
> > This sounds good, except that the lack of anticipation
> > means that a group with just one task doing reads will
> > be considered "inactive" in between reads.
>
> Anticipation helps in achieving fairness, but CFQ currently disables
> idling for nonrot+NCQ media to avoid the resulting throughput loss on
> some SSDs. Are we really sure that we want to introduce anticipation
> everywhere, not only to improve throughput on rotational media, but
> also to achieve fairness?

I'm also not sure whether it's worth introducing anticipation
everywhere. Storage devices are becoming faster and smarter every
year. In practice, I ran a benchmark against SAN storage, and the noop
scheduler got the best result there. However, I'll give more thought
to how I/O from a single task should be handled.

Thanks,
Ryo Tsuruta
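P.S. To make the idling tradeoff concrete, here is a rough sketch of
the kind of check being discussed. The names (device_caps, io_queue,
should_idle) are illustrative only, not the actual CFQ symbols.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative sketch of the anticipation decision: after a queue's
 * request completes, an anticipatory scheduler may keep the device
 * idle briefly, expecting the same task's next nearby request.
 */
struct device_caps {
	bool rotational;	/* spinning disk vs. SSD */
	bool ncq;		/* device queues multiple commands (NCQ) */
};

struct io_queue {
	bool has_pending_io;	/* more requests already queued here */
};

/*
 * Decide whether to keep the device idle for a short window after a
 * request completes, anticipating the task's next request.
 */
static bool should_idle(const struct io_queue *q,
			const struct device_caps *caps)
{
	/* No need to anticipate: the queue already has work to dispatch. */
	if (q->has_pending_io)
		return false;

	/*
	 * On non-rotational NCQ devices, idling forfeits bandwidth the
	 * device could use in parallel, so anticipation is skipped; a
	 * group with a lone reader then looks "inactive" between reads.
	 */
	if (!caps->rotational && caps->ncq)
		return false;

	return true;
}

int main(void)
{
	struct io_queue q = { .has_pending_io = false };
	struct device_caps ssd  = { .rotational = false, .ncq = true };
	struct device_caps disk = { .rotational = true,  .ncq = false };

	printf("idle on SSD+NCQ:    %d\n", should_idle(&q, &ssd));  /* 0 */
	printf("idle on rotational: %d\n", should_idle(&q, &disk)); /* 1 */
	return 0;
}

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel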