James.Smart@xxxxxxxxxx wrote:
> Fernando Luis Vázquez Cao wrote:
>>> BTW as I said in a previous email, an interesting path to be explored
>>> IMHO could be to think in terms of IO time. So, look at the time an IO
>>> request is issued to the drive, look at the time the request is served,
>>> evaluate the difference and charge the consumed IO time to the
>>> appropriate cgroup. Then dispatch IO requests as a function of the
>>> consumed IO time debts / credits, using for example a token-bucket
>>> strategy. And probably the best place to implement the IO time
>>> accounting is the elevator.
>>
>> Please note that the seek time for a specific IO request is strongly
>> correlated with the IO requests that preceded it, which means that the
>> owner of that request is not the only one to blame if it takes too long
>> to process it. In other words, with the algorithm you propose we may
>> end up charging the wrong guy.
>
> I assume all of these discussions are focused on simple storage - disks
> direct attached to a single server - and are not targeted at SANs with
> arrays, multi-initiator accesses, and fabric/network impacts. True?
> Such algorithms can be seriously off-base in these latter configurations.

Accounting the IO cost in time values should, in principle, be a
topology-agnostic solution, so it should work for LUs on a SAN, magnetic
disks, USB drives, optical drives, etc., because we are simply measuring
the time spent executing each IO operation (there is no need to know the
details of a particular IO operation, since its actual cost is observed
directly).

If you mean that trying to evaluate or even predict the cost of seek
operations isn't very meaningful in those "complex" environments, well...
yes, in that case I agree.

-Andrea
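
Just to make the token-bucket idea quoted above a bit more concrete, here is
a minimal userspace sketch of per-cgroup IO time accounting: each group earns
IO-time credit at a configured rate and is charged the measured service time
of each completed request. All structure and function names here are made up
for illustration only; they are not existing kernel interfaces.

/*
 * Sketch of token-bucket accounting of IO time per group.
 * Credit is expressed in microseconds of device time; a request may be
 * dispatched while the group has positive credit, and the measured
 * service time (completion - issue) is charged back afterwards.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct io_bucket {
	int64_t credit_us;      /* available IO time, microseconds */
	int64_t rate_us_per_s;  /* refill rate: us of IO time per second */
	int64_t max_credit_us;  /* cap, so an idle group cannot hoard credit */
};

/* Refill credit for the wall-clock time elapsed since the last refill. */
static void bucket_refill(struct io_bucket *b, int64_t elapsed_us)
{
	b->credit_us += b->rate_us_per_s * elapsed_us / 1000000;
	if (b->credit_us > b->max_credit_us)
		b->credit_us = b->max_credit_us;
}

/* May this group's next request be dispatched right now? */
static bool bucket_may_dispatch(const struct io_bucket *b)
{
	return b->credit_us > 0;
}

/*
 * Charge the measured service time of a completed request. The charge is
 * topology-agnostic: we only look at how long the device actually took,
 * not at what kind of device it is.
 */
static void bucket_charge(struct io_bucket *b, int64_t service_time_us)
{
	b->credit_us -= service_time_us;
}

int main(void)
{
	struct io_bucket grp = {
		.credit_us = 0,
		.rate_us_per_s = 200000,   /* 20% of device time */
		.max_credit_us = 100000,
	};

	/* Simulate one second of wall-clock time, then one 5 ms request. */
	bucket_refill(&grp, 1000000);
	if (bucket_may_dispatch(&grp)) {
		bucket_charge(&grp, 5000);  /* request took 5 ms to complete */
		printf("dispatched, remaining credit: %lld us\n",
		       (long long)grp.credit_us);
	}
	return 0;
}

In a real implementation the refill/charge points would live in the elevator,
where issue and completion times of each request are visible, but the
accounting logic itself stays the same regardless of the underlying storage.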