Linus Torvalds wrote:
> I really think we should do latency first, and throughput second.

Agree.

> It's _easy_ to get throughput. The people who care just about
> throughput can always just disable all the work we do for latency.
But in my experience it is not that simple. The "latency vs. throughput" (or "desktop vs. server") framing is wrong: I/O can never keep up with the CPU's ability to dirty data. On desktops and servers (really many-user desktops) we want minimum latency, but the enemy is dirty VM. If we ignore the throughput needed to flush dirty pages, the VM gets angry, and forced VM page-cleaning I/O is bad I/O.

We want minimum latency while the dirty-page percentage is low, but we need to switch to maximum write throughput at some high dirty-page percentage. We cannot prevent the cliff we fall off where the system chokes because the dirty-page load is too high, but if we worry only about latency, we pull that choke point in so it is hit at a lower load.

A 10% lower overload point might be a fine trade for 100% better latency, but would desktop users accept a 50% lower overload point, where running one more application makes the system appear hung? Even desktop users commonly measure "how much work can I do before the system becomes unresponsive".

jim

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
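The two-threshold policy described above can be sketched roughly as follows. The function name and the threshold values are illustrative only (Linux exposes the real knobs as the `vm.dirty_background_ratio` and `vm.dirty_ratio` sysctls); this is a sketch of the idea, not kernel code:

```python
def writeback_mode(dirty_pct, low=10, high=40):
    """Pick an I/O bias from the current dirty-page percentage.

    Below `low`, optimize purely for latency; between `low` and
    `high`, clean pages in the background; above `high`, flush at
    maximum throughput to avoid forced VM page-cleaning I/O.
    Thresholds are hypothetical, not the kernel's defaults.
    """
    if dirty_pct < low:
        return "latency"            # favor interactive I/O
    elif dirty_pct < high:
        return "background-flush"   # ramp up writeback effort
    else:
        return "throughput"         # flush as fast as possible
```

The point of the argument is the `high` branch: if a latency-only design omits it, the choke-point cliff simply arrives at a lower load.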