Hi Mel,

I have been thinking about our (sub)discussion, in [1], on possible tests to measure responsiveness. First, let me sum up that discussion in terms of the two main facts we highlighted.

On one side:
- it is actually possible to measure the start-up time of some popular applications automatically and precisely (my claim);
- but to accomplish such a task one needs a desktop environment, which is not available and/or not easy to handle on a battery of server-like test machines.

On the other side:
- you did perform some tests to estimate responsiveness;
- but the workload for which you measured latency, namely the I/O generated by a set of independent random readers, is too simple to model the much more complex workloads generated by any non-trivial application while starting. Such an application, in fact, spawns or wakes up a set of processes that synchronize with each other, and that do I/O that varies over time, ranging from sequential to random, with large block sizes. In addition, not only the number of processes doing I/O, but also the total amount of I/O, varies greatly with the type of application.

In view of these contrasting facts, here is my proposal for a feasible yet accurate responsiveness test in your MMTests suite: add a synthetic test like yours, i.e., one in which the workload is generated with fio, but in which the workloads are chosen so as to mimic real application-start-up workloads. In more detail, appropriate classes of workloads would be generated, with each class modeling, in each of the above respects (locality of I/O, number of processes, total amount of I/O, ...), a popular type of application (see the rough sketch appended after the reference below).

I think/hope I should be able to build these workloads accurately, after years of analyzing traces of the I/O generated by applications while starting. Or, in any case, we can then discuss the workloads I would propose.

What do you think?

Looking forward to your feedback,
Paolo

[1] https://lkml.org/lkml/2017/8/3/157
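
P.S. To make the proposal a bit more concrete, here is a rough, hypothetical sketch of what one such workload class could look like as a fio job file. The job names, block sizes, job counts and total sizes below are illustrative placeholders, not values taken from actual traces; the real parameters would be derived from the start-up traces mentioned above.

  ; Hypothetical "application start-up" class: a few cooperating readers,
  ; mixing sequential reads of large blocks (e.g., shared libraries and
  ; executables) with random reads of small blocks (e.g., configuration
  ; and data files). All numbers are illustrative placeholders.
  [global]
  ioengine=sync
  direct=0
  directory=/tmp/fio-startup

  [seq-readers]
  rw=read
  bs=256k
  size=64m
  numjobs=2

  [rand-readers]
  rw=randread
  bs=4k
  size=16m
  numjobs=4

Latency would then be measured for this class as a whole, and the per-class parameters (numjobs, bs, size, locality) are exactly the knobs through which each class would model a different type of application.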