Re: fio-based responsiveness test for MMTests

On Mon, Oct 09, 2017 at 11:39:13AM +0200, Paolo Valente wrote:
> > Also agreed. However, in general I only rely on those fio configurations to
> > detect major problems in the IO scheduler. There is too much boot-to-boot
> > variance in the throughput and iops figures to draw accurate conclusions
> > from the headline figures. For the most part, if I'm looking at those
> > configurations then I'm looking at the iostats to see if there are anomalies
> > in await times, queue sizes, merges, major starvations, etc.
> > 
> 
> Ok, probably this is the piece of information that I stretched too much,
> looking at it through my "responsiveness glasses".
> 

Completely understandable. We all have our biases :)

> > However, I'm not aware of a reliable
> > synthetic representation of such workloads. I also am not aware of a
> > synthetic methodology that can simulate both the IO pattern itself and the
> > think time of the application and, crucially, link the "think time" to when
> > IO is initiated, but it's also been a long time since I looked.
> 
> That's exactly the contribution I would like to provide.  In the past
> 10 years, we have analyzed probably thousands of traces of workloads
> generated exactly by applications starting.
> 
> > About the
> > closest I had in the past, years ago, was generating patterns like you
> > suggest and then timing how long it took an X window to appear once an
> > application started. The effort was abandoned because the time for the
> > window to appear was irrelevant. What mattered was how long it took the
> > application to be ready for use. Evolution was a particular example that
> > eventually caused me to abandon the effort (that, and IO performance was
> > not my primary concern at the time). Evolution displayed a window
> > relatively quickly but then had a tendency to freeze while opening
> > inboxes, and I never found a means of detecting that automatically in a
> > way that would scale.
> > 
> 
> I do remember this concern of yours.  My reply was mainly that,
> unfortunately, you looked at one of the most difficult applications
> (if it is possible at all) to benchmark automatically. Fortunately,
> there are other, equally popular applications that are naturally
> suited to automatic measurement of their start-up time.  The simplest
> and probably most popular example is any terminal: it stops doing I/O
> right after its window is displayed, i.e., right after it is ready for
> user input.  To be more precise, the amount of I/O the terminal still
> does after its window appears is below 1% of the total amount of I/O
> it does from the beginning of its startup.  Another popular and very
> easy application to benchmark is LibreOffice.
> 
> For these applications, we have a detailed database of their I/O:
> size, position and inter-arrival time (thinktime) of every I/O
> request, measured on different storage devices and CPU/memory
> platforms.
> 
> The idea is then to write a set of tests in which (some of) these
> workloads are replayed, together with varying, additional background
> workloads.  The total time needed to serve each workload under test
> will match, to within a very low tolerance, the start-up time of the
> application it mimics under exactly the same conditions.  We will
> record this information in the documentation of the test.
> 
> If you have no further concerns, we will get back in touch when we
> have something ready.
> 

I have no further concerns. What you propose is ambitious but it would
be extremely valuable if it existed.
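
For reference, fio can already express the "think time linked to when IO
is initiated" idea discussed above through its thinktime and
thinktime_blocks options, which stall a job for a given time after it
has issued a given number of blocks. A minimal sketch follows; the
directory, sizes and timings are invented for illustration and are not
taken from the measured traces Paolo describes:

    ; startup-thinktime.fio -- illustrative sketch only
    [global]
    directory=/mnt/test       ; placeholder mount point
    ioengine=sync
    size=64m

    [app-startup]
    rw=randread               ; start-up IO is mostly small random reads
    bs=4k
    thinktime=2000            ; stall 2000 microseconds after issuing...
    thinktime_blocks=8        ; ...each run of 8 blocks, emulating the
                              ; application's think time before its next IO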
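
Similarly, replaying a captured per-request log against a competing
background workload, along the lines Paolo proposes, maps onto fio's
read_iolog option. The log file name and background parameters below are
placeholders; such a log could be captured beforehand with write_iolog
or converted from a blktrace capture:

    ; replay-vs-background.fio -- illustrative sketch only
    [replayed-startup]
    ioengine=sync
    read_iolog=terminal-startup.iolog  ; placeholder per-request log
                                       ; (size, offset, inter-arrival time)

    [background-writes]
    ioengine=libaio
    iodepth=16
    rw=write                           ; competing sequential writer
    bs=64k
    filename=/mnt/test/bgfile          ; placeholder background file
    size=1g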

-- 
Mel Gorman
SUSE Labs


