Re: Formats for log files

Hi Bill,

Looking at the code, I believe you're correct: the bandwidth logging for mixed workloads isn't right. While read and write logs are kept separate, only one direction's log gets flushed on each interval. Instead, fio should flush both directions.

I'm attaching a patch. Jens: could you take a look at this? I've tested both mixed and single-direction workloads, and it seems to work fine.

Best regards,
Josh

Attachment: 0001-Fix-bandwidth-logging-for-mixed-read-write-workloads.patch
Description: Binary data


On Feb 16, 2012, at 11:51 AM, Bill Hooper (whooper) wrote:

> Thanks for the reply. I ended up doing something similar to your script to consolidate the jobs. If a job was read-only or write-only, I found that I could add the jobs together based on the timestamps. The resulting data gives me a good representation of the totals across the intervals.
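> 
> Roughly what I did, as a sketch (the file names are made up; one pre-split log per job, summing the bandwidth samples that share a timestamp):
> 
>    totals = Hash.new(0)
> 
>    ["job1_bw.log", "job2_bw.log"].each do |path|
>      File.open(path) do |infile|
>        infile.each_line do |line|
>          time, rate = line.chomp.split(',')
>          # Sum the bandwidth samples that land on the same timestamp.
>          totals[time.to_i] += rate.to_i
>        end
>      end
>    end
> 
>    totals.keys.sort.each { |t| puts "#{t},#{totals[t]}" }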
> 
> Unfortunately this all falls apart with mixed workloads. Have you done any analysis work with the log files produced by mixed workloads?
> 
> Bill
> ________________________________________
> From: Josh Carter [public@xxxxxxxxxxxxxx]
> Sent: Thursday, February 16, 2012 10:41 AM
> To: Bill Hooper (whooper)
> Subject: Re: Formats for log files
> 
> Hi Bill,
> 
> The times across jobs are accurate -- it's all coming from the same clock. Say you've got 4 jobs: you'll indeed see the time go from 500-ish to the end, then hop back to 500-ish for the next job's log. The sample points won't be *exactly* the same, however. Job 1 might take a sample at 501ms; job 2 might not sample until 507ms. Fio samples when IOs are completed, so tests with longer-running IOs (like with large block sizes) will have more jitter.
> 
> All jobs are using the same clock, however, so it's valid to plot jobs against each other, or do some analysis with binning, etc. You may need to post-process the log file, e.g. adding a column for the job number. Example (using Ruby):
> 
>    File.open("multi_job_test_bw-split.log", "w+") do |outfile|
>      File.open("multi_job_test_bw.log") do |infile|
>        job = 1
>        last_time = nil
> 
>        infile.each_line do |line|
>          time, rate, ddir, bs = line.chomp.split(',')
>          time = time.to_i
> 
>          # When time hops backwards, that means we're
>          # on the next job.
>          job += 1 if (last_time && time < last_time)
> 
>          outfile.puts "#{time},#{rate},#{ddir},#{bs},#{job}"
>          last_time = time
>        end
>      end
>    end
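> 
> For totals across jobs rather than per-job curves, you could follow that up with binning. A rough sketch (the 10ms bucket width is just an example, and it assumes each job logs roughly one sample per bucket):
> 
>    bins = Hash.new(0)
> 
>    File.open("multi_job_test_bw-split.log") do |infile|
>      infile.each_line do |line|
>        time, rate = line.chomp.split(',')
>        # Round each sample down to its 10ms bucket and sum the rates.
>        bins[(time.to_i / 10) * 10] += rate.to_i
>      end
>    end
> 
>    bins.keys.sort.each { |t| puts "#{t},#{bins[t]}" }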
> 
> I haven't seen empty block sizes in the log -- sounds like a bug.
> 
> Best regards,
> Josh
> 
> On Feb 16, 2012, at 10:33 AM, Bill Hooper (whooper) wrote:
> 
>> Thanks for the information. I suspected that was the format. I notice that I don't always have the block size in the log. I am using version 2.0 and will try a more recent version.
>> 
>> It appears that when I set the numjobs parameter, the log files contain the data from each job concatenated. I also notice that the times for each job are not synchronized between jobs. The question is: how do I merge the data from each job into a consolidated view?
>> 
>> Bill Hooper
>> Micron
> 

