Hello everyone,

I was going through the documentation to chase down a specific use case for a performance tool. Historically, with a disk that has spindles, performance drops off as you get further out on the disk. The disk/zone profile provided with the fio source uses a zone size of 256M and a zoneskip of 2G. If I read the man page correctly, that could be taken to mean we aren't actually testing on 2G boundaries; rather, each zone starts 256M + 2G = 2.25G past the previous one, so the zones land at 2.25G, 4.5G, and so on. Any help clarifying this would be great.

However you interpret that, if you try to compare the results and graph them against an increasing iodepth or block size, you tend to find that the length of the run changes, which makes an apples-to-apples comparison rather difficult. To get around that, the runtime option looks like what you would want, together with time_based.

What I am trying to figure out is whether there is a way to define the following behavior using the currently available options (fio 2.0): starting at boundary W, spend X seconds exercising a zone of size Y, then skip to the next boundary and do it all over again. Ideally that gives you graphs you can map against each other, with a relatively good chance of the total run lengths (time) being the same.

Thanks,
Roger
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
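
[Editor's sketch] One way to approximate the "spend X seconds in a Y-sized zone at boundary W, then jump to the next boundary" behavior with options that existed in fio 2.0 is to define one time-limited job per boundary, chained with stonewall. The device path, boundary spacing, block size, and iodepth below are illustrative assumptions, not values from the original post:

```
; Hedged sketch: emulate per-zone, fixed-duration testing with fio 2.0 options.
; Each job is confined to a 256M zone (size=) starting at its boundary (offset=)
; and runs for a fixed 30 seconds (time_based + runtime), so total wall time is
; the same regardless of iodepth or block size. stonewall serializes the jobs.
[global]
filename=/dev/sdX     ; assumption: target device, replace as appropriate
direct=1
rw=read
bs=64k                ; illustrative block size
iodepth=8             ; illustrative queue depth
ioengine=libaio
time_based
runtime=30            ; X: seconds to spend in each zone
size=256m             ; Y: extent of the zone to exercise

[zone0]
offset=0              ; W: first boundary

[zone1]
stonewall             ; wait for zone0 to finish before starting
offset=2g             ; assumption: 2G boundary spacing

[zone2]
stonewall
offset=4g
```

With this layout each graph point covers the same wall-clock time per zone, at the cost of listing the boundaries explicitly rather than having zonesize/zoneskip walk them automatically.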