Hello,

I frequently want to perform a fixed amount of random I/O (e.g. 1 GB) over a very large file or device (e.g. 1 TB). I have noticed that when using the "size" argument to control the amount of I/O, it also has the side effect of constraining the I/O to the first "size" bytes of the file.

Is there a way to tell fio to perform X bytes of random I/O on a file of size Y, with the random I/O distributed throughout the full extent of Y, where Y > X?

--
Matthew Hayward
Director, Professional Services
Delphix
M: 206.849.6389
275 Middlefield Road, Suite 50, Menlo Park, CA 94025
http://www.delphix.com
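For readers with the same question: recent fio versions separate the two meanings with the `io_size` option (also known as `io_limit`), which sets the amount of I/O to perform while `size` continues to define the region the offsets are drawn from. A minimal job-file sketch, assuming a fio version that supports `io_size`; the filename and block size are placeholders:

```ini
; Hypothetical example: issue 1 GB of random reads spread across a 1 TB region.
[randread-sample]
filename=/path/to/bigfile   ; placeholder path
size=1T        ; region fio may address (the whole file/device)
io_size=1G     ; total amount of I/O actually performed
rw=randread
bs=4k
```

With this split, the random offsets range over the full 1 TB while only 1 GB of reads are issued in total.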