Re: How to perform limited IO on large file over entire extent of file?

On Monday, 1 October 2012, Matt Hayward wrote:
> Hello,

Hi Matt,

>    I frequently want to perform a set amount (e.g. 1 GB) of random I/O
> over very large files or devices (e.g. 1 TB).
> 
>    I have noticed that when using the "size" argument to control the
> amount of I/O, it also has the side effect of constraining the I/O to
> the first "size" bytes of the file.
> 
>    Is there a way to tell FIO to perform X bytes of random I/O on a
> file of size Y and have the random I/O distributed throughout the
> extent of Y where Y > X?

How about

       offset=int
              Offset in the file to start I/O. Data before the offset will
              not be touched.

       offset_increment=int
              If this is provided, then the real offset becomes the offset
              + offset_increment * thread_number, where the thread  number
              is  a  counter  that starts at 0 and is incremented for each
              job. This option is useful if there are several  jobs  which
              are  intended  to  operate on a file in parallel in disjoint
              segments, with even spacing between the starting points.
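Combining these, a job file along the following lines (names, device path, and sizes are illustrative, not from the original mail) would split a 1 TB device into four segments and do 256 MB of random I/O starting at each segment boundary. Note that each job's random I/O is still confined to the "size" bytes after its offset; the offsets only spread the starting points across the extent:

```ini
; Sketch only: 4 jobs, each doing 256 MB of random reads,
; with starting offsets spaced 256 GB apart across a 1 TB device.
[global]
filename=/dev/sdX    ; replace with your device or file
rw=randread
bs=4k
direct=1

[spread]
numjobs=4
size=256m
offset_increment=256g
```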

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7