Re: could fio be used to wipe disks?

On 16 March 2017 at 12:30, Antoine Beaupré <anarcat@xxxxxxxxxxxxxxx> wrote:
> On 2017-03-16 07:42:48, Sitsofe Wheeler wrote:
>
> Yeah, I know about nwipe and everything. As I mentioned originally, I am
> not sure this is in the scope of my project...

OK.

>> Further, it may not be as easy as you think to get at the stale internal
>> mappings the "disk" might have, which is why you use SECURE ERASE on
>> SSDs. fio does none of this.
>
> ... I also mentioned I was aware of SSD specifics, thanks. :)

OK - my bad :-)

>> Since you mentioned stress you may want to look at using a deeper
>> iodepth with the libaio engine (assuming you're on Linux, other
>> platforms have asynchronous engines too) and only doing the sync at
>> the end.
>
> Even after reading this:
>
> http://fio.readthedocs.io/en/latest/fio_doc.html#i-o-depth
>
> I don't quite understand what iodepth does or how to use it. Could you
> expand on this?

Sure. If you're using an fio ioengine that can submit I/O
*asynchronously* (i.e. it doesn't have to wait for one I/O to come back
as completed before submitting another) you have a potentially
cheaper (because you don't need to use as many CPUs) way of submitting
LOTS of I/O. The iodepth parameter controls the maximum amount of
in-flight I/O to submit before waiting for some of it to complete. To
give you a rough figure, modern SATA disks can accept up to 32
outstanding commands at once (although the real depth achievable may be
at least one less than that due to how things work), so if you only
ever submit four simultaneous commands the disk might not find that too
stressful (but this is highly job dependent).
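Purely as an untested sketch (/dev/sdX below is a placeholder for
whichever disk you mean to wipe, and the command will destroy its
contents), a sequential-write job using libaio with a deeper queue
might look something like:

  # minimal sketch - replace /dev/sdX with the actual target disk
  fio --name=wipe --filename=/dev/sdX --direct=1 --rw=write --bs=1M \
      --ioengine=libaio --iodepth=32 --end_fsync=1

The end_fsync=1 bit is what I meant by only doing the sync at the end,
and direct=1 matters because buffered I/O tends to stop libaio from
actually being asynchronous.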

A different way of putting it is: if you're on Linux, take a look at
the output shown by "iostat -x 1". One of the columns will be avgqu-sz,
and the deeper this gets, the more simultaneous I/O is being submitted
to the disk. If you vary the "--numjobs=" of your original example you
will hopefully see this change. What I'm suggesting is that a different
I/O engine may let you achieve the same effect while using less CPU.
Generally a higher depth is desirable, but there are a lot of things
that influence the actual queue depth achieved. Also bear in mind that
with certain job types you might stress your kernel/CPU more before you
"stress out" your disk.
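
For example (again just a sketch, with /dev/sdX as the placeholder
target), you could limit the output to the disk you care about while
the job runs in another terminal:

  # extended stats for just the target disk, refreshing every second
  # (newer sysstat versions label the column aqu-sz instead of avgqu-sz)
  iostat -x /dev/sdX 1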

-- 
Sitsofe | http://sucs.org/~sits/



