Re: could fio be used to wipe disks?

Hi,

1. It will try to use the whole area of the disk, but if fio's
blocksize doesn't evenly divide the disk's size it may miss the very
end of the disk. Also, if an error is encountered fio may stop partway
through and fail to overwrite the rest of the data. If you're not
paranoid about disk wiping then it is unlikely that fio will be any
faster than doing something like
dd if=/dev/zero of=/dev/mydisk bs=1M oflag=direct
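
If you want to check that the blocksize you picked covers the whole
device, you can compare it against the device size first (a rough
sketch; /dev/mydisk is just a placeholder):

    blockdev --getsize64 /dev/mydisk   # prints the device size in bytes
    # if that size is not a multiple of 1M, finish with a bs=512 pass
    # (or whatever the logical block size is) to cover the tail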

Bear in mind that if you're worried about wiping disks properly you
will have to write particular patterns over the disk. Further, it may
be harder than you think to reach stale data held in internal mappings
the "disk" might have (e.g. remapped or overprovisioned blocks), which
is why SECURE ERASE is used on SSDs. fio does none of this.
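
For reference, an ATA secure erase looks roughly like the following (a
sketch only: the device name and password are placeholders, the drive
must not be in the "frozen" state, and you should check the output of
hdparm -I first):

    hdparm --user-master u --security-set-pass pass /dev/sdX
    hdparm --user-master u --security-erase pass /dev/sdX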

Since you mentioned stress, you may want to look at using a deeper
iodepth with the libaio engine (assuming you're on Linux; other
platforms have asynchronous engines too) and only doing the sync at
the end.
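
Something along these lines might do it (untested; /dev/mydisk is a
placeholder):

    fio --name=wipe --filename=/dev/mydisk --readwrite=write --bs=1M \
        --ioengine=libaio --iodepth=32 --direct=1 --end_fsync=1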

2. What you are proposing is very dangerous. Assuming the filesystem
is unmounted and you wanted to do writes, you would need to read the
data and then write that same data back, but fio has no facility for
such a thing at present. If the filesystem is mounted then I think
what you're asking for is impossible without introducing the
possibility of filesystem corruption.
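
If a read-only load is enough for your purposes, a job like this one
does not write anything (again just a sketch, with placeholder values):

    fio --name=readcheck --filename=/dev/sdX --readwrite=randread \
        --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
        --runtime=300 --time_based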

On 15 March 2017 at 21:42, Antoine Beaupre <anarcat@xxxxxxxxxxxxxxx> wrote:
> Hi,
>
> I'm writing a stress-testing tool and I'm looking at using fio to
> stress-test disks. The point is not exactly to benchmark the disks, but
> put sustained load on the disks to make sure they are generally in
> working order.
>
> Right now, I came up with something like this:
>
>       fio --name=stressant --readwrite=randrw  --filename=/dev/sdX \
>           --size=100% --numjob=4 --sync=1 --direct=1 --group_reporting
>
> My question is:
>
>  1. will this reliably wipe the whole drive? i know that some data can
>     remain due to magnetic properties of the drive or nasty SSD tricks,
>     but assume we don't do crazy forensics
>
>  2. if not, is there a way to directly test write I/O directly through
>     the device (to ignore filesystem-related issues) non-destructively?
>
> Thanks!
>
> A.
>
> PS: for those curious, my prototype is available here:
>
> https://gitlab.com/anarcat/stressant/blob/master/stressant.py
>
> Nothing serious so far...
>
> --
> Advertising is the invisible dictatorship of our society.
>                         - Jacques Ellul



-- 
Sitsofe | http://sucs.org/~sits/



