On Fri, 12 Sep 2014 15:39:15 +0600 Roman Mamedov <rm@xxxxxxxxxxx> wrote:
> On Thu, 11 Sep 2014 18:46:04 -0600
> Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>
> > If it doesn't, a check (echo check > md/sync_action) will report
> > mismatches in md/mismatch_cnt; and a repair will probably corrupt
> > the volume.
>
> At least with RAID1/10, why would it?
>
> > and you can't do repair type scrubs.
>
> If the FS issues TRIM on a certain region, by definition it no longer
> cares about what's stored there (as it is no longer in use by the FS).
> So even if a repair ends up copying some data from one SSD to another,
> in effect changing the contents of that region, this should not affect
> anything whatsoever from the FS standpoint.
>
> Technically perhaps that still counts as a "corruption", but not of
> anything in the filesystem metadata or user data, just of unused
> regions. So not as scary as it first sounds.
>
> The only case where you'd run into problems with this is if some apps
> expect to read back zeroes on TRIM'ed regions, e.g. Qemu in the
> "detect-zeroes=unmap" mode. But using that would be dangerous even on
> a single SSD with non-deterministic TRIM, so mdraid changes nothing
> here.

For any block device in Linux you can read the 'queue/discard_zeroes_data'
attribute to see if it is safe to expect zeros from a discarded region.
md sets that correctly. For raid1/raid10 it is set if all member devices
have it set. For raid5/6 it is never set. This is because we can only
discard full stripes, so a non-full-stripe discard will not zero all of
the data.

NeilBrown
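
For anyone who wants to try the sysfs interfaces discussed above, a
minimal sketch follows. md0 and sda are placeholder device names, and
the paths assume the usual sysfs layout for an md array:

    # Kick off a check scrub on the array and read back the result
    echo check > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/sync_action    # reports "check" while running
    cat /sys/block/md0/md/mismatch_cnt   # non-zero means inconsistent
                                         # blocks were found

    # Ask whether a discarded region is guaranteed to read back zeroes
    cat /sys/block/sda/queue/discard_zeroes_data   # a member SSD
    cat /sys/block/md0/queue/discard_zeroes_data   # 1 on raid1/raid10
                                                   # only if every member
                                                   # reports 1; always 0
                                                   # on raid5/6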