On Friday April 28, molle.bestefich@xxxxxxxxx wrote:
> NeilBrown wrote:
> > Change ENOTSUPP to EOPNOTSUPP
> > Because that is what you get if a BIO_RW_BARRIER isn't supported!
>
> Dumb question, hope someone can answer it :).
>
> Does this mean that any version of MD up till now won't know that SATA
> disks do not support barriers, and therefore won't flush SATA disks,
> and therefore I need to disable the disks' write cache if I want to be
> 100% sure that raid arrays are not corrupted?
>
> Or am I way off :-).

The effect of this bug is almost unnoticeable.

In almost all cases, md will detect that a drive doesn't support
barriers when writing out the superblock - this is completely separate
code and is correct.  md/raid1 will therefore reject any barrier
requests coming from the filesystem and never pass them down, so it
will not make a wrong decision because of this bug.

The only cases where this bug could cause a problem are:
 1/ when the first write is a barrier write.  It is possible that
    reiserfs does this in some cases.  However, only this write would
    be at risk.
 2/ if a device changes its behaviour from accepting barriers to not
    accepting them (which is very uncommon).

As md will be rejecting barrier requests, the filesystem will know not
to trust them and should use other techniques instead, such as waiting
for dependent requests to complete and calling blkdev_issue_flush where
appropriate.  Whether filesystems actually do this, I am less certain.

NeilBrown
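
For anyone wondering what the distinction looks like in code, below is a
rough sketch of the decision the error code drives.  The helper name and
the barriers_ok flag are made up for illustration; this is not the actual
md/raid1 code, and the end_io prototypes it would be called from differ
between kernel versions.  The point is only that the block layer reports
an unsupported BIO_RW_BARRIER with -EOPNOTSUPP, so a test for -ENOTSUPP
never matches and the failure looks like a real write error instead of
"retry this as an ordinary write".

#include <linux/errno.h>

/*
 * Illustration only -- hypothetical helper, not the real md/raid1 code.
 * Called (hypothetically) from a write-completion path with the error
 * reported for a BIO_RW_BARRIER request.
 */
static int barrier_write_needs_plain_retry(int error, int *barriers_ok)
{
	if (error == -EOPNOTSUPP) {	/* not -ENOTSUPP: that was the bug */
		*barriers_ok = 0;	/* remember: this device has no barriers */
		return 1;		/* resubmit the data without the barrier flag */
	}
	return 0;			/* success, or a genuine I/O error */
}

When md's own superblock write gets this answer it records that the device
does not support barriers, and raid1 then fails barrier requests from the
filesystem with -EOPNOTSUPP, which is the cue for the filesystem to fall
back to waiting on dependent requests and calling blkdev_issue_flush.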