>>> > On 08/11/2018 02:06 AM, NeilBrown wrote:
>>> >> It might be expected behaviour with async direct IO.
>>> >> Two threads writing with O_DIRECT io to the same address could result in
>>> >> different data on the two devices.  This doesn't seem to me to be a
>>> >> credible use-case though.  Why would you ever want to do that in
>>> >> practice?
>>> >>
>>> >> NeilBrown
>>> >
>>> > My only thought is while the credible case may be weak, if it is something
>>> > that can be protected against with a few conditionals to prevent the
>>> > different data on the slaves diverging -- then it's worth a couple of
>>> > conditions to prevent the nut that knows just enough about dd from
>>> > confusing things....
>>>
>>> Yes, it can be protected against - the code is already written.
>>> If you have a 2-drive raid1 and want it to be safe against this attack,
>>> simply:
>>>
>>>   mdadm /dev/md127 --grow --level=raid5
>>>
>>> This will add the required synchronization between writes so that
>>> multiple writes to the one block are linearized.  There will be a
>>> performance impact.
>>>
>>> NeilBrown
>>
>> Thanks for your comments, Neil.
>> Converting to raid5 with 2 drives will not only cause a performance drop,
>> it will also disable the redundancy.
>> It's clearly a no-go.
>
> I don't understand why you think it would disable the redundancy, there
> are still two copies of every block.  Both RAID1 and RAID5 can survive a
> single device failure.
>
> I agree about performance and don't expect this would be a useful thing
> to do; it just seemed the simplest way to explain the cost that would be
> involved in resisting this attack.
>
> NeilBrown

Hi Neil,

the performance impact one is facing when running raid5 on top of two
legs - is it only due to the tracking of the in-flight writes, or is
raid5 actually doing some XORing (with zeros?) in that case?

And if the CPU is burned for some other reason apart from the tracking,
do you think it would make sense to expose that
"writes-to-the-same-sector tracking" functionality for the raid1
personality as well?

Thank you,
Danil.
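
P.S. For concreteness, here is a rough sketch of the race being discussed
(untested; whether the two writes actually overlap in flight depends
entirely on timing, /dev/md127 is taken from the thread above, and
buf_a/buf_b are just placeholder file names):

  # Prepare two different 4k blocks of data.
  dd if=/dev/urandom of=buf_a bs=4k count=1
  dd if=/dev/urandom of=buf_b bs=4k count=1

  # Issue two concurrent O_DIRECT writes to the same address of the
  # raid1 array.  raid1 does not serialize overlapping writes, so each
  # leg may end up holding a different block.
  dd if=buf_a of=/dev/md127 bs=4k count=1 oflag=direct &
  dd if=buf_b of=/dev/md127 bs=4k count=1 oflag=direct &
  wait

  # Ask md to compare the legs.  (Wait for the check to finish, e.g.
  # by watching /proc/mdstat, before reading the counter.)
  echo check > /sys/block/md127/md/sync_action
  cat /sys/block/md127/md/mismatch_cnt

The "check" action is md's standard consistency scrub: it reads and
compares the copies without repairing them, so a non-zero mismatch_cnt
afterwards indicates the legs have diverged.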