On 06/03/2010 02:20 PM, Vladislav Bolkhovitin wrote:
>
> There's one interesting problem here, at least theoretically, with
> SCSI or similar transports which allow a command queue depth > 1 and
> are allowed to internally reorder queued requests. I don't know the
> FS/block layers sufficiently well to tell whether sending several
> requests for the same page is really possible or not, but we can see
> a real-life problem which is well explained if it is possible.
>
> The problem would occur if the second (rewrite) request (SCSI
> command) for the same page were queued to the corresponding device
> before the original request finished. Since the device is allowed to
> freely reorder requests, there's a probability that the original
> write request would hit the permanent storage *AFTER* the retry
> request, hence the data changes it is carrying would be lost -
> welcome, data corruption.
>

I might be totally wrong here, but I think NCQ can reorder sectors,
not writes. That is, if a sector is cached in device memory and a
later write arrives to modify the same sector, then the original
should be replaced; two values of the same sector must never be kept
in the device cache at the same time. Failing to do so is a SCSI
device problem. (Two user-space sketches at the end of this mail
illustrate the reorder race and the cache behaviour I'm assuming.)

Please note that the page-to-sector mapping is not necessarily
constant, and the same page might get written to a different sector
next time. But FSs will have to barrier in this case.

> For single parallel SCSI or SAS devices such a race may look
> practically impossible, but for sophisticated clusters, where many
> nodes pretend to be a single SCSI device in a load-balancing
> configuration, it becomes very real.
>
> The real-life problem we can see in an active-active DRBD setup. In
> this configuration 2 nodes act as a single SCST-powered SCSI device
> and both run DRBD to keep their backstorage in sync. The initiator
> uses them as a single multipath device in an active-active
> round-robin load-balancing configuration, i.e. it sends requests to
> both nodes in parallel, then DRBD takes care of replicating the
> requests to the other node.
>
> The problem is that sometimes DRBD complains about concurrent local
> writes, like:
>
> kernel: drbd0: scsi_tgt0[12503] Concurrent local write detected!
> [DISCARD L] new: 144072784s +8192; pending: 144072784s +8192
>
> This message means that DRBD detected that both nodes received
> overlapping writes on the same block(s) and DRBD can't figure out
> which one to store. This is possible only if the initiator sent the
> second write request before the first one completed.
>

That is totally possible in today's code. DRBD should store the
original command_sn of each write and discard the sector with the
lower SN, so that the pair appears as a single device to the
initiator. (A third sketch at the end of this mail shows that
resolution.)

> The topic of the discussion could well explain the cause of that.
> But, unfortunately, the people who reported it forgot to note which
> OS they run on the initiator, i.e. I can't say for sure it's Linux.
>
> Vlad
>

Boaz
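
A minimal user-space sketch of the reorder race described above. This
is a simulation under assumed behaviour (queue depth 2, device free to
drain its queue in any order), not kernel or SCST code; all names are
made up:

#include <stdio.h>

#define LBA 144072784UL    /* the block from the DRBD log line above */

struct cmd {
        unsigned long lba;
        char data;         /* one byte stands in for the page payload */
};

static char medium;        /* "permanent storage" for our single block */

/* The device drains its queue in an order of its own choosing. */
static void drain(struct cmd *queue, int n, const int *order)
{
        int i;

        for (i = 0; i < n; i++) {
                medium = queue[order[i]].data;
                printf("device commits cmd %d (data '%c')\n",
                       order[i], medium);
        }
}

int main(void)
{
        /*
         * Both commands are outstanding at once (queue depth > 1):
         * cmd 0 is the original write, cmd 1 the later rewrite of
         * the same LBA.
         */
        struct cmd queue[2] = {
                { LBA, 'O' },   /* original */
                { LBA, 'N' },   /* newer rewrite */
        };
        int in_order[2]  = { 0, 1 };
        int reordered[2] = { 1, 0 };

        drain(queue, 2, in_order);
        printf("medium: '%c' (correct)\n", medium);

        drain(queue, 2, reordered);
        printf("medium: '%c' (stale data won)\n", medium);
        return 0;
}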
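
And a sketch of the write-cache behaviour I'm assuming for NCQ above:
a cache keyed by LBA, where a later write to an already-cached sector
replaces the old value instead of coexisting with it. Again a made-up
user-space model, not a description of any real device:

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8

struct slot {
        int used;
        unsigned long lba;
        char data[16];
};

static struct slot cache[CACHE_SLOTS];

static void cache_write(unsigned long lba, const char *data)
{
        int i, free_slot = -1;

        for (i = 0; i < CACHE_SLOTS; i++) {
                if (cache[i].used && cache[i].lba == lba) {
                        /* Same sector already cached: replace it. */
                        strncpy(cache[i].data, data,
                                sizeof(cache[i].data) - 1);
                        return;
                }
                if (!cache[i].used && free_slot < 0)
                        free_slot = i;
        }
        if (free_slot >= 0) {
                cache[free_slot].used = 1;
                cache[free_slot].lba = lba;
                strncpy(cache[free_slot].data, data,
                        sizeof(cache[free_slot].data) - 1);
        }
}

int main(void)
{
        int i;

        cache_write(100, "old");
        cache_write(100, "new");   /* replaces, never duplicates */

        for (i = 0; i < CACHE_SLOTS; i++)
                if (cache[i].used)
                        printf("lba %lu -> %s\n",
                               cache[i].lba, cache[i].data);
        return 0;
}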
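
Finally, a sketch of the command_sn-based resolution suggested above
(hypothetical; not DRBD's actual code): remember the SN of the write
that currently owns a block, and when a concurrent overlapping write
arrives, let the higher SN win and discard the lower one:

#include <stdio.h>

struct block_state {
        unsigned int owner_sn;  /* command_sn of last accepted write */
        char data;
};

static void submit_write(struct block_state *b, unsigned int sn,
                         char data)
{
        if (sn < b->owner_sn) {
                printf("sn %u < %u: concurrent write discarded\n",
                       sn, b->owner_sn);
                return;
        }
        b->owner_sn = sn;
        b->data = data;
        printf("sn %u accepted, block = '%c'\n", sn, data);
}

int main(void)
{
        struct block_state b = { 0, 0 };

        /* Replication delivers the newer write first ... */
        submit_write(&b, 2, 'N');
        /* ... then the original arrives late and must lose. */
        submit_write(&b, 1, 'O');
        return 0;
}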