On Fri, Mar 03, 2017 at 03:22:44PM -0700, Jason Gunthorpe wrote:
> On Fri, Mar 03, 2017 at 03:45:14PM -0600, Shiraz Saleem wrote:
>
> > This is not quite how our DB logic works. There are additional HW
> > steps and nuances in the flow. Unfortunately, to explain this, we
> > need to provide details of our internal HW flow for the DB logic.
> > We are unable to do so at this time.
>
> Well, it is very problematic to help you define what a cross-arch
> barrier should do if you can't explain what you need to have happen
> relative to PCI-E.
>

Unfortunately, we can help with this only at the point when this
information is made public. If you must have an explanation for all
barriers defined in utils, an option here is to leave this barrier in
i40iw and migrate it to utils when documentation is available.

> > Mfence guarantees that the load won't be reordered before the
> > store, and thus we are using it.
>
> If that is all then the driver can use LFENCE and the
> udma_from_device_barrier() .. Is that OK?
>

No. The write that marks the WQE valid needs to be globally visible
before the read of the tail. LFENCE does not guarantee this; MFENCE
does.

https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf

LFENCE (Vol. 3A 8-16)
"Serializes all load (read) operations that occurred prior to the
LFENCE instruction in the program instruction stream, but does not
affect store operations"

LFENCE (Vol. 2A 3-529)
"An LFENCE that follows an instruction that stores to memory might
complete before the data being stored have become globally visible.
Instructions following an LFENCE may be fetched from memory before the
LFENCE, but they will not execute until the LFENCE completes"

MFENCE (Vol. 2B 4-22)
"This serializing operation guarantees that every load and store
instruction that precedes the MFENCE instruction in program order
becomes globally visible before any load or store instruction that
follows the MFENCE instruction"
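For illustration only, here is a minimal C11 sketch of the StoreLoad
ordering constraint described above. This is not the i40iw doorbell
code; the structure and field names (sketch_wqe, sq_shadow) are made
up. It shows a store (marking a WQE valid) that must become globally
visible before a later load (reading a device-updated tail), which on
x86 needs MFENCE or an equivalent full barrier, not LFENCE:

/*
 * Hypothetical sketch, not the i40iw implementation.
 *
 * The valid-bit store must be globally visible before the tail load.
 * This is the StoreLoad case: on x86 it requires MFENCE (or a locked
 * instruction); LFENCE only orders loads and does not force the prior
 * store to become globally visible first.
 */
#include <stdatomic.h>
#include <stdint.h>

struct sketch_wqe {
        uint64_t data;
        _Atomic uint32_t valid;         /* polled by the device */
};

static inline uint32_t post_then_read_tail(struct sketch_wqe *wqe,
                                           _Atomic uint32_t *sq_shadow)
{
        /* 1. Mark the WQE valid so the device may consume it. */
        atomic_store_explicit(&wqe->valid, 1, memory_order_relaxed);

        /*
         * 2. Full barrier: with GCC/clang on x86 a seq_cst fence is
         *    emitted as MFENCE or a locked RMW, so the store above is
         *    globally visible before the load below. An LFENCE alone
         *    would not give this guarantee.
         */
        atomic_thread_fence(memory_order_seq_cst);

        /* 3. Read the shadow tail that the device updates. */
        return atomic_load_explicit(sq_shadow, memory_order_relaxed);
}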