On Mon, Apr 24, 2017 at 10:04 PM, Jason Gunthorpe
<jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Apr 24, 2017 at 09:29:11PM +0530, Devesh Sharma wrote:
>> On Sat, Apr 22, 2017 at 12:52 AM, Jason Gunthorpe
>> <jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
>> > On Fri, Apr 21, 2017 at 02:57:08PM -0400, Devesh Sharma wrote:
>> >
>> >> +static void bnxt_re_ring_db(struct bnxt_re_dpi *dpi,
>> >> +                            struct bnxt_re_db_hdr *hdr)
>> >> +{
>> >> +	__le64 *dbval;
>> >> +
>> >> +	pthread_spin_lock(&dpi->db_lock);
>> >> +	dbval = (__le64 *)&hdr->indx;
>> >> +	udma_to_device_barrier();
>> >> +	iowrite64(dpi->dbpage, dbval);
>> >> +	pthread_spin_unlock(&dpi->db_lock);
>> >> +}
>> >
>> > What are you expecting this db_lock to do?
>> >
>> > Is 'dbpage' UC or WC memory?
>>
>> The driver re-maps it as "UC-", where strong ordering is guaranteed.
>> How would the system handle 64-bit writes? Will it issue two
>> consecutive 32-bit writes on the bus?
>
> 64-bit systems support 64-bit writes as a single TLP on PCIe, so the
> lock is not required.
>
> If the write is broken up into two 32-bit transfers (e.g. on a 32-bit
> processor), then the device usually requires them to be presented in
> address-increasing order; however, the compiler does not guarantee
> that with a simple 64-bit store as above.
>
> Going forward we will merge the new shared mmio accessors:
>
> https://github.com/jgunthorpe/rdma-plumbing/blob/mmio/util/mmio.h
>
> which provide sane common implementations of these operations.
>
> For this reason, I don't think there is much point in spending time
> open-coding a solution in your driver. Now that I know the lock is
> not doing anything, it can just be dropped when the mmio accessors
> are merged.

Okay, thanks for the explanation. I will take care of Leon's comments
and send out a v5, leaving the lock as it is for now.

> Jason
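
For illustration, here is a minimal sketch of the kind of portable
64-bit MMIO store described above. It is not the actual util/mmio.h
implementation from rdma-plumbing; my_mmio_write64() is a hypothetical
name, and the sketch assumes a little-endian host writing a
little-endian device register, with any required write barrier (such
as udma_to_device_barrier()) already issued by the caller.

#include <stdint.h>

/* Hypothetical portable 64-bit doorbell write -- a sketch of the idea
 * discussed above, not the rdma-plumbing util/mmio.h code. */
static inline void my_mmio_write64(volatile void *addr, uint64_t val)
{
#if UINTPTR_MAX == UINT64_MAX
	/* 64-bit CPU: a single 64-bit store goes out as one PCIe TLP,
	 * so no lock is needed around it. */
	*(volatile uint64_t *)addr = val;
#else
	/* 32-bit CPU: present the two halves in address-increasing
	 * order, which a plain 64-bit store does not guarantee.
	 * Note this is still not atomic: concurrent callers can
	 * interleave halves, which is what db_lock was guarding
	 * against. */
	volatile uint32_t *p = (volatile uint32_t *)addr;

	p[0] = (uint32_t)val;          /* low half, lower address */
	p[1] = (uint32_t)(val >> 32);  /* high half, higher address */
#endif
}

With an accessor like this, bnxt_re_ring_db() could drop both db_lock
and the open-coded iowrite64() on 64-bit systems; a 32-bit build would
still need some serialization if multiple threads can ring the same
doorbell.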