On 02-May-19 21:02, Jason Gunthorpe wrote:
> On Wed, May 01, 2019 at 01:48:22PM +0300, Gal Pressman wrote:
>> +static int mmap_entry_insert(struct efa_dev *dev,
>> +			     struct efa_ucontext *ucontext,
>> +			     struct efa_mmap_entry *entry,
>> +			     u8 mmap_flag)
>> +{
>> +	u32 mmap_page;
>> +	int err;
>> +
>> +	err = xa_alloc(&ucontext->mmap_xa, &mmap_page, entry, xa_limit_32b,
>> +		       GFP_KERNEL);
>> +	if (err) {
>> +		ibdev_dbg(&dev->ibdev, "mmap xarray full %d\n", err);
>> +		return err;
>> +	}
>> +
>> +	entry->key = (u64)mmap_page << PAGE_SHIFT;
>> +	set_mmap_flag(&entry->key, mmap_flag);
>
> This doesn't look like it is in the right order.. There is no locking
> here so the xa_alloc should only be called on a fully initialized entry
>
> And because there is no locking you also can't really have a
> mmap_obj_entries_remove..
>
> I think this needs a mutex lock also held across mmap_get to be correct..

What needs to be atomic here is the "mmap_page" allocation, which xa_alloc
guarantees: a unique page is allocated for each insertion, and the entry's
key is then generated from it. The key depends on the allocated mmap page,
hence the order; a lock would be redundant, as the sequence does not
require one.

There are no gets concurrent with other operations, since the key is only
accessed once the insertion is done and the response has been returned to
userspace (at which point the entry is fully initialized).

There are no concurrent removes either: they happen only in error flows,
after all the relevant insertions are done (and the entries won't be
accessed), or when the ucontext is being deallocated, which cannot happen
simultaneously with anything else.