On Thu, Dec 3, 2020 at 11:59 AM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Dec 03, 2020 at 11:40:15AM -0500, Pavel Tatashin wrote:
>
> > > Looking at this code some more... How is it even correct?
> > >
> > > 1633         if (!isolate_lru_page(head)) {
> > > 1634                 list_add_tail(&head->lru, &cma_page_list);
> > >
> > > Here we are only running under the read side of the mmap sem, so
> > > multiple GUPs can be calling that sequence in parallel. I don't see
> > > any obvious exclusion that will prevent corruption of head->lru. The
> > > first GUP thread to do isolate_lru_page() will ClearPageLRU() and the
> > > second GUP thread will be a NOP for isolate_lru_page().
> > >
> > > They will both race list_add_tail and other list ops. That is not OK.
> >
> > Good question. I studied it, and I do not see how this is OK. Worse,
> > this race is also triggerable from a plain syscall, not only via a
> > driver: two move_pages() calls running simultaneously. Perhaps in
> > other places as well?
> >
> > move_pages()
> >   kernel_move_pages()
> >     mmget()
> >     do_pages_move()
> >       add_page_for_migration()
> >         mmap_read_lock(mm);
> >         list_add_tail(&head->lru, pagelist); <- not protected
>
> When this was CMA only it might have been rarer to trigger, but this
> move stuff sounds like it makes it much more broadly reachable, e.g.
> on typical servers with RDMA exposed, etc.
>
> Seems like it needs fixing as part of this too :\

Just to clarify: the stack that I showed above is outside of gup; it is
the same issue that you pointed out, just happening elsewhere. I suspect
there might be more. All of them should be addressed together.

Pasha

> Page at a time inside the gup loop could address both concerns, unsure
> about batching performance here though..
>
> Jason
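For the page-at-a-time idea, something like the below might work
(untested sketch only, written against the mm/gup.c context quoted
above; migrate_one_page() is a made-up name, and the migrate_pages()
arguments just mirror what check_and_migrate_cma_pages() passes today).
The point is that each caller isolates onto a list that lives on its
own stack, so concurrent pinners never share a list_head:

static int migrate_one_page(struct page *head)
{
	struct migration_target_control mtc = {
		.nid = NUMA_NO_NODE,
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
	};
	LIST_HEAD(movable);	/* private to this caller, no shared list ops */
	int ret;

	if (isolate_lru_page(head))
		return -EBUSY;	/* already isolated by someone else */

	list_add_tail(&head->lru, &movable);
	mod_node_page_state(page_pgdat(head),
			    NR_ISOLATED_ANON + page_is_file_lru(head),
			    thp_nr_pages(head));

	ret = migrate_pages(&movable, alloc_migration_target, NULL,
			    (unsigned long)&mtc, MIGRATE_SYNC,
			    MR_CONTIG_RANGE);
	if (ret) {
		/* some pages were not migrated; put them back on the LRU */
		putback_movable_pages(&movable);
	}
	return ret;
}

Batching would indeed go away with this shape, so THP/contiguous ranges
would migrate one page at a time; that is the performance question you
raised.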
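For reference, the syscall-only scenario I had in mind for the
do_pages_move() stack above would be roughly this (untested sketch;
assumes libnuma headers, at least two NUMA nodes, link with -lnuma,
and it may need to be run in a loop to actually hit the window):

/* Two threads race move_pages() on the same pages of one process. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <numaif.h>

#define NPAGES 512

static void *pages[NPAGES];

static void *mover(void *arg)
{
	int nodes[NPAGES], status[NPAGES];
	int target = (int)(long)arg;
	int i;

	for (i = 0; i < NPAGES; i++)
		nodes[i] = target;

	/* pid 0 == self; both threads operate on the same page array */
	move_pages(0, NPAGES, pages, nodes, status, MPOL_MF_MOVE);
	return NULL;
}

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	char *buf = malloc(NPAGES * pagesz);
	pthread_t t1, t2;
	int i;

	memset(buf, 1, NPAGES * pagesz);	/* fault the pages in */
	for (i = 0; i < NPAGES; i++)
		pages[i] = buf + i * pagesz;

	/* migrate to node 0 and node 1 concurrently */
	pthread_create(&t1, NULL, mover, (void *)0L);
	pthread_create(&t2, NULL, mover, (void *)1L);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}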