----- On Jul 21, 2020, at 6:04 AM, Nicholas Piggin npiggin@xxxxxxxxx wrote:

> Excerpts from Mathieu Desnoyers's message of July 21, 2020 2:46 am:

[...]

> Yeah you're probably right in this case I think. Quite likely most kernel
> tasks that asynchronously write to user memory would at least have some
> kind of producer-consumer barriers.
>
> But is that restriction on all async modifications documented and enforced
> anywhere?
>
>>> How about other memory accesses via kthread_use_mm? Presumably there is
>>> still an ordering requirement there for membarrier,
>>
>> Please provide an example case with memory accesses via kthread_use_mm where
>> ordering matters to support your concern.
>
> I think the concern Andy raised with io_uring was less a specific
> problem he saw and more a general concern that we have these memory
> accesses which are not synchronized with membarrier.
>
>>> so I really think
>>> it's a fragile interface with no real way for the user to know how
>>> kernel threads may use its mm for any particular reason, so membarrier
>>> should synchronize all possible kernel users as well.
>>
>> I strongly doubt so, but perhaps something should be clarified in the
>> documentation if you have that feeling.
>
> I'd rather go the other way and say if you have reasoning or numbers for
> why PF_KTHREAD is an important optimisation above rq->curr == rq->idle
> then we could think about keeping this subtlety with appropriate
> documentation added, otherwise we can just kill it and remove all doubt.
>
> That being said, the x86 sync core gap that I imagined could be fixed
> by changing to the rq->curr == rq->idle test does not actually exist,
> because the global membarrier does not have a sync core option. So fixing
> the exit_lazy_tlb points, as this series does, *should* fix that. So
> PF_KTHREAD may be less problematic than I thought from an implementation
> point of view; it is only the semantics that are in question.

Today, the membarrier global expedited command explicitly skips kernel
threads, whereas membarrier private expedited considers kernel threads
running with the same mm as targets for the IPI. So we already implement
semantics which differ between the private and global expedited
membarriers (see the short sketch at the end of this mail).

This can be explained in part by the fact that kthread_use_mm was
introduced after 4.16, the release in which the most recent membarrier
commands were introduced. It seems that the effect on membarrier was not
considered when kthread_use_mm was introduced.

The membarrier(2) documentation states that IPIs are only sent to threads
belonging to the same process as the calling thread. If my understanding
of the notion of process is correct, this should rule out sending the IPI
to kernel threads, given that they are not "part" of the same process,
only borrowing the mm. But I agree that the distinction is moot and should
be clarified.

Without a clear use-case to justify adding a constraint on membarrier, I
am tempted to simply clarify the documentation of the current membarrier
commands, stating explicitly that they are not guaranteed to affect kernel
threads. Then, if we have a compelling use-case for a behavior which does
cover kthreads, it could be added consistently across membarrier commands
with a flag (or with new commands).

Does this approach make sense?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
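
For concreteness, here is a minimal userspace sketch of the two commands
whose semantics differ as described above. It is an illustration only:
the syscall(2) wrapper, the registration sequence, and the error handling
are assumptions of the sketch, not something mandated by this thread, and
a kernel >= 4.16 built with CONFIG_MEMBARRIER=y is assumed.

/*
 * Illustrative sketch: issue both membarrier expedited commands
 * discussed above from a user process.
 */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

static int membarrier(int cmd, unsigned int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/*
	 * Private expedited must be registered before first use;
	 * registering for global expedited marks this process as a
	 * target of other processes' global expedited calls.
	 */
	if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0) ||
	    membarrier(MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED, 0)) {
		perror("membarrier register");
		exit(EXIT_FAILURE);
	}

	/*
	 * Private expedited: IPIs every CPU running a task that shares
	 * our mm, which today includes kernel threads that borrowed
	 * the mm via kthread_use_mm().
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0))
		perror("membarrier private expedited");

	/*
	 * Global expedited: explicitly skips kernel threads
	 * (PF_KTHREAD), hence the semantic difference noted above.
	 */
	if (membarrier(MEMBARRIER_CMD_GLOBAL_EXPEDITED, 0))
		perror("membarrier global expedited");

	return 0;
}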