On 09/19/2017 03:53 AM, Mikulas Patocka wrote:
> On Fri, 15 Sep 2017, Joe Lawrence wrote:
>
> [ ... snip ... ]
>
>> Hi Mikulas,
>>
>> I'm not strong when it comes to memory barriers, but one of the
>> side-effects of using the mutex is that pipe_set_size() and
>> alloc_pipe_info() should have a consistent view of pipe_max_size.
>>
>> If I remove the mutex (and assume that I implement a custom
>> do_proc_dointvec "conv" callback), is it safe for these routines to
>> directly use pipe_max_size as they had done before?
>>
>> If not, is it safe to alias through a temporary stack variable (ie,
>> could the compiler re-read pipe_max_size multiple times in the function)?
>>
>> Would READ_ONCE() help in any way?
>
> Theoretically re-reading the variable is possible and you should use
> ACCESS_ONCE or READ_ONCE+WRITE_ONCE on that variable.
>
> In practice, ACCESS_ONCE/READ_ONCE/WRITE_ONCE is missing at a lot of
> kernel variables that could be modified asynchronously and no one is
> complaining about it and no one is making any systematic effort to fix it.
>
> That re-reading happens (I have some test code that makes the gcc
> optimizer re-read a variable), but it happens very rarely.

This would be interesting to look at if you are willing to share (can
send offlist).

> Another theoretical problem is that when reading or writing a variable
> without ACCESS_ONCE, the compiler could read and write the variable using
> multiple smaller memory accesses. But in practice, it happens only on some
> non-common architectures.

Smaller accesses than word size?

>> The mutex covered up some confusion on my part here.
>>
>> OTOH, since pipe_max_size is read-only for pipe_set_size() and
>> alloc_pipe_info() and only updated occasionally by pipe_proc_fn(), would
>> rw_semaphore or RCU be a fit here?
>
> RW semaphore causes cache-line ping-pong between CPUs, it slows down the
> kernel just like a normal spinlock or mutex.

Ah, right.

> RCU would be useless here (i.e. you don't want to allocate memory and
> atomically assign it with rcu_assign_pointer).

And good point here.

Thanks for the explanations; they confirm and expand on what I was
already thinking in this space.

--- Joe
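
P.S. For the archives, here is a minimal sketch of the READ_ONCE()/
WRITE_ONCE() pattern being discussed above. The helper names and the
simplified logic are hypothetical, not the actual fs/pipe.c routines;
the point is just that the reader snapshots pipe_max_size once into a
local, so the compiler cannot legally re-read (or tear) the load, and
the sysctl writer pairs that with a single untorn store:

	unsigned int pipe_max_size = 1048576;	/* updated by sysctl handler */

	/* Reader side, e.g. alloc_pipe_info()/pipe_set_size() style check */
	static bool pipe_size_permitted(unsigned int nr_bytes)
	{
		/* one load; all checks below see the same snapshot */
		unsigned int max = READ_ONCE(pipe_max_size);

		return nr_bytes <= max;
	}

	/* Writer side, e.g. a custom do_proc_dointvec "conv" callback */
	static void pipe_update_max_size(unsigned int new_max)
	{
		WRITE_ONCE(pipe_max_size, new_max);	/* single store */
	}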