On Fri, Oct 23, 2020 at 7:00 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 10/20/20 8:47 PM, Axel Rasmussen wrote:
> > The goal of these tracepoints is to be able to debug lock contention
> > issues. This lock is acquired on most (all?) mmap / munmap / page fault
> > operations, so a multi-threaded process which does a lot of these can
> > experience significant contention.
> >
> > We trace just before we start acquisition, when the acquisition returns
> > (whether it succeeded or not), and when the lock is released (or
> > downgraded). The events are broken out by lock type (read / write).
> >
> > The events are also broken out by memcg path. For container-based
> > workloads, users often think of several processes in a memcg as a single
> > logical "task", so collecting statistics at this level is useful.
> >
> > The end goal is to get latency information. This isn't directly included
> > in the trace events. Instead, users are expected to compute the time
> > between "start locking" and "acquire returned", using e.g. synthetic
> > events or BPF. The benefit we get from this is simpler code.
> >
> > Because we use tracepoint_enabled() to decide whether or not to trace,
> > this patch has effectively no overhead unless tracepoints are enabled at
> > runtime. If tracepoints are enabled, there is a performance impact, but
> > how much depends on exactly what e.g. the BPF program does.
> >
> > Reviewed-by: Michel Lespinasse <walken@xxxxxxxxxx>
> > Acked-by: Yafang Shao <laoar.shao@xxxxxxxxx>
> > Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
> > Signed-off-by: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
>
> All seem fine to me, except I started to wonder..
>
> > +
> > +#ifdef CONFIG_MEMCG
> > +
> > +DEFINE_PER_CPU(char[MAX_FILTER_STR_VAL], trace_memcg_path);
> > +
> > +/*
> > + * Write the given mm_struct's memcg path to a percpu buffer, and return a
> > + * pointer to it. If the path cannot be determined, the buffer will contain the
> > + * empty string.
> > + *
> > + * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
> > + * disabled by the caller before calling us, and re-enabled only after the
> > + * caller is done with the pointer.
>
> Is this enough? What if we fill the buffer and then an interrupt comes and the
> handler calls here again? We overwrite the buffer and potentially report a wrong
> cgroup after the execution resumes?
> If nothing worse can happen (are interrupts disabled while the ftrace code is
> copying from the buffer?), then it's probably ok?

I think you're right: get_cpu()/put_cpu() only disables preemption, not
interrupts. I'm fairly sure this code can be called in interrupt
context, so I don't think we can use a lock to prevent the overwrite
either. The failure mode there would be: we take the lock, an interrupt
arrives, and the handler tries to take the same lock on the same CPU;
the handler can't sleep, and the original holder can't run again until
the handler returns, so we deadlock. I also don't think we can kmalloc
a buffer here (instead of using a percpu one), since I would guess the
allocation path may end up taking mmap_lock itself?

Is adding local_irq_save()/local_irq_restore() in addition to
get_cpu()/put_cpu() sufficient?
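Concretely, here's an untested sketch of what I have in mind. Since
disabling hard interrupts on this CPU also rules out preemption and
migration, get_cpu()/put_cpu() would become redundant:

/*
 * Untested sketch: with hard interrupts off, an interrupt handler on
 * this CPU can't re-enter get_mm_memcg_path() and overwrite
 * trace_memcg_path while the tracepoint is still reading from it. The
 * trace call itself also runs with IRQs off, so the buffer is consumed
 * before interrupts are re-enabled.
 */
#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                          \
        do {                                                          \
                unsigned long flags;                                  \
                                                                      \
                local_irq_save(flags);                                \
                trace_mmap_lock_##type(mm, get_mm_memcg_path(mm),     \
                                       ##__VA_ARGS__);                \
                local_irq_restore(flags);                             \
        } while (0)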
>
> > + */
> > +static const char *get_mm_memcg_path(struct mm_struct *mm)
> > +{
> > +        struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
> > +
> > +        if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
> > +                char *buf = this_cpu_ptr(trace_memcg_path);
> > +
> > +                cgroup_path(memcg->css.cgroup, buf, MAX_FILTER_STR_VAL);
> > +                return buf;
> > +        }
> > +        return "";
> > +}
> > +
> > +#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                          \
> > +        do {                                                          \
> > +                get_cpu();                                            \
> > +                trace_mmap_lock_##type(mm, get_mm_memcg_path(mm),     \
> > +                                       ##__VA_ARGS__);                \
> > +                put_cpu();                                            \
> > +        } while (0)
> > +
> > +#else /* !CONFIG_MEMCG */
> > +
> > +#define TRACE_MMAP_LOCK_EVENT(type, mm, ...) \
> > +        trace_mmap_lock_##type(mm, "", ##__VA_ARGS__)
> > +
> > +#endif /* CONFIG_MEMCG */
> > +
> > +/*
> > + * Trace calls must be in a separate file, as otherwise there's a circular
> > + * dependency between linux/mmap_lock.h and trace/events/mmap_lock.h.
> > + */
> > +
> > +void __mmap_lock_do_trace_start_locking(struct mm_struct *mm, bool write)
> > +{
> > +        TRACE_MMAP_LOCK_EVENT(start_locking, mm, write);
> > +}
> > +EXPORT_SYMBOL(__mmap_lock_do_trace_start_locking);
> > +
> > +void __mmap_lock_do_trace_acquire_returned(struct mm_struct *mm, bool write,
> > +                                           bool success)
> > +{
> > +        TRACE_MMAP_LOCK_EVENT(acquire_returned, mm, write, success);
> > +}
> > +EXPORT_SYMBOL(__mmap_lock_do_trace_acquire_returned);
> > +
> > +void __mmap_lock_do_trace_released(struct mm_struct *mm, bool write)
> > +{
> > +        TRACE_MMAP_LOCK_EVENT(released, mm, write);
> > +}
> > +EXPORT_SYMBOL(__mmap_lock_do_trace_released);
> >
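As an aside, for anyone reading just this hunk: the
__mmap_lock_do_trace_*() functions above are meant to be called from
inline wrappers in linux/mmap_lock.h, gated behind tracepoint_enabled().
Those wrappers aren't part of this hunk, so the following is only a
rough sketch of the pattern, but it's what the "no overhead unless
tracepoints are enabled" claim in the commit message rests on:

/* Rough sketch of the caller side (linux/mmap_lock.h, not quoted above). */
DECLARE_TRACEPOINT(mmap_lock_start_locking);

static inline void __mmap_lock_trace_start_locking(struct mm_struct *mm,
                                                   bool write)
{
        /*
         * tracepoint_enabled() is a static-key test: when the tracepoint
         * is disabled, the branch is patched out, so neither the
         * out-of-line call nor the memcg path lookup ever happens.
         */
        if (tracepoint_enabled(mmap_lock_start_locking))
                __mmap_lock_do_trace_start_locking(mm, write);
}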