Hi,

Sorry for the late reply.

On Fri, 10 May 2024 10:20:56 +0200
Vlastimil Babka <vbabka@xxxxxxx> wrote:

> On 5/10/24 9:59 AM, wuqiang.matt wrote:
> > On 2024/5/7 21:55, Vlastimil Babka wrote:
> >>
> >>> +	} while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
> >>> +
> >>> +	/* now the tail position is reserved for the given obj */
> >>> +	WRITE_ONCE(slot->entries[tail & slot->mask], obj);
> >>> +	/* update sequence to make this obj available for pop() */
> >>> +	smp_store_release(&slot->last, tail + 1);
> >>> +
> >>> +	return 0;
> >>> +}
> >>>
> >>>  /**
> >>>   * objpool_push() - reclaim the object and return back to objpool
> >>> @@ -134,7 +219,19 @@ void *objpool_pop(struct objpool_head *pool);
> >>>   * return: 0 or error code (it fails only when user tries to push
> >>>   * the same object multiple times or wrong "objects" into objpool)
> >>>   */
> >>> -int objpool_push(void *obj, struct objpool_head *pool);
> >>> +static inline int objpool_push(void *obj, struct objpool_head *pool)
> >>> +{
> >>> +	unsigned long flags;
> >>> +	int rc;
> >>> +
> >>> +	/* disable local irq to avoid preemption & interruption */
> >>> +	raw_local_irq_save(flags);
> >>> +	rc = __objpool_try_add_slot(obj, pool, raw_smp_processor_id());
> >>
> >> And IIUC, we could in theory objpool_pop() on one cpu, then later another
> >> cpu might do objpool_push() and cause the latter cpu's pool to go over
> >> capacity? Is there some implicit requirements of objpool users to take care
> >> of having matched cpu for pop and push? Are the current objpool users
> >> obeying this requirement? (I can see the selftests do, not sure about the
> >> actual users).
> >> Or am I missing something? Thanks.
> >
> > The objects are all pre-allocated along with creation of the new objpool
> > and the total number of objects never exceeds the capacity on local node.
>
> Aha, I see, the capacity of entries is enough to hold objects from all nodes
> in the most unfortunate case they all end up freed from a single cpu.
>
> > So objpool_push() would always find an available slot from the ring-array
> > for the given object to insert back. objpool_pop() would try looping all
> > the percpu slots until an object is found or whole objpool is empty.
>
> So it's correct, but seems rather wasteful to have the whole capacity for
> entries replicated on every cpu? It does make objpool_push() simple and
> fast, but as you say, objpool_pop() still has to search potentially all
> non-local percpu slots, with disabled irqs, which is far from ideal.

For the kretprobe/fprobe use case, it is important to push (return) objects
fast. We can reserve enough objects when registering, but the push operation
can happen on any CPU.
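To make the cost of that scan concrete, here is a minimal sketch of the
pop path as described above. It is simplified and illustrative only, not
the actual lib/objpool.c code; the struct layout, field names and helper
names below are assumptions.

/* simplified sketch of one per-cpu ring (field names are illustrative) */
struct objpool_slot {
	uint32_t head;		/* consumer sequence, advanced by pop() */
	uint32_t last;		/* producer sequence, published by push() */
	uint32_t mask;		/* capacity - 1, capacity is a power of two */
	void *entries[];	/* ring large enough to hold ALL pool objects */
};

/* simplified sketch of the pool head (field names are illustrative) */
struct objpool_head {
	int nr_cpus;				/* number of per-cpu slots */
	struct objpool_slot **cpu_slots;	/* one ring per cpu */
};

/* try to take one object from a single per-cpu ring */
static inline void *pop_from_slot(struct objpool_slot *slot)
{
	uint32_t head = smp_load_acquire(&slot->head);

	while (head != READ_ONCE(slot->last)) {
		/* read the entry before trying to claim it */
		void *obj = READ_ONCE(slot->entries[head & slot->mask]);

		/*
		 * Claim the entry by advancing head. On failure 'head' is
		 * reloaded and we retry (another cpu won the race).
		 */
		if (try_cmpxchg_release(&slot->head, &head, head + 1))
			return obj;
	}
	return NULL;	/* this slot is empty */
}

/* pop: start at the local slot, then fall back to the remote ones */
static inline void *objpool_pop_sketch(struct objpool_head *pool)
{
	unsigned long flags;
	void *obj = NULL;
	int i, cpu;

	/* irqs stay disabled for the whole scan, which is the cost noted above */
	raw_local_irq_save(flags);
	cpu = raw_smp_processor_id();
	for (i = 0; i < pool->nr_cpus && !obj; i++)
		obj = pop_from_slot(pool->cpu_slots[(cpu + i) % pool->nr_cpus]);
	raw_local_irq_restore(flags);

	return obj;
}

(The real code has additional checks because 'head' and 'last' are updated
independently by pop() and push(); this sketch only shows the shape of the
per-cpu scan and why every remote slot may need to be visited.)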
> And the "abort if the slot was already full" comment for
> objpool_try_add_slot() seems still misleading? Maybe that was your initial
> idea but changed later?

Ah, it should not happen...

> > Currently kretprobe is the only actual usecase of objpool.

Note that fprobe is also using this objpool, but I'm currently working on
integrating fprobe with the function-graph tracer[1], which will make fprobe
stop using objpool. I'm also planning to replace kretprobe with the new
fprobe eventually. So if SLUB will use objpool for frontend caching, that
sounds good to me. (It might also speed up object allocation/freeing.)

> > I'm testing an updated objpool in our HIDS project for critical pathes,
> > which is widely deployed on servers inside my company. The new version
> > eliminates the raw_local_irq_save and raw_local_irq_restore pair of
> > objpool_push and gains up to 5% of performance boost.
>
> Mind Ccing me and linux-mm once you are posting that?

Can you add me too? Thank you,

>
> Thanks,
> Vlastimil
>

-- 
Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>