On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
[...]
> +static inline void add_ctx(struct codetag_ctx *ctx,
> +			   struct codetag_with_ctx *ctc)
> +{
> +	kref_init(&ctx->refcount);
> +	spin_lock(&ctc->ctx_lock);
> +	ctx->flags = CTC_FLAG_CTX_PTR;
> +	ctx->ctc = ctc;
> +	list_add_tail(&ctx->node, &ctc->ctx_head);
> +	spin_unlock(&ctc->ctx_lock);

AFAIU every single tracked allocation will get its own codetag_ctx.
There is no aggregation per allocation site or anything else. This
looks like a scalability and a memory overhead red flag to me.

> +}
> +
> +static inline void rem_ctx(struct codetag_ctx *ctx,
> +			   void (*free_ctx)(struct kref *refcount))
> +{
> +	struct codetag_with_ctx *ctc = ctx->ctc;
> +
> +	spin_lock(&ctc->ctx_lock);

This could deadlock when the allocator is called from IRQ context.

> +	/* ctx might have been removed while we were using it */
> +	if (!list_empty(&ctx->node))
> +		list_del_init(&ctx->node);
> +	spin_unlock(&ctc->ctx_lock);
> +	kref_put(&ctx->refcount, free_ctx);

-- 
Michal Hocko
SUSE Labs
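
[Editor's note: the deadlock Michal points out is the classic one — a plain
spin_lock() taken in process context can be interrupted by an IRQ handler
that tries to take the same lock. The usual kernel remedy, if these paths
can indeed run in IRQ context, is the irqsave variants. A hedged sketch of
what rem_ctx might look like with that change (this is not from the patch
and the surrounding types are taken on faith from the quoted hunk):]

	static inline void rem_ctx(struct codetag_ctx *ctx,
				   void (*free_ctx)(struct kref *refcount))
	{
		struct codetag_with_ctx *ctc = ctx->ctc;
		unsigned long flags;

		/* disable local IRQs while holding the lock so an
		 * interrupt handler cannot re-enter and deadlock */
		spin_lock_irqsave(&ctc->ctx_lock, flags);
		/* ctx might have been removed while we were using it */
		if (!list_empty(&ctx->node))
			list_del_init(&ctx->node);
		spin_unlock_irqrestore(&ctc->ctx_lock, flags);
		kref_put(&ctx->refcount, free_ctx);
	}

[add_ctx would need the same treatment, since both sides of the lock must
be IRQ-safe for the irqsave variant to help.]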