On Fri, Oct 4, 2024 at 3:39 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, 4 Oct 2024 12:23:30 +0000 高翔 <gaoxiang17@xxxxxxxxxx> wrote:
>
> > > > +static unsigned long cma_get_used_pages(struct cma *cma)
> > > > +{
> > > > +	unsigned long used;
> > > > +
> > > > +	spin_lock_irq(&cma->lock);
> > > > +	used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma));
> > > > +	spin_unlock_irq(&cma->lock);
> > >
> > > This adds overhead to each allocation, even if debug outputs are
> > > ignored I assume?
> > >
> > > I wonder if we'd want to print these details only when our allocation
> > > failed?
> > >
> > > Alternatively, we could actually track how many pages are allocated in
> > > the cma, so we don't have to traverse the complete bitmap on every
> > > allocation.
> >
> > Yep, that's what I did as part of
> > https://lore.kernel.org/all/20240724124845.614c03ad39f8af3729cebee6@xxxxxxxxxxxxxxxxxxxx/T/
> >
> > That patch didn't make it in (yet). I'm happy for it to be combined
> > with this one if that's easier.
>
> That patch has been forgotten about. As I asked in July,
> "I suggest a resend, and add some Cc:s for likely reviewers."

Indeed - I certainly wasn't suggesting that anyone else forgot about it, it's
up to me to follow up here, and I haven't yet.

- Frank
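
[Editorial sketch for readers following the thread: below is a minimal,
self-contained userspace model of the alternative raised in the quoted
review above, i.e. keeping a running used-page counter that is updated when
pages are marked used or freed, instead of recomputing the count with a
full bitmap_weight() scan under the lock on every allocation. The struct,
field, and function names (demo_cma, used_count, demo_alloc, demo_free) are
invented for illustration and are not taken from the kernel patch or the
lore link above.]

/*
 * Userspace model of incremental used-page accounting for a bitmap
 * allocator. Names are illustrative only; this is not kernel code.
 */
#include <stdio.h>
#include <string.h>

#define DEMO_NR_PAGES 64

struct demo_cma {
	unsigned char bitmap[DEMO_NR_PAGES];	/* 1 = page in use */
	unsigned long used_count;		/* tracked incrementally */
};

/* O(n) recount, analogous to scanning the whole bitmap with bitmap_weight(). */
static unsigned long demo_recount(const struct demo_cma *cma)
{
	unsigned long used = 0;

	for (int i = 0; i < DEMO_NR_PAGES; i++)
		used += cma->bitmap[i];
	return used;
}

/* Mark the first free page used; return its index, or -1 if full. */
static int demo_alloc(struct demo_cma *cma)
{
	for (int i = 0; i < DEMO_NR_PAGES; i++) {
		if (!cma->bitmap[i]) {
			cma->bitmap[i] = 1;
			cma->used_count++;	/* O(1) bookkeeping */
			return i;
		}
	}
	return -1;
}

static void demo_free(struct demo_cma *cma, int idx)
{
	if (idx >= 0 && idx < DEMO_NR_PAGES && cma->bitmap[idx]) {
		cma->bitmap[idx] = 0;
		cma->used_count--;
	}
}

int main(void)
{
	struct demo_cma cma;

	memset(&cma, 0, sizeof(cma));

	for (int i = 0; i < 10; i++)
		demo_alloc(&cma);
	demo_free(&cma, 3);

	/* The tracked counter matches a full recount, without the scan. */
	printf("tracked used: %lu, recounted used: %lu\n",
	       cma.used_count, demo_recount(&cma));
	return 0;
}

[In the kernel, such a counter would presumably be updated under cma->lock
at the same points where the bitmap is modified, so reading it stays
consistent with the bitmap while avoiding the per-allocation scan.]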