Re: [PATCH] mm: cma: support sysfs

On Thu, Feb 04, 2021 at 09:49:54PM -0800, John Hubbard wrote:
> On 2/4/21 9:17 PM, Minchan Kim wrote:
> ...
> > > > > Presumably, having the source code, you can easily deduce that a Bluetooth
> > > > > allocation failure goes directly to a CMA allocation failure, right?
> > > 
> > > Still wondering about this...
> > 
> > It would work if we had the full source code, and if the call stacks
> > were not complicated, for every use case. That said, having a good
> > central place where the statistics show up automatically is also
> > beneficial, since it saves each call site from adding similar
> > statistics of its own.
> > 
> > Why do we have so many items in slab sysfs, instead of having each
> > call site invent its own?
> > 
> 
> I'm not sure I understand that question fully, but I don't think we need to
> invent anything unique here. So far we've discussed debugfs, sysfs, and /proc,
> none of which are new mechanisms.

I thought you were asking why we couldn't add those stats in each
call-site driver's sysfs instead of in a central place. Please clarify if
I misunderstood your question.

> 
> ...
> 
> > > It's actually easier to monitor one or two simpler items than it is to monitor
> > > a larger number of complicated items. And I get the impression that this is
> > > sort of a top-level, production software indicator.
> > 
> > Let me clarify one more time.
> > 
> > What I'd like to have ultimately is per-CMA statistics rather than the
> > global vmstat counters, for the use case at hand. The global vmstat
> > counters could help me decide whether to dig deeper, but in the end I
> > need per-CMA statistics. And I'd like to keep them in sysfs, not
> > debugfs, since the interface should be stable enough to serve as
> > telemetry.
> > 
> > Which points in this view do you disagree with?
> 
> 
> No huge disagreements, I just want to get us down to the true essential elements
> of what is required--and find a good home for the data. Initial debugging always
> has excesses, and those should not end up in the more carefully vetted production
> code.
> 
> If I were doing this, I'd probably consider HugeTLB pages as an example to follow,
> because they have a lot in common with CMA: it's another memory allocation pool, and
> people also want to monitor it.
> 
> HugeTLB pages and THP pages are monitored in /proc:
> 	/proc/meminfo and /proc/vmstat:
> 
> # cat meminfo |grep -i huge
> AnonHugePages:     88064 kB
> ShmemHugePages:        0 kB
> FileHugePages:         0 kB
> HugePages_Total:     500
> HugePages_Free:      500
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> Hugetlb:         1024000 kB
> 
> # cat vmstat | grep -i huge
> nr_shmem_hugepages 0
> nr_file_hugepages 0
> nr_anon_transparent_hugepages 43
> numa_huge_pte_updates 0
> 
> ...aha, so is CMA:
> 
> # cat vmstat | grep -i cma
> nr_free_cma 261718
> 
> # cat meminfo | grep -i cma
> CmaTotal:        1048576 kB
> CmaFree:         1046872 kB
> 
> OK, given that CMA is already in those two locations, maybe we should put
> this information in one or both of those, yes?

Are you suggesting something like this, for example?


cat vmstat | grep -i cma
cma_a_success	125
cma_a_fail	25
cma_b_success	130
cma_b_fail	156
..
cma_f_fail	xxx
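
For illustration, here is a minimal sketch of how such per-CMA counters
could be kept: one success/fail pair embedded in struct cma, bumped at the
end of cma_alloc(). The counter names below are hypothetical, not taken
from the posted patch.

#include <linux/atomic.h>

/*
 * Sketch only: per-CMA allocation counters, one pair per CMA area, so
 * that each area (cma_a, cma_b, ... above) gets its own statistics.
 */
struct cma {
	unsigned long	base_pfn;
	unsigned long	count;
	/* ... existing fields (bitmap, lock, name, ...) ... */
	atomic64_t	nr_pages_succeeded;	/* hypothetical */
	atomic64_t	nr_pages_failed;	/* hypothetical */
};

struct page *cma_alloc(struct cma *cma, size_t count,
		       unsigned int align, bool no_warn)
{
	struct page *page = NULL;

	/* ... existing bitmap search and page allocation ... */

	if (page)
		atomic64_add(count, &cma->nr_pages_succeeded);
	else
		atomic64_add(count, &cma->nr_pages_failed);

	return page;
}

Each pair could then be exported either as vmstat items as above, or as
per-area sysfs files (e.g. something like
/sys/kernel/mm/cma/<cma-name>/alloc_pages_success; the path is
hypothetical), which keeps the interface stable enough to serve as
telemetry.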
