Re: [PATCH 08/16] iommu/fsl: use page allocation function provided by iommu-pages.h

On 28/11/2023 11:50 pm, Jason Gunthorpe wrote:
> On Tue, Nov 28, 2023 at 06:00:13PM -0500, Pasha Tatashin wrote:
>> On Tue, Nov 28, 2023 at 5:53 PM Robin Murphy <robin.murphy@xxxxxxx> wrote:
>>
>>> On 2023-11-28 8:49 pm, Pasha Tatashin wrote:
>>>> Convert iommu/fsl_pamu.c to use the new page allocation functions
>>>> provided in iommu-pages.h.
>>>
>>> Again, this is not a pagetable. This thing doesn't even *have*
>>> pagetables.
>>>
>>> Similar to patches #1 and #2 where you're lumping in configuration
>>> tables which belong to the IOMMU driver itself, as opposed to
>>> pagetables which effectively belong to an IOMMU domain's user. But
>>> then there are still drivers where you're *not* accounting similar
>>> configuration structures, so I really struggle to see how this
>>> metric is useful when it's so completely inconsistent in what it's
>>> counting :/
>>
>> The whole IOMMU subsystem allocates a significant amount of locked
>> kernel memory that we want to at least observe. The new field in
>> vmstat does just that: it reports ALL buddy allocator memory that
>> the IOMMU subsystem allocates. However, for accounting purposes, I
>> agree we need to do better and separate at least IOMMU pagetables
>> from the rest.
>>
>> We can separate the metric into two:
>>   iommu pagetable only
>>   iommu everything
>>
>> or into three:
>>   iommu pagetable only
>>   iommu dma
>>   iommu everything
>>
>> What do you think?
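
For reference, the kind of wrapper being discussed might look roughly
like the sketch below; the function name and the NR_IOMMU_PAGES counter
are assumptions for illustration, not quotes from the series:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmstat.h>

/*
 * Hypothetical accounting wrapper: every buddy allocation made on
 * behalf of the IOMMU subsystem gets counted under one vmstat item,
 * which could later be split into pagetable/dma/everything buckets
 * as proposed above.
 */
static inline void *iommu_alloc_pages(gfp_t gfp, int order)
{
        struct page *page = alloc_pages(gfp | __GFP_ZERO, order);

        if (!page)
                return NULL;
        /* NR_IOMMU_PAGES is assumed here as the new node_stat_item. */
        mod_node_page_state(page_pgdat(page), NR_IOMMU_PAGES, 1L << order);
        return page_address(page);
}

A matching free-side helper would decrement the same counter, so the
vmstat field reflects the subsystem's current footprint rather than a
running total.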

> I think I said this at LPC - if you want to have fine-grained
> accounting of memory by owner, you need to go talk to the cgroup
> people and come up with something generic. Adding ever finer
> open-coded category breakdowns just for iommu doesn't make a lot of
> sense.
>
> You can make some argument that the pagetable memory should be
> counted because kvm counts its shadow memory, but I wouldn't go into
> further detail than that with hand-coded counters.

Right, pagetable memory is interesting since it's something that any
random kernel user can indirectly allocate via iommu_domain_alloc() and
iommu_map(), and some of those users may even be doing so on behalf of
userspace. I have no objection to accounting and potentially applying
limits to *that*.
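
A minimal sketch of that path, assuming a device pointer obtained
elsewhere (the calls are the stock IOMMU API; the function itself is
purely illustrative):

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * Illustrative consumer, not from any real driver: allocating the
 * domain allocates the top-level pagetable, and iommu_map() may then
 * allocate intermediate pagetable pages as a side effect.
 */
static int map_one_page(struct device *dev, unsigned long iova,
                        phys_addr_t paddr)
{
        struct iommu_domain *domain;
        int ret;

        domain = iommu_domain_alloc(dev->bus);
        if (!domain)
                return -ENOMEM;

        ret = iommu_map(domain, iova, paddr, PAGE_SIZE,
                        IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
        if (ret)
                iommu_domain_free(domain);
        return ret;
}

None of that memory is visible to the caller, which is what makes it a
reasonable candidate for accounting (and for GFP_KERNEL_ACCOUNT-style
memcg charging) in a way a driver's own internal allocations are not.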

Beyond that, though, there is nothing special about "the IOMMU
subsystem". The amount of memory an IOMMU driver needs to allocate for
itself in order to function is not of interest beyond curiosity; it
just is what it is. Limiting it would only break the IOMMU, and if a
user thinks it's "too much", the only actionable thing that might help
is to physically remove devices from the system. Similar for DMA
buffers: it might be intriguing to account those, but it's not really
an actionable metric - in the overwhelming majority of cases you can't
simply tell a driver to allocate less than it needs. And that is of
course assuming we were to account *all* DMA buffers, since whether
they happen to have an IOMMU translation or not is irrelevant (we'd
have already accounted the pagetables as pagetables if so).

I bet "the networking subsystem" also consumes significant memory on the same kind of big systems where IOMMU pagetables would be of any concern. I believe some of the some of the "serious" NICs can easily run up hundreds of megabytes if not gigabytes worth of queues, SKB pools, etc. - would you propose accounting those too?

Thanks,
Robin.



