On 06.05.14 02:06, Benjamin Herrenschmidt wrote:
> On Mon, 2014-05-05 at 17:16 +0200, Alexander Graf wrote:
>> Isn't this a greater problem? We should start swapping before we hit
>> the point where non-movable kernel allocations fail, no?
> Possibly, but the fact remains that this can be avoided by making sure
> that if we create a CMA reserve for KVM, then it uses it rather than
> using the rest of main memory for hash tables.
So why were we preferring non-CMA memory before? Considering that Aneesh
introduced that logic in fa61a4e3, I suppose this was just a mistake?
>> The fact that KVM uses a good number of normal kernel pages is maybe
>> suboptimal, but shouldn't be a critical problem.
> The point is that we explicitly reserve those pages in CMA for use
> by KVM for that specific purpose, but the current code tries first
> to get them out of the normal pool.
> This is not optimal behaviour and is what Aneesh's patches are
> trying to fix.
I agree, and I agree that it's worth making better use of our
resources. But we still shouldn't crash.
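
Just so we're talking about the same thing, the ordering in question is
roughly the sketch below: allocate the HPT from the CMA reserve we set
aside for KVM first, and only fall back to normal kernel pages if that
fails. This is written from memory, so the helper and field names
(kvm_alloc_hpt(), hpt_cma_alloc, PPC_MIN_HPT_ORDER) are illustrative
rather than the exact code in Aneesh's patch:

static unsigned long alloc_hpt_prefer_cma(struct kvm *kvm, u32 order)
{
	struct page *page;
	unsigned long hpt = 0;

	/* First try the CMA region explicitly reserved for KVM. */
	page = kvm_alloc_hpt(1ul << (order - PAGE_SHIFT));
	if (page) {
		kvm->arch.hpt_cma_alloc = 1;
		return (unsigned long)pfn_to_kaddr(page_to_pfn(page));
	}

	/*
	 * Only then fall back to the normal page allocator, shrinking
	 * the order until something fits.
	 */
	while (!hpt && order > PPC_MIN_HPT_ORDER) {
		hpt = __get_free_pages(GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN,
				       order - PAGE_SHIFT);
		if (!hpt)
			--order;
	}
	return hpt;
}
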
However, reading through this thread I think I've slowly grasped what
the problem is: the hugetlbfs size calculation.
I guess something in your stack overreserves huge pages because it
doesn't account for the fact that some part of system memory is already
reserved for CMA.
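
To put some made-up numbers on it (assuming something like the default
5% CMA reserve ratio, if I remember right; these are toy figures, not
from your setup):

#include <stdio.h>

int main(void)
{
	/* Toy numbers: 128 GB box, 5% KVM CMA reserve, 120 GB of
	 * hugepages reserved for guests. */
	long total_mb = 128 * 1024;
	long cma_mb   = total_mb * 5 / 100;
	long guest_mb = 120 * 1024;

	/* What the stack thinks is left if it sizes the hugepage pool
	 * against total RAM... */
	long naive_mb  = total_mb - guest_mb;
	/* ...and what is actually left for unmovable kernel
	 * allocations once the CMA reserve is subtracted. */
	long actual_mb = total_mb - cma_mb - guest_mb;

	printf("naive: %ld MB left, actual: %ld MB left\n",
	       naive_mb, actual_mb);
	return 0;
}
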
So the underlying problem is something completely orthogonal. The patch
body as-is is fine, but the patch description should simply say that we
should prefer the CMA region because it's already reserved for us for
this purpose, and that we make better use of our available resources that way.
All the bits about pinning, NUMA, libvirt and whatnot don't really
matter; they're just the details that led Aneesh to find this non-optimal
allocation.
Alex