On 2017-10-23 20:51, Mike Kravetz wrote:
> [...]
> Well, at least this has a built-in fall-back mechanism. When using
> hugetlb(fs) pages, you would need to handle the case where mremap fails
> due to lack of configured huge pages.
You're missing the point. I never asked for a fall-back mechanism, even
though it certainly has its use cases; it just isn't mine. In such a
situation it wouldn't be hard to detect whether the user requested huge
pages and then fall back to a smaller size. The only difference is that
I'd have to implement it myself.
But all of that does not change the fact that it's not transparent.
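
Just to make that concrete, here is a minimal sketch of such a fall
back, assuming mmap(2) with MAP_HUGETLB is the allocation path;
alloc_region and the exact flags are mine for illustration, not taken
from the allocator under discussion:

#include <stddef.h>
#include <sys/mman.h>

/*
 * Illustrative only: try to back the region with huge pages, and fall
 * back to base pages when the configured pool is exhausted. len must
 * be a multiple of the huge page size for the first call to succeed.
 */
static void *alloc_region(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p != MAP_FAILED)
		return p;

	/* No huge pages available: retry with 4 KiB base pages. */
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

Doable, sure, but it's exactly the kind of explicit plumbing I'd have
to write and maintain myself.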
> I assume your allocator will be for somewhat general application usage.
Define "general purpose" first. The allocator itself isn't transparent
to typical malloc/realloc/free-based approaches, and it isn't so very
deliberately.
> Yet, for the most reliability the user/admin will need to know at boot
> time how many huge pages will be needed and set that up.
That's what I'm trying to argue. How much memory did a typical 386 ship
with back then? 16 MiB? With a 4 KiB page size, that means 4096 pages
to map the entirety of RAM.
My current testing box has 8 GiB. If I were to map the entirety of my
RAM with 2 MiB pages, that would still require only 4096 pages. Did
anyone set up page pools with Linux in the 90s? Did anyone complain
that 4096 bytes was too large a page size to use memory effectively?
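
Spelled out, the two page counts in question are:

	16 MiB / 4 KiB = 2^24 / 2^12 = 4096 pages
	 8 GiB / 2 MiB = 2^33 / 2^21 = 4096 pages

The same page count, roughly a quarter century apart.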