On 1/22/22 2:41 AM, Matthew Wilcox wrote:
> On Sat, Jan 22, 2022 at 01:39:46AM +0000, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
>>> Our use case is that we have some very large files stored on persistent
>>> memory which we want to mmap in thousands of processes. So the first
>> The memory overhead of PTEs would be significantly saved if we use
>> hugetlbfs in this case, but why not?
> Because we want the files to be persistent across reboots.
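For scale, here is a rough back-of-the-envelope sketch of the last-level
page-table cost of such a mapping (the 1 TiB file size and 1000 processes
below are my own illustrative assumptions, not figures from this thread):

/*
 * Rough page-table overhead for mapping one large file into many
 * processes on x86-64: 8-byte PTEs, 4 KiB base pages, 2 MiB huge pages.
 * The 1 TiB file size and 1000 processes are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long file_size = 1ULL << 40;	/* 1 TiB         */
	unsigned long long nproc     = 1000;		/* mapping tasks */
	unsigned long long pte_size  = 8;		/* bytes per PTE */

	/* Last-level tables only; upper levels add a little more. */
	unsigned long long per_proc_4k = file_size / (4ULL << 10) * pte_size;
	unsigned long long per_proc_2m = file_size / (2ULL << 20) * pte_size;

	printf("4 KiB pages: %llu MiB per process, %llu GiB for %llu processes\n",
	       per_proc_4k >> 20, (per_proc_4k * nproc) >> 30, nproc);
	printf("2 MiB pages: %llu MiB per process, %llu MiB for %llu processes\n",
	       per_proc_2m >> 20, (per_proc_2m * nproc) >> 20, nproc);
	return 0;
}

With 4 KiB pages that is roughly 2 GiB of PTEs per process, i.e. about
2 TiB of page tables across 1000 processes, while 2 MiB mappings need
only a few MiB per process. That is the footprint which shared page
tables (or huge pages) would avoid.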
100% agree. There is another use case: geo-redundancy.
My view is publicly documented at
https://github.com/schoebel/mars/tree/master/docu ; click on
architecture-guide-geo-redundancy.pdf
In some scenarios, migration of block devices from hardware architecture
A to hardware architecture B, or their (temporary) co-existence between
both, might become a future requirement for me.
The current implementation does not yet use hugetlbfs, nor any of its
proposed low-overhead, more fine-grained, or less
hardware-architecture-specific (future) alternatives.
For me, all of these are future options, in particular when they are (1)
abstractable to reduce architectural dependencies and hopefully (2)
usable from both kernelspace and userspace.
It would be great if msharefs were not only low-footprint, but also
usable from kernelspace.
Reducing (or getting rid of) preallocation strategies would also be a
valuable feature for me.
Of course, I cannot decide now what I will prefer for any future
requirements. But some kind of mutual awareness and future collaboration
would be great.