Sharing a hugepage memory segment between the host and a container

Dear all,

What I want to do is share a hugepage memory segment between the host and a container (I am trying to use the Intel DPDK package inside a container). For normal memory (4 KB pages), this kind of sharing can be achieved with memory-mapped I/O (mmap()) on a disk file exposed to the container (that is, the host and the container open and map the same disk file).
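For reference, here is a minimal sketch of the 4 KB-page approach I am describing (/shared/segment is just a placeholder for a file on a volume that is bind-mounted into the container):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SEG_SIZE 4096

int main(void)
{
    /* /shared/segment is a placeholder path; the container would see
       the same file through a bind mount of this directory. */
    int fd = open("/shared/segment", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("open"); return 1; }

    /* A regular file must be extended before it is mapped, otherwise
       touching the mapping raises SIGBUS. */
    if (ftruncate(fd, SEG_SIZE) != 0) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* MAP_SHARED makes the data visible to every process that maps
       the same file, whether it runs on the host or in the container. */
    strcpy((char *)p, "hello from the host");

    munmap(p, SEG_SIZE);
    close(fd);
    return 0;
}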

 

My questions are:

1) Do I need to mount hugetlbfs in the host and in the container respectively?

2) As I understand it, the file shared by the host and the container must be located under the hugetlbfs mount point. Is this right?

3) As I understand it, the mount point in the host does not have to be the same as the one in the container, as long as the host's mount point is exposed to the container. Is this right?

4) If I want to share a hugepage memory segment between two containers, what is the answer to 1)?

5) According to my internet research, if I open() a file under the hugetlbfs mount point, mmap() will automatically return a pointer to hugepage-backed memory (a sketch of what I imagine this looks like follows this list). Is this right? Apart from this, I cannot see any other difference from memory-mapped I/O with 4 KB pages.
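
To make question 5 concrete, this is what I imagine the hugetlbfs variant looks like (assuming /mnt/huge is the hugetlbfs mount point and 2 MB hugepages; please correct me if I have this wrong):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SIZE (2UL * 1024 * 1024)  /* assuming 2 MB hugepages */

int main(void)
{
    /* /mnt/huge is an assumed hugetlbfs mount point; the same file
       would have to be visible at some mount point inside the
       container as well. */
    int fd = open("/mnt/huge/shared_seg", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("open"); return 1; }

    /* On hugetlbfs no ftruncate() is needed: mapping the file
       allocates hugepages directly, and mmap() returns a pointer to
       hugepage-backed memory without any extra flags. The length
       must be a multiple of the hugepage size. */
    void *p = mmap(NULL, HUGEPAGE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use the shared hugepage segment ... */

    munmap(p, HUGEPAGE_SIZE);
    close(fd);
    return 0;
}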

 

Any answer will be highly appreciated.

Cheng Wang

