On 2024/2/26 16:20, David Hildenbrand wrote:
On 26.02.24 08:52, Huang, Ying wrote:
"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:
On 2024/2/26 14:04, Huang, Ying wrote:
"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:
On 2024/2/7 10:21, Huang, Ying wrote:
Peng Zhang <zhangpeng362@xxxxxxxxxx> writes:
From: ZhangPeng <zhangpeng362@xxxxxxxxxx>
A major fault can occur when using mlockall(MCL_CURRENT | MCL_FUTURE) in an
application, leading to an unexpected performance issue[1]. This is caused
by the PTE being temporarily cleared during a read+clear/modify/write update
of the PTE, e.g., in do_numa_page()/change_pte_range().
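For illustration, a minimal sketch of that window (simplified from the
change_pte_range() pattern; the variable names are just placeholders):

    /* Under the PT lock, e.g. in change_pte_range(): */
    oldpte = ptep_modify_prot_start(vma, addr, pte);   /* PTE is cleared here */
    /*
     * A concurrent fault that reads this PTE without the PT lock sees
     * pte_none() during this window and falls into filemap_fault().
     */
    ptent = pte_modify(oldpte, newprot);
    ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);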
For the data segment of a user-mode program, the global variable area is a
private mapping. After the page cache is loaded, a private anonymous page is
generated once COW is triggered. mlockall() can lock the COW pages (anonymous
pages), but the original file pages cannot be locked and may be reclaimed. If
the global variable (the private anon page) is accessed while vmf->pte is
temporarily zeroed by the NUMA fault path, a file page fault is triggered. By
then the original private file page may already have been reclaimed. If the
page cache is not available at that point, a major fault is triggered and the
file is read, causing additional overhead.
Fix this by rechecking the PTE, without acquiring the PTL, in filemap_fault()
before triggering a major fault.
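A minimal sketch of that recheck (the exact placement in filemap_fault() and
the return value are assumptions, not the final hunk):

    /* In filemap_fault(), before falling back to major-fault I/O: */
    pte_t *ptep = pte_offset_map(vmf->pmd, vmf->address);

    if (ptep) {
            /*
             * Re-read the PTE without taking the PTL. If the racing
             * read+clear/modify/write update has already restored a
             * present PTE, retry the fault instead of reading from disk.
             */
            if (!pte_none(ptep_get_lockless(ptep))) {
                    pte_unmap(ptep);
                    return VM_FAULT_NOPAGE;
            }
            pte_unmap(ptep);
    }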
File anonymous page read and write page fault performance was tested in ext4
and ramdisk using will-it-scale[2] on an x86 physical machine. The data is
the average change compared with the mainline after the patch is applied.
The test results are within the range of fluctuation, and there is no obvious
difference. The test results are as follows:
You still claim that there's no difference in the test results. If so, why
implement the patch? IMHO, you need to prove that your patch can improve
performance in some cases.
I'm sorry, maybe I didn't express myself clearly. The purpose of this patch
is to fix the issue that a major fault may still be triggered with
mlockall(), thereby improving performance a little. This patch is more of a
bugfix than a performance improvement.
This issue affects our traffic analysis service. The inbound traffic is
heavy. If a major fault occurs, I/O scheduling is triggered and the original
I/O is suspended. Generally, the I/O scheduling delay is 0.7 ms; if other
applications are operating on the disks, the wait can exceed 10 ms. Because
the inbound traffic is heavy and the NIC buffer is small, this causes packet
loss, and the traffic analysis service can't tolerate packet loss.

To prevent packet loss, we use mlockall() to prevent I/O. It is unreasonable
that major faults are still triggered after mlockall() is used.

In our service test environment, the baseline is 7 major faults/12 hours.
After applying the unlocked patch, this drops to 1 major fault/12 hours.
After applying the locked patch, no major faults are triggered at all. So
only the locked patch actually solves our problem.
This is the data I asked for.

But you said that this is a feature bug fix instead of a performance
improvement. So I checked mlock(2) and found the following words:
"
mlockall() locks all pages mapped into the address space
of the calling
process. This includes the pages of the code, data, and
stack segment,
as well as shared libraries, user space kernel data,
shared memory, and
memory-mapped files. All mapped pages are guaranteed to
be resident in
RAM when the call returns successfully; the pages are
guaranteed to
stay in RAM until later unlocked.
"
In theory, the locked pages are in RAM. So, IIUC, we don't violate the ABI.
But, in effect, we do.
"mlockall() locks all pages mapped into the address space of the
calling process."
For a private mapping, mlockall() can lock COW pages (anonymous
pages), but the
original file pages can't be locked. Maybe, we violate the ABI here.
If so, please state that explicitly and loudly.
This is also the cause of this issue. The patch fixes the impact of this
issue: it prevents major faults, reduces IO, and fixes the service packet
loss issue.
Preventing major faults, and thus reducing IO, could be an important reason
to use mlockall(). Could we fix this with the locked patch? Or is there
another way?
Unfortunately, the locked patch causes performance regressions for more
common cases. Is it possible for us to change ptep_modify_prot_start() to
use some magic PTE value instead of 0? That may be possible. But that isn't
enough; you would need to change all ptep_get_and_clear() users.
Triggering (false) major faults for mlocked memory is suboptimal.

Having such pages temporarily not mapped (e.g., during page migration) is
acceptable (the pages are in RAM but are getting moved). We handle that
using non-swap migration entries.
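(For contrast, a rough sketch of that migration path, loosely following
do_swap_page(): the migration entry keeps the PTE non-none, so a racing
fault waits rather than starting I/O.)

    swp_entry_t entry = pte_to_swp_entry(vmf->orig_pte);

    if (is_migration_entry(entry)) {
            /* The page is in RAM, only being moved: wait, don't do I/O. */
            migration_entry_wait(vmf->vma->vm_mm, vmf->pmd, vmf->address);
            return 0;
    }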
Let me understand the issue first:

1) MAP_PRIVATE file mapping that is mlocked.
2) We caused COW, so we now have an anonymous page mapped. That anon page
   is mlocked.
3) A change of protection (under the PT lock) will temporarily clear the PTE.
4) A page fault triggers and finds the PTE still cleared (without the PT
   lock).
5) We don't realize that there is a page mapped and, therefore, trigger a
   major fault.
Using the PT lock would fix it properly. Doing it as in this patch can only
be considered an optimization, not a proper fix.

Using a magic PTE to work around "just use the PT lock like everyone else"
feels a bit odd. The patch states "We don't hold PTL here as acquiring PTL
hurts performance" -- do we have any numbers on that?
File anonymous page read and write page fault performance was tested in
ext4, tmpfs and ramdisk using will-it-scale[2] on an x86 physical machine.
The data is the average change compared with the mainline after the patch
is applied.
With the locked patch:

                             processes  processes_idle  threads  threads_idle
ext4 private file write:       -0.51%       0.08%        -0.03%     -0.04%
ext4 shared file write:         0.135%     -0.531%        2.883%    -0.772%
ramdisk private file write:    -0.48%       0.23%        -1.08%      0.27%
ramdisk private file read:      0.07%      -6.90%        -5.85%     -0.70%
tmpfs private file write:      -0.344%     -0.110%        0.200%     0.145%
tmpfs shared file write:        0.958%      0.101%        2.781%    -0.337%
tmpfs private file read:       -0.16%       0.00%        -0.12%      0.41%
With the non-locked patch:

                             processes  processes_idle  threads  threads_idle
ext4 private file write:       -1.14%      -0.08%        -1.87%      0.13%
ext4 private file read:         0.03%      -0.65%        -0.51%     -0.08%
ramdisk private file write:    -1.21%      -0.21%        -1.12%      0.11%
ramdisk private file read:      0.00%      -0.68%        -0.33%     -0.02%
I could also run other tests if needed.
We could special-case that for MLOCK'ed VMAs with MCL_FUTURE, meaning we
take the PTL to double-check only in such VMAs.
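Something along these lines, as a sketch (the VM_LOCKED check and where this
sits in filemap_fault() are assumptions, not a final patch):

    /* Only pay the PTL cost for mlocked mappings. */
    if (vmf->vma->vm_flags & VM_LOCKED) {
            spinlock_t *ptl;
            pte_t *ptep;

            ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
                                       vmf->address, &ptl);
            if (ptep) {
                    bool mapped = !pte_none(ptep_get(ptep));

                    pte_unmap_unlock(ptep, ptl);
                    if (mapped)
                            /* Raced with a clear/modify/write update. */
                            return VM_FAULT_NOPAGE;
            }
    }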
Agreed. I think this solution is great. Thanks for your suggestion!
--
Best Regards,
Peng