On 2021/3/9 0:34, Will Deacon wrote:
On Mon, Jan 25, 2021 at 10:10:44PM +0800, Yanan Wang wrote:
After dirty-logging is stopped for a VM configured with huge mappings,
KVM will recover the table mappings back to block mappings. As we only
replace the existing page tables with a block entry and the cacheability
has not been changed, the cache maintenance operations can be skipped.
Signed-off-by: Yanan Wang <wangyanan55@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8e8549ea1d70..37b427dcbc4f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -744,7 +744,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
{
int ret = 0;
bool write_fault, writable, force_pte = false;
- bool exec_fault;
+ bool exec_fault, adjust_hugepage;
bool device = false;
unsigned long mmu_seq;
struct kvm *kvm = vcpu->kvm;
@@ -872,12 +872,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
mark_page_dirty(kvm, gfn);
}
- if (fault_status != FSC_PERM && !device)
+ /*
+ * There is no necessity to perform cache maintenance operations if we
+ * will only replace the existing table mappings with a block mapping.
+ */
+ adjust_hugepage = fault_granule < vma_pagesize ? true : false;
nit: you don't need the '? true : false' part
That said, your previous patch checks for 'fault_granule > vma_pagesize',
so I'm not sure the local variable helps all that much here because it
obscures the size checks, in my opinion. It would be more straightforward
if we could structure the logic as:
if (fault_granule < vma_pagesize) {
} else if (fault_granule > vma_pagesize) {
} else {
}
With some comments describing what we can infer about the memcache and cache
maintenance requirements for each case.
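For concreteness, a rough sketch of how that could read in user_mem_abort()
(reusing the fault_granule/vma_pagesize variables from this series, and
assuming the existing clean_dcache_guest_page(pfn, ...) call that the removed
line above guards; the per-case comments are only one reading of the
requirements, not a final patch):

	if (fault_granule < vma_pagesize) {
		/*
		 * Existing table entries are only being replaced by a
		 * block entry and the cacheability is unchanged, so the
		 * D-cache maintenance can be skipped (the case this
		 * patch targets).
		 */
	} else if (fault_granule > vma_pagesize) {
		/*
		 * Presumably an existing block entry is being split into
		 * table entries, so the memcache must be topped up for
		 * the new table pages, while the cacheability again
		 * stays the same.
		 */
	} else {
		/* Same granularity: keep the current CMO behaviour. */
		if (fault_status != FSC_PERM && !device)
			clean_dcache_guest_page(pfn, vma_pagesize);
	}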
Thanks for your suggestion here, Will.
But I have recently sent a newer series [1] (KVM: arm64: Improve efficiency
of stage2 page table) with the same theme but different solutions that I
think are better.
[1] https://lore.kernel.org/lkml/20210208112250.163568-1-wangyanan55@xxxxxxxxxx/
Could you please comment on that series? I think it can be found in your
inbox :).
Thanks,
Yanan
Will