On 4/9/23 2:29 PM, Ricardo Koller wrote:
Refactor kvm_arch_commit_memory_region() as a preparation for a future
commit to look cleaner and more understandable. Also, it looks more like
its x86 counterpart (in kvm_mmu_slot_apply_flags()).

Signed-off-by: Ricardo Koller <ricarkol@xxxxxxxxxx>
Reviewed-by: Shaoqin Huang <shahuang@xxxxxxxxxx>
---
 arch/arm64/kvm/mmu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
With the following nits addressed:

Reviewed-by: Gavin Shan <gshan@xxxxxxxxxx>
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index efdaab3f154de..37d7d2aa472ab 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1761,20 +1761,27 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
+	bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES;
+
 	/*
 	 * At this point memslot has been committed and there is an
 	 * allocated dirty_bitmap[], dirty pages will be tracked while the
 	 * memory slot is write protected.
	 */
-	if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+	if (log_dirty_pages) {
+
+		if (change == KVM_MR_DELETE)
+			return;
+
 		/*
		 * If we're with initial-all-set, we don't need to write
		 * protect any pages because they're all reported as dirty.
		 * Huge pages and normal pages will be write protect gradually.
		 */
The comment needs to be adjusted after this series is applied: huge pages won't be write protected gradually. Instead, they will be split and write protected in one shot.
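For example, something along these lines (illustrative wording only; the exact phrasing should match what the later patches in this series actually do):

		/*
		 * If we're with initial-all-set, we don't need to write
		 * protect any pages because they're all reported as dirty.
		 * Huge pages will be split and write protected in one shot,
		 * while normal pages will be write protected gradually.
		 */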
-		if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
-			kvm_mmu_wp_memory_region(kvm, new->id);
-		}
+		if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+			return;
+
+		kvm_mmu_wp_memory_region(kvm, new->id);
 	}
 }
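For anyone skimming the thread, this is roughly how the function reads once the hunk above is applied. It is a sketch reconstructed from the quoted context only: the signature is abbreviated to the parameters visible in the hunk, and the inner comment still carries the pre-series wording discussed in the nit above.

void kvm_arch_commit_memory_region(struct kvm *kvm,
				   const struct kvm_memory_slot *new,
				   enum kvm_mr_change change)
{
	bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES;

	/*
	 * At this point memslot has been committed and there is an
	 * allocated dirty_bitmap[], dirty pages will be tracked while the
	 * memory slot is write protected.
	 */
	if (log_dirty_pages) {

		if (change == KVM_MR_DELETE)
			return;

		/*
		 * If we're with initial-all-set, we don't need to write
		 * protect any pages because they're all reported as dirty.
		 * Huge pages and normal pages will be write protect gradually.
		 */
		if (kvm_dirty_log_manual_protect_and_init_set(kvm))
			return;

		kvm_mmu_wp_memory_region(kvm, new->id);
	}
}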
Thanks,
Gavin