On Wed, Aug 02, 2023, Wei Wang wrote:
> The implementation of kvm_tdp_mmu_map is a bit long. It essentially does
> three things:
> 1) adjust the leaf entry level (e.g. 4KB, 2MB or 1GB) to map according to
>    the hugepage configurations;
> 2) map the nonleaf entries of the tdp page table; and
> 3) map the target leaf entry.
>
> Improve the readabiliy by moving the implementation of 2) above into a
> subfunction, kvm_tdp_mmu_map_nonleaf, and removing the unnecessary
> "goto"s. No functional changes intended.

Eh, I prefer the current code from a readability perspective.  I like being
able to see the entire flow, and I especially like that this

	if (iter.level == fault->goal_level)
		goto map_target_level;

very clearly and explicitly captures that reaching the goal level means that
it's time to map the target level, whereas IMO this does not, in no small
part because seeing "continue" in a loop makes me think "continue the loop",
not "continue on to the next part of the page fault"

	if (iter->level == fault->goal_level)
		return RET_PF_CONTINUE;

And the existing code follows the pattern of the other page fault paths,
direct_map() and FNAME(fetch).  That doesn't necessarily mean that the
existing pattern is "better", but I personally place a lot of value on
consistency.

> +/*
> + * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
> + * page tables and SPTEs to translate the faulting guest physical address.
> + */
> +int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> +{
> +	struct tdp_iter iter;
> +	int ret;
> +
> +	kvm_mmu_hugepage_adjust(vcpu, fault);
> +
> +	trace_kvm_mmu_spte_requested(fault);
> +
> +	rcu_read_lock();
> +
> +	ret = kvm_tdp_mmu_map_nonleafs(vcpu, fault, &iter);
> +	if (ret == RET_PF_CONTINUE)
> +		ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter);

And I also don't like passing in an uninitialized tdp_iter, and then consuming
it too.

>
> -retry:
> 	rcu_read_unlock();
> 	return ret;
> }
> --
> 2.27.0
>