On Mon, Jan 10, 2022 at 03:45:25PM +0800, Chao Gao wrote:
> On Fri, Jan 07, 2022 at 10:31:59AM +0200, Maxim Levitsky wrote:
> >On Fri, 2022-01-07 at 16:05 +0800, Zeng Guang wrote:
> >> On 1/6/2022 10:06 PM, Tom Lendacky wrote:
> >> > On 1/5/22 7:44 PM, Zeng Guang wrote:
> >> > > On 1/6/2022 3:13 AM, Tom Lendacky wrote:
> >> > > > On 12/31/21 8:28 AM, Zeng Guang wrote:
> >> > > > Won't this blow up on AMD since there is no corresponding SVM op?
> >> > > >
> >> > > > Thanks,
> >> > > > Tom
> >> > > Right, need to check ops validity to avoid ruining AMD systems. Same
> >> > > consideration on ops "update_ipiv_pid_table" in patch8.
> >> > Not necessarily for patch8. That is "protected" by the
> >> > kvm_check_request(KVM_REQ_PID_TABLE_UPDATE, vcpu) test, but it couldn't hurt.
> >>
> >> OK, makes sense. Thanks.
> >
> >I haven't fully reviewed this patch series yet,
> >and I will soon.
> >
> >I just want to point out a few things:
>
> Thanks for pointing them out.
>
> >
> >1. AMD's AVIC also has a PID table (it's called the AVIC physical ID table).
> >It stores the addresses of vCPUs' APIC backing pages,
> >and their real APIC IDs.
> >
> >avic_init_backing_page initializes the entry (assuming apic_id == vcpu_id)
> >(which is doubly confusing)
> >
> >2. For some reason KVM supports writable APIC IDs. Does anyone use these?
> >Even Intel's PRM strongly discourages users from using them, and in x2APIC mode
> >the APIC ID is read-only.
> >
> >Because of this we have quite a bit of bookkeeping in lapic.c
> >(things like kvm_recalculate_apic_map and such).
> >
> >Also AVIC has its own handling for writes to APIC_ID, APIC_LDR, APIC_DFR,
> >which tries to update its physical and logical ID tables.
>
> Intel's IPI virtualization doesn't handle logical-addressing IPIs. They cause
> APIC-write vm-exits as usual. So, this series doesn't handle APIC_LDR/DFR.
>
> >
> >(it also used to handle the APIC base, and I removed this as the APIC base
> >otherwise was always hardcoded to the default value)
> >
> >Note that avic_handle_apic_id_update is broken - it always copies the entry
> >from the default (apicid == vcpu_id) location to the new location and zeros
> >the old location, which will fail in many cases, like even if the guest
> >were to swap a few APIC IDs.
>
> This series differs from avic_handle_apic_id_update slightly:
>
> If a vCPU's APIC ID is changed, this series zeros the old entry in the
> PID-pointer table and programs the vCPU's PID into the new entry (rather than
> copying from the old entry).
>
> But this series is also problematic if the guest swaps two vCPUs' APIC IDs
> without using another free APIC ID; it would end up with one of them having
> no valid entry.
>
> One solution in my mind is:
>
> When a vCPU's APIC ID is changed, KVM traverses all vCPUs to count vCPUs using
> the old APIC ID and the new APIC ID, and programs the corresponding entries
> following the rules below:
> 1. populate an entry with a vCPU's PID if the corresponding APIC ID is
> exclusively used by that vCPU.
> 2. zero an entry for other cases.

No need to traverse, I think; just don't zero the old entry if it doesn't
belong to the vCPU:

	/* take a new or existing VM-level lock */
	if (__pa(&to_vmx(vcpu)->pi_desc) == (pid_table[old_id] & ~PID_TABLE_ENTRY_VALID))
		WRITE_ONCE(pid_table[old_id], 0);
	WRITE_ONCE(pid_table[new_id], __pa(&to_vmx(vcpu)->pi_desc) | PID_TABLE_ENTRY_VALID);
	/* release the VM-level lock */

>
> Proper locking is needed in this process to prevent changes to vCPUs' APIC IDs.
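
Something like the below, perhaps (just a sketch: vmx_update_pid_entry, the
pid_table field and pid_table_lock are made-up names, not taken from this
series; only meant to show the shape of the locking):

	/*
	 * Rough sketch only. Assumes a hypothetical per-VM spinlock in
	 * struct kvm_vmx protecting the PID-pointer table.
	 */
	static void vmx_update_pid_entry(struct kvm_vcpu *vcpu, u32 old_id, u32 new_id)
	{
		struct kvm_vmx *kvm_vmx = to_kvm_vmx(vcpu->kvm);
		u64 *pid_table = kvm_vmx->pid_table;
		u64 pid_pa = __pa(&to_vmx(vcpu)->pi_desc);

		spin_lock(&kvm_vmx->pid_table_lock);
		/* Only clear the old entry if it still points at this vCPU's PID. */
		if ((pid_table[old_id] & ~PID_TABLE_ENTRY_VALID) == pid_pa)
			WRITE_ONCE(pid_table[old_id], 0);
		WRITE_ONCE(pid_table[new_id], pid_pa | PID_TABLE_ENTRY_VALID);
		spin_unlock(&kvm_vmx->pid_table_lock);
	}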
>
> Or if it isn't worth it, we can disable IPI virtualization for a guest on its
> first attempt to change its xAPIC ID.
>
> Let us know which option is preferred.
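
FWIW, with the "don't zero unless it's ours" check above, the swap case
mentioned earlier seems to work out without the traversal. A quick trace
(vCPU numbers, APIC IDs and PID0/PID1 are just for illustration; V is
PID_TABLE_ENTRY_VALID):

	start:          pid_table[0] = PID0|V, pid_table[1] = PID1|V
	vCPU0 0 -> 1:   entry 0 holds PID0 (ours), clear it; write entry 1:
	                pid_table[0] = 0,      pid_table[1] = PID0|V
	vCPU1 1 -> 0:   entry 1 now holds PID0 (not ours), leave it; write entry 0:
	                pid_table[0] = PID1|V, pid_table[1] = PID0|V

Both vCPUs end up with a valid entry at their new APIC IDs.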