Re: [PATCH v10 4/6] arm: KVM: Add initial dirty page locking infrastructure

On 09/10/2014 08:12 PM, Christoffer Dall wrote:
> On Tue, Aug 26, 2014 at 05:04:06PM -0700, Mario Smarduch wrote:
>> Patch adds support for initial write protection of VM memslots. This patch series
>> assumes that huge PUDs will not be used in 2nd stage tables, which is always
>> valid on ARMv7.
>>
>> Signed-off-by: Mario Smarduch <m.smarduch@xxxxxxxxxxx>
>> ---
>>  arch/arm/include/asm/kvm_host.h       |   1 +
>>  arch/arm/include/asm/kvm_mmu.h        |  20 ++++++
>>  arch/arm/include/asm/pgtable-3level.h |   1 +
>>  arch/arm/kvm/arm.c                    |   9 +++
>>  arch/arm/kvm/mmu.c                    | 128 ++++++++++++++++++++++++++++++++++
>>  5 files changed, 159 insertions(+)
>>
[...]
>> +
>> +/**
>> + * stage2_wp_pmd_range - write protect PMD range
>> + * @pud:	pointer to pud entry
>> + * @addr:	range start address
>> + * @end:	range end address
>> + */
>> +static void stage2_wp_pmd_range(pud_t *pud, phys_addr_t addr, phys_addr_t end)
>> +{
>> +	pmd_t *pmd;
>> +	phys_addr_t next;
>> +
>> +	pmd = pmd_offset(pud, addr);
>> +
>> +	do {
>> +		next = kvm_pmd_addr_end(addr, end);
>> +		if (!pmd_none(*pmd)) {
>> +			if (kvm_pmd_huge(*pmd)) {
>> +				if (!kvm_s2pmd_readonly(pmd))
>> +					kvm_set_s2pmd_readonly(pmd);
>> +			} else {
>> +				stage2_wp_pte_range(pmd, addr, next);
>> +			}
>> +		}
>> +	} while (pmd++, addr = next, addr != end);
>> +}
>> +
>> +/**
>> +  * stage2_wp_pud_range - write protect PUD range
>> +  * @kvm:	pointer to kvm structure
>> +  * @pud:	pointer to pgd entry
>         pgd
> 
>> +  * @addr:	range start address
>> +  * @end:	range end address
>> +  *
>> +  * While walking the PUD range, huge PUD pages are ignored.
>> +  */
>> +static void stage2_wp_pud_range(struct kvm *kvm, pgd_t *pgd,
> 
> the naming of this function feels weird.  You're write-protecting
> the puds covered by the range of a single PGD, so I would say,
> stage2_wp_puds(), or stage2_wp_pgd_range().
> 
> [apologies if I suggested this specific naming]
> 
> applies consecutively to the functions above.

OK, will update.
> 
>> +					phys_addr_t addr, phys_addr_t end)
>> +{
>> +	pud_t *pud;
>> +	phys_addr_t next;
>> +
>> +	pud = pud_offset(pgd, addr);
>> +	do {
>> +		next = kvm_pud_addr_end(addr, end);
>> +		/* TODO: PUD not supported, revisit later if implemented  */
>> +		BUG_ON(kvm_pud_huge(*pud));
>> +		if (!pud_none(*pud))
>> +			stage2_wp_pmd_range(pud, addr, next);
>> +	} while (pud++, addr = next, addr != end);
>> +}
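If huge PUDs are ever mapped at stage 2, the BUG_ON() above would
presumably give way to a branch mirroring the huge-PMD case. A sketch,
with hypothetical kvm_s2pud_readonly()/kvm_set_s2pud_readonly() helpers
(not part of this series):

		if (kvm_pud_huge(*pud)) {
			/* Hypothetical: write protect the whole huge PUD. */
			if (!kvm_s2pud_readonly(pud))
				kvm_set_s2pud_readonly(pud);
		} else if (!pud_none(*pud)) {
			stage2_wp_pmd_range(pud, addr, next);
		}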
>> +
>> +/**
>> + * stage2_wp_range() - write protect stage2 memory region range
>> + * @kvm:	The KVM pointer
>> + * @start:	Start address of range
> 
> the parameter is called addr
> 
>> + * @end:	End address of range
>> + */
>> +static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
>> +{
>> +	pgd_t *pgd;
>> +	phys_addr_t next;
>> +
>> +	pgd = kvm->arch.pgd + pgd_index(addr);
>> +	do {
>> +		/*
>> +		 * Release kvm_mmu_lock periodically if the memory region is
>> +		 * large. Otherwise, we may see kernel panics with
>> +		 * CONFIG_DETECT_HUNG_TASK, CONFIG_LOCKUP_DETECTOR or
>> +		 * CONFIG_LOCKDEP. Additionally, holding the lock too long
>> +		 * will also starve other vCPUs.
>> +		 */
>> +		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
>> +			cond_resched_lock(&kvm->mmu_lock);
>> +
>> +		next = kvm_pgd_addr_end(addr, end);
>> +		if (pgd_present(*pgd))
>> +			stage2_wp_pud_range(kvm, pgd, addr, next);
>> +	} while (pgd++, addr = next, addr != end);
>> +}
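For readers unfamiliar with the lock-break idiom above:
cond_resched_lock() drops the spinlock, yields the CPU if a reschedule
is due, and then retakes the lock. Roughly (an illustrative sketch, not
the actual kernel implementation):

static int cond_resched_lock_sketch(spinlock_t *lock)
{
	int ret = 0;

	/* Break the lock if another CPU spins on it or we must resched. */
	if (spin_needbreak(lock) || need_resched()) {
		spin_unlock(lock);
		if (need_resched())
			schedule();
		else
			cpu_relax();
		spin_lock(lock);
		ret = 1;
	}
	return ret;
}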
>> +
>> +/**
>> + * kvm_mmu_wp_memory_region() - write protect stage 2 entries for memory slot
>> + * @kvm:	The KVM pointer
>> + * @slot:	The memory slot to write protect
>> + *
>> + * Called to start logging dirty pages when the KVM_MEM_LOG_DIRTY_PAGES flag
>> + * is set on a memory region. After this function returns, all present PMDs
>> + * and PTEs in the memory region are write protected, and the dirty page log
>> + * can then be read.
>> + *
>> + * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
>> + * serializing operations for VM memory regions.
>> + */
>> +void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
>> +{
>> +	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
>> +	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
>> +	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
>> +
>> +	spin_lock(&kvm->mmu_lock);
>> +	stage2_wp_range(kvm, start, end);
>> +	kvm_flush_remote_tlbs(kvm);
> 
> do you need to hold the lock while flushing the TLBs?
> 
>> +	spin_unlock(&kvm->mmu_lock);
>> +}
>> +#endif
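The arm.c hunk (not quoted in this excerpt) hooks this into the memslot
update path; presumably something along these lines in
kvm_arch_commit_memory_region(), write protecting the whole slot when
userspace turns on dirty logging:

	if ((change != KVM_MR_DELETE) &&
	    (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
		kvm_mmu_wp_memory_region(kvm, mem->slot);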
>> +
>>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  			  struct kvm_memory_slot *memslot,
>>  			  unsigned long fault_status)
>> -- 
>> 1.8.3.2
>>
> Thanks,
> -Christoffer
> 

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



