Re: [RFC PATCH 3/6] KVM: SVM: Implement demand page pinning

On 1/25/2022 11:29 PM, Peter Gonda wrote:
> On Tue, Jan 25, 2022 at 10:49 AM Nikunj A. Dadhania <nikunj@xxxxxxx> wrote:
>>
>> Hi Peter
>>
>> On 1/25/2022 10:17 PM, Peter Gonda wrote:
>>>> @@ -1637,8 +1627,6 @@ static void sev_migrate_from(struct kvm_sev_info *dst,
>>>>         src->handle = 0;
>>>>         src->pages_locked = 0;
>>>>         src->enc_context_owner = NULL;
>>>> -
>>>> -       list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list);
>>> I think we need to move the pinned SPTE entries into the target, and
>>> repin the pages in the target here. Otherwise the pages will be
>>> unpinned when the source is cleaned up. Have you thought about how
>>> this could be done?

Right, copying just the list doesn't look sufficient.

In the destination kvm context, we will have to go over the source's list of
pinned regions and pin those pages again. Roughly something like the below:

struct list_head *head = &src->pinned_regions_list;
struct list_head *pos, *q;
struct pinned_region *new, *old;

if (!list_empty(head)) {
	list_for_each_safe(pos, q, head) {
		old = list_entry(pos, struct pinned_region, list);

		/* alloc new region and initialize it from the old one */
		new = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);
		new->uaddr = old->uaddr;
		new->len = old->len;
		new->npages = old->npages;

		/* pin the memory again, this time in the destination context */
		new->pages = sev_pin_memory(kvm, new->uaddr, new->npages);
		list_add_tail(&new->list, &dst->pinned_regions_list);
		...
	}
}
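
One more thing a real version will need is error handling: if kzalloc() or
sev_pin_memory() fails part way through, the regions already copied and pinned
into dst have to be undone. An untested sketch of such an unwind (the helper
name is made up, and unpin_user_pages() is just a stand-in for whatever
unpin/accounting path this series ends up using):

static void sev_unwind_pinned_regions(struct kvm_sev_info *dst)
{
	struct pinned_region *region, *tmp;

	list_for_each_entry_safe(region, tmp, &dst->pinned_regions_list, list) {
		/* release the pin taken for the destination copy */
		unpin_user_pages(region->pages, region->npages);
		list_del(&region->list);
		kfree(region);
	}
}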

>>>
>> I am testing migration with pinned_list, and I see that all the guest pages
>> are transferred/pinned on the other side during migration. I think there is
>> an assumption that all private pages need to be moved.
>>
>> QEMU: target/i386/sev.c:bool sev_is_gfn_in_unshared_region(unsigned long gfn)
>>
>> Will dig more on this.
> 
> The code you linked appears to be for a remote migration. 

Yes, that is correct.

> This
> function is for an "intra-host" migration, meaning we are just moving the
> VM's memory and state to a new userspace VMM on the same host, not to an
> entirely new host.

Regards
Nikunj


