Re: [RFC PATCH v1 00/10] Add AMD SEV guest live migration support


On 4/24/19 2:15 PM, Steve Rutherford wrote:
> On Wed, Apr 24, 2019 at 9:10 AM Singh, Brijesh <brijesh.singh@xxxxxxx> wrote:
>>
>> The series adds support for the AMD SEV guest live migration commands. To
>> protect the confidentiality of SEV-protected guest memory while in transit,
>> we need to use the SEV commands defined in the SEV API spec [1].
>>
>> SEV guest VMs have the concept of private and shared memory. Private memory
>> is encrypted with the guest-specific key, while shared memory may be
>> encrypted with the hypervisor key. The commands provided by the SEV FW are
>> meant to be used for private memory only. The patch series introduces a new
>> hypercall, which the guest OS uses to notify the hypervisor of a page's
>> encryption status. If a page is encrypted with the guest-specific key, we
>> use the SEV commands during the migration; if it is not encrypted, we fall
>> back to the default transfer path.
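In effect, the hypercall lets KVM keep a per-gfn encryption bitmap up to date as the guest flips pages between private and shared. A minimal userspace sketch of that bookkeeping, under the assumption that the guest passes a start gfn, a page count, and an encrypted flag (the function and bitmap names below are illustrative, not the series' actual code):

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Illustrative stand-in for the per-VM page encryption bitmap. */
static unsigned long page_enc_bitmap[64]; /* covers 64 * BITS_PER_LONG gfns */

/*
 * Sketch of the hypercall handler: the guest reports that [gfn, gfn+npages)
 * is now encrypted (enc != 0) or shared/unencrypted (enc == 0).
 */
static void page_enc_status_hc(unsigned long gfn, unsigned long npages,
                               int enc)
{
        unsigned long i;

        for (i = gfn; i < gfn + npages; i++) {
                if (enc)
                        page_enc_bitmap[i / BITS_PER_LONG] |=
                                1UL << (i % BITS_PER_LONG);
                else
                        page_enc_bitmap[i / BITS_PER_LONG] &=
                                ~(1UL << (i % BITS_PER_LONG));
        }
}

/* Query one bit of the bitmap. */
static int page_is_encrypted(unsigned long gfn)
{
        return !!(page_enc_bitmap[gfn / BITS_PER_LONG] &
                  (1UL << (gfn % BITS_PER_LONG)));
}
```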
>>
>> The series adds a new ioctl, KVM_GET_PAGE_ENC_BITMAP, which QEMU can use to
>> retrieve the page encryption bitmap. QEMU can consult this bitmap during
>> the migration to determine whether a given page is encrypted.
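On the source side this boils down to a per-page branch: private pages go through the SEV FW SEND path, shared pages through the normal migration path. A sketch of that decision, assuming one bit per gfn with the least-significant bit first in each byte (the bitmap layout here is an assumption, not taken from the series):

```c
#include <stdint.h>

/* Which transfer path to use for one guest page during migration. */
enum migrate_path { MIGRATE_PLAIN, MIGRATE_SEV_SEND_UPDATE_DATA };

/*
 * Given a bitmap as returned by (something like) KVM_GET_PAGE_ENC_BITMAP,
 * pick the migration path for the page at 'gfn'.  Assumes one bit per gfn,
 * least-significant bit first within each byte.
 */
static enum migrate_path pick_path(const uint8_t *enc_bitmap, uint64_t gfn)
{
        int encrypted = (enc_bitmap[gfn / 8] >> (gfn % 8)) & 1;

        /* Private (encrypted) pages must go through the SEV FW commands. */
        return encrypted ? MIGRATE_SEV_SEND_UPDATE_DATA : MIGRATE_PLAIN;
}
```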
>>
>> [1] https://developer.amd.com/wp-content/resources/55766.PDF
>>
>> The series has been tested with QEMU; I am in the process of cleaning up
>> the QEMU code and will submit it soon.
>>
>> While implementing the migration I stumbled on the following question:
>>
>> - Since guest OS changes are required to support the migration, how do we
>>    know whether the guest OS has been updated? Should we extend KVM
>>    capabilities/feature bits to check this?
>>
>> TODO:
>>   - add an ioctl to build the encryption bitmap. The encryption bitmap is
>>     built during the guest's bootup/execution; we should provide an ioctl so
>>     that the destination can build the bitmap as it receives the pages.
>>   - reset the bitmap on guest reboot.
>>
>> The complete tree with patch is available at:
>> https://github.com/codomania/kvm/tree/sev-migration-rfc-v1
>>
>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
>> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> Cc: "Radim Krčmář" <rkrcmar@xxxxxxxxxx>
>> Cc: Joerg Roedel <joro@xxxxxxxxxx>
>> Cc: Borislav Petkov <bp@xxxxxxx>
>> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
>> Cc: x86@xxxxxxxxxx
>> Cc: kvm@xxxxxxxxxxxxxxx
>> Cc: linux-kernel@xxxxxxxxxxxxxxx
>>
>> Brijesh Singh (10):
>>    KVM: SVM: Add KVM_SEV SEND_START command
>>    KVM: SVM: Add KVM_SEV_SEND_UPDATE_DATA command
>>    KVM: SVM: Add KVM_SEV_SEND_FINISH command
>>    KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
>>    KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
>>    KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
>>    KVM: x86: Add AMD SEV specific Hypercall3
>>    KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
>>    KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
>>    mm: x86: Invoke hypercall when page encryption status is changed
>>
>>   .../virtual/kvm/amd-memory-encryption.rst     | 116 ++++
>>   Documentation/virtual/kvm/hypercalls.txt      |  14 +
>>   arch/x86/include/asm/kvm_host.h               |   3 +
>>   arch/x86/include/asm/kvm_para.h               |  12 +
>>   arch/x86/include/asm/mem_encrypt.h            |   3 +
>>   arch/x86/kvm/svm.c                            | 560 +++++++++++++++++-
>>   arch/x86/kvm/vmx/vmx.c                        |   1 +
>>   arch/x86/kvm/x86.c                            |  17 +
>>   arch/x86/mm/mem_encrypt.c                     |  45 +-
>>   arch/x86/mm/pageattr.c                        |  15 +
>>   include/uapi/linux/kvm.h                      |  51 ++
>>   include/uapi/linux/kvm_para.h                 |   1 +
>>   12 files changed, 834 insertions(+), 4 deletions(-)
>>
>> --
>> 2.17.1
>>
> 
> What's the back-of-the-envelope marginal cost of transferring a 16kB
> region from one host to another? I'm interested in what the end-to-end
> migration perf changes look like for this. If you have measured
> migration perf, I'm interested in that as well.
> 

I have not done a complete performance analysis yet. From the QEMU
QMP prompt (query-migrate) I am getting ~8 Mbps throughput from
one host to another (this is with 4 KB regions). I have been told
that increasing the transfer size from 4 KB to 16 KB may not give a
huge performance gain, because at the FW level it still operates
on 4 KB blocks. There is a possibility that future FW updates may
give somewhat better performance with the 16 KB size.
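For a rough back-of-the-envelope on the marginal cost: at the observed throughput, moving one region is just bits divided by bits-per-second (assuming "Mbps" means 10^6 bits/s and ignoring per-command SEV FW overhead, so this is a lower bound, not a measurement):

```c
/*
 * Back-of-the-envelope: milliseconds to move one region at a given
 * throughput.  Ignores per-command SEV FW overhead.
 */
static double ms_per_region(double throughput_bps, double region_bytes)
{
        return (region_bytes * 8.0) / throughput_bps * 1e3;
}
```

At ~8 Mbps, a 16 KB region works out to roughly 16 ms of wire time, which suggests the 4 KB -> 16 KB change would mostly amortize per-command overhead rather than raise raw throughput.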

-Brijesh



