On Tue, Apr 19, 2022 at 09:33:21AM -0400, Cole Robinson wrote:
> On 4/14/22 4:25 AM, Dr. David Alan Gilbert wrote:
> > * Dov Murik (dovmurik@xxxxxxxxxxxxx) wrote:
> >> I plan to add SEV-ES and SEV measurements calculation to this
> >> library/program as well.
> >
> > Everyone seems to be writing one; you, Dan etc!
> >
>
> Yeah, I should have mentioned Dan's demo tool here:
> https://gitlab.com/berrange/libvirt/-/blob/lgtm/tools/virt-dom-sev-vmsa-tool.py

FYI, a bit of explanation of that tool...

Some complications wrt VMSA contents, in no particular order:

 - VMSA contents can vary across firmwares due to the reset address
 - There is no current supportable way to extract the VMSA from the kernel
 - The VMSA varies across userspace (QEMU vs libkrun)
 - The VMSA varies across CPUs, since it includes model/family/stepping

The last point in particular is a big pain, because it means that there
are going to be a great many valid VMSA blobs. Thus I put some time into
working on the above tool to build a VMSA from first principles, i.e.
populating register defaults based on the AMD tech specs for x86/SEV,
along with examination of what KVM/QEMU does to override the defaults in
places.

The tool does three simple things...

Create a generic VMSA for CPU 0 for QEMU:

  $ virt-dom-sev-vmsa-tool.py build --cpu 0 --userspace qemu cpu0.bin

Update the generic VMSA with firmware and CPU details:

  $ virt-dom-sev-vmsa-tool.py update \
      --firmware OVMF.amdsev.fd \
      --model 49 --family 23 --stepping 0 cpu0.bin

Note, I split this as I felt it might be interesting for a cloud provider
to publish a known "generic" VMSA, and then let it be customized per boot
depending on what CPU model/family the VM ran on, and/or what firmware it
was booted with. The 'build' command can directly set the firmware and
CPU model/family though, if all-in-one is sufficient.

Display the VMSA register info, skipping fields which are all zero:

  $ virt-dom-sev-vmsa-tool.py show --zeroes skip cpu0.bin
    es_attrib   : 0x0093 (10010011 00000000)
    es_limit    : 0x0000ffff
    cs_selector : 0xf000
    cs_attrib   : 0x009b (10011011 00000000)
    cs_limit    : 0x0000ffff
    cs_base     : 0x00000000ffff0000
    ss_attrib   : 0x0093 (10010011 00000000)
    ss_limit    : 0x0000ffff
    ds_attrib   : 0x0093 (10010011 00000000)
    ds_limit    : 0x0000ffff
    fs_attrib   : 0x0093 (10010011 00000000)
    fs_limit    : 0x0000ffff
    gs_attrib   : 0x0093 (10010011 00000000)
    gs_limit    : 0x0000ffff
    gdtr_limit  : 0x0000ffff
    ldtr_attrib : 0x0082 (10000010 00000000)
    ldtr_limit  : 0x0000ffff
    idtr_limit  : 0x0000ffff
    tr_attrib   : 0x008b (10001011 00000000)
    tr_limit    : 0x0000ffff
    efer        : 0x0000000000001000
    cr4         : 0x0000000000000040
    cr0         : 0x0000000000000010
    dr7         : 0x0000000000000400
    dr6         : 0x00000000ffff0ff0
    rflags      : 0x0000000000000002
    rip         : 0x000000000000fff0
    g_pat       : 0x0007040600070406
    rdx         : 0x0000000000830f10 (00010000 00001111 10000011 00000000 00000000 00000000 00000000 00000000)
    xcr0        : 0x0000000000000001

The 'show' command is largely a debugging tool, so you can understand
what unexpectedly changed if you're failing to get a valid match.

If you look at the code, you can see comments on where I found the
various default values. I'm fairly confident about the QEMU case, but I
am not happy with my info sources for libkrun; then again, I didn't
spend much time exploring its code. Anyway, it can at least spit out a
VMSA that matches what is committed in libkrun's git repo.

I'm not in love with this particular impl of the tool. I wrote it to be
quick & easy, to prove the viability of a 'build from specs' approach
to VMSA.
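To make the 'build from specs' idea a little more concrete, here is a
minimal Python sketch (my own illustration, not code taken from the tool)
of where the rdx value in the dump above comes from: at reset, EDX holds
the CPUID processor signature, so the family/model/stepping passed to the
'update' command flow straight into the measured VMSA:

  # Hypothetical helper, not lifted from virt-dom-sev-vmsa-tool.py; it just
  # encodes display family/model/stepping into the CPUID leaf 1 signature
  # format, which is what EDX contains at reset.
  def cpuid_signature(family: int, model: int, stepping: int) -> int:
      if family > 0xF:
          base_family, ext_family = 0xF, family - 0xF
      else:
          base_family, ext_family = family, 0
      base_model = model & 0xF
      ext_model = (model >> 4) & 0xF
      return ((ext_family & 0xFF) << 20) | (ext_model << 16) | \
             (base_family << 8) | (base_model << 4) | (stepping & 0xF)

  # Family 23, model 49, stepping 0 (the EPYC Rome values used in the
  # 'update' example above) yields the rdx value shown by 'show':
  assert cpuid_signature(23, 49, 0) == 0x830f10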
I find this approach the most satisfactory way out of all the options
we've considered so far. The need for a different VMSA per CPU
family/model/stepping in particular makes me feel we need a tool like
this, as just publishing known-good VMSAs is not viable with so many
possible combinations.

> Tyler Fanelli is looking at adding that functionality to sevctl too FWIW

Yes, I think this functionality belongs in sev / sevctl, rather than my
Python script, so I don't intend to submit my Python program as an
official solution for anything. It is just there as a historical
curiosity at this point, until sevctl can do the same.

> > I think I'd like to see a new ioctl to read the initial VMSA,
> > primarily as a way of debugging so you can see what VMSA you have
> > when something goes wrong.
> >
>
> debugfs seems simpler for the dev user (accessing a file per CPU vs code
> to call ioctl), but beyond that I don't have any insight. Is there a
> reason you think ioctl and not debugfs?

A debugfs entry could be useful for automated data collection tools,
e.g. sosreport could capture a debugfs file easily for a running VM,
whereas using an ioctl would require special code to be written for
sosreport.

With regards,
Daniel
--
|: https://berrange.com       -o-  https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org        -o-          https://fstop138.berrange.com :|
|: https://entangle-photo.org -o-  https://www.instagram.com/dberrange :|