Re: [PATCH v2 kvmtool 26/30] pci: Toggle BAR I/O and memory space emulation

Hi,

On 2/6/20 6:21 PM, Andre Przywara wrote:
> On Thu, 23 Jan 2020 13:48:01 +0000
> Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:
>
> Hi,
>
>> During configuration of the BAR addresses, a Linux guest disables and
>> enables access to I/O and memory space. When access is disabled, we don't
>> stop emulating the memory regions described by the BARs. Now that we have
>> callbacks for activating and deactivating emulation for a BAR region,
>> let's use that to stop emulation when access is disabled, and
>> re-activate it when access is re-enabled.
>>
>> The vesa emulation hasn't been designed with toggling on and off in
>> mind, so refuse writes to the PCI command register that disable memory
>> or IO access.
>>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
>> ---
>>  hw/vesa.c | 16 ++++++++++++++++
>>  pci.c     | 42 ++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 58 insertions(+)
>>
>> diff --git a/hw/vesa.c b/hw/vesa.c
>> index 74ebebbefa6b..3044a86078fb 100644
>> --- a/hw/vesa.c
>> +++ b/hw/vesa.c
>> @@ -81,6 +81,18 @@ static int vesa__bar_deactivate(struct kvm *kvm,
>>  	return -EINVAL;
>>  }
>>  
>> +static void vesa__pci_cfg_write(struct kvm *kvm, struct pci_device_header *pci_hdr,
>> +				u8 offset, void *data, int sz)
>> +{
>> +	u32 value;
> I guess the same comment as on the other patch applies: using u64 looks safer to me. Also you should clear it, to avoid nasty surprises in case of a short write (1 or 2 bytes only).

I was under the impression that the maximum size for a write to the PCI CAM or
ECAM space is 32 bits. That is certainly what I've seen when running Linux, and
it has been the assumption in the PCI emulation code, which has been working
since 2010. I'm trying to dig out more information about this.

If it's not, then we have a bigger problem, because the PCI emulation code
doesn't support wider writes, and handling them would need a fair amount of
extra logic: what if a write hits the command register and an adjacent
register? What if it hits two BARs? A BAR and a regular register before/after
it? Part of a BAR and two registers before/after? You can see where this is
going (a rough sketch of just the splitting half of it is below).

Until we find exactly where the PCI spec says that 64-bit writes to the
configuration space are allowed, I would rather avoid all this complexity and
assume that the guest is sane and will only write 32-bit values.
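
If we want to make that assumption explicit rather than leaving it silent,
something like the following at the top of pci__config_wr() would do (again,
only a sketch, assuming kvmtool's pr_warning() helper):

	/* Sketch: reject anything wider than a 32-bit config space write. */
	if (size > (int)sizeof(u32)) {
		pr_warning("Ignoring %d-byte PCI config space write", size);
		return;
	}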

Thanks,
Alex
>
> The rest looks alright.
>
> Cheers,
> Andre
>
>> +
>> +	if (offset == PCI_COMMAND) {
>> +		memcpy(&value, data, sz);
>> +		value |= (PCI_COMMAND_IO | PCI_COMMAND_MEMORY);
>> +		memcpy(data, &value, sz);
>> +	}
>> +}
>> +
>>  struct framebuffer *vesa__init(struct kvm *kvm)
>>  {
>>  	struct vesa_dev *vdev;
>> @@ -114,6 +126,10 @@ struct framebuffer *vesa__init(struct kvm *kvm)
>>  		.bar_size[1]		= VESA_MEM_SIZE,
>>  	};
>>  
>> +	vdev->pci_hdr.cfg_ops = (struct pci_config_operations) {
>> +		.write	= vesa__pci_cfg_write,
>> +	};
>> +
>>  	vdev->fb = (struct framebuffer) {
>>  		.width			= VESA_WIDTH,
>>  		.height			= VESA_HEIGHT,
>> diff --git a/pci.c b/pci.c
>> index 5412f2defa2e..98331a1fc205 100644
>> --- a/pci.c
>> +++ b/pci.c
>> @@ -157,6 +157,42 @@ static struct ioport_operations pci_config_data_ops = {
>>  	.io_out	= pci_config_data_out,
>>  };
>>  
>> +static void pci_config_command_wr(struct kvm *kvm,
>> +				  struct pci_device_header *pci_hdr,
>> +				  u16 new_command)
>> +{
>> +	int i;
>> +	bool toggle_io, toggle_mem;
>> +
>> +	toggle_io = (pci_hdr->command ^ new_command) & PCI_COMMAND_IO;
>> +	toggle_mem = (pci_hdr->command ^ new_command) & PCI_COMMAND_MEMORY;
>> +
>> +	for (i = 0; i < 6; i++) {
>> +		if (!pci_bar_is_implemented(pci_hdr, i))
>> +			continue;
>> +
>> +		if (toggle_io && pci__bar_is_io(pci_hdr, i)) {
>> +			if (__pci__io_space_enabled(new_command))
>> +				pci_hdr->bar_activate_fn(kvm, pci_hdr, i,
>> +							 pci_hdr->data);
>> +			else
>> +				pci_hdr->bar_deactivate_fn(kvm, pci_hdr, i,
>> +							   pci_hdr->data);
>> +		}
>> +
>> +		if (toggle_mem && pci__bar_is_memory(pci_hdr, i)) {
>> +			if (__pci__memory_space_enabled(new_command))
>> +				pci_hdr->bar_activate_fn(kvm, pci_hdr, i,
>> +							 pci_hdr->data);
>> +			else
>> +				pci_hdr->bar_deactivate_fn(kvm, pci_hdr, i,
>> +							   pci_hdr->data);
>> +		}
>> +	}
>> +
>> +	pci_hdr->command = new_command;
>> +}
>> +
>>  void pci__config_wr(struct kvm *kvm, union pci_config_address addr, void *data, int size)
>>  {
>>  	void *base;
>> @@ -182,6 +218,12 @@ void pci__config_wr(struct kvm *kvm, union pci_config_address addr, void *data,
>>  	if (*(u32 *)(base + offset) == 0)
>>  		return;
>>  
>> +	if (offset == PCI_COMMAND) {
>> +		memcpy(&value, data, size);
>> +		pci_config_command_wr(kvm, pci_hdr, (u16)value);
>> +		return;
>> +	}
>> +
>>  	bar = (offset - PCI_BAR_OFFSET(0)) / sizeof(u32);
>>  
>>  	/*


