Re: [PATCH v3 4/8] PCI: Restore memory decoding after reallocation

On 8/8/24 15:37, Bjorn Helgaas wrote:
> On Wed, Aug 07, 2024 at 11:17:13AM -0400, Stewart Hildebrand wrote:
>> Currently, the PCI subsystem unconditionally clears the memory decoding
>> bit of devices with resource alignment specified. While drivers should
>> call pci_enable_device() to enable memory decoding, some platforms have
>> devices that expect memory decoding to be enabled even when no driver is
>> bound to the device. It is assumed firmware enables memory decoding in
>> these cases. For example, the vgacon driver uses the 0xb8000 buffer to
>> display a console without any PCI involvement, yet, on some platforms,
>> memory decoding must be enabled on the PCI VGA device in order for the
>> console to be displayed properly.
>>
>> Restore the memory decoding bit after the resource has been reallocated.
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>> ---
>> v2->v3:
>> * no change
>>
>> v1->v2:
>> * capitalize subject text
>> * reword commit message
>>
>> Testing notes / how to check if your platform needs memory decoding
>> enabled in order to use vgacon:
>> 1. Boot your system with a monitor attached, without any driver bound to
>>    your PCI VGA device. You should see a console on your display.
>> 2. Find the SBDF of your VGA device with lspci -v (00:01.0 in this
>>    example).
>> 3. Disable memory decoding (replace 00:01.0 with your SBDF):
>>   $ sudo setpci -v -s 00:01.0 0x04.W=0x05
>> 4. Type something to see if it appears on the console display
>> 5. Re-enable memory decoding:
>>   $ sudo setpci -v -s 00:01.0 0x04.W=0x07
>>
>> Relevant prior discussion at [1]
>>
>> [1] https://lore.kernel.org/linux-pci/20160906165652.GE1214@localhost/
>> ---
>>  drivers/pci/pci.c       |  1 +
>>  drivers/pci/setup-bus.c | 25 +++++++++++++++++++++++++
>>  include/linux/pci.h     |  2 ++
>>  3 files changed, 28 insertions(+)
>>
>> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
>> index acecdd6edd5a..4b97d8d5c2d8 100644
>> --- a/drivers/pci/pci.c
>> +++ b/drivers/pci/pci.c
>> @@ -6676,6 +6676,7 @@ void pci_reassigndev_resource_alignment(struct pci_dev *dev)
>>  
>>  	pci_read_config_word(dev, PCI_COMMAND, &command);
>>  	if (command & PCI_COMMAND_MEMORY) {
>> +		dev->dev_flags |= PCI_DEV_FLAGS_MEMORY_ENABLE;
>>  		command &= ~PCI_COMMAND_MEMORY;
>>  		pci_write_config_word(dev, PCI_COMMAND, command);
>>  	}
>> diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
>> index ab7510ce6917..6847b251e19a 100644
>> --- a/drivers/pci/setup-bus.c
>> +++ b/drivers/pci/setup-bus.c
>> @@ -2131,6 +2131,29 @@ pci_root_bus_distribute_available_resources(struct pci_bus *bus,
>>  	}
>>  }
>>  
>> +static void restore_memory_decoding(struct pci_bus *bus)
>> +{
>> +	struct pci_dev *dev;
>> +
>> +	list_for_each_entry(dev, &bus->devices, bus_list) {
>> +		struct pci_bus *b;
>> +
>> +		if (dev->dev_flags & PCI_DEV_FLAGS_MEMORY_ENABLE) {
>> +			u16 command;
>> +
>> +			pci_read_config_word(dev, PCI_COMMAND, &command);
>> +			command |= PCI_COMMAND_MEMORY;
>> +			pci_write_config_word(dev, PCI_COMMAND, command);
>> +		}
>> +
>> +		b = dev->subordinate;
>> +		if (!b)
>> +			continue;
>> +
>> +		restore_memory_decoding(b);
>> +	}
>> +}
>> +
>>  /*
>>   * First try will not touch PCI bridge res.
>>   * Second and later try will clear small leaf bridge res.
>> @@ -2229,6 +2252,8 @@ void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus)
>>  	goto again;
>>  
>>  dump:
>> +	restore_memory_decoding(bus);
> 
> The fact that we traverse the whole hierarchy here to restore
> PCI_COMMAND_MEMORY makes me suspect there's a window between point A
> (where we clear PCI_COMMAND_MEMORY and update a BAR) and point B
> (where we restore PCI_COMMAND_MEMORY) where VGA console output will
> not work.

Yes, there is a brief window where the display shows garbled output, but
it is not fatal. The VGA console returns to normal once the bit is
restored.
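
(For reference, the restored state can be double-checked after boot with
something like the following, reusing the example SBDF from the testing
notes; bit 1 of the COMMAND register is Memory Space Enable:

  $ sudo setpci -v -s 00:01.0 COMMAND

A value with bit 1 set, e.g. 0007, means memory decoding is enabled.)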

> This tickles my memory like we might have talked about this before and
> there's some reason for having to wait.  If so, sorry, and maybe we
> need a comment in the code about that reason.

I don't have a strong opinion on which way to go with this, but my
understanding is that we want memory decoding disabled while r->start
doesn't match the actual BAR value. If you agree with this rationale,
I'll add something to that effect in a comment.
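Something minimal along these lines, for example (exact wording open, and
assuming it sits right above the restore_memory_decoding(bus) call at the
dump: label):

	/*
	 * Memory decoding was disabled in pci_reassigndev_resource_alignment()
	 * so that the device does not decode while r->start disagrees with
	 * the value programmed into the BAR. At this point resources have
	 * been (re)assigned and written back, so restore decoding on devices
	 * where firmware had enabled it.
	 */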

See the prior discussion at [1] (link above) and the discussion from v1
of this patch at [2].

[2] https://lore.kernel.org/linux-pci/1d0b52bd-1654-4510-92dc-bd48447ff41d@xxxxxxx/

>>  	/* Dump the resource on buses */
>>  	pci_bus_dump_resources(bus);
>>  }
>> diff --git a/include/linux/pci.h b/include/linux/pci.h
>> index 4246cb790c7b..74636acf152f 100644
>> --- a/include/linux/pci.h
>> +++ b/include/linux/pci.h
>> @@ -245,6 +245,8 @@ enum pci_dev_flags {
>>  	PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11),
>>  	/* Device does honor MSI masking despite saying otherwise */
>>  	PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12),
>> +	/* Firmware enabled memory decoding, to be restored if BAR is updated */
>> +	PCI_DEV_FLAGS_MEMORY_ENABLE = (__force pci_dev_flags_t) (1 << 13),
>>  };
>>  
>>  enum pci_irq_reroute_variant {
>> -- 
>> 2.46.0
>>




