On Fri, Oct 25, 2024 at 05:01:52PM +0200, Szymon Durawa wrote:
> Starting from Intel Arrow Lake VMD enhacement introduces separate
> rotbus for PCH. It means that all 3 MMIO BARs exposed by VMD are

enhancement
root bus

Does VMD still have only 3 MMIO BARs? VMD_RES_PCH_* suggests more
BARs.

> shared now between CPU IOC and PCH. This patch adds PCH bus
> enumeration and MMIO management for devices with VMD enhancement
> support.

s/This patch adds/Add/

We already had bus enumeration and MMIO management. It'd be nice to
have something specific about what changes with PCH. A different
fixed root bus number? Multiple root buses? Additional BARs in the
VMD endpoint?

If possible, describe this in generic PCIe topology terms, not in
Intel-speak (IOC, PCH, etc).

> +#define VMD_PRIMARY_PCH_BUS	0x80
> +#define VMD_BUSRANGE0		0xC8
> +#define VMD_BUSRANGE1		0xCC
> +#define VMD_MEMBAR1_OFFSET	0xD0
> +#define VMD_MEMBAR2_OFFSET1	0xD8
> +#define VMD_MEMBAR2_OFFSET2	0xDC

This file (mostly) uses lower-case hex; match that style.

> +#define VMD_BUS_END(busr)	((busr >> 8) & 0xff)
> +#define VMD_BUS_START(busr)	(busr & 0x00ff)
> +
>  #define MB2_SHADOW_OFFSET	0x2000
>  #define MB2_SHADOW_SIZE	16
>
> @@ -38,11 +47,15 @@ enum vmd_resource {
>  	VMD_RES_CFGBAR = 0,
>  	VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */
>  	VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */
> +	VMD_RES_PCH_CFGBAR,
> +	VMD_RES_PCH_MBAR_1, /*VMD Resource PCH MemBAR 1 */
> +	VMD_RES_PCH_MBAR_2, /*VMD Resource PCH MemBAR 2 */

Space after "/*".

> +static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd)
> +{
> +	return vmd->busn_start[VMD_BUS_1] != 0;

Seems a little weird to learn this by testing whether this kzalloc'ed
field has been set. Could easily save the driver_data pointer or just
the "features" value in struct vmd_dev.

> +	case 3:
> +		if (!(features & VMD_FEAT_HAS_PCH_ROOTBUS)) {
> +			pci_err(dev, "VMD Bus Restriction detected type %d, but PCH Rootbus is not supported, aborting.\n",
> +				BUS_RESTRICT_CFG(reg));
> +			return -ENODEV;
> +		}
> +
> +		/* IOC start bus */
> +		vmd->busn_start[VMD_BUS_0] = 224;
> +		/* PCH start bus */
> +		vmd->busn_start[VMD_BUS_1] = 225;

Seems like these magic numbers could have #defines. I see we've been
using 128 and 224 already, and this basically adds 225.

> +static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
> +			      resource_size_t *offset)
> +{
> +	LIST_HEAD(resources_pch);
> +
> +	pci_add_resource(&resources_pch, &vmd->resources[VMD_RES_PCH_CFGBAR]);
> +	pci_add_resource_offset(&resources_pch,
> +				&vmd->resources[VMD_RES_PCH_MBAR_1], offset[0]);
> +	pci_add_resource_offset(&resources_pch,
> +				&vmd->resources[VMD_RES_PCH_MBAR_2], offset[1]);
> +
> +	vmd->bus[VMD_BUS_1] = pci_create_root_bus(&vmd->dev->dev,
> +						  vmd->busn_start[VMD_BUS_1],
> +						  &vmd_ops, sd, &resources_pch);
> +
> +	if (!vmd->bus[VMD_BUS_1]) {
> +		pci_free_resource_list(&resources_pch);
> +		pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
> +		pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
> +		return -ENODEV;
> +	}
> +
> +	/*
> +	 * primary bus is not set by pci_create_root_bus(), it is updated here
> +	 */
> +	vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS;
> +
> +	vmd_copy_host_bridge_flags(
> +			pci_find_host_bridge(vmd->dev->bus),
> +			to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
> +
> +	if (vmd->irq_domain)
> +		dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
> +				   vmd->irq_domain);
> +	else
> +		dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
> +				   dev_get_msi_domain(&vmd->dev->dev));
> +
> +	return 0;

This looks a lot like parts of vmd_enable_domain(). Could this be
factored out into a helper function that could be used for both
VMD_BUS_0 and VMD_BUS_1?
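Untested sketch of the kind of helper I mean; the function name,
parameter list, and error path are just illustrative, I haven't tried
to build it:

/*
 * Sketch only: "idx" selects the VMD_BUS_0/VMD_BUS_1 slot, and the
 * resource indices and primary bus number are passed in by the caller.
 */
static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
                          unsigned int idx, unsigned int cfgbar,
                          unsigned int mbar1, unsigned int mbar2,
                          resource_size_t *offset, u8 primary)
{
        LIST_HEAD(resources);

        pci_add_resource(&resources, &vmd->resources[cfgbar]);
        pci_add_resource_offset(&resources, &vmd->resources[mbar1],
                                offset[0]);
        pci_add_resource_offset(&resources, &vmd->resources[mbar2],
                                offset[1]);

        vmd->bus[idx] = pci_create_root_bus(&vmd->dev->dev,
                                            vmd->busn_start[idx],
                                            &vmd_ops, sd, &resources);
        if (!vmd->bus[idx]) {
                /* Nothing to stop/remove; the bus was never created */
                pci_free_resource_list(&resources);
                return -ENODEV;
        }

        /* pci_create_root_bus() does not set the primary bus number */
        vmd->bus[idx]->primary = primary;

        vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
                                   to_pci_host_bridge(vmd->bus[idx]->bridge));

        if (vmd->irq_domain)
                dev_set_msi_domain(&vmd->bus[idx]->dev, vmd->irq_domain);
        else
                dev_set_msi_domain(&vmd->bus[idx]->dev,
                                   dev_get_msi_domain(&vmd->dev->dev));

        return 0;
}

Then the VMD_BUS_0 and VMD_BUS_1 paths would only differ in the
arguments they pass.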
Why is vmd_attach_resource() different between them? Why is
sysfs_create_link() different?

Bjorn