> From: Shameer Kolothum <shameerali.kolothum.thodi@xxxxxxxxxx>
> Sent: Friday, March 4, 2022 7:02 AM
>
> +/*
> + * Each state Reg is checked 100 times,
> + * with a delay of 100 microseconds after each check
> + */
> +static u32 acc_check_reg_state(struct hisi_qm *qm, u32 regs)

qm_check_reg_state(), given the 1st argument is qm.

> +/* Check the PF's RAS state and Function INT state */
> +static int qm_check_int_state(struct hisi_acc_vf_core_device *hisi_acc_vdev)

Then this should be acc_check_int_state(), given the input is an acc device?
Anyway, please use a consistent naming convention here.

> +static int qm_read_reg(struct hisi_qm *qm, u32 reg_addr,
> +		       u32 *data, u8 nums)

qm_read_regs(), to reflect that multiple registers are processed.

> +
> +static int qm_write_reg(struct hisi_qm *qm, u32 reg,
> +			u32 *data, u8 nums)

qm_write_regs()

> +
> +static int qm_rw_regs_read(struct hisi_qm *qm, struct acc_vf_data *vf_data)

qm_load_regs(). It's confusing to have both 'rw' and 'read'.

> +
> +static int qm_rw_regs_write(struct hisi_qm *qm, struct acc_vf_data *vf_data)

qm_save_regs()

> +static int hisi_acc_vf_qm_init(struct hisi_acc_vf_core_device *hisi_acc_vdev)
> +{
> +	struct vfio_pci_core_device *vdev = &hisi_acc_vdev->core_device;
> +	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
> +	struct pci_dev *vf_dev = vdev->pdev;
> +
> +	/*
> +	 * ACC VF dev BAR2 region consists of both functional register space
> +	 * and migration control register space. For migration to work, we
> +	 * need access to both. Hence, we map the entire BAR2 region here.
> +	 * But from a security point of view, we restrict access to the
> +	 * migration control space from Guest(Please see mmap/ioctl/read/write
> +	 * override functions). (Please see hisi_acc_vfio_pci_migrn_ops)
> +	 *
> +	 * Also the HiSilicon ACC VF devices supported by this driver on
> +	 * HiSilicon hardware platforms are integrated end point devices
> +	 * and has no capability to perform PCIe P2P.
According to the v5 discussion, I think it is the platform which lacks the
P2P capability, not the device. The current wording reads like the latter;
better to clarify it accurately. 😊

> static int hisi_acc_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> {
> -	struct vfio_pci_core_device *vdev;
> +	struct hisi_acc_vf_core_device *hisi_acc_vdev;
> +	struct hisi_qm *pf_qm;
> 	int ret;
>
> -	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> -	if (!vdev)
> +	hisi_acc_vdev = kzalloc(sizeof(*hisi_acc_vdev), GFP_KERNEL);
> +	if (!hisi_acc_vdev)
> 		return -ENOMEM;
>
> -	vfio_pci_core_init_device(vdev, pdev, &hisi_acc_vfio_pci_ops);
> +	pf_qm = hisi_acc_get_pf_qm(pdev);
> +	if (pf_qm && pf_qm->ver >= QM_HW_V3) {
> +		ret = hisi_acc_vfio_pci_migrn_init(hisi_acc_vdev, pdev, pf_qm);
> +		if (!ret) {
> +			vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
> +						  &hisi_acc_vfio_pci_migrn_ops);
> +		} else {
> +			pci_warn(pdev, "migration support failed, continue with generic interface\n");
> +			vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
> +						  &hisi_acc_vfio_pci_ops);
> +		}

This logic looks weird. Earlier you state that the migration control region
must be hidden from userspace as a security requirement, but the above logic
reads like: if the driver fails to initialize migration support, we just fall
back to the default ops, which grant the user full access to the entire MMIO
bar.

> +	} else {
> +		vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
> +					  &hisi_acc_vfio_pci_ops);
> +	}

If the hardware itself doesn't support the migration capability, can we just
move it out of the id table and let vfio-pci drive it?

Thanks
Kevin