Allow the vfio_device file to be in a state where the device FD is
opened but the device cannot be used by userspace (i.e. its
.open_device() hasn't been called). This in-between state is not used
when the device FD is spawned from the group FD; however, when we
create the device FD directly by opening a cdev, it will be opened in
the blocked state.

The reason for the in-between state is that userspace only gets a FD,
but does not gain secure access to the device until binding the FD to
an iommufd. So in the blocked state only the bind operation is allowed;
other device accesses are not allowed. Completing the bind allows the
user to further access the device.

This is implemented by adding a flag in struct vfio_device_file to mark
the blocked state and using a simple smp_load_acquire() to obtain the
flag value and serialize all the device setup with the thread accessing
this device.

Following this lockless scheme, it can safely handle the device FD
going unbound->bound, but it cannot handle bound->unbound. To allow
that we'd need to add a lock on all the vfio ioctls, which seems
costly. So once a device FD is bound, it remains bound until the FD is
closed.

Suggested-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Signed-off-by: Yi Liu <yi.l.liu@xxxxxxxxx>
---
 drivers/vfio/vfio.h      |  1 +
 drivers/vfio/vfio_main.c | 34 +++++++++++++++++++++++++++++++++-
 2 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index d8275881c1f1..802e13f1256e 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -18,6 +18,7 @@ struct vfio_container;
 
 struct vfio_device_file {
 	struct vfio_device *device;
+	bool access_granted;
 	spinlock_t kvm_ref_lock; /* protect kvm field */
 	struct kvm *kvm;
 	struct iommufd_ctx *iommufd; /* protected by struct vfio_device_set::lock */
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index c517252aba19..2267057240bd 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -476,7 +476,15 @@ int vfio_device_open(struct vfio_device_file *df)
 		device->open_count--;
 	}
 
-	return ret;
+	if (ret)
+		return ret;
+
+	/*
+	 * Paired with smp_load_acquire() in vfio_device_fops::ioctl/
+	 * read/write/mmap
+	 */
+	smp_store_release(&df->access_granted, true);
+	return 0;
 }
 
 void vfio_device_close(struct vfio_device_file *df)
@@ -1104,8 +1112,14 @@ static long vfio_device_fops_unl_ioctl(struct file *filep,
 {
 	struct vfio_device_file *df = filep->private_data;
 	struct vfio_device *device = df->device;
+	bool access;
 	int ret;
 
+	/* Paired with smp_store_release() in vfio_device_open() */
+	access = smp_load_acquire(&df->access_granted);
+	if (!access)
+		return -EINVAL;
+
 	ret = vfio_device_pm_runtime_get(device);
 	if (ret)
 		return ret;
@@ -1132,6 +1146,12 @@ static ssize_t vfio_device_fops_read(struct file *filep, char __user *buf,
 {
 	struct vfio_device_file *df = filep->private_data;
 	struct vfio_device *device = df->device;
+	bool access;
+
+	/* Paired with smp_store_release() in vfio_device_open() */
+	access = smp_load_acquire(&df->access_granted);
+	if (!access)
+		return -EINVAL;
 
 	if (unlikely(!device->ops->read))
 		return -EINVAL;
@@ -1145,6 +1165,12 @@ static ssize_t vfio_device_fops_write(struct file *filep,
 {
 	struct vfio_device_file *df = filep->private_data;
 	struct vfio_device *device = df->device;
+	bool access;
+
+	/* Paired with smp_store_release() in vfio_device_open() */
+	access = smp_load_acquire(&df->access_granted);
+	if (!access)
+		return -EINVAL;
 
 	if (unlikely(!device->ops->write))
 		return -EINVAL;
@@ -1156,6 +1182,12 @@ static int vfio_device_fops_mmap(struct file *filep, struct vm_area_struct *vma)
 {
 	struct vfio_device_file *df = filep->private_data;
 	struct vfio_device *device = df->device;
+	bool access;
+
+	/* Paired with smp_store_release() in vfio_device_open() */
+	access = smp_load_acquire(&df->access_granted);
+	if (!access)
+		return -EINVAL;
 
 	if (unlikely(!device->ops->mmap))
 		return -EINVAL;
-- 
2.34.1
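
For readers less familiar with the release/acquire pairing relied on
above, here is a minimal stand-alone userspace sketch of the same
pattern. It uses C11 atomics instead of the kernel's
smp_store_release()/smp_load_acquire(), and every name in it
(device_state, access_granted, the two threads) is made up for
illustration; it is not VFIO code. The point it demonstrates is why the
one-way unbound->bound transition needs no lock: once a reader observes
the flag with an acquire load, every store made before the matching
release store is guaranteed to be visible to it.

/*
 * Userspace sketch of the release/acquire publication pattern.
 * Names are hypothetical; build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int device_state;            /* stands in for the ->open_device() setup */
static atomic_bool access_granted;  /* stands in for df->access_granted */

static void *bind_thread(void *arg)
{
	device_state = 42;	/* all device setup happens first ... */
	/* ... then the flag is published with release semantics */
	atomic_store_explicit(&access_granted, true, memory_order_release);
	return NULL;
}

static void *ioctl_thread(void *arg)
{
	/* acquire load pairs with the release store in bind_thread() */
	if (!atomic_load_explicit(&access_granted, memory_order_acquire)) {
		printf("blocked: would return -EINVAL\n");
		return NULL;
	}
	/* seeing the flag guarantees the setup store is visible too */
	printf("granted: device_state=%d\n", device_state);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, bind_thread, NULL);
	pthread_create(&b, NULL, ioctl_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

The sketch also shows why bound->unbound cannot be handled the same
way: clearing the flag does nothing for a reader that has already
observed it set, so serializing that transition would require a lock
across every fops entry point, which is exactly the cost the changelog
avoids.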