Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > This patch changes coalesced mmio to create one mmio device per
> > zone instead of handling all zones in one device.
> >
> > Doing so enables us to take advantage of existing locking and prevents
> > a race condition between coalesced mmio registration/unregistration
> > and lookups.
> >
> > @@ -63,7 +63,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> >    */
> >   struct kvm_io_bus {
> >   	int                   dev_count;
> > -#define NR_IOBUS_DEVS 200
> > +#define NR_IOBUS_DEVS 300
> >   	struct kvm_io_device *devs[NR_IOBUS_DEVS];
> >   };
> 
> This means that a lot of non-coalesced-mmio users can squeeze out 
> coalesced-mmio.  I don't know if it's really worthwhile, but the 100 
> coalesced mmio slots should be reserved so we are guaranteed they are 
> available.

We are currently registering 4 devices, plus however many
ioeventfds/coalesced mmio zones the user wants. I already felt bad
about upping it to 300.
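
If we do want the reservation, I imagine it would look something like
the sketch below - purely hypothetical, neither NR_COALESCED_RESERVED,
the is_coalesced flag nor io_bus_has_room() exist in the tree; it only
illustrates the idea of keeping slots aside for coalesced zones:

/*
 * Hypothetical: ordinary devices may only consume the first
 * NR_IOBUS_DEVS - NR_COALESCED_RESERVED bus slots, so a coalesced
 * zone registration can never be squeezed out.
 */
#define NR_COALESCED_RESERVED	100

static bool io_bus_has_room(struct kvm_io_bus *bus, bool is_coalesced)
{
	int limit = is_coalesced ? NR_IOBUS_DEVS
				 : NR_IOBUS_DEVS - NR_COALESCED_RESERVED;

	return bus->dev_count < limit;
}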

> 
> >
> > @@ -95,6 +85,8 @@ static void coalesced_mmio_destructor(struct kvm_io_device *this)
> >   {
> >   	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> >
> > +	list_del(&dev->list);
> > +
> >   	kfree(dev);
> >   }
> >
> 
> No lock?

The lock is there to synchronize access to the coalesced ring (it was
there before this patch too, it's not something new), not the device
list.

The device list is only accessed while kvm->slots_lock is held, so that
lock already covers it.
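
For context, the registration side (simplified from the patch, with
error handling abridged and the helper name made up for the example)
looks roughly like this - the list is only modified with
kvm->slots_lock held, which is why list_del() in the destructor needs
no lock of its own:

static int register_coalesced_zone(struct kvm *kvm,
				   struct kvm_coalesced_mmio_zone *zone)
{
	struct kvm_coalesced_mmio_dev *dev;
	int ret;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return -ENOMEM;

	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
	dev->kvm = kvm;
	dev->zone = *zone;

	mutex_lock(&kvm->slots_lock);
	/* both the bus and the zone list are updated under slots_lock */
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, &dev->dev);
	if (ret < 0) {
		mutex_unlock(&kvm->slots_lock);
		kfree(dev);
		return ret;
	}
	list_add_tail(&dev->list, &kvm->coalesced_zones.items);
	mutex_unlock(&kvm->slots_lock);

	return 0;
}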

> 
> >   int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
> >   					   struct kvm_coalesced_mmio_zone *zone)
> >   {
> > -	int i;
> > -	struct kvm_coalesced_mmio_dev *dev = kvm->coalesced_mmio_dev;
> > -	struct kvm_coalesced_mmio_zone *z;
> > -
> > -	if (dev == NULL)
> > -		return -ENXIO;
> > +	struct kvm_coalesced_mmio_dev *dev;
> >
> >   	mutex_lock(&kvm->slots_lock);
> >
> > -	i = dev->nb_zones;
> > -	while (i) {
> > -		z = &dev->zone[i - 1];
> > -
> > -		/* unregister all zones
> > -		 * included in (zone->addr, zone->size)
> > -		 */
> > -
> > -		if (zone->addr <= z->addr &&
> > -		    z->addr + z->size <= zone->addr + zone->size) {
> > -			dev->nb_zones--;
> > -			*z = dev->zone[dev->nb_zones];
> > +	list_for_each_entry(dev, &kvm->coalesced_zones.items, list)
> > +		if (coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
> > +			kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &dev->dev);
> > +			kvm_iodevice_destructor(&dev->dev);
> >   		}
> > -		i--;
> > -	}
> 
> No lock?
> 
> >
> >   struct kvm_coalesced_mmio_dev {
> > +	struct list_head list;
> >   	struct kvm_io_device dev;
> >   	struct kvm *kvm;
> > -	spinlock_t lock;
> > -	int nb_zones;
> > -	struct kvm_coalesced_mmio_zone zone[KVM_COALESCED_MMIO_ZONE_MAX];
> > +	struct kvm_coalesced_mmio_zone zone;
> >   };
> >
> 
> Why a list instead of a linear array?
> 

We have an unknown number of coalesced devices, which we allocate
dynamically on creation, so it seemed more logical to me to just chain
them in a list.
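
With one device per zone the lookup side also gets simpler: the
in-range check collapses to a single comparison per device, roughly
(the exact body in the patch may differ):

static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
				   gpa_t addr, int len)
{
	/* (addr, len) must fall entirely inside this device's one zone */
	return dev->zone.addr <= addr &&
	       addr + len <= dev->zone.addr + dev->zone.size;
}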

-- 

Sasha.


