On Fri, Oct 21, 2011 at 11:27:48AM +0200, Jan Kiszka wrote:
> On 2011-10-21 09:54, Michael S. Tsirkin wrote:
> > On Fri, Oct 21, 2011 at 09:09:10AM +0200, Jan Kiszka wrote:
> >> On 2011-10-21 00:02, Michael S. Tsirkin wrote:
> >>>>> Yes. But this still makes an API for acquiring per-vector
> >>>>> resources a requirement.
> >>>>
> >>>> Yes, but a different one than current use/unuse.
> >>>
> >>> What's wrong with use/unuse as an API? It's already in place
> >>> and virtio calls it.
> >>
> >> Not for that purpose. It remains a useless API in the absence of
> >> KVM's requirements.
> >
> > Sorry, I don't understand. This can acquire whatever resources are
> > necessary. It does not seem to make sense to rip it out only to add
> > a different one back in.
> >
> >>>> And it will be an
> >>>> optional one, only for those devices that need to establish
> >>>> irq/eventfd channels.
> >>>>
> >>>> Jan
> >>>
> >>> Not sure this should be up to the device.
> >>
> >> The device provides the fd. At least it acquires and associates it.
> >>
> >> Jan
> >
> > It would surely be beneficial to have a uniform API so that devices
> > don't need to be recoded to be moved in this way.
>
> The point is that the current API is useless for devices that do not
> have to declare any vector to the core.

Don't assigned devices want this as well? They handle 0-address vectors
specially, and this hack absolutely doesn't belong in pci core ...

> By forcing them to call into that API, we solve no current problem
> automatically. We rather need associate_vector_with_x (and the
> reverse). And that only for devices that have different backends than
> user space models.
>
> Jan

I'll need to think about this; I would prefer this series not to get
blocked on this issue. We more or less agreed to add _use_all/unuse_all
for now?
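To make the two API shapes under discussion concrete, here is a minimal
sketch in C. All names (MSIXVector, msix_vector_use, msix_vector_use_all,
msix_vector_associate, etc.) are hypothetical illustrations, not actual
QEMU functions: a refcounted per-vector use/unuse pair acquires generic
resources (the path virtio already exercises), a _use_all/unuse_all
convenience covers devices that don't care about individual vectors, and
an optional associate call wires a used vector to an irqfd-style eventfd
backend, as only vhost/assigned-device-like backends would need.

```c
#include <assert.h>

/* Hypothetical sketch only -- not real QEMU API. */

#define MAX_VECTORS 8

typedef struct {
    int use_count;   /* how many users have acquired this vector */
    int backend_fd;  /* -1 while no irq/eventfd channel is bound */
} MSIXVector;

typedef struct {
    MSIXVector vec[MAX_VECTORS];
} PCIDeviceSketch;

static void msix_init_sketch(PCIDeviceSketch *d)
{
    for (int i = 0; i < MAX_VECTORS; i++) {
        d->vec[i].use_count = 0;
        d->vec[i].backend_fd = -1;
    }
}

/* use/unuse: generic per-vector resource acquisition. */
static int msix_vector_use(PCIDeviceSketch *d, int v)
{
    if (v < 0 || v >= MAX_VECTORS) {
        return -1;
    }
    d->vec[v].use_count++;
    return 0;
}

static void msix_vector_unuse(PCIDeviceSketch *d, int v)
{
    if (v >= 0 && v < MAX_VECTORS && d->vec[v].use_count > 0) {
        d->vec[v].use_count--;
    }
}

/* The _use_all/unuse_all convenience the thread converges on. */
static void msix_vector_use_all(PCIDeviceSketch *d)
{
    for (int i = 0; i < MAX_VECTORS; i++) {
        msix_vector_use(d, i);
    }
}

static void msix_vector_unuse_all(PCIDeviceSketch *d)
{
    for (int i = 0; i < MAX_VECTORS; i++) {
        msix_vector_unuse(d, i);
    }
}

/* Optional associate step: only devices with an in-kernel backend
 * (vhost, device assignment) would bind a vector to an eventfd;
 * pure userspace device models never call this. */
static int msix_vector_associate(PCIDeviceSketch *d, int v, int eventfd)
{
    if (v < 0 || v >= MAX_VECTORS || d->vec[v].use_count == 0) {
        return -1;  /* vector must be in use before binding a channel */
    }
    d->vec[v].backend_fd = eventfd;
    return 0;
}
```

The point of the split is visible in the guard inside
msix_vector_associate: use/unuse stays a uniform core-facing contract,
while the eventfd association is an optional layer on top that fails
cleanly for vectors nobody acquired.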
> --
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux