Re: [RFC PATCH v1 08/10] vfio-ccw: Wire up the CRW irq and CRW region

On Fri, 6 Dec 2019 16:24:31 -0500
Eric Farman <farman@xxxxxxxxxxxxx> wrote:

> On 12/6/19 5:21 AM, Cornelia Huck wrote:
> > On Thu, 5 Dec 2019 15:43:55 -0500
> > Eric Farman <farman@xxxxxxxxxxxxx> wrote:
> >   
> >> On 11/19/19 1:52 PM, Cornelia Huck wrote:  
> >>> On Fri, 15 Nov 2019 03:56:18 +0100
> >>> Eric Farman <farman@xxxxxxxxxxxxx> wrote:

> >> The actual CRW handlers are in the base cio code, and we only get into
> >> vfio-ccw when processing the individual subchannels.  Do we need to make
> >> a device (or something?) at the guest level for the chpids that exist?
> >> Or do something to say "hey we got this from a subchannel, put it on a
> >> global queue if it's unique, or throw it away if it's a duplicate we
> >> haven't processed yet" ?  Thoughts?  
> > 
> > The problem is that you can easily get several crws that look identical
> > (consider e.g. a chpid that is set online and offline in a tight loop).  
> 
> Yeah, I have a little program that does such a loop.  Things don't work
> too well even with a random delay between on/off, though a hack I'm
> trying to formalize for v2 improves matters.  If I drop that delay to
> zero, um, well I haven't had the nerve to try that.  :)

:)

I'm also not quite sure what our expectations need to be here. IIRC, it
is not guaranteed that we get a CRW for each and every one of the
operations anyway. From what I remember, the only sane way for the
guest to process channel reports is to retrieve the current state (via
stsch) and process that, as the state may have changed again between
generating the CRW and the guest retrieving it.
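
To illustrate: what I would expect the guest to do on such a channel-path
crw is basically the following (just a rough sketch of the idea, not the
actual cio handler):

	/* Sketch only: the crw merely says "something happened to this
	 * chpid"; the authoritative state is whatever stsch returns now. */
	static void chp_crw_sketch(struct subchannel *sch)
	{
		struct schib schib;

		if (stsch(sch->schid, &schib))
			return;	/* subchannel is gone */

		sch->schib = schib;
		/* re-evaluate the path masks (pim/pam/pom) from the schib */
	}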

> 
> > The only entity that should make decisions as to what to process here
> > is the guest.  
> 
> Agreed.  So your suggestion in the QEMU series of acting like stcrw is
> good; give the guest all the information we can, and let it decide what
> thrashing is needed.  I guess if I can just queue everything on the
> vfio_ccw_private, and move one (two?) into the crw_region each time it's
> read then that should work well enough.  Thanks!

I *think* we can assume that the callback is invoked by the common I/O
layer for every affected subchannel if there's actually an event from
the hardware, so we can be reasonably sure that we can relay every
event to userspace.
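
So queueing on the private and draining into the crw_region on read
sounds workable to me. Roughly something like this (struct and field
names invented here, and you'd obviously want a lock around the list):

	struct vfio_ccw_crw {
		struct list_head next;
		struct crw crw;
	};

	/* called from the chp_event callback instead of overwriting
	 * private->crw directly */
	static void vfio_ccw_queue_crw(struct vfio_ccw_private *private,
				       struct crw *crw)
	{
		struct vfio_ccw_crw *entry;

		entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
		if (!entry)
			return;

		entry->crw = *crw;
		list_add_tail(&entry->next, &private->crws);
		queue_work(vfio_ccw_work_q, &private->crw_work);
	}

The read side would then move one entry (or two, if you want to handle
chained crws) into the region and re-trigger the irq if the list is not
yet empty.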

> 
> > 
> > (...)
> >   
> >>>> @@ -312,6 +334,11 @@ static int vfio_ccw_chp_event(struct subchannel *sch,
> >>>>  	case CHP_ONLINE:
> >>>>  		/* Path became available */
> >>>>  		sch->lpm |= mask & sch->opm;
> >>>> +		private->crw.rsc = CRW_RSC_CPATH;
> >>>> +		private->crw.rsid = 0x0 | (link->chpid.cssid << 8) |
> >>>> +				    link->chpid.id;
> >>>> +		private->crw.erc = CRW_ERC_INIT;
> >>>> +		queue_work(vfio_ccw_work_q, &private->crw_work);    
> >>>
> >>> Isn't that racy? Imagine you get one notification for a chpid and queue
> >>> it. Then, you get another notification for another chpid and queue it
> >>> as well. Depending on when userspace reads, it gets different chpids.
> >>> Moreover, a crw may be lost... or am I missing something obvious?    
> >>
> >> Nope, you're right on.  If I start thrashing config on/off chpids on the
> >> host, I eventually fall down with all sorts of weirdness.
> >>  
> >>>
> >>> Maybe you need a real queue for the generated crws?    
> >>
> >> I guess this is what I'm wrestling with...  We don't have a queue for
> >> guest-wide work items, as it's currently broken apart by subchannel.  Is
> >> adding one at the vfio-ccw level right?  Feels odd to me, since multiple
> >> guests could use devices connected via vfio-ccw, which may or may not share
> >> common chpids.  
> > 
> > One problem is that the common I/O layer already processes the crws and
> > translates them into different per-subchannel events. We don't even
> > know what the original crw was: IIUC, we translate both a crw for a
> > chpid and a link incident event (reported by a crw with source css and
> > event information via chsc) concerning the concrete link to the same
> > event. That *probably* doesn't matter too much, but it makes things
> > harder. Access to the original crw queue would be nice, but hard to
> > implement without stepping on each other's toes.
> >>
> >> I have a rough hack that serializes things a bit, while still keeping
> >> the CRW duplication at the subchannel level.  Things improve
> >> considerably, but it still seems odd to me.  I'll keep working on that
> >> unless anyone has any better ideas.  
> > 
> > The main issue is that we're trying to report a somewhat global event
> > via individual devices...  
> 
> +1

If only we could use some kind of global interface... but I can't think
of a sane way to do that :/

> 
> > 
> > ...what about not reporting crws at all, but something derived from the
> > events we get at the subchannel driver level? Have four masks that
> > indicate online/offline/vary on/vary off for the respective chpids, and
> > have userspace decide how they want to report these to the guest? A
> > drawback (?) would be that a series of on/off events would only be
> > reported as one on and one off event, though. Feasible, or complete
> > lunacy?
> >   
> 
> Not complete lunacy, but brings concerns of its own as we'd need to
> ensure the masks don't say something nonsensical, like (for example)
> both vary on and vary off.  Or what happens if both vary on and config
> off get set?  Not a huge amount of work, but it just seems to carry more
> risk than a queue of the existing CRWs and letting the guest process
> them itself, even if things are duplicated more than necessary.  In
> reality, these events aren't that common anyway unless things go REALLY
> sideways.

Yeah, that approach probably just brings a different set of issues... I
think we would need to relay all mask changes, and QEMU would need to
inject all events, and the guest would need to figure out what to do.
Not sure if that is better.
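
For the record, the mask idea would boil down to a region layout along
these lines (purely illustrative, nothing like that exists today):

	struct ccw_chp_mask_region {
		__u8 online[32];	/* one bit per chpid, 256 chpids */
		__u8 offline[32];
		__u8 vary_on[32];
		__u8 vary_off[32];
	} __packed;

...and the ugly part is exactly what you describe: keeping those four
bitmaps from contradicting each other between two reads.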



