On Mon, 2013-09-30 at 15:24 +0100, Chris Wilson wrote:
> On Mon, Sep 30, 2013 at 05:08:31PM +0300, ville.syrjala@xxxxxxxxxxxxxxx wrote:
> > From: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> >
> > We have several problems with our VGA handling:
> > - We try to use the GMCH control VGA disable bit even though it may
> >   be locked
> > - If we manage to disable VGA through GMCH control, we're no longer
> >   able to correctly disable the VGA plane
> > - Taking part in the VGA arbitration is too expensive for X [1]
>
> I'd like to emphasize that X disables DRI if it detects 2 vga cards,
> effectively breaking all machines with a discrete GPU. Even if one of
> those is not being used.

Why does it do this?  It seems like DRI would make little or no use of
VGA space.  Having more than one VGA card seems like a pretty common
condition when integrated graphics are available.  We also seem to have
quite a bit of interest in assigning one or more of the cards to a
virtual machine, so I worry we're headed the wrong way if X starts
deciding not to release the VGA arbiter lock.  On a modern desktop,
what touches VGA space frequently enough that we care about its
performance?

Thanks,

Alex

> > +/*
> > + * 21 devices with 8 functions per device max on the same bus.
> > + * We don't need locking for these due to stop_machine().
> > + */
> > +static u16 vga_cmd[21*8];
> > +static u16 vga_ctl[21*8];
>
> Should we just allocate storage when we need it? We are now adding
> several hundred bytes to our module, which is bound to cause us to use
> an extra page, and the arrays could be passed around through the
> stop_machine closure rather than being static.
>
> But anyway, it does what it says on the tin and makes my dGPU testing
> box usable again, without breaking any other machine that I've tested
> on so far,
> Tested-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> -Chris
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
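
For reference, a rough sketch of the alternative Chris suggests above:
allocate the saved register state on demand and hand it to stop_machine()
through its data argument instead of keeping static arrays in the module
image. The names struct vga_save, gmch_vga_disable() and gmch_disable_vga()
are made up for illustration; this is not the code from the actual patch.

#include <linux/slab.h>
#include <linux/stop_machine.h>

#define VGA_MAX_FUNCS (21 * 8)	/* 21 devices x 8 functions per bus */

struct vga_save {
	u16 cmd[VGA_MAX_FUNCS];	/* saved PCI command words */
	u16 ctl[VGA_MAX_FUNCS];	/* saved VGA control words */
};

/* Runs with all other CPUs stopped, so no locking is needed here. */
static int gmch_vga_disable(void *data)
{
	struct vga_save *save = data;

	/*
	 * ... walk the bus, recording state into save->cmd[] and
	 * save->ctl[] before poking the GMCH control VGA disable bit ...
	 */
	(void)save;
	return 0;
}

static int gmch_disable_vga(void)
{
	struct vga_save *save;
	int ret;

	/*
	 * ~672 bytes (2 x 168 x u16), allocated only for the duration
	 * of the call instead of living in the module's data segment.
	 */
	save = kzalloc(sizeof(*save), GFP_KERNEL);
	if (!save)
		return -ENOMEM;

	/* NULL cpumask: stop every online CPU while fn runs. */
	ret = stop_machine(gmch_vga_disable, save, NULL);

	kfree(save);
	return ret;
}

The trade-off is a short-lived allocation (and a possible -ENOMEM failure
path) in exchange for not carrying the two arrays in the module image, which
is the extra-page concern Chris raises.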