Re: role of crtcs in modesetting interfaces and possible abstraction away from userspace

Dave Airlie <airlied@xxxxxxxxx> writes:

> Hi,
>
> So I've been attempting to hide the 30" Dell MST monitors in the
> kernel, and ran into a number of problems,
> but the major one is how to steal a crtc and get away with it.
>
> The standard scenario I have is
>
> CRTC 0: eDP monitor connected
>
> hotplug 30" monitor, userspace decides to configure things as
>
> CRTC 1: DP-4 - 30" monitor
> CRTC 2: eDP-1
>
> But since we lack atomic it does this in two steps, so when I get the
> first modeset to set the 30" monitor up
> I go and use CRTC-2 as the secondary crtc, as CRTC-0 is in use still,
> then I have to fail the second modeset,
> and things end up with me crying.
>
> So this led me to wonder why we expose CRTCs at all, and KMS does it
> because randr did it, but I've no idea
> why randr did it (Keith??).

Mostly because X has almost always exposed the hardware at as low a
level as possible and left clients to sort things out. Given that we had
no experience with this whole structure before RandR got implemented,
and that we've been running like this for eight years without terrible
trouble, it doesn't seem like an utter failure...

For this particular issue, we've got two choices:

 1) Describe the situation through the protocol and let applications
    sort it out.

 2) Hide physical CRTCs from applications and create virtual CRTCs
    for them.

One reason for exposing physical CRTCs to applications is to let them
figure out the full allocation plan before starting the process, so as
to minimize screen flicker given an API which doesn't let you specify
the whole configuration in one go.

If we hide them, then the kernel may need to shut down monitors while it
shuffles things around to match application requests.
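
To make that allocation-plan argument concrete: with physical CRTCs
visible, the X server can walk every connected connector, pick a
compatible CRTC for each one up front, and only then start issuing
modesets. Here's a minimal sketch of such a planning pass using the
libdrm calls that exist today (drmModeGetResources, drmModeGetConnector,
drmModeGetEncoder); the plan_entry structure and pick_crtc_for helper
are illustrative names only, not anything in current code, and error
handling and framebuffer setup are omitted.

	#include <stdint.h>
	#include <xf86drm.h>
	#include <xf86drmMode.h>

	struct plan_entry {
		uint32_t	connector_id;
		uint32_t	crtc_id;	/* 0 if no free, compatible CRTC found */
		drmModeModeInfo	mode;
	};

	/* Illustrative helper: find a CRTC that one of the connector's
	 * encoders can drive (possible_crtcs is a bitmask indexed by CRTC
	 * position in the resources array) and that no earlier plan entry
	 * has already claimed. */
	static uint32_t pick_crtc_for(int fd, drmModeResPtr res,
				      drmModeConnectorPtr conn,
				      const struct plan_entry *plan, int planned)
	{
		for (int e = 0; e < conn->count_encoders; e++) {
			drmModeEncoderPtr enc = drmModeGetEncoder(fd, conn->encoders[e]);

			if (!enc)
				continue;
			for (int c = 0; c < res->count_crtcs; c++) {
				int taken = 0;

				if (!(enc->possible_crtcs & (1u << c)))
					continue;
				for (int p = 0; p < planned; p++)
					if (plan[p].crtc_id == res->crtcs[c])
						taken = 1;
				if (!taken) {
					uint32_t id = res->crtcs[c];

					drmModeFreeEncoder(enc);
					return id;
				}
			}
			drmModeFreeEncoder(enc);
		}
		return 0;
	}

	/* Build the complete connector -> CRTC plan before touching the
	 * hardware; only once every connected output has an answer would
	 * the caller start issuing drmModeSetCrtc() calls, so nothing has
	 * to be set up twice. */
	static int plan_configuration(int fd, struct plan_entry *plan, int max)
	{
		drmModeResPtr res = drmModeGetResources(fd);
		int n = 0;

		if (!res)
			return -1;
		for (int i = 0; i < res->count_connectors && n < max; i++) {
			drmModeConnectorPtr conn =
				drmModeGetConnector(fd, res->connectors[i]);

			if (conn && conn->connection == DRM_MODE_CONNECTED &&
			    conn->count_modes > 0) {
				plan[n].connector_id = conn->connector_id;
				plan[n].mode = conn->modes[0];	/* preferred mode */
				plan[n].crtc_id = pick_crtc_for(fd, res, conn, plan, n);
				n++;
			}
			if (conn)
				drmModeFreeConnector(conn);
		}
		drmModeFreeResources(res);
		return n;
	}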

> From my POV I don't think the modesetting interface needs to take
> crtcs, just connectors and modes,

I'm fine with requiring the X server to be smarter about kernel CRTC
allocation, pushing the problem out of the kernel and up into the window
system. That seems like the smallest change to the kernel API to me.

Making X hide real CRTCs from clients seems like a fairly simple plan;
that would also offer us an opportunity to add 'virtual' CRTCs for use
by VNC or other software-defined display surfaces.

> so I'm wondering going forward for atomic should we even accept crtcs
> in the interface, just a list of rectangles,
> connectors per rectangle, etc.

Having a list of CRTCs means that the application would have a chance of
figuring out some of the impossible configurations before asking the
kernel; you couldn't light up two single-link monitors and a double-link
monitor if you only had three CRTCs.
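
As a toy illustration of that check (the per-monitor CRTC count is the
piece of information that would have to be advertised somehow; it isn't
in the current API):

	#include <stdbool.h>

	/* Does a proposed configuration fit in the display engine?
	 * Two single-link monitors plus one dual-CRTC MST tile against
	 * three CRTCs: 1 + 1 + 2 = 4 > 3, so the request can be rejected
	 * before any modeset is even attempted. */
	static bool config_fits(int available_crtcs,
				const int *crtcs_per_monitor, int monitors)
	{
		int needed = 0;

		for (int i = 0; i < monitors; i++)
			needed += crtcs_per_monitor[i];
		return needed <= available_crtcs;
	}

	/* config_fits(3, (int[]){1, 1, 2}, 3) == false */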

> Now I'm at the point of trying to work out if I can make DP MST
> monitors a possibility before we get atomic,

I think fixing X to hide the physical CRTCs and only advertise virtual
ones should be pretty easy to manage; that would leave the kernel API
alone, at least for now.

> Ben and I discussed this here and he suggested we should make the
> userspace crtc ids pretty much meaningless and not have them tied to
> actual hw crtcs, so we can reroute things underneath userspace without
> changing it.

It's clear that we need this kind of redirection at some level in the
stack; what's unclear to me is whether this should be done in the kernel
or up in userspace.
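
Whichever layer it lands in, the indirection itself is small; a minimal
sketch, with entirely hypothetical names (this is not an existing kernel
or X structure):

	#include <stdint.h>

	#define MAX_VCRTCS 8

	/* Hypothetical remap table: the id handed to clients is stable,
	 * while the physical pipe backing it can be swapped underneath
	 * without the client noticing, e.g. when an MST monitor needs to
	 * steal that pipe. */
	struct vcrtc_map {
		uint32_t	virtual_id;	/* userspace-visible, never changes */
		uint32_t	physical_id;	/* hw CRTC currently backing it */
	};

	static struct vcrtc_map vcrtc_table[MAX_VCRTCS];

	static int vcrtc_rebind(uint32_t virtual_id, uint32_t new_physical_id)
	{
		for (int i = 0; i < MAX_VCRTCS; i++) {
			if (vcrtc_table[i].virtual_id == virtual_id) {
				vcrtc_table[i].physical_id = new_physical_id;
				return 0;
			}
		}
		return -1;	/* unknown virtual CRTC */
	}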

With atomic mode setting in the kernel, I think you're probably right in
proposing to eliminate explicit CRTC allocation from that. I do think
you'll want to indicate the number of available CRTCs in the display
engine, and the number of CRTCs each monitor consumes. Do you know if
there are some of these monitors that can display lower resolution modes
with only a single CRTC? Or is the hardware so separate that you end up
always using multiple CRTCs to drive them?
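
Purely as a strawman for what a CRTC-free request could carry, along the
"list of rectangles, connectors per rectangle" lines: none of this is
proposed UAPI, and the kernel would still need to advertise the CRTC
counts mentioned above so userspace can reject impossible configurations
early.

	#include <stdint.h>

	/* Hypothetical atomic request without explicit CRTCs: a framebuffer
	 * plus a list of rectangles, each naming the connectors cloned onto
	 * it.  The kernel picks (and is later free to reroute) the physical
	 * pipes itself. */
	struct hypothetical_output_rect {
		int32_t		x, y;		/* position within the framebuffer */
		uint32_t	width, height;
		uint32_t	connector_ids[4];
		uint32_t	num_connectors;
		uint32_t	mode_blob_id;	/* mode shared by those connectors */
	};

	struct hypothetical_atomic_config {
		uint32_t	fb_id;
		uint32_t	num_rects;
		struct hypothetical_output_rect	rects[8];
	};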

For the current incremental mode setting API, I think it'd work either
way.

Pushing the problem out to user space is always tempting, and I don't
think it would be hard to teach the X server to manage this. That would
also eliminate the need to construct fake EDID data within the kernel;
the X server could do whatever it liked in building suitable video mode
lists given complete information about the monitor. Plus, I can see how
we'd offer an atomic RandR request that could operate on top of the
current API while minimizing flashing. Hiding CRTCs from the X server
would make this difficult, as the kernel wouldn't have the full set of
configuration information available without the atomic mode kernel API.
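
The flashing-minimizing ordering is easy enough to sketch against the
existing drmModeSetCrtc() call: diff the old and new CRTC assignments,
shut down only the CRTCs that have to move, then bring up the new
assignments, so untouched outputs never blink. The crtc_assignment
structure below is illustrative and error handling is omitted.

	#include <stddef.h>
	#include <stdint.h>
	#include <xf86drm.h>
	#include <xf86drmMode.h>

	struct crtc_assignment {
		uint32_t	crtc_id;
		uint32_t	connector_id;	/* 0 when the CRTC should be off */
		uint32_t	fb_id;
		drmModeModeInfo	mode;
	};

	static int assignment_changed(const struct crtc_assignment *a,
				      const struct crtc_assignment *b)
	{
		return a->connector_id != b->connector_id;
	}

	static void apply_config(int fd, const struct crtc_assignment *old_cfg,
				 const struct crtc_assignment *new_cfg, int n)
	{
		/* Pass 1: shut down only the CRTCs whose assignment must change. */
		for (int i = 0; i < n; i++)
			if (assignment_changed(&old_cfg[i], &new_cfg[i]))
				drmModeSetCrtc(fd, old_cfg[i].crtc_id,
					       0, 0, 0, NULL, 0, NULL);

		/* Pass 2: bring up the new assignments; CRTCs that kept their
		 * connector were never touched and never flash. */
		for (int i = 0; i < n; i++) {
			uint32_t conn = new_cfg[i].connector_id;
			drmModeModeInfo mode = new_cfg[i].mode;

			if (assignment_changed(&old_cfg[i], &new_cfg[i]) && conn)
				drmModeSetCrtc(fd, new_cfg[i].crtc_id,
					       new_cfg[i].fb_id, 0, 0,
					       &conn, 1, &mode);
		}
	}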

Solving this in the kernel would make the X piece simpler, although the
kernel would now be constructing fake EDID data to advertise the
combined set of modes up to X, and you'd end up with more flashing if
the kernel allocated the 'wrong' CRTC to any of the displays and needed
to disable/re-enable things to get a new configuration working.

Without RandR additions, the two solutions are effectively identical.
Somewhere you have to guess which CRTCs to use during incremental mode
setting, and sometimes you're just going to guess wrong and have to
correct that later on.

I'd pick whichever was simpler to implement and expect this to all be
resolved in the glorious atomic mode setting future we've been promised
for so long.

-- 
keith.packard@xxxxxxxxx


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
