Re: How should "max bpc" KMS property work?

On 2022-04-27 06:52, Pekka Paalanen wrote:
> Hi Ville and Alex,
> 
> thanks for the replies. More below.
> 
> TL;DR:
> 
> My take-away from this is that I should slam 'max bpc' to the max by
> default, and offer a knob for the user in case they want to lower it.
> 
> 
> On Tue, 26 Apr 2022 20:55:14 +0300
> Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx> wrote:
> 
>> On Tue, Apr 26, 2022 at 11:35:02AM +0300, Pekka Paalanen wrote:
>>> Hi all,
>>>
>>> I'm working on setting HDR & WCG video modes in Weston, and I thought
>>> setting "max bpc" KMS property on the connector would be a good idea.
>>> I'm confused about how it works though.
>>>
>>> I did some digging in https://gitlab.freedesktop.org/wayland/weston/-/issues/612
>>>
>>> Summary:
>>>
>>> - Apparently the property was originally added as a manual workaround
>>>   for sink hardware behaving badly with high depth. A simple end user
>>>   setting for "max bpc" would suffice for this use.
>>>
>>> - Drivers will sometimes automatically choose a lower bpc than the "max
>>>   bpc" value, but never bigger.
>>>
>>> - amdgpu seems to (did?) default "max bpc" to 8, meaning that I
>>>   definitely want to raise it.  
>>

I've wanted to remove the 8 bpc limitations for a while now but it
looks like we never did for anything other than eDP.

The original problem we solved was that some monitors' default timing
couldn't be driven at a high bpc, so users were faced with black
screens. On some displays you also can't drive high refresh rate modes
at a higher bpc.
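For a client that wants to program the property, the lookup itself is
straightforward. A minimal sketch of the name-to-id step, with the
property names passed in as parallel arrays so the logic stands alone
(in real code each name comes from drmModeGetProperty() on the ids
returned by drmModeObjectGetProperties()):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: locate a connector property (e.g. "max bpc") by name.
 * Returns 0 and stores the id on success, -1 if the driver does
 * not expose the property. */
static int find_prop(const char *names[], const uint32_t ids[],
                     int count, const char *wanted, uint32_t *id_out)
{
    for (int i = 0; i < count; i++) {
        if (strcmp(names[i], wanted) == 0) {
            *id_out = ids[i];
            return 0;
        }
    }
    return -1;
}
```

With the id in hand, the value would be staged with
drmModeAtomicAddProperty() and committed with drmModeAtomicCommit()
using DRM_MODE_ATOMIC_ALLOW_MODESET.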

>> I've occasionally pondered about doing the same for i915, just to have
>> the safest default possible. But I'd hate to lose the deep color testing
>> coverage knowing very few people would in practice raise the limit.
>> Also the number of systems where deep color doesn't work reliably
>> (or can't be made to work by not using a crap cable) seems to be quite
>> low.
> 
> I think when HDR and WCG get into display servers, setting 'max bpc'
> will become a standard action.
> 
> It's a bit moot to e.g. render everything in electrical 10 bit RGB, if
> the link is just going to squash that into electrical 8 bit RGB, right?
> 
> So even 10 bit color would require setting 'max bpc' to at least 10 to
> be able to actually see it, source-side dithering aside.
> 
>>>
>>> If I always slam "max bpc" to the highest supported value for that
>>> property, do I lose more than workarounds for bad sink hardware?  
>>
>> We don't have any workarounds implemented like this in the kernel.
>> Or should not have at least. "max bpc" exists purely for the user
>> to have a say in the matter in addition to whatever the EDID/quirks
>> say. Ie. if the kernel knows for sure that deep color won't work on
>> a particular setup then it should just not allow deep color at all
>> despite what the prop value says.
>>
>> So the only danger is fighting with the user's wishes which I guess
>> you can overcome with some kind of user visible knob.
> 
> Right, good.
> 
> Furthermore, as a KMS client cannot make many assumptions about the KMS
> state it inherits from some other KMS client, it should know and
> program all possible KMS properties according to its own desires
> anyway. That, and the DRM master concept make sure that there cannot be
> any "third party" KMS configuration programs, like V4L2 has.
> 
>>> Do I lose the ability to set video modes that take too much bandwidth
>>> at uncapped driver-selected bpc while capping the bpc lower would allow
>>> me to use those video modes?
>>>
>>> Or, are drivers required to choose a lower-than-usual but highest
>>> usable bpc to make the requested video mode squeeze through the
>>> connector and link?  
>>
>> IMO drivers should implement the "reduce bpc until it fits"
>> fallback. We have that in i915, except for MST where we'd need
>> to potentially involve multiple streams in the fallback. That
>> is something we intend to remedy eventually but it's not an
>> entirely trivial thing to implement so will take some actual
>> work. ATM we just cap MST to <=8bpc to avoid users getting into
>> this situation so often.
> 
> Excellent, but judging from what Alex said, this is also not what
> amdgpu does. We have two drivers doing different things then?
> 
> So with Weston I probably have to document that if you can't get the
> video mode you want working, try turning the 'max bpc' knob down and
> try again.
> 
> Or, I could cap 'max bpc' based on my framebuffer depth. If I have an
> electrical 8 bit FB (default in Weston), then there is not much use for
> having 'max bpc' > 8. This ignores the KMS color pipeline a bit. Does
> that make sense?
> 
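The framebuffer-depth capping idea could key off the scanout format's
per-channel depth. A sketch with a few common DRM fourccs, built with
the same macro drm_fourcc.h uses (the default-to-8 fallback is my own
conservative assumption):

```c
#include <stdint.h>

/* fourcc_code() as defined in drm_fourcc.h */
#define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                                 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define FMT_XRGB8888      fourcc_code('X', 'R', '2', '4') /* 8 bits/channel  */
#define FMT_XRGB2101010   fourcc_code('X', 'R', '3', '0') /* 10 bits/channel */
#define FMT_XBGR16161616F fourcc_code('X', 'B', '4', 'H') /* fp16 channels   */

/* Map a scanout format to a "max bpc" cap: a deeper link adds nothing
 * when the framebuffer itself is shallower (source dithering aside). */
static int max_bpc_for_format(uint32_t fourcc)
{
    switch (fourcc) {
    case FMT_XRGB8888:      return 8;
    case FMT_XRGB2101010:   return 10;
    case FMT_XBGR16161616F: return 16;
    default:                return 8; /* conservative fallback */
    }
}
```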

I think both of those options make sense. I'll need to think about the
automatic fallback if we don't have enough bandwidth for max_bpc.

If a KMS driver falls back automatically we probably want some way
for a (color managed) compositor to know if the output bpc is reduced.
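The "reduce bpc until it fits" fallback Ville describes can be
sketched with the link reduced to a single maximum data rate. This is
a deliberate simplification: real drivers model lane count, encoding
overhead, FEC, DSC and so on, none of which appears here.

```c
#include <stdint.h>

/* Sketch of "reduce bpc until it fits": walk the candidate depths
 * from deepest to shallowest and return the first one whose RGB
 * payload (3 channels * bpc bits per pixel clock tick) fits the
 * link's data rate.  Returns -1 if nothing fits. */
static int pick_bpc(uint32_t pixel_clock_khz, uint64_t link_kbps)
{
    static const int candidates[] = { 16, 12, 10, 8, 6 };

    for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        uint64_t needed = (uint64_t)pixel_clock_khz * 3 * candidates[i];
        if (needed <= link_kbps)
            return candidates[i];
    }
    return -1;
}
```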

> Does KMS use dithering automatically, btw?
> 

amdgpu's display driver does.

> The only mentions of dithering in the KMS docs are some nouveau-specific
> KMS properties and another for radeon.
> 
>>>
>>> Do I need to implement a fallback strategy in a display server,
>>> starting from the highest possible "max bpc" value, and if my modeset
>>> is rejected, repeatedly try with lower "max bpc" setting until it works
>>> or I'm out of bpc options?  
>>
>> IMO the bpc part should be handled by the kernel since we already
>> had this behaviour even before the "max bpc" prop was introduced
>> and we didn't add an explicit "use this bpc or fail" prop. But of
>> course you should have some kind of sensible fallback strategy for
>> things that just fail for other reasons.
> 
> Right, but this means that I don't have to add 'max bpc' as yet another
> dimension in the combinatorial explosion of KMS parameters I would need
> to search to find a working setup. That's really good.
> 
> The one thing missing is seeing what format and bpc we actually got on
> the link.
> 
>> The one problem we have in the kernel is that we have no way to
>> ask the user if the display we tried to light up is actually
>> working. So our policy decisions can't really involve user input.
>> Userspace should not generally have that problem.
> 
> Indeed.
> 
> Also like Alex said, the kernel does not know if the user prefers high
> color depth or high refresh rate either. One way to solve that is to
> light up the requested video mode any way the kernel can, and then
> report what the resulting color depth is. Another way is to have
> explicit "use this bpc or fail" KMS property, maybe in the form of 'min
> bpc' as I recall being discussed in the past, and let userspace guess
> what might work. The former is easier to light up, but probing requires
> actually setting the modes. The latter may require a lot of
> trial-and-error from userspace to find anything that works, but it
> takes only time and not blinking - as far as things can be detected
> with TEST_ONLY commits. Then one still has to ask the user if it
> actually worked.
> 

min_bpc sounds like something we might want for HDR use-cases, unless
the compositor has a way to confirm the output bpc (and format). min_bpc
would let the KMS driver pick the lowest bpc at or above that minimum,
so the compositor always has a guaranteed floor on quality.
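The trial-and-error probing pq describes stays cheap because TEST_ONLY
commits never touch the hardware. A sketch with the atomic commit
abstracted behind a callback (in real code, drmModeAtomicCommit() with
DRM_MODE_ATOMIC_TEST_ONLY | DRM_MODE_ATOMIC_ALLOW_MODESET; fake_commit
below is just an illustrative stand-in):

```c
/* try(bpc, user_data) stands in for a TEST_ONLY atomic commit with
 * "max bpc" (or a hypothetical "min bpc") set to the given value;
 * it returns 0 when the kernel accepts the configuration. */
typedef int (*test_commit_fn)(int bpc, void *user_data);

/* Walk candidates from deepest to shallowest; TEST_ONLY probing costs
 * only time, never a blanked display.  Returns the first accepted bpc,
 * or -1 meaning userspace should try another mode instead. */
static int probe_bpc(test_commit_fn try, void *user_data)
{
    static const int candidates[] = { 16, 12, 10, 8, 6 };

    for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++)
        if (try(candidates[i], user_data) == 0)
            return candidates[i];
    return -1;
}

/* Illustrative stand-in: pretend the kernel rejects anything above 10. */
static int fake_commit(int bpc, void *user_data)
{
    (void)user_data;
    return bpc <= 10 ? 0 : -1;
}
```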

Harry

> 
> Thanks,
> pq


