Re: [PATCH v2] Displayport compliance testing

Tuesday, July 29, 2014 2:53 PM
2014-07-22 18:11 GMT-03:00 Jesse Barnes <jbarnes@xxxxxxxxxxxxxxxx>:
On Tue, 22 Jul 2014 22:53:44 +0200
Daniel Vetter <daniel@xxxxxxxx> wrote:

On Tue, Jul 22, 2014 at 10:48 PM, Jesse Barnes <jbarnes@xxxxxxxxxxxxxxxx> wrote:
Are you saying
you'll reject this approach entirely?
I'm saying that I don't see a terrible lot of value in adding a bunch of
code for a sticker, and that we should look into making it actually
useful by testing the paths that end-users end up using. And we have
to keep this working once it's merged.

But if it doesn't make sense to make this sticker useful while still
being able to get it then I'll reconsider.
Yeah I think it depends on the test.  We're supposed to go through
existing paths for testing e.g. link training with different params
(though with a fixed fb and mode), so getting coverage there is
something we want regardless.  But getting something like probing
covered as part of the compliance testing may be something else
entirely...
I was finally able to take some time to read the spec, and I agree
that the hybrid approach looks like the way to go. Some tests require
specifically-crafted FBs, while some other tests cause real hotplug
events to be sent from the sink. If there's an unknown/unspecified
user-space running when the tests are happening, who knows how it is
going to react? Of course, for tests that can be implemented directly
inside the Kernel still using the "standard" code paths, we should do
it in the Kernel.
There's no question that this is going to require considerable support within the kernel. The tests that can be placed inside the kernel are mostly going to be ones that simply can't be accomplished from userspace (the 400us AUX transaction retry, for instance) or elements that are necessary to enable the userspace tests to execute properly. The tests themselves, though, should definitely be handled in userspace where possible.
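
To illustrate the kind of kernel-only item I mean, the retry requirement boils down to roughly the loop below. This is only a sketch with a made-up dp_aux_xfer() helper standing in for the driver's real AUX transfer routine, not the actual i915 code:

    #include <linux/delay.h>
    #include <drm/drm_dp_helper.h>

    /* Sketch only: retry an AUX transaction while the sink replies DEFER,
     * waiting at least 400us between attempts as the compliance test
     * expects. dp_aux_xfer() is a hypothetical stand-in for the real
     * AUX transfer path.
     */
    static int dp_aux_xfer_retry(struct drm_dp_aux *aux,
                                 struct drm_dp_aux_msg *msg)
    {
            int i, ret;

            for (i = 0; i < 7; i++) {
                    ret = dp_aux_xfer(aux, msg);
                    if (ret >= 0 && msg->reply != DP_AUX_NATIVE_REPLY_DEFER)
                            return ret;

                    usleep_range(400, 500); /* >= 400us before the next try */
            }

            return -ETIMEDOUT;
    }
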
One possible approach that I thought would be the following:
- Each DP encoder provides its own debugfs file for DP test compliance
(e.g., /sys/kernel/debug/dri/0/i915_ddi_b_dp_test_compliance).
- If the file is not open, any requests for tests that require special
actions from our driver - outside of the normal behavior - will be
NACKed.
There are a couple of options here that I've considered. This is a perfectly valid one, but it's also possible to use just one debugfs file for the compliance work. It's unlikely that multiple DisplayPort connectors will ever be under test simultaneously, so the need to adjust the parameters on a per-connector basis is arguable. Only the connector for the sink that issued the test request would need to honor the parameters in the debugfs file.

For general debugging of DisplayPort, though, per-connector files are the better option, since they allow for considerably more flexibility and utility. Personally I prefer the individual-file approach, even though it might end up being somewhat more complicated to implement. The single-file approach does have merit, though, so if anyone has a solid argument in favor of it, it's worth the discussion.
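
Roughly what I have in mind for the per-connector variant, purely as a sketch (the compliance_active flag on intel_dp, the fops contents and the registration helper are all made up for illustration, not from an actual patch):

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/module.h>

    /* Sketch only: one compliance debugfs file per DP connector.
     * Opening the file is what arms compliance handling for that
     * connector; "compliance_active" is a hypothetical intel_dp field.
     */
    static int i915_dp_test_open(struct inode *inode, struct file *file)
    {
            struct intel_dp *intel_dp = inode->i_private;

            intel_dp->compliance_active = true;
            file->private_data = intel_dp;
            return 0;
    }

    static int i915_dp_test_release(struct inode *inode, struct file *file)
    {
            struct intel_dp *intel_dp = file->private_data;

            intel_dp->compliance_active = false;
            return 0;
    }

    static const struct file_operations i915_dp_test_fops = {
            .owner   = THIS_MODULE,
            .open    = i915_dp_test_open,
            .release = i915_dp_test_release,
            /* .read/.write for the request/response protocol omitted */
    };

    /* Called once per DP connector at debugfs init time. */
    static void i915_dp_test_debugfs_add(struct intel_dp *intel_dp,
                                         struct dentry *root, const char *name)
    {
            debugfs_create_file(name, S_IRUGO | S_IWUSR, root,
                                intel_dp, &i915_dp_test_fops);
    }
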
- If the file is open, we ACK test requests and print special strings
to the debugfs file telling the user-space app what it's supposed to
do. We could use simple strings like "set the preferred mode", "set
failsafe mode", "set mode using FB test pattern Y", etc. A stringly
typed protocol :)
Probably better to use clearly-defined constants instead of strings, but yes, that's the idea. :)
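
For example (names and values purely illustrative, not from the spec or from any patch):

    /* Sketch: defined request codes the kernel hands to the test app
     * through the debugfs file, instead of free-form strings. A shared
     * header would let intel-gpu-tools use the same values.
     */
    enum dp_compliance_request {
            DP_COMPLIANCE_SET_PREFERRED_MODE = 1,
            DP_COMPLIANCE_SET_FAILSAFE_MODE  = 2,
            DP_COMPLIANCE_SET_TEST_PATTERN   = 3,
            DP_COMPLIANCE_RETRAIN_LINK       = 4,
    };
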
- The user-space app needs to be the DRM master, open the debugfs
file, parse the operations it prints and act accordingly, and listen
to the hotplug events sent by the Kernel.
Agreed.
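
Roughly the shape I picture for the test app's main loop, as a sketch (error handling trimmed, and handle_test_request() is a placeholder for reading the debugfs file and running the requested test):

    #include <libudev.h>
    #include <poll.h>

    /* Placeholder: open the compliance debugfs file, parse the requested
     * action and drive the standard KMS interfaces accordingly. */
    static void handle_test_request(void)
    {
    }

    int main(void)
    {
            struct udev *udev = udev_new();
            struct udev_monitor *mon =
                    udev_monitor_new_from_netlink(udev, "udev");
            struct pollfd pfd;

            /* The app must already be DRM master at this point. */
            udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
            udev_monitor_enable_receiving(mon);
            pfd.fd = udev_monitor_get_fd(mon);
            pfd.events = POLLIN;

            for (;;) {
                    struct udev_device *dev;

                    if (poll(&pfd, 1, -1) <= 0)
                            continue;

                    dev = udev_monitor_receive_device(mon);
                    if (!dev)
                            continue;

                    /* The kernel flags connector changes with HOTPLUG=1. */
                    if (udev_device_get_property_value(dev, "HOTPLUG"))
                            handle_test_request();

                    udev_device_unref(dev);
            }
    }
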
- If some special corner quirky case needs to be done (e.g., train
link with a specific number of lanes), the Kernel should store this
information at struct intel_dp, and then when a modeset is done on
this encoder, we check if the debugfs file is open (i.e., we're doing
compliance testing) and then we use the specified configuration. With
this, we can probably avoid special uevents or debug-only
connector/encoder properties.
I would argue here that it's better to leave as much of the active configuration undisturbed as possible and only update the fields necessary to complete the test(s). Additionally, any fields that are changed should be saved and then restored upon completion of the test(s).
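
Something along these lines is what I'd picture, purely as a sketch (all field and function names here are invented for the example):

    #include <linux/types.h>

    /* Sketch: keep the requested test parameters and the values they
     * replace in one place, so only the fields a test touched get
     * changed and everything is restored when the test completes.
     */
    struct intel_dp_compliance {
            bool active;

            /* parameters requested by the sink for the current test */
            u8 test_lane_count;
            u8 test_link_rate;

            /* previous values, restored when the test finishes */
            u8 saved_lane_count;
            u8 saved_link_rate;
    };

    /* At modeset time: if a compliance test is active, substitute the
     * requested link parameters; otherwise leave the config alone. */
    static void intel_dp_apply_compliance_params(struct intel_dp_compliance *c,
                                                 u8 *lane_count, u8 *link_rate)
    {
            if (!c->active)
                    return;

            c->saved_lane_count = *lane_count;
            c->saved_link_rate  = *link_rate;
            *lane_count = c->test_lane_count;
            *link_rate  = c->test_link_rate;
    }
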
- The user-space app could be part of intel-gpu-tools.
Cool, this is what I was planning on doing. One thing I was looking into was potentially launching the test app from a uevent in the kernel. But none of the solutions I could find were all that good and some were downright scary. So having the app as a separate entity that needs to be launched by the tester isn't such a bad idea.

Anyway, this is just an alternate idea to Daniel's suggestion, and
many other possible implementation ideas would work for me. Todd, what
is your opinion?

I will continue the review of the patches we currently have, since it
seems we didn't decide what we're gonna merge.

--
Jesse Barnes, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Tuesday, July 22, 2014 2:11 PM
On Tue, 22 Jul 2014 22:53:44 +0200
Daniel Vetter <daniel@xxxxxxxx> wrote:

Yeah I think it depends on the test. We're supposed to go through
existing paths for testing e.g. link training with different params
(though with a fixed fb and mode), so getting coverage there is
something we want regardless. But getting something like probing
covered as part of the compliance testing may be something else
entirely...

Tuesday, July 22, 2014 1:53 PM

I'm saying that I don't see a terrible lot of value in adding a bunch of
code for a sticker, and that we should look into making it actually
useful by testing the paths that end-users end up using. And we have
to keep this working once it's merged.

But if it doesn't make sense to make this sticker useful while still
being able to get it then I'll reconsider.
-Daniel
Tuesday, July 22, 2014 1:48 PM
On Tue, 22 Jul 2014 08:41:11 +0200
Daniel Vetter <daniel@xxxxxxxx> wrote:

This amounts to a complete rewrite with a different goal. In this
series, I only see two big new test functions added: one for EDID
testing and one for link training. The EDID one could probably share
some more code with the core, but for link training it seems pretty
simple, unless we need a full mode & fb for some reason (I'm presuming
Todd has tested this with the equipment). Paulo had some good
feedback, but overall this is a small addition to get the compliance
tests working and get some coverage on the link training paths which we
desperately need.

Having some additional user level tests would be great, but that's a
much bigger and different task than simply implementing what's required
for the DP compliance test. Asking Todd to take on a huge new task
just because he posted this series is a big request. Are you saying
you'll reject this approach entirely?

Monday, July 21, 2014 11:41 PM

Ok, high-level review of the overall approach.

So your design is to implement special functions for each test procedure.
This has two issues:
- We end up testing duplicated code, which has a really high chance of
being different from what users actually run on their systems, so dp
validation won't actually validate the real code. An example would be the
fallback edid reading tests using defers/nacks: the drm i2c helpers
already have logic for this (including fallback modes), but very likely
it doesn't quite match the dp requirements, so we need to adjust that
(presuming the dp standard is sane).
- Even if we can fix this, there's still lots of important code in the
probe paths we can't test this way. Userspace gets updated edids through
the get_connector ioctl, and there's lots of additional logic in between
before we reach the dp connector implementation in intel_dp.c.

So what I'd like to see is that the test procedures are implemented in
userspace, using nothing more than the standard kms interfaces.
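
As a trivial example of what I mean: the probe-path side needs nothing
beyond libdrm, since drmModeGetConnector() already forces a fresh probe
(and hence edid read) through the normal kernel path. Device node and
connector id below are just placeholders.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Sketch: re-probe one connector through the standard GETCONNECTOR
     * ioctl, exactly the code path end-users exercise. */
    static int reprobe_connector(const char *node, uint32_t connector_id)
    {
            int fd = open(node, O_RDWR);
            drmModeConnector *conn;

            if (fd < 0)
                    return -1;

            conn = drmModeGetConnector(fd, connector_id);
            if (!conn) {
                    close(fd);
                    return -1;
            }

            printf("connector %u: %s, %d modes\n", connector_id,
                   conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                          : "disconnected",
                   conn->count_modes);

            drmModeFreeConnector(conn);
            close(fd);
            return 0;
    }
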

Afaics we need three steps overall:
- Get a short hpd pulse and notice that we should do a validation test
sequence. We need to get this information to userspace, and the best
approach would be a uevent on the connector in sysfs. We can supply any
additional metadata (like the test mode) userspace needs with uevent
attributes.

- Do the test from userspace. Depending upon what exactly needs to be done
we might need to extend the properties exposed to userspace, e.g. dp
link parameters.

- Give the result (edid checksum, ...) back to the dp sink/validator. A
generic dp aux transaction interface for userspace in the dp helpers,
maybe even as a real /dev node should fit the bill.
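
For the edid test that last step is just handing the block checksum back,
e.g. something like the below in the test tool, assuming it already pulled
the 128-byte block (whether over the proposed aux chardev or from the
connector property):

    #include <stdint.h>

    /* Sketch: a valid EDID block sums to 0 mod 256, and byte 127 is the
     * checksum the compliance test wants reported back to the sink
     * through the aux interface proposed above. */
    static int edid_block_checksum(const uint8_t block[128], uint8_t *csum)
    {
            uint8_t sum = 0;
            int i;

            for (i = 0; i < 128; i++)
                    sum += block[i];

            *csum = block[127];
            return sum == 0 ? 0 : -1;
    }
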

dp validation has the potential to automatically test a pile of code for
which we currently have no automated test coverage at all. I want to fully
seize this opportunity instead of doing the bare minimum to get a (rather
useless) certification.
-Daniel

