On 04/08/17 12:41, Chris Wilson wrote:
Quoting Lionel Landwerlin (2017-08-04 12:20:32)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@xxxxxxxxx>
---
tests/perf.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/perf.c b/tests/perf.c
index 279ff0c6..65a1606d 100644
--- a/tests/perf.c
+++ b/tests/perf.c
@@ -1271,9 +1271,7 @@ read_2_oa_reports(int format_id,
/* Note: we allocate a large buffer so that each read() iteration
* should scrape *all* pending records.
*
- * The largest buffer the OA unit supports is 16MB and the smallest
- * OA report format is 64bytes allowing up to 262144 reports to
- * be buffered.
+ * The largest buffer the OA unit supports is 16MB.
Out of curiosity, how is userspace meant to know? Or is it part of the
platform-specific details that we spread around kernel/userspace?
-Chris
The current implementation always uses the largest buffer size (16MB).
Some of our tests verify correct behavior at the limits (like the
overflow event & correct recovery after disable/enable).

We could make that information available, but I'm not sure it would be
that useful, because context-switch reports prevent estimating how much
the buffer fills up over time. The expected behavior from userspace is
to use poll() to monitor when data is available and to read() it as
often as it's made available.
-
Lionel