Re: ✓ Fi.CI.BAT: success for IGT PMU support (rev18)

On Wed, Nov 22, 2017 at 11:57:19AM +0000, Tvrtko Ursulin wrote:
> 
> Hi guys,
> 
> On 22/11/2017 11:41, Patchwork wrote:
> 
> [snip]
> 
> > Testlist changes:
> > +igt@perf_pmu@all-busy-check-all
> > +igt@perf_pmu@busy-bcs0
> > +igt@perf_pmu@busy-check-all-bcs0
> > +igt@perf_pmu@busy-check-all-rcs0
> > +igt@perf_pmu@busy-check-all-vcs0
> > +igt@perf_pmu@busy-check-all-vcs1
> > +igt@perf_pmu@busy-check-all-vecs0
> > +igt@perf_pmu@busy-no-semaphores-bcs0
> > +igt@perf_pmu@busy-no-semaphores-rcs0
> > +igt@perf_pmu@busy-no-semaphores-vcs0
> > +igt@perf_pmu@busy-no-semaphores-vcs1
> > +igt@perf_pmu@busy-no-semaphores-vecs0
> > +igt@perf_pmu@busy-rcs0
> > +igt@perf_pmu@busy-vcs0
> > +igt@perf_pmu@busy-vcs1
> > +igt@perf_pmu@busy-vecs0
> > +igt@perf_pmu@cpu-hotplug
> > +igt@perf_pmu@event-wait-rcs0
> > +igt@perf_pmu@frequency
> > +igt@perf_pmu@idle-bcs0
> > +igt@perf_pmu@idle-no-semaphores-bcs0
> > +igt@perf_pmu@idle-no-semaphores-rcs0
> > +igt@perf_pmu@idle-no-semaphores-vcs0
> > +igt@perf_pmu@idle-no-semaphores-vcs1
> > +igt@perf_pmu@idle-no-semaphores-vecs0
> > +igt@perf_pmu@idle-rcs0
> > +igt@perf_pmu@idle-vcs0
> > +igt@perf_pmu@idle-vcs1
> > +igt@perf_pmu@idle-vecs0
> > +igt@perf_pmu@init-busy-bcs0
> > +igt@perf_pmu@init-busy-rcs0
> > +igt@perf_pmu@init-busy-vcs0
> > +igt@perf_pmu@init-busy-vcs1
> > +igt@perf_pmu@init-busy-vecs0
> > +igt@perf_pmu@init-sema-bcs0
> > +igt@perf_pmu@init-sema-rcs0
> > +igt@perf_pmu@init-sema-vcs0
> > +igt@perf_pmu@init-sema-vcs1
> > +igt@perf_pmu@init-sema-vecs0
> > +igt@perf_pmu@init-wait-bcs0
> > +igt@perf_pmu@init-wait-rcs0
> > +igt@perf_pmu@init-wait-vcs0
> > +igt@perf_pmu@init-wait-vcs1
> > +igt@perf_pmu@init-wait-vecs0
> > +igt@perf_pmu@interrupts
> > +igt@perf_pmu@invalid-init
> > +igt@perf_pmu@most-busy-check-all-bcs0
> > +igt@perf_pmu@most-busy-check-all-rcs0
> > +igt@perf_pmu@most-busy-check-all-vcs0
> > +igt@perf_pmu@most-busy-check-all-vcs1
> > +igt@perf_pmu@most-busy-check-all-vecs0
> > +igt@perf_pmu@multi-client-bcs0
> > +igt@perf_pmu@multi-client-rcs0
> > +igt@perf_pmu@multi-client-vcs0
> > +igt@perf_pmu@multi-client-vcs1
> > +igt@perf_pmu@multi-client-vecs0
> > +igt@perf_pmu@other-init-0
> > +igt@perf_pmu@other-init-1
> > +igt@perf_pmu@other-init-2
> > +igt@perf_pmu@other-init-3
> > +igt@perf_pmu@other-init-4
> > +igt@perf_pmu@other-init-5
> > +igt@perf_pmu@other-init-6
> > +igt@perf_pmu@other-read-0
> > +igt@perf_pmu@other-read-1
> > +igt@perf_pmu@other-read-2
> > +igt@perf_pmu@other-read-3
> > +igt@perf_pmu@other-read-4
> > +igt@perf_pmu@other-read-5
> > +igt@perf_pmu@other-read-6
> > +igt@perf_pmu@rc6
> > +igt@perf_pmu@rc6p
> > +igt@perf_pmu@render-node-busy-bcs0
> > +igt@perf_pmu@render-node-busy-rcs0
> > +igt@perf_pmu@render-node-busy-vcs0
> > +igt@perf_pmu@render-node-busy-vcs1
> > +igt@perf_pmu@render-node-busy-vecs0
> > +igt@perf_pmu@semaphore-wait-bcs0
> > +igt@perf_pmu@semaphore-wait-rcs0
> > +igt@perf_pmu@semaphore-wait-vcs0
> > +igt@perf_pmu@semaphore-wait-vcs1
> > +igt@perf_pmu@semaphore-wait-vecs0
> 
> Would it be possible to have a test run of these new tests on the shards?

The shard run will pick them up automatically; you just have to check the
results manually, looking at shards-all.html instead of shards.html.



-- 
Petri Latvala
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



