+ Daniel, Chris

On Thu, 2017-12-07 at 09:21 +0000, Tvrtko Ursulin wrote:
> On 04/12/2017 15:02, Lionel Landwerlin wrote:
> > Hi,
> > 
> > After discussion with Chris, Joonas & Tvrtko, this series adds an
> > additional commit to link the render node back to the card through a
> > symlink, making it obvious for an application using a render node
> > where to get the information it needs.
> 
> An important thing to mention as well is that it is trivial to get from
> the master drm fd to the sysfs root, via fstat and opendir
> /sys/dev/char/<major>:<minor>. With the addition of the card symlink to
> render nodes it is trivial for a render node fd as well.
> 
> I am happy with this approach - it is extensible, flexible and avoids
> issues with ioctl versioning or whatnot. With one value per file it is
> trivial for userspace to access.
> 
> As far as I'm concerned, given how gputop would use all of this and so
> be the userspace, if everyone else is happy, I think we could do a
> detailed review and perhaps also think about including gputop in some
> distribution to make the case 100% straightforward.

For the GPU topology I agree this is the right choice; it is going to be
about topology after all, and a directory tree is the perfect candidate.
And if a new platform appears and changes the topology, well, the
hardware topology has changed.

For the engine enumeration, I'm not equally sold on exposing it in
sysfs. It's a "linear list of engine instances with flags", which is how
userspace is going to look at them. It's also information about what to
pass to an IOCTL as arguments after the decision has been made, and at
that point you already have the FD you know you'll be dealing with at
hand. So another IOCTL for that seems more convenient.

So I'd say for the GPU topology part, we go forward with the review and
make sure we don't expose driver internal bits that could break when
refactoring the code.
If the exposed N bits of information are strictly tied to the underlying
hardware, we should have no trouble maintaining them for the foreseeable
future. Then we can continue the engine discovery discussion in
parallel, without blocking GPU topology discovery.

Regards, Joonas

> Regards,
> 
> Tvrtko
> 
> > Cheers,
> > 
> > Lionel Landwerlin (5):
> >   drm: add card symlink in render sysfs directory
> >   drm/i915: store all subslice masks
> >   drm/i915/debugfs: reuse max slice/subslices already stored in sseu
> >   drm/i915: expose engine availability through sysfs
> >   drm/i915: expose EU topology through sysfs
> > 
> >  drivers/gpu/drm/drm_drv.c                |  11 +
> >  drivers/gpu/drm/i915/i915_debugfs.c      |  50 ++--
> >  drivers/gpu/drm/i915/i915_drv.c          |   2 +-
> >  drivers/gpu/drm/i915/i915_drv.h          |  56 ++++-
> >  drivers/gpu/drm/i915/i915_sysfs.c        | 386 +++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/i915/intel_device_info.c | 169 ++++++++++----
> >  drivers/gpu/drm/i915/intel_engine_cs.c   |  12 +
> >  drivers/gpu/drm/i915/intel_lrc.c         |   2 +-
> >  drivers/gpu/drm/i915/intel_ringbuffer.h  |   6 +-
> >  9 files changed, 617 insertions(+), 77 deletions(-)
> > 
> > -- 
> > 2.15.1

-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx