On Fri, Nov 08, 2019 at 09:46:48AM +0100, Daniel Vetter wrote:
> On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> > On Thu, 7 Nov 2019 16:02:59 -0500
> > Sean Paul <sean@xxxxxxxxxx> wrote:
> >
> > > From: Sean Paul <seanpaul@xxxxxxxxxxxx>
> > >
> > > Hey all,
> > > I'm back with another trace events patchset. My first attempt [1] went
> > > better than expected, with enthusiasm for the idea and disdain for the
> > > implementation.
> > >
> > > As promised, I went through and added proper trace events.
> > >
> > > Before I get _too_ far, I wanted to post this set to get feedback on
> > > the direction I'm going. I've gone back and forth on whether to
> > > introduce a bunch of trace systems vs using the trace class enum. I've
> > > settled on the trace class enum since it seems more extensible and
> > > easier to use in production than just having the blunt "enable this
> > > system" or the tedious "enable each event I'm interested in
> > > individually" methods.
> > >
> > > So, consider this one an RFC unless we're all in agreement, in which
> > > case we should apply it :)
> >
> > Hi Sean,
> >
> > thanks for working on this. I'm not at all familiar with the trace
> > infrastructure, so I'd like to know how flight recorder type of
> > behaviour (a continuously written fixed size ring buffer) can be
> > achieved. Let us have a display server in userspace which in production
> > hits an unexpected condition with DRM and needs to record information
> > for post-mortem debugging. What should it do to have a trace log after
> > the fact to attach with a bug report?
>
> Yeah I think this has the uapi smell, so we would need the userspace side
> too, once we agree on something.
>
> > I assume that is a major use case for this new feature. Is performance
> > profiling another use case?
> >
> > Is it ok to build userspace to rely on these trace events during normal
> > operations, e.g. for continuous adjustment of timings/timers?
>
> Yeah I'm kinda freaked out about this. In the old thread we discussed the
> question of whether dumping dmesg into tracepoints would make them uapi,
> but no conclusion was reached.
>
> Now instead we're going to duplicate a lot of these debug outputs with
> specific trace events, which is going to make them even more uapi. And a
> lot of these are very much implementation details (drivers might change
> their opinion on which other states they need for check/commit, e.g. if
> they move a bunch of global state from crtcs into a new driver-private
> internal object). Explicit tracepoints with symbolic values imo will have
> a much higher chance that someone starts relying on them.
>
> Also, dmesg is uapi too. Except there's a gentlemen's agreement that you
> should not depend upon it, because it will make kernel engineers really
> pissed if they can't change/move a specific string. So from that point of
> view I think just dumping the existing debug strings as unstructured
> strings into trace buffers is a much better idea. Plus there's no need to
> have the same information duplicated in both a debug output and a
> tracepoint.
>
> I think even slightly structured debug tracepoints (e.g. a simple
> (drm_object_id, string) pair) would be too structured, since then
> userspace could start watching for events on specific things and maybe
> use that for their retry loops after TEST_ONLY failures. Even though we
> have a lot of debug output strings where that bit of structure is
> essentially there already.
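To make the "(drm_object_id, string) pair" above concrete, here is a
minimal sketch of what such a slightly structured debug event could look
like. The event and field names are made up purely for illustration (they
are not from Sean's patches), and the usual trace-header boilerplate
(TRACE_SYSTEM, TRACE_INCLUDE_FILE, etc.) is omitted:

/*
 * Hypothetical sketch only: a "slightly structured" debug event
 * carrying a (drm_object_id, string) pair. Names are invented for
 * illustration; normal trace header boilerplate is left out.
 */
#include <linux/tracepoint.h>

TRACE_EVENT(drm_dbg_object,
        TP_PROTO(unsigned int obj_id, const char *msg),
        TP_ARGS(obj_id, msg),

        TP_STRUCT__entry(
                __field(unsigned int, obj_id)  /* numeric, filterable field */
                __string(msg, msg)             /* free-form debug text */
        ),

        TP_fast_assign(
                __entry->obj_id = obj_id;
                __assign_str(msg, msg);
        ),

        TP_printk("obj=%u: %s", __entry->obj_id, __get_str(msg))
);

The numeric obj_id field is exactly what would let userspace attach an
event filter such as obj_id == 42 and key control flow off one specific
object, which is the accidental-uapi risk being described; an event that
carries only the unstructured string doesn't invite that.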
> Aside from this a question on process ... I re-read the old thread and
> I'm not sure how you concluded we've reached consensus on this here; it
> seemed all pretty inconclusive on the details, and discussions died out?

Reading the thread again, I guess it was Joonas' comment that tracepoints
are also uapi, and you concluded we can't do this at all.

The trouble is, _everything_ that's observable from userspace is uapi. So
dmesg, timing of the code, ... For a really funky example of this, look at
the ext4 regression where better code resulted in less entropy being
generated and the system hung at boot-up.

But on the flip side, it's only uapi we can't change if someone depends
upon it. For dmesg most people understand that parsing dmesg is bad taste,
so it's generally not a big problem. For tracepoints, especially if they
come with lots of structure, it's a lot more "this is baked in forever".
But it doesn't have to be; we've changed them quite a few times.

But as a rule of thumb, the more structured the data you present to
userspace is, the more likely they'll use it in actual control flow, and
the more likely a change will break userspace in a meaningful way. Hence
for debugging I think the least amount of structure is best, plus huge
warnings that you really shouldn't use this for anything other than debug
logs. Anything more that leaks all these implementation details gives me
the chills.

And Pekka's question right off the bat with "can I use this for tuning
timers" kinda proves my point. If it looks like intentional uapi,
userspace will treat it like intentional uapi.
-Daniel

> -Daniel
>
> >
> > Thanks,
> > pq
> >
> > > [1]- https://patchwork.freedesktop.org/patch/335350/
> > >
> > > Sean Paul (6):
> > >   drm: trace: Make the vblank queued/delivered events classed
> > >   drm: trace: Introduce drm_trace() and trace event classes
> > >   drm: trace: Add trace events for atomic state lifetime
> > >   drm: trace: Add crtc state trace events
> > >   drm: trace: Add connector state tracing
> > >   drm: trace: Add plane state tracing
> > >
> > > Documentation/gpu/drm-internals.rst |   9 +
> > > drivers/gpu/drm/drm_atomic.c        |  61 ++-
> > > drivers/gpu/drm/drm_trace.h         | 563 ++++++++++++++++++++++++++--
> > > drivers/gpu/drm/drm_vblank.c        |   8 +-
> > > 4 files changed, 609 insertions(+), 32 deletions(-)
> >
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch