On 8/13/2018 3:36 PM, Stefan Beller wrote:
> On Mon, Aug 13, 2018 at 12:25 PM Jeff King <peff@xxxxxxxx> wrote:
>> I can buy the argument that it's nice to have some form of profiling
>> that works everywhere, even if it's lowest-common-denominator. I just
>> wonder if we could be investing effort into tooling around existing
>> solutions that will end up more powerful and flexible in the long run.
>
> The issue AFAICT is that running perf is done by $YOU, the specialist,
> whereas the performance framework put into place here can be
> "turned on for the whole fleet" and the ability to collect data from
> non-specialists is there. (Note: At GitHub you do the serving side,
> whereas Google, MS also control the shipped binary on the client
> side; asking a random engineer to run perf on their Git thing only
> helps their special case and is unstructured; what helps is colorful
> dashboards aggregating all the results from all the people).
>
> So it really is "works everywhere," but not as you envisioned
> (cross platform vs more machines) ;-)
I currently use GIT_TRACE_PERFORMANCE primarily to communicate
performance measurements on the mailing list. It is occasionally
convenient to run with it turned on locally, but its main value is
giving me a common reference point when discussing performance with
others on the list.
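(For anyone following along: enabling it is just an environment
variable. The timing below is made up for illustration, the log path
is only an example, and the exact trace format varies a bit by Git
version, but it looks roughly like this:

    $ GIT_TRACE_PERFORMANCE=1 git status
    ...
    performance: 0.012345678 s: git command: 'git' 'status'

    $ # or append the trace lines to a file instead of stderr
    $ GIT_TRACE_PERFORMANCE=/tmp/git-perf.log git fetch origin

That one-line, human-readable output is what makes it handy as a
common reference on the list.)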
We have several excellent profiling tools available on Windows
(PerfView, Visual Studio, WPA, etc.), so for any detailed investigation
I use those. They obviously don't require any instrumentation in the
code.
For our internal end-user performance data, we'll use structured logging
and our custom telemetry solution rather than the GIT_TRACE_PERFORMANCE
mechanism. We never ask end users to turn on GIT_TRACE_PERFORMANCE. If
we need more than what we can gather via telemetry, we ask them to
capture a PerfView trace along with other diagnostic data and send it
to us for evaluation.