Re: [PATCH v2 00/10] Add 'ls-files --debug-json' to dump the index in json

Hi Duy,

On Tue, 25 Jun 2019, Duy Nguyen wrote:

> On Tue, Jun 25, 2019 at 6:27 PM Johannes Schindelin
> <Johannes.Schindelin@xxxxxx> wrote:
> >
> > On Tue, 25 Jun 2019, Duy Nguyen wrote:
> >
> > > On Tue, Jun 25, 2019 at 1:00 AM Johannes Schindelin
> > > <Johannes.Schindelin@xxxxxx> wrote:
> > > > > - extension location is printed, in case you need to decode the
> > > > >   extension by yourself (previously only the size was printed)
> > > > > - all extensions are printed in the same order they appear in the file
> > > > >   (previously eoie and ieot were printed first because that's how we
> > > > >   parse them)
> > > > > - the resolve-undo extension is reorganized a bit to be easier to read
> > > > > - tests added; example JSON files are in t/t3011
> > > >
> > > > It might actually make sense to optionally disable showing extensions.
> > > >
> > > > You also forgot to mention that you explicitly disable handling
> > > > `<pathspec>`, which I find a bit odd, personally, as that would probably
> > > > come in real handy at times,
> > >
> > > No. I mentioned the land of high-level languages before. Filtering in
> > > Python, Ruby, Scheme, JavaScript, or Java is a piece of cake and much
> > > more flexible than pathspecs.
> >
> > I have heard that type of argument before. Long before I was working on
> > the initial Windows port of Git, I was working on a big refactoring of a
> > big C++ application backed by a database. A colleague suggested that
> > filtering could be done much better in C++, on the desktop, than in SQL.
> > And so they changed the paradigm to "simplify" the SQL query, and instead
> > dropped the unwanted data after it had hit the RAM of the client machine.
> >
> > Turns out it was a bad idea. A _really_ bad idea. Because it required
> > downloading 30MB of data to each of several dozen computers in parallel,
> > at the start of every shift.
> >
> > This change was reverted in one big hurry, and the colleague was tasked to
> > learn them some SQL.
> >
> > Why am I telling you this story? Because you fall into the exact same trap
> > as my colleague.
> >
> > In this instance, it may not be so much network bandwidth, but it is still
> > quite a suboptimal idea to render JSON for possibly tens of thousands of
> > files, then parse the same JSON on the receiving side, then spend even
> > more time to drop all but a dozen files.
>
> This was mentioned before [1]. Of course I don't work on giant index
> files, but I would assume the cost of parsing JSON (at least with a
> stream-based parser, not one that loads the whole thing in core) is
> still cheaper.

You may have heard that a few thousand of my colleagues are working on
what they call the largest repository on this planet.

No, the cost of parsing JSON only to throw away the majority of the parsed
information is not cheap. It is a clear sign of a design in want of
improvement.

> And you could still do it iteratively, saving each step until you have
> a reasonably small dataset to work on. The other side of the story is:
> are we sure parsing and executing pathspecs is cheap? I'm not so sure,
> especially since the pathspec code is not exactly optimized.

Let's not try to slap workaround upon workaround. Let's fix the root
cause (which is: don't filter at the wrong end).
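
To make the asymmetry concrete, here is a minimal sketch of what the
receiving side has to do today. It is written in Python and assumes a
hypothetical output shape along the lines of {"entries": [{"name": ...},
...]}; the exact --debug-json schema is not spelled out in this thread:

    import json
    import sys

    # Parse the *entire* dump into RAM, even though we only care
    # about a handful of paths.
    dump = json.load(sys.stdin)

    wanted_prefix = "Documentation/"  # hypothetical filter

    # Only now, after every entry has been decoded, can we throw
    # away everything we never wanted in the first place.
    for entry in dump["entries"]:
        if entry["name"].startswith(wanted_prefix):
            print(entry["name"])

With a pathspec honored on the producer side, Git would never serialize
the other tens of thousands of entries to begin with.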

> Consider the amount of code to support something like that.

Given that I am pretty familiar with the pathspec machinery due to working
with it in the `git stash` and `git add -p` built-ins, I have a very easy
time considering the amount of code. It makes me smile how little code
will be needed.

> I'd rather wait until a real example comes up and no good solution can
> be found without modifying git.git, before actually supporting it.

Oh hey, there you go: Team Explorer. Visual Studio Code. Literally every
single 3rd-party application that needs to deal with real-world loads.
Every single one.

> > And this is _even more_ relevant when you want to debug things.
> >
> > In short: I am quite puzzled why this is even debated here. There is a
> > reason, a good reason, why `git ls-files` accepts pathspecs. I would not
> > want to ignore the lessons of history so willfully here.
>
> I guess you and I have different ways of debugging things.

Yep, I'm with Lincoln here: Give me six hours to debug a problem and I
will spend the first four optimizing the edit-build-test cycle.

> > > Even with shell scripts, jq could do a much better job than pathspec. If
> > > you filter by pathspec, good luck trying that on extensions.
> >
> > You keep harping on extensions, but the reality of the matter is that they
> > are rarely interesting. I would even wager a bet that we will end up
> > excluding them from the JSON output by default.
> >
> > Most of the time when I had to decode the index file manually in the
> > past, it was about the regular file entries.
> >
> > There was *one* week in which I had to decode the untracked cache a bit,
> > to the point where I patched the test helper locally to help me with that.
> >
> > If my experience in debugging these things is any indicator, extensions
> > do not matter even a tenth as much as the non-extension data.
>
> Again our experiences differ. Mine is mostly about extensions,
> probably because I had to work on them more often. For normal entries,
> "ls-files --debug" gives you 99% of what's in the index file already.

Like the device. And the ctime. And the file size. And the uid/gid. Is
that what you mean?

I don't know whether I missed a joke or not.

> > You don't need JSON if you want to debug things. That would be a lot of
> > love lost, if debugging was your goal.
>
> No, I did consider some other line-based formats before I ended up with
> JSON. I did not want to use JSON in the beginning.

Then why bother.

> The thing is, a giant table covering all fields and entries in the
> main index is not easy to navigate or search, even in 'less'. And the
> hierarchical structure of some extensions is hard to represent in a
> good way (at least without writing lots of code). On top of that I
> still need some easy way to parse and post-process the output, e.g. to
> see how much I could save if I compressed the stat data. And the final
> nail is that json-writer.c is already there, so it's much less work.
>
> So JSON was the best option I found to meet all those points.

Well, as I said: you're obviously dead-set on optimizing this for debugging
your own problems. The beauty of open source is that it can be turned into
something of wider use.
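
(For completeness: that kind of post-processing is indeed only a few
lines once the dump is JSON. A minimal sketch of the "how much would
compressing the stat data save" question, again assuming the
hypothetical {"entries": [...]} shape with per-entry "dev", "uid",
"gid" and "mode" fields:

    import json
    import sys

    dump = json.load(sys.stdin)
    entries = dump["entries"]

    # In index format v2, the cached stat data is ten 32-bit fields
    # (ctime sec/nsec, mtime sec/nsec, dev, ino, mode, uid, gid, size),
    # i.e. 40 bytes per entry.
    raw_bytes = 40 * len(entries)

    # Count stat fields that merely repeat the previous entry's value
    # and could therefore be delta-encoded down to almost nothing.
    redundant = 0
    prev = {}
    for entry in entries:
        redundant += sum(4 for field in ("dev", "uid", "gid", "mode")
                         if field in prev and entry.get(field) == prev[field])
        prev = entry

    print(f"{len(entries)} entries, {raw_bytes} bytes of stat data,")
    print(f"at least {redundant} bytes redundant in dev/uid/gid/mode alone")

None of which changes the point above: a format meant for wider use
should also support filtering at the source.)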

Ciao,
Johannes



