On Tue, Jun 25, 2019 at 6:27 PM Johannes Schindelin
<Johannes.Schindelin@xxxxxx> wrote:
>
> Hi Duy,
>
> On Tue, 25 Jun 2019, Duy Nguyen wrote:
>
> > On Tue, Jun 25, 2019 at 1:00 AM Johannes Schindelin
> > <Johannes.Schindelin@xxxxxx> wrote:
> > > > - extension location is printed, in case you need to decode the
> > > > extension by yourself (previously only the size is printed)
> > > > - all extensions are printed in the same order they appear in the file
> > > > (previously eoie and ieot are printed first because that's how we
> > > > parse)
> > > > - resolve undo extension is reorganized a bit to be easier to read
> > > > - tests added. Example json files are in t/t3011
> > >
> > > It might actually make sense to optionally disable showing extensions.
> > >
> > > You also forgot to mention that you explicitly disable handling
> > > `<pathspec>`, which I find a bit odd, personally, as that would probably
> > > come in real handy at times,
> >
> > No. I mentioned the land of high level languages before. Filtering in
> > any Python, Ruby, Scheme, JavaScript, Java is a piece of cake and much
> > more flexible than pathspec.
>
> I heard that type of argument before. I was working on the initial Windows
> port of Git, uh, of course I was working on a big refactoring of a big C++
> application backed by a database. A colleague suggested that filtering
> could be done much better in C++, on the desktop, than in SQL. And so they
> changed the paradigm to "simplify" the SQL query, and instead dropped the
> unwanted data after it had hit the RAM of the client machine.
>
> Turns out it was a bad idea. A _really_ bad idea. Because it required
> downloading 30MB of data for several dozen computers in parallel, at the
> start of every shift.
>
> This change was reverted in one big hurry, and the colleague was tasked
> to learn them some SQL.
>
> Why am I telling you this story? Because you fall into the exact same
> trap as my colleague.
>
> In this instance, it may not be so much network bandwidth, but it is
> still quite a suboptimal idea to render JSON for possibly tens of
> thousands of files, then parse the same JSON on the receiving side, then
> spend even more time to drop all but a dozen files.

This was mentioned before [1]. Of course I don't work on giant index
files, but I would assume the cost of parsing JSON (at least with a
stream-based parser that does not load the whole thing in core) is still
cheaper. And you could still do it iteratively, saving the result of each
step, until you have a reasonably small dataset to work on.
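To make that concrete, the kind of consumer-side filtering I have in mind
looks roughly like the sketch below. This is only an illustration: the
"--json" option name, the "entries"/"name" field names and the choice of
ijson as a streaming parser are assumptions made up for the example, not
necessarily what this series produces.

    # Illustration only: "--json" and the "entries"/"name" field names
    # are assumed for the sake of the example; the real output may be
    # spelled differently.
    import subprocess

    import ijson  # streaming JSON parser; entries are decoded one at a time

    proc = subprocess.Popen(["git", "ls-files", "--json"],
                            stdout=subprocess.PIPE)

    # Walk the "entries" array as a stream and keep only the paths we
    # care about, so the full dump never has to sit in memory at once.
    wanted = [entry for entry in ijson.items(proc.stdout, "entries.item")
              if entry["name"].startswith("Documentation/")]

    proc.wait()
    print(len(wanted), "matching entries")

The same narrowing could be done with jq from a shell script; the point is
only that it happens on the consumer side, one entry at a time.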
The other side of the story is, are we sure parsing and executing
pathspecs is cheap? I'm not so sure, especially when the pathspec code is
not exactly optimized. And consider the amount of code needed to support
something like that. I'd rather wait until a real example comes up and no
good solution can be found without modifying git.git, before actually
supporting it.

[1] https://public-inbox.org/git/45e49624-be8e-deff-bf9d-aee052991189@xxxxxxxxx/

> And this is _even more_ relevant when you want to debug things.
>
> In short: I am quite puzzled why this is even debated here. There is a
> reason, a good reason, why `git ls-files` accepts pathspecs. I would not
> want to ignore the lessons of history as willfully here.

I guess you and I have different ways of debugging things.

> > Even with shell scripts, jq could do a much better job than pathspec. If
> > you filter by pathspec, good luck trying that on extensions.
>
> You keep harping on extensions, but the reality of the matter is that they
> are rarely interesting. I would even wager a bet that we will end up
> excluding them from the JSON output by default.
>
> Most of the time when I had to decode the index file manually in the
> past, it was about the regular file entries.
>
> There was *one* week in which I had to decode the untracked cache a bit,
> to the point where I patched the test helper locally to help me with that.
>
> If my experience in debugging these things is any indicator, extensions do
> not matter even 10% as much as the non-extension data.

Again our experiences differ. Mine is mostly about extensions, probably
because I had to work on them more often. For normal entries, "ls-files
--debug" already gives you 99% of what's in the index file.

> > > especially when we offer this as a better way for 3rd-party
> > > applications to interact with Git (which I think will be the use case
> > > for this feature that will be _far_ more common than using it for
> > > debugging).
> >
> > We may have conflicting goals. For me, the first priority is a debug
> > tool for Git developers. 3rd-party support is a stretch. I could move
> > all this back to test-tool, then you can provide a 3rd-party API if
> > you want. Or I'll withdraw this series and go back to my original
> > plan.
>
> You don't need JSON if you want to debug things. That would be a lot of
> love lost, if debugging was your goal.

No, I did think of some other, line-based format before I ended up with
JSON; I did not want to use JSON in the beginning.

The thing is, a giant table covering all fields and entries in the main
index is not as easy to navigate, or even to search in 'less'. The
hierarchical structure of some extensions is hard to represent in a good
way (at least without writing lots of code). On top of that I still need
some easy way to parse and post-process the output, for example to see
how much I could save if I compressed the stat data. And the final nail
is that json-writer.c is already there, so it is much less work. So JSON
was the best option I found to meet all those points.
--
Duy