On Mon, 14 July 2008, Petr Baudis wrote:
> On Fri, Jul 11, 2008 at 03:11:05AM +0200, Lea Wiemann wrote:
> >
> > This also adds the Git::Commit and Git::Tag classes, which are used by
> > Git::Repo, the Git::Object base class, and the Git::RepoRoot helper
> > factory class.
>
> I really miss some more detailed writeup on the envisioned design here.
> And if we are redoing the API in a better way, we better should have
> some vision.

Once again: if you are adding a large amount of code, you'd better
describe the "whys" of it.

> Most importantly, how is Git::Repo interacting with working copies, and
> how is it going to interact with them as the new OO model shapes up?
> You mention briefly Git::WC later, but it is not really clear how the
> interaction should work.
>
> First, I don't think it's good idea at all to put the pipe-related stuff
> to Git::Repo - this is botched up API just like the current one. This
> all is independent from any repository instances, in fact it's perfectly
> valid to call standalone remote commands (like ls-remote or, actually,
> clone).

There are three classes of git commands:

1. standalone, i.e. those that don't even require a repository, e.g.
   git-ls-remote, git-clone or git-init (or the git wrapper itself, as
   in "git --version");

2. those that require a repository (and should use --git-dir=<path>),
   e.g. git-cat-file, git-log / git-rev-list, git-for-each-ref /
   git-show-ref, git-diff-tree, git-ls-tree;

3. those that require both a repository and a working copy (and should
   probably use both --git-dir=<path> and --work-tree=<path>), like
   git-commit, git-clean, git-ls-files (the last one may require only
   the index);

3'. those that require both a repository and a working copy, and whose
    behavior depends on where in the working copy (the current
    directory) we are[*1*].

*All* git commands need to know the path to the "git" wrapper binary,
or where $GIT_EXEC_PATH is.
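As a rough illustration of the split above, the command lines for the
three classes differ only in which global options get prepended.  This
is a hypothetical sketch; build_cmdline and its numeric $class argument
are made up for this example and are not part of Git.pm or of the
proposed API:

```perl
#!/usr/bin/perl
# Hypothetical sketch of the three command classes; build_cmdline and
# its numeric $class argument are illustrative only.
use strict;
use warnings;

sub build_cmdline {
    my ($class, $git_dir, $work_tree, @args) = @_;
    my @cmd = ('git');
    push @cmd, "--git-dir=$git_dir"     if $class >= 2;  # classes 2, 3, 3'
    push @cmd, "--work-tree=$work_tree" if $class >= 3;  # classes 3, 3'
    return (@cmd, @args);
}

# Class 1: standalone, no repository needed.
print join(' ', build_cmdline(1, undef, undef, 'ls-remote', 'URL')), "\n";
# Class 2: repository only.
print join(' ', build_cmdline(2, '/srv/repo.git', undef, 'ls-tree', 'HEAD')), "\n";
# Class 3: repository plus working copy.
print join(' ', build_cmdline(3, '/srv/repo.git', '/srv/wc', 'ls-files')), "\n";
```

(For class 3', the wrapper would additionally have to chdir() into the
right subdirectory of the working copy before running the command.)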
[*1*] This (besides a pointer to a Git::Repo instance) is how Git::WC
differs from a simple directory: the pointer to where we are, and the
wc_path(), wc_subdir(), [wc_cdup(),] and wc_chdir(<SUBDIR>) methods.

> Here is an idea: Introduce Git::Command object that will have very
> general interface and look like
>
>   my $c = Git::Command->new(['git', '--git-dir=.', 'cat-file', \
>                              '-p', 'bla'], {pipe_out=>1})
>   ...
>   $c->close();

Errr... how do you read from such a pipe?  <$c> I think wouldn't work,
unless you used some trickery...

> and a Git::CommandFactory with a nicer interface that would look like
>
>   my $cf = Git::CommandFactory->new('git', '--git-dir=.');
>   my $c = $cf->output_pipe('cat-file', '-p', 'bla');
>   $c->close();
>
> Then, Git::Repo would have a single Git::CommandFactory instance
> pre-initialized with the required calling convention, and returned by
> e.g. cmd() method. Then, from the user POV, you would just:
>
>   my $repo = Git::Repo->new;
>   $repo->cmd->output_pipe('cat-file', '-p', 'bla');
>
> Or am I overdoing it?

You are probably overdoing it.  I think it would be good to have the
following interface:

  Git->output_pipe('ls-remote', $URL, '--heads');
  [...]
  $r = Git::Repo->new(<git_dir>);
  $r->output_pipe('ls-tree', 'HEAD');
  [...]
  $nb = Git::Repo::NonBare->new(<git_dir>[, <working_area>]);
  $nb->output_pipe('ls-files');

How this can be done with minimal effort, unfortunately, I don't
know...

> Git::Repo considers only bare repositories. Now, since "working copy" by
> itself has nothing to do with Git and is just an ordinary directory
> tree,

Well, it also provides a current subdir pointer, a pointer to the git
repository it is associated with, and a few methods to examine and
change both.

> I think Git::WC does not make that much sense, but something like
> Git::Repo::Nonbare certainly would.
> This would be a Git::Repo subclass
> with:
>
>  * Different constructor
>
>  * Different Git::CommandFactory instance
>
>  * Git::Index object aside the existing ones (like Git::Config,
>    Git::RefTree, ...)
>
>  * Some kind of wc_root() method to help directory navigation
>
> And that pretty much covers it?

Good idea, I think.

> Another thing is clearly describing how error handling is going to work.
> I have not much against ditching Error.pm, but just saying "die + eval"
> does not cut it - how about possible sideband data? E.g. the failure
> mode of Git.pm's command() method includes passing the error'd command
> output in the exception object. How are we going to handle it? Now, it
> might be actually okay to say that we _aren't_ going to handle this if
> it is deemed unuseful, but that needs to be determined too. I don't know
> off the top of my head.

I think the solution might be an output_pipe() option controlling how
to treat the command's exit status, the command's STDERR, and errors
when invoking the command (for example, command not found).

The article mentioned earlier,
http://www.perl.com/pub/a/2002/11/14/exception.html, explains why one
might want to use Error.pm.

> > ---
> >
> > Please note before starting a reply to this: This is not an argument;
> > I'm just explaining why I implemented it the way I did. So please
> > don't try to argue with me about what I should or should have done.
> > I'm not going to refactor Git::Repo to use Git.pm or vice versa; it's
> > really a much more non-trivial task than you might think at first
> > glance. [...]
>
> I agree that your main objective is caching for gitweb, but that's not
> what everything revolves around for the rest of us. If you chose the way
> of caching within the Git API and introducing the API to gitweb, I think
> you should spend the effort to deal with the API properly now.
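Returning to the error-handling point for a moment: one way to keep the
three channels (exit status, STDERR, STDOUT) separate, so an
output_pipe() option could decide what to do with each.  This is a
minimal sketch; run_capture is a hypothetical helper, not proposed API,
and it reads all output up front rather than offering a streaming pipe:

```perl
#!/usr/bin/perl
# Sketch: collect a command's stdout, stderr, and exit status
# separately.  run_capture is an illustrative helper, not proposed API.
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

sub run_capture {
    my @cmd = @_;
    my $err = gensym;   # open3 needs a real glob for a separate stderr
    my $pid = open3(my $in, my $out, $err, @cmd);
    close($in);
    # Reading stdout fully before stderr is fine for small outputs; a
    # real implementation would select() over both to avoid deadlock.
    my @stdout = <$out>;
    my @stderr = <$err>;
    waitpid($pid, 0);
    return { stdout => \@stdout, stderr => \@stderr, status => $? >> 8 };
}

# Use the perl binary itself so the example is self-contained.
my $r = run_capture($^X, '-e', 'print "ok\n"; warn "oops\n"; exit 3');
print "status: $r->{status}\n";     # prints "status: 3"
print "stderr: @{$r->{stderr}}";    # prints "stderr: oops"
```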
I think the idea is that gitweb caching as it is implemented (data
caching) requires some kind of Perl API, and the existing Git.pm didn't
cut it -- therefore Git::Repo and friends were created.  But the focus
is gitweb caching, not the Perl API (beyond the Perl API having to be
usable).

-- 
Jakub Narebski
Poland