On 2/15/2022 10:59 AM, Jeff Hostetler via GitGitGadget wrote:
> From: Jeff Hostetler <jeffhost@xxxxxxxxxxxxx>
>
> Teach Git to detect remote working directories on macOS and mark them as
> incompatible with FSMonitor.
>
> With this, `git fsmonitor--daemon run` will error out with a message
> like it does for bare repos.
>
> Client commands, like `git status`, will not attempt to start the daemon.

...

> + * A client machine (such as a laptop) may choose to suspend/resume
> + * and it is unclear (without lots of testing) whether the watcher can
> + * resync after a resume.  We might be able to treat this as a normal
> + * "events were dropped by the kernel" event and do our normal "flush
> + * and resync" --or-- we might need to close the existing (zombie?)
> + * notification fd and create a new one.
> + *
> + * In theory, the above issues need to be addressed whether we are
> + * using the Hook or IPC API.

The only case I can think of here is one where the filesystem monitor
is actually running on the remote machine and Git communicates with it
over the network. This is currently possible with the hook, but I am
not aware of a hook implementation that does this. We could find a way
to update the hook interface to communicate to Git that a remote disk
is an appropriate case, but only if there is a real need for it.

> + * For the builtin FSMonitor, we create the Unix domain socket for the
> + * IPC in the .git directory.  If the working directory is remote,
> + * then the socket will be created on the remote file system.  This

The socket is on the remote file system, but the daemon process is
still local, so I still see this as a problem.

> + * can fail if the remote file system does not support UDS file types
> + * (e.g. smbfs to a Windows server) or if the remote kernel does not
> + * allow a non-local process to bind() the socket.
> + * (These problems
> + * could be fixed by moving the UDS out of the .git directory and to a
> + * well-known local directory on the client machine, but care should
> + * be taken to ensure that $HOME is actually local and not a managed
> + * file share.)
> + *
> + * So (for now at least), mark remote working directories as
> + * incompatible.
> + */
> +static enum fsmonitor_reason is_remote(struct repository *r)
> +{
> +	struct statfs fs;
> +
> +	if (statfs(r->worktree, &fs) == -1) {
> +		int saved_errno = errno;
> +		trace_printf_key(&trace_fsmonitor, "statfs('%s') failed: %s",
> +				 r->worktree, strerror(saved_errno));
> +		errno = saved_errno;
> +		return FSMONITOR_REASON_ZERO;

So if we fail to inspect the filesystem, we report it as compatible?
I suppose other things are likely to fail if checks like this are
failing, but I wonder if we should preempt that by marking this as
incompatible due to filesystem errors.

> +	}
> +
> +	trace_printf_key(&trace_fsmonitor,
> +			 "statfs('%s') [type 0x%08x][flags 0x%08x] '%s'",
> +			 r->worktree, fs.f_type, fs.f_flags, fs.f_fstypename);
> +
> +	if (!(fs.f_flags & MNT_LOCAL))
> +		return FSMONITOR_REASON_REMOTE;

I do see that we need a successful statfs() response to give this
specific reason for incompatibility.

> +	return FSMONITOR_REASON_ZERO;
> +}

> enum fsmonitor_reason fsm_os__incompatible(struct repository *r)
> {
> +	enum fsmonitor_reason reason;
> +
> +	reason = is_remote(r);
> +	if (reason)
> +		return reason;

This organization looks like you want to short-circuit the checks as
soon as you find an incompatibility, with the intent of adding more
checks in the future. But this can be done with simple || operators:

	return is_remote() || reason_check_2() || reason_check_3();

Thanks,
-Stolee