Re: [PATCH] reftable: ignore file-in-use errors when unlink(3p) fails on Windows

Patrick Steinhardt <ps@xxxxxx> writes:

> diff --git a/compat/mingw.c b/compat/mingw.c
> index 1d5b211b54..0e4b6a70a4 100644
> --- a/compat/mingw.c
> +++ b/compat/mingw.c
> @@ -302,7 +302,7 @@ static wchar_t *normalize_ntpath(wchar_t *wbuf)
>  	return wbuf;
>  }
>  
> -int mingw_unlink(const char *pathname)
> +int mingw_unlink(const char *pathname, int handle_in_use_error)
>  {
>  	int ret, tries = 0;
>  	wchar_t wpathname[MAX_PATH];
> @@ -317,6 +317,9 @@ int mingw_unlink(const char *pathname)
>  	while ((ret = _wunlink(wpathname)) == -1 && tries < ARRAY_SIZE(delay)) {
>  		if (!is_file_in_use_error(GetLastError()))
>  			break;
> +		if (!handle_in_use_error)
> +			return ret;
> +
>  		/*
>  		 * We assume that some other process had the source or
>  		 * destination file open at the wrong moment and retry.

So this is how we can avoid falling into the retry-then-go-interactive
sequence.  The underlying function is now prepared to offer both
choices at runtime.

> diff --git a/compat/mingw.h b/compat/mingw.h
> index ebfb8ba423..a555af8d54 100644
> --- a/compat/mingw.h
> +++ b/compat/mingw.h
> @@ -224,8 +224,12 @@ int uname(struct utsname *buf);
>   * replacements of existing functions
>   */
>  
> -int mingw_unlink(const char *pathname);
> -#define unlink mingw_unlink
> +int mingw_unlink(const char *pathname, int handle_in_use_error);
> +#ifdef MINGW_DONT_HANDLE_IN_USE_ERROR
> +# define unlink(path) mingw_unlink(path, 0)
> +#else
> +# define unlink(path) mingw_unlink(path, 1)
> +#endif

This one is yucky.  All calls to unlink() used in compilation units
with the CPP macro defined are going to fail on a path that is in
use, but in other code paths, there will be the retry loop.

Regardless of the platform, the code must be prepared to see its
unlink() fail and deal with the failure, but I wonder how much the
initial "if file-in-use caused problem, retry with delay without
bugging the end user" loop is helping.  

After that retry loop expires, we go interactive, and I can
understand why end-users may be annoyed by the code going
interactive at that point.

But wouldn't it be too drastic a change to break out of the retry
loop immediately after the initial failure?

Unless the case found in reftable is that the process that has the
file in use is ourselves, and not somebody else that is not under
our control, it could be that the current users are being helped by
the retry loop, because these other users would quickly close the
file and exit while we are retrying, before we go interactive.
What I am getting at is that it might be a less drastic move, and
one that helps users better, if we moved the "let's just accept the
failure and return to the caller" to after that non-interactive
retry loop, instead of returning after even the first failure.
That way, we would still keep the automatic, non-interactive
recovery, and merely punt a bit earlier than before, skipping the
interactive retry loop.

Of course, if we are depending on the ability to unlink what _we_
ourselves are using, we should stop doing that by reorganizing the
code.  I recall we have already done such code shuffling in other
code paths, flipping the order so that we close a file before we
unlink it, only to mollify Windows.  But if we are failing because
random other users have the file open at the same time, at least
the earlier non-interactive retry loop sounds like a reasonable
workaround for quirks in the underlying filesystem to me.

Thanks.