Re: [PATCH] config: use a static lock_file struct

On 08/30/2017 06:55 AM, Jeff King wrote:
> [...]
> Something like this, which AFAICT is about as safe as the existing code
> in its list manipulation. It retains the "active" flag as an additional
> check which I think isn't strictly necessary, but potentially catches
> some logic errors.
> 
> The strbuf_reset() calls become strbuf_release(), since we're promising
> the caller that they could now free the struct (or let it go out of
> scope) if they chose. Probably during a signal handler we should skip
> that (we know the struct is off the list and non-active at that point,
> but we could possibly hit libc's free() mutex).

This looks OK to me aside from the one caveat below.

> diff --git a/tempfile.c b/tempfile.c
> index 6843710670..a7d964ebf8 100644
> --- a/tempfile.c
> +++ b/tempfile.c
> [...]
> +static void deactivate_tempfile(struct tempfile *tempfile)
> +{
> +	struct tempfile *volatile *p;
> +
> +	if (!tempfile->active)
> +		return;
> +
> +	tempfile->active = 0;
> +	for (p = &tempfile_list; *p; p = &(*p)->next) {
> +		if (*p == tempfile) {
> +			*p = tempfile->next;
> +			break;
> +		}
>  	}
>  }

`deactivate_tempfile()` is O(N) in the number of active tempfiles. This
could become noticeable when, say, updating 30k references, which
involves obtaining 30k reference locks. I think that code adds the locks
in alphabetical order and also removes them in alphabetical order, so
the overall effort would grow like O(N²). I'm guessing that this would
be measurable but not fatal for realistic numbers of references, but it
should at least be benchmarked.

There are three obvious ways to make this O(1) again:

* Make the list doubly-linked.
* Make sure that all significant callers remove items in the *reverse*
  of the order in which they were added, in which case the item to be
  removed would always be found near the head of the list.
* Change this class to keep track of a tail pointer and add new items
  at the end of the list, and ensure that all significant callers
  remove locks in the *same* order that they were added.

The second and third options might be easy or difficult (depending on
how much the current callers would need to be changed), and either one
would certainly be easy to break again as new callers are added in the
future.

> [...]

Michael
