Re: [PATCH v1 3/3] Add initial parallel version of unpack_trees()

Ben Peart <Ben.Peart@xxxxxxxxxxxxx> writes:

> +	 * Fetch the tree from the ODB for each peer directory in the
> +	 * n commits.
> +	 *
> +	 * For 2- and 3-way traversals, we try to avoid hitting the
> +	 * ODB twice for the same OID.  This should yield a nice
> +	 * speedup in checkouts and merges when the commits are similar.
> +	 *
> +	 * We don't bother doing the full O(n^2) search for larger n,
> +	 * because wider traversals don't happen that often and we
> +	 * avoid the search setup.

It is sensible to optimize for common cases while leaving out the
complexity that is only needed to support rare cases.

> +	 * When 2 peer OIDs are the same, we just copy the tree
> +	 * descriptor data.  This implicitly borrows the buffer
> +	 * data from the earlier cell.

cell meaning...?
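
My guess is that "cell" here means a slot in the newinfo->t[] array of
tree descriptors.  Since struct tree_desc only points at the tree data
(from tree-walk.h):

    struct tree_desc {
        const void *buffer;
        struct name_entry entry;
        unsigned int size;
    };

the assignment "newinfo->t[i] = newinfo->t[i - 1]" is a shallow copy:
both descriptors walk the same buffer, and only the one recorded in
newinfo->buf[] owns it and gets freed.  If that is the intent, perhaps
say "slot" or spell out the aliasing in the comment.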


> +	for (i = 0; i < n; i++, dirmask >>= 1) {
> +		if (i > 0 && are_same_oid(&names[i], &names[i - 1]))
> +			newinfo->t[i] = newinfo->t[i - 1];
> +		else if (i > 1 && are_same_oid(&names[i], &names[i - 2]))
> +			newinfo->t[i] = newinfo->t[i - 2];
> +		else {
> +			const struct object_id *oid = NULL;
> +			if (dirmask & 1)
> +				oid = names[i].oid;
> +
> +			/*
> +			 * fill_tree_descriptor() will load the tree from the
> +			 * ODB. Accessing the ODB is not thread safe so
> +			 * serialize access using the odb_mutex.
> +			 */
> +			pthread_mutex_lock(&o->odb_mutex);
> +			newinfo->buf[newinfo->nr_buf++] =
> +				fill_tree_descriptor(newinfo->t + i, oid);
> +			pthread_mutex_unlock(&o->odb_mutex);
> +		}
> +	}
> +
> +	/*
> +	 * We can't play games with the cache bottom as we are processing
> +	 * the tree objects in parallel.
> +	 * newinfo->bottom = switch_cache_bottom(&newinfo->info);
> +	 */

Would the resulting code still match the corresponding entries across
the two/three trees correctly when a tree has the entries "foo." (a
blob), "foo/" (which has a subtree), and "foo0" (a blob) at the same
time, without adjusting the bottom?  I am worried because the
cache_bottom machinery is not an optimization; it is there for
correctness.
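
For onlookers, the reason this trio of names is the classic test: in a
tree object a directory sorts as if its name had a trailing '/', and
'.' < '/' < '0' in byte order, so the entries of the "foo" subtree land
between "foo." and "foo0".  A minimal illustration, simplified from
base_name_compare() in tree-walk.c (the helper name cmp_entry is mine):

    #include <stdio.h>
    #include <string.h>

    /*
     * Simplified from base_name_compare(): when one name is a
     * prefix of the other, a directory entry compares as if it
     * had a trailing '/'.
     */
    static int cmp_entry(const char *a, int a_is_dir,
                         const char *b, int b_is_dir)
    {
        size_t la = strlen(a), lb = strlen(b);
        size_t len = la < lb ? la : lb;
        int cmp = memcmp(a, b, len);
        unsigned char ca, cb;

        if (cmp)
            return cmp;
        ca = a[len];
        cb = b[len];
        if (!ca && a_is_dir)
            ca = '/';
        if (!cb && b_is_dir)
            cb = '/';
        return ca < cb ? -1 : ca > cb ? 1 : 0;
    }

    int main(void)
    {
        /* "foo." < "foo" (a tree, i.e. "foo/") < "foo0" */
        printf("%d %d\n",
               cmp_entry("foo.", 0, "foo", 1),   /* prints -1 */
               cmp_entry("foo", 1, "foo0", 0));  /* prints -1 */
        return 0;
    }

As I understand it, the cache bottom is what keeps the traversal's
position in the index consistent while it dips into "foo/" between
those two blobs.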

> +	/* All I really need here is fetch_and_add() */
> +	pthread_mutex_lock(&o->work_mutex);
> +	o->remaining_work++;
> +	pthread_mutex_unlock(&o->work_mutex);
> +	mpmcq_push(&o->queue, &newinfo->entry);

Nice.  I like the general idea.
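
If we wanted to go there, C11 atomics would do it.  A sketch, assuming
remaining_work can be declared atomic_int and that the platform has
<stdatomic.h> (git would need a compat fallback where it does not):

    #include <stdatomic.h>

    /*
     * With "atomic_int remaining_work;" in the options struct (an
     * assumption on my part), the lock/unlock pair collapses into a
     * single fetch-and-add before the push.
     */
    atomic_fetch_add(&o->remaining_work, 1);
    mpmcq_push(&o->queue, &newinfo->entry);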



