Re: [PATCH] hashmap: add API to disable item counting when threaded

On Sat, Sep 02, 2017 at 01:31:19AM +0200, Johannes Schindelin wrote:

> > https://public-inbox.org/git/adb37b70139fd1e2bac18bfd22c8b96683ae18eb.1502780344.git.martin.agren@xxxxxxxxx/
> [...]
> > Add API to hashmap to disable item counting and to disable automatic
> > rehashing.  Also include APIs to re-enable item counting and automatic
> > rehashing.
> [...]
> 
> The Git contribution process forces me to point out lines longer than 80
> columns. I wish there was already an automated tool to fix that, but we
> (as in "the core Git developers") have not yet managed to agree on one. So
> I'll have to ask you to identify and fix them manually.

Perhaps it would be helpful if you pointed out the lines that are too
long. Because I don't see any being added by the patch. There are two in
the commit message. One is a URL. For the other, which is 82 characters,
I'm not sure there is a better tool than "turn on text wrapping in your
editor".

> > +	/* TODO Consider counting them and returning that. */
> 
> I'd rather not. If counting is disabled, it is disabled.
> 
> > +	die("hashmap_get_size: size not set");
> 
> Before anybody can ask for this message to be wrapped in _(...) to be
> translateable, let me suggest instead to add the prefix "BUG: ".

Agreed on both (and Jonathan's suggestion to just use BUG()).

> > +static inline void hashmap_enable_item_counting(struct hashmap *map)
> > +{
> > +	void *item;
> > +	unsigned int n = 0;
> > +	struct hashmap_iter iter;
> > +
> > +	hashmap_iter_init(map, &iter);
> > +	while ((item = hashmap_iter_next(&iter)))
> > +		n++;
> > +
> > +	map->do_count_items = 1;
> > +	map->private_size = n;
> > +}
> 
> BTW this made me think that we may have a problem in our code since
> switching from my original hashmap implementation to the bucket one added
> in 6a364ced497 (add a hashtable implementation that supports O(1) removal,
> 2013-11-14): while it is not expected that there are many collisions, the
> "grow_at" logic still essentially assumes the number of buckets to be
> equal to the number of hashmap entries.

I'm confused about what the problem is. If I am reading the code
correctly, "size" is always the number of elements and "grow_at" is the
table size times a load factor. Those are the same numbers you'd use to
decide to grow in an open-address table.

It's true that this does not take into account the actual number of
collisions we see (or the average per bucket, or however you want to
count it). But open-addressing schemes generally don't either (and
certainly our other hash tables just use the load factor to decide
when to grow).

Am I missing something?

-Peff


