unordered_set::erase returns an iterator to the next element in the hash table. When the table is nearly empty, finding that next element can mean scanning every single bucket. For a hash table that used to hold a lot of elements (say one million), all of which were removed so that only a few (say two) remain, the bucket count stays huge, and erasing one of the survivors is very slow. I'm not using the iterator returned by erase. Is there a way to avoid this situation? I'm not very keen on checking the load_factor and manually resizing the number of buckets.
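Here is a minimal sketch of what I'm seeing, assuming an implementation whose erase(iterator) walks forward through empty buckets to find the next element; the timings will vary between standard libraries, and the identity-style hash for int is also an assumption:

```cpp
#include <chrono>
#include <cstdio>
#include <unordered_set>

int main() {
    // Fill the set so the bucket count grows to roughly a million.
    std::unordered_set<int> s;
    for (int i = 0; i < 1000000; ++i) s.insert(i);

    // Remove everything except keys 0 and 999999; erasing does not
    // shrink the bucket count, so the table becomes very sparse.
    for (int i = 1; i < 999999; ++i) s.erase(i);  // key overload, returns a count
    std::printf("size=%zu buckets=%zu\n", s.size(), s.bucket_count());

    // With an identity-style hash, the two remaining keys likely sit in
    // buckets far apart, so computing the returned iterator may have to
    // traverse nearly every empty bucket in between.
    auto t0 = std::chrono::steady_clock::now();
    s.erase(s.find(0));  // iterator overload: must locate the next element
    auto t1 = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("erase(iterator) took %lld us\n", (long long)us.count());
}
```

As far as I can tell, the key-taking overload erase(key) returns a count rather than an iterator, so depending on the library it might sidestep the scan, but I'd like to know whether there's an idiomatic fix.

Thanks, Shaun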