Greetings,

* Peter J. Holzer (hjp-pgsql@xxxxxx) wrote:
> On 2019-05-18 19:16:19 -0500, Ron wrote:
> > > If the table fills up, you increase nr_buckets, reallocate and
> > > rehash all entries.
> >
> > Ouch.  Response time on a big table would take a serious hit if
> > that rehash happened in the middle of the day on a big OLTP system.

As noted below, that isn't actually how it works with PG's hash
indexes.

> So that might be a reason not to use hash indexes for tables where
> the worst-case insert time is a concern (the average insert time is
> still O(1)).

The worst-case insert time is certainly worse than the average, but
it's not nearly as bad as is being imagined here.

> You can split buckets lazily, and in fact PostgreSQL does this.  I
> just looked at the code briefly, but I don't really understand how it
> works.  Guess I would have to read Margo Seltzer's paper for that.
>
> Still, even with that optimization (or tradeoff - it may make the
> lookup slower) the README notes that splits are expensive.

There are multiple levels of cost, really, from the "best case" where
the new item can just be placed directly onto a page that has free
space, to a "slightly worse case" where an overflow page has to be
added, to an "even worse case" where a prior page split has to be
finished, to the "probably worst case" where a full page split has to
happen.  There might even be some pathological cases where a prior page
split has to be finished, then an overflow page has to be added, and
then another page split has to be done, or some such.  (A toy sketch of
the general lazy-splitting idea is included after the sign-off.)

Even so, for equality-based lookups (where the data set has few or zero
duplicates), hash indexes can work quite well, but it's of course good
to understand that inserts aren't always going to be exactly the same
speed every time.

Thanks,

Stephen
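
P.S. Since the lazy-splitting question came up, here is a rough,
textbook-style sketch of linear hashing in Python, just to make the
general idea concrete.  This is not PostgreSQL's implementation (the
real thing works in terms of bucket pages, overflow pages, and metapage
state), and the class name, load-factor threshold, and in-memory bucket
layout below are all made up for illustration:

# Toy linear hashing: the table grows one bucket at a time, so no
# single insert ever pays for rehashing the whole table.
class LinearHashTable:
    def __init__(self, initial_buckets=4, max_load=2.0):
        self.n0 = initial_buckets   # bucket count at "level 0"
        self.level = 0              # completed doubling rounds
        self.split_ptr = 0          # next bucket due to be split
        self.buckets = [[] for _ in range(initial_buckets)]
        self.count = 0
        self.max_load = max_load    # avg entries/bucket before a split

    def _bucket_for(self, key):
        # Buckets below the split pointer have already been split this
        # round, so they are addressed with the doubled modulus.
        h = hash(key)
        b = h % (self.n0 * 2 ** self.level)
        if b < self.split_ptr:
            b = h % (self.n0 * 2 ** (self.level + 1))
        return b

    def insert(self, key, value):
        self.buckets[self._bucket_for(key)].append((key, value))
        self.count += 1
        # Amortized growth: split exactly one bucket when load is high.
        if self.count / len(self.buckets) > self.max_load:
            self._split_one()

    def _split_one(self):
        old = self.split_ptr
        self.buckets.append([])   # new bucket at old + n0 * 2**level
        entries, self.buckets[old] = self.buckets[old], []
        # Redistribute only this one bucket's entries between the old
        # bucket and the new one.
        for k, v in entries:
            b = hash(k) % (self.n0 * 2 ** (self.level + 1))
            self.buckets[b].append((k, v))
        self.split_ptr += 1
        if self.split_ptr == self.n0 * 2 ** self.level:
            self.split_ptr = 0    # finished a full doubling round
            self.level += 1

    def lookup(self, key):
        for k, v in self.buckets[self._bucket_for(key)]:
            if k == key:
                return v
        return None

The per-insert cost here is O(1) amortized: a split touches only one
bucket's entries, which is what avoids the "rehash everything" pause
that was being worried about upthread.  What it doesn't show is the
variation described above (overflow pages, finishing a prior split),
which is where the real-world differences in insert speed come from.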