2010/12/8 Tom Lane <tgl@xxxxxxxxxxxxx>:
> Robert Haas <robertmhaas@xxxxxxxxx> writes:
>>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps
>>> rightly).  And I'm not really sure why the PROCLOCKs need to be in a
>>> hash table anyway - if we know the PROC and LOCK we can surely look up
>>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE.
>
>> Err, pretty INexpensively.
>
> There are plenty of scenarios in which a proc might hold hundreds or
> even thousands of locks.  pg_dump, for example.  You do not want to be
> doing seq search there.
>
> Now, it's possible that you could avoid *ever* needing to search for a
> specific PROCLOCK, in which case eliminating the hash calculation
> overhead might be worth it.

That seems like it might be feasible.  The backend that holds the lock
ought to be able to find out whether there's a PROCLOCK by looking at
the LOCALLOCK table, and the LOCALLOCK has a pointer to the PROCLOCK
(a rough sketch of that lookup appears below the message).  It's not
clear to me whether there's any other use case for looking up a
particular combination of PROC A + LOCK B, but I'll have to look at
the code more closely.

> Of course, you'd still have to replicate
> all the space-management functionality of a shared hash table.

Maybe we ought to revisit Markus Wanner's wamalloc.  Although, given
our recent discussions, I'm thinking that you might want to design any
allocation system so as to minimize cache line contention.  For
example, you could hard-allocate each backend 512 bytes of dedicated
shared memory in which to record the locks it holds.  If it needs
more, it allocates additional 512-byte chunks (also sketched below).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
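A rough sketch of the LOCALLOCK shortcut described above.  The structs
and function here are simplified, hypothetical stand-ins -- not
PostgreSQL's actual definitions -- but they show the shape of the
idea: the backend-private LOCALLOCK caches a pointer to the
shared-memory PROCLOCK when the lock is acquired, so re-finding it
later needs no shared hash computation and no SHM_QUEUE scan.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for the shared-memory PROCLOCK. */
typedef struct ProcLock
{
    int         holdMask;       /* lock modes this backend holds */
} ProcLock;

/* Hypothetical, simplified stand-in for the backend-local LOCALLOCK. */
typedef struct LocalLock
{
    uint32_t    lock_id;        /* identifies the locked object */
    int         nLocks;         /* backend-local reference count */
    ProcLock   *proclock;       /* cached pointer into shared memory */
} LocalLock;

/*
 * Re-find the PROCLOCK for a lock this backend already holds.  The
 * real LOCALLOCK table is a backend-local hash table; a linear scan
 * keeps the sketch short.  Either way, nothing in shared memory is
 * touched until the cached pointer is dereferenced.
 */
static ProcLock *
find_my_proclock(const LocalLock *table, size_t nentries, uint32_t lock_id)
{
    for (size_t i = 0; i < nentries; i++)
        if (table[i].lock_id == lock_id)
            return table[i].proclock;
    return NULL;                /* this backend doesn't hold the lock */
}

The open question raised in the message is whether anything ever needs
the PROC A + LOCK B lookup for a lock it does not itself hold; only if
not can the shared PROCLOCK hash table be replaced by something
cheaper.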
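And a rough sketch of the hard-allocated chunk idea from the last
paragraph.  Everything below is hypothetical -- the names are invented
and calloc() stands in for carving chunks out of the shared memory
segment -- but it shows the shape: each backend owns a private chain
of 512-byte chunks, so only the owning backend ever writes those cache
lines.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define LOCK_CHUNK_SIZE 512

typedef struct LockEntry
{
    uint32_t    lock_id;        /* identifies the locked object */
    uint32_t    mode;           /* lock mode held */
} LockEntry;

/* How many entries fit after the chunk header, within 512 bytes. */
#define ENTRIES_PER_CHUNK \
    ((LOCK_CHUNK_SIZE - sizeof(void *) - sizeof(uint32_t)) / sizeof(LockEntry))

typedef struct LockChunk
{
    struct LockChunk *next;     /* next overflow chunk, or NULL */
    uint32_t    nused;          /* entries in use in this chunk */
    LockEntry   entries[ENTRIES_PER_CHUNK];
} LockChunk;

/* Record a held lock, chaining on a fresh chunk when the tail fills up. */
static void
record_lock(LockChunk *head, uint32_t lock_id, uint32_t mode)
{
    LockChunk  *c = head;

    while (c->nused == ENTRIES_PER_CHUNK)
    {
        if (c->next == NULL)
        {
            /* In the real scheme this would come from shared memory. */
            c->next = calloc(1, sizeof(LockChunk));
            if (c->next == NULL)
                exit(1);
        }
        c = c->next;
    }
    c->entries[c->nused].lock_id = lock_id;
    c->entries[c->nused].mode = mode;
    c->nused++;
}

int
main(void)
{
    /* The backend's initial, hard-allocated chunk. */
    LockChunk  *mine = calloc(1, sizeof(LockChunk));

    if (mine == NULL)
        return 1;

    /* A pg_dump-style backend might lock thousands of objects. */
    for (uint32_t i = 0; i < 5000; i++)
        record_lock(mine, i, 1);

    printf("%zu lock entries fit per 512-byte chunk\n",
           (size_t) ENTRIES_PER_CHUNK);
    return 0;
}

The 512-byte figure is taken straight from the message; picking the
right granularity and recycling chunks as locks are released is
exactly the space-management problem Tom points out would still have
to be solved.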