No, I mean it is not we end users who do it; Postgres does it via the xmin and xmax hidden system columns on every table :) if that is the "why" you wanted. Or are you asking whether Postgres even updates those fields, and my assumption is wrong?

Since the values need to be assigned atomically, consider this analogy: I (Postgres) am a person handing out tokens to people (connections/transactions) standing in a queue. If there is a single line (sequential), it is easy for me to hand out a token, increment the value, and so on. But if there are thousands of users in parallel lines, I am still only one person handing out tokens, so I have to operate sequentially, and each other person is "blocked" for some time before they get the token with the required value. So with thousands of users, that "delay" may impact performance, because I need to maintain the token counter to know which value to give to the next person.

I do not know if I am explaining it correctly; pardon my analogy.

Regards,
Vijay

On Wed, Mar 13, 2019 at 1:10 AM Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
>
> On 3/12/19 12:19 PM, Vijaykumar Jain wrote:
> > I was asked this question in one of my demos, and it was an interesting one.
> >
> > We update xmin for new inserts with the current txid.
>
> Why?
>
> > Now, in a very highly concurrent scenario where there are more than 2000
> > concurrent users trying to insert new data,
> > will updating the xmin value be a bottleneck?
> >
> > I know we should use pooling solutions to reduce concurrent
> > connections, but assume we have enough resources to spawn a new
> > process for each new connection.
> >
> > Regards,
> > Vijay
>
>
> --
> Adrian Klaver
> adrian.klaver@xxxxxxxxxxx
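PS: the analogy above can be sketched as a toy program. This is only a model of the "one person handing out tokens" idea, not Postgres internals: every worker thread must take the same lock to get the next token, so token assignment is serialized no matter how many workers run in parallel. The `TokenCounter` class and names are made up for illustration.

```python
import threading

class TokenCounter:
    """Toy stand-in for a shared transaction-id counter."""
    def __init__(self):
        self._lock = threading.Lock()
        self._next = 0

    def next_token(self):
        # All threads queue up here and pass one at a time,
        # which is the "blocked for some time" in the analogy.
        with self._lock:
            self._next += 1
            return self._next

counter = TokenCounter()
tokens = []
tokens_lock = threading.Lock()

def worker(n):
    got = [counter.next_token() for _ in range(n)]
    with tokens_lock:
        tokens.extend(got)

# 20 "parallel lines", each asking for 100 tokens.
threads = [threading.Thread(target=worker, args=(100,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every token is unique because handing them out was serialized.
assert len(tokens) == 2000
assert len(set(tokens)) == 2000
```

Under contention the lock becomes the bottleneck: throughput is bounded by how fast the single counter can be incremented, which is the worry in my original question.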