
Re: IO related waits


Hello,

On Sat, 2024-09-21 at 10:20 -0700, Adrian Klaver wrote:
> On 9/21/24 02:36, veem v wrote:
> >
> > Actually, the application here uses Kafka and Flink streams and
> > reuses existing code that was doing row-by-row commits; the commit
> > point has now been shifted from row-by-row to batch. Multiple
> > sessions are spawned at the same time to process the incoming
> > messages 24/7. As described in another ongoing thread, and as we saw
> > in the past, we did not see much difference between "batch commit"
> > and "batch insert" performance; we only realized that the row-by-row
> > commit performs worse.
> 
> The bottom line is that to solve this, a cost is going to have to be
> paid somewhere. Previously it was paid with autocommit, in the form of
> slow insert performance. You improved the speed of the inserts by
> wrapping multiple inserts in transactions, and that led you to this
> problem, where open transactions across sessions are leading to
> deadlock issues due to the same id being inserted in concurrent open
> sessions. Per my and Greg Sabino Mullane's comments, the solution is
> going to need planning. Right now you are playing a round of
> Whack-a-Mole, making ad-hoc changes to portions of the process without
> reference to the process as a whole. At some point the parallel
> duplicate ids (records) need to be straightened out into a serial
> application of data. You and the other people involved need to come up
> with a coherent view of the process as a whole, with the goal of
> achieving that. Then you can start planning where that cost is best
> paid: 1) in the data before the streaming, 2) in the streaming process
> itself, 3) in the database, or 4) spread out across 1-3.
> 
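To Adrian's point above about straightening the parallel duplicate ids
into a serial application of data: one common mitigation on the database
side (a sketch only; the `Row` record and id column are hypothetical
stand-ins for your actual message shape) is to sort every batch by id
before inserting, so that all concurrent sessions acquire row locks in
the same order and cannot form the lock-ordering cycles that produce
deadlocks:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class BatchOrdering {
    // Hypothetical record standing in for one incoming Kafka/Flink message.
    record Row(long id, String payload) {}

    // Sorting each batch by id means every session touches rows in the
    // same order; two sessions inserting overlapping ids then block and
    // proceed serially instead of deadlocking.
    static List<Row> orderForInsert(List<Row> batch) {
        List<Row> sorted = new ArrayList<>(batch);
        sorted.sort(Comparator.comparingLong(Row::id));
        return sorted;
    }

    public static void main(String[] args) {
        List<Row> batch = List.of(
                new Row(42, "b"), new Row(7, "a"), new Row(99, "c"));
        System.out.println(orderForInsert(batch));
    }
}
```

This does not remove the cost Adrian describes, it only moves it: the
later of two conflicting sessions waits on a lock instead of aborting.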

You are using Apache Flink to pump data into your database. It seems
to me that you have this occurring:

pipelineA --> sessionA
pipelineB --> sessionB
       etc.

You haven't said whether the SQL code doing the inserts is generated by
Flink or is your own code. If it is your own code, make sure you are
handling SQLException events properly.
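For the kind of deadlocks described above, "handling SQLException
properly" usually means checking the SQLSTATE and retrying the whole
batch when it is a deadlock (40P01) or serialization failure (40001),
rather than swallowing the exception or retrying blindly. A minimal
sketch, assuming plain JDBC and a caller-supplied unit of work that
replays the entire transaction (the `BatchWork` interface and method
names here are illustrative, not from any library):

```java
import java.sql.SQLException;

public class RetryPolicy {
    // PostgreSQL SQLSTATEs that are safe to retry:
    // 40001 = serialization_failure, 40P01 = deadlock_detected.
    // Anything else (e.g. 23505 unique_violation) should surface.
    static boolean isRetryable(SQLException e) {
        String state = e.getSQLState();
        return "40001".equals(state) || "40P01".equals(state);
    }

    // A unit of work that begins a transaction, executes the batch,
    // and commits; it must be replayable from scratch.
    interface BatchWork { void run() throws SQLException; }

    static void runWithRetry(BatchWork work, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                work.run();
                return;
            } catch (SQLException e) {
                if (!isRetryable(e) || attempt >= maxAttempts) throw e;
                // Deadlock or serialization failure: the server already
                // rolled the transaction back, so just replay the batch.
            }
        }
    }
}
```

The key design point is that the retry unit is the whole transaction,
not the individual statement: after a deadlock the server has discarded
all of the batch's work.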

If there is a Flink mailing list or user group, you'll probably find
more help there from other Flink users.

Follow Adrian's advice. 

HTH,
Rob






