On 12/8/05, Scott Marlowe <smarlowe@xxxxxxxxxxxxxxxxx> wrote:
> Please keep replies on list, this may help others in the future, and
> also, don't top post (i.e. put your responses after my responses...
> Thanks)
>
> On Tue, 2005-12-06 at 20:16, Kathy Lo wrote:
> > For a back-end database server running Postgresql 8.0.3, it's OK.
> > But this problem seriously affects the performance of my application
> > server.
> >
> > I upgraded my application server from
> >
> >     Redhat 7.3
> >     unixODBC 2.2.4
> >     Postgresql 7.2.1 with ODBC driver
> >
> > to
> >
> >     Redhat 9.0
> >     unixODBC 2.2.11
> >     Postgresql 8.0.3
> >     psqlodbc-08.01.0101
> >     pg_autovacuum running as a background job
> >
> > Before the upgrade, the application server ran perfectly. After the
> > upgrade, this problem appeared.
> >
> > When the application server receives a request from a client, it
> > accesses the back-end database server using both simple and complex
> > queries, then creates a database locally to store the matched rows
> > for data processing. After the data processing, it returns the
> > result to the requesting client. When the client finishes browsing
> > the result, the application server drops the local database.
>
> OK, there could be a lot of problems here. Are you actually doing
> "create database ..." for each of these things? I'm not sure that's a
> real good idea. Even create schema, which would be better, strikes me
> as not the best way to handle this.

Actually, my program is written in C++, so I use a "create database"
SQL statement to create each database. If that is not the best way,
please tell me another method to create these databases from a C++
program.
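If schemas are the better way, I assume they can be created and
dropped through the same unixODBC calls I already use for "create
database". Here is a rough sketch of what I have in mind; the DSN
"appdb", the schema name "session_1234" and the table layout are
placeholders I made up, not my real setup:

    #include <sql.h>
    #include <sqlext.h>
    #include <string>
    #include <cstdio>

    /* Run one SQL statement and report failure; sketch-level error
       handling only. */
    static bool run(SQLHSTMT stmt, const std::string &sql)
    {
        SQLRETURN rc = SQLExecDirect(stmt, (SQLCHAR *) sql.c_str(), SQL_NTS);
        if (!SQL_SUCCEEDED(rc))
            std::fprintf(stderr, "statement failed: %s\n", sql.c_str());
        return SQL_SUCCEEDED(rc);
    }

    int main()
    {
        SQLHENV env;
        SQLHDBC dbc;
        SQLHSTMT stmt;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        /* "appdb" is a made-up DSN pointing at one shared database. */
        SQLCHAR conn[] = "DSN=appdb;";
        if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, conn, SQL_NTS,
                                            NULL, 0, NULL,
                                            SQL_DRIVER_NOPROMPT)))
            return 1;
        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

        /* One schema per client session instead of one database.  The
           name would really be derived from a session id and validated;
           "session_1234" is a placeholder. */
        std::string schema = "session_1234";
        run(stmt, "CREATE SCHEMA " + schema);
        run(stmt, "CREATE TABLE " + schema + ".matched_rows (id int, data text)");
        /* ... store matched rows, do the data processing ... */
        run(stmt, "DROP SCHEMA " + schema + " CASCADE");  /* replaces DROP DATABASE */

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }

If I understand it correctly, DROP SCHEMA ... CASCADE would then
remove a session's tables in one statement, the way DROP DATABASE
does now.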
> > At the same time, this application server can serve many, many
> > clients, so the application server has many, many local databases
> > at the same time.
>
> Are you sure that you're better off with databases on your application
> server? You might be better off either running these temp dbs on the
> backend server in the same cluster, or creating a cluster just for
> these jobs that is somewhat more conservative in its memory usage. I
> would lean towards doing this all on the backend server in one
> database using multiple schemas.

Because the data are distributed across many back-end database servers
(physically different hardware machines), I need the application
server to temporarily store the data retrieved from the different
machines and then do the data processing. Also, for security reasons,
users cannot directly access the back-end database servers, so I use
the database on the application server to keep the results of the data
processing.

> > After running the application server for a few days, the memory of
> > the application server is nearly used up, it starts to use swap,
> > and, as a result, the application server runs very, very slowly and
> > the users complain.
>
> Could you provide us with your evidence that the memory is "used up?"
> What is the problem, and what you perceive as the problem, may not be
> the same thing. Is it the output of top / free, and if so, could we
> see it, or whatever output is convincing you that you're running out
> of memory?

When a user complains that the system has become very slow, I use top
to view the memory statistics. In top, I cannot find any process that
uses an unusually large amount of memory; I just found that all the
memory was used up and the swap space was nearly used up. I say it is
a problem because, before upgrading the application server, there was
no memory problem even after running the application server for 1
month. After upgrading the application server, this problem appears
after running it for just 1 week. Why is there this BIG difference
between postgresql 7.2.1 on Redhat 7.3 and postgresql 8.0.3 on Redhat
9.0? I only upgraded the OS, postgresql, unixODBC and the postgresql
ODBC driver. The program I wrote IS THE SAME.

> > I tested the application server without accessing the local database
> > (not storing the matched rows). The testing program running on the
> > application server just retrieved rows from the back-end database
> > server and returned them to the requesting client directly. The
> > memory usage of the application server became normal and it could
> > run for a long time.
>
> Again, what you think is normal, and what normal really is, may not
> be the same thing. Evidence. Please show us the output of top / free
> or whatever it is that is showing this.

After I received the user's complaint, I just used top to view the
memory statistics, and I forgot to save the output. But I am running a
test to reproduce the problem, so after the test I will give you the
output of top/free.

> > I found this problem after I upgraded the application server.
> >
> > On 12/7/05, Scott Marlowe <smarlowe@xxxxxxxxxxxxxxxxx> wrote:
> > > On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:
> > > > Hi,
> > > >
> > > > In this program, it will access this database server using simple
> > > > and complex (joining tables) SQL Select statements and retrieve
> > > > the matched rows. For each access, it connects to the database
> > > > and then disconnects.
> > > >
> > > > I found that the memory of the database server was nearly used
> > > > up (total 2G RAM).
> > > >
> > > > After I stopped the program, the used memory was not freed.
> > >
> > > Ummmm. What exactly do you mean? Can we see the output of top
> > > and / or free? I'm guessing that what Tom said is right, you're
> > > just seeing the normal state of how unix does things.
> > >
> > > If your output of free looks like this:
> > >
> > > -bash-2.05b$ free
> > >              total       used       free     shared    buffers     cached
> > > Mem:       6096912    6069588      27324          0     260728    5547264
> > > -/+ buffers/cache:     261596    5835316
> > > Swap:      4192880      16320    4176560
> > >
> > > then that's normal.
> > >
> > > That's the output of free on a machine with 6 gigs that runs a
> > > reporting database. Note that while it shows almost ALL the memory
> > > as used, it is being used by the kernel, which is a good thing.
> > > Note that 5547264, or about 90% of memory, is being used as kernel
> > > cache. That's a good thing. The "-/+ buffers/cache" line already
> > > backs that cache out for you: 6069588 - 260728 - 5547264 = 261596k
> > > of real usage.
> > >
> > > Note you can also get yourself in trouble with top. It's not
> > > uncommon for someone to see a bunch of postgres processes each
> > > eating up 50 or more megs of ram, and panic and think that they're
> > > running out of memory, when, in fact, 44 megs of each of those
> > > processes is shared, and the real usage per backend is 6 megs or
> > > less.
> > >
> > > Definitely grab yourself a good unix / linux sysadmin guide. The
> > > "in a nutshell" books from O'Reilly are a good starting point.
> >
> > --
> > Kathy Lo

--
Kathy Lo