Sorry if this question has been asked before; I googled but found no answer. I am trying, from C++, to iterate over an array returned from a stored procedure:

    std::stringstream ss;
    ss << "SELECT * FROM get_array_test()";
    res = PQexec(conn, ss.str().c_str());

    int nrows = PQntuples(res);
    int nfields = PQnfields(res);
    printf("number of rows returned = %d\n", nrows);
    printf("number of fields returned = %d\n", nfields);
    for (int r = 0; r < nrows; r++) {
        for (int n = 0; n < nfields; n++)
            printf(" %s = %s(%d),", PQfname(res, n), PQgetvalue(res, r, n), PQgetlength(res, r, n));
        printf("\n");
    }

Here is my result:

    number of rows returned = 1
    number of fields returned = 1
    get_array_test = {100,200,300}(13),

Here is my stored procedure:

    CREATE OR REPLACE FUNCTION get_array_test() RETURNS integer[] AS $$
    DECLARE
        i int4[];
    BEGIN
        i[1] := 100;
        i[2] := 200;
        i[3] := 300;
        RETURN i;
    END;
    $$ LANGUAGE plpgsql;

Is there any way I can obtain the stored procedure result as a C/C++ int array, instead of one line of char*?

Thanks and Regards
Yan Cheng CHEOK

--- On Thu, 1/7/10, Yan Cheng Cheok <yccheok@xxxxxxxxx> wrote:

> From: Yan Cheng Cheok <yccheok@xxxxxxxxx>
> Subject: Re: PostgreSQL Write Performance
> To: "Greg Smith" <greg@xxxxxxxxxxxxxxx>
> Cc: "Dann Corbit" <DCorbit@xxxxxxxxx>, pgsql-general@xxxxxxxxxxxxxx
> Date: Thursday, January 7, 2010, 1:17 PM
>
> Thanks for the valuable advice! I will take it into serious
> consideration.
>
> From my point of view, my current requirement is limited by the
> so-called "overhead" of communicating with the database. See the
> following result from SQL Shell:
>
> SemiconductorInspection=# \timing on
> Timing is on.
> SemiconductorInspection=# ;
> Time: 0.660 ms
> SemiconductorInspection=# ;
> Time: 0.517 ms
> SemiconductorInspection=# ;
> Time: 2.249 ms
> SemiconductorInspection=#
>
> I assume there should be no hard disk activity involved, as I am
> sending an "empty" SQL statement over.
>
> Thanks and Regards
> Yan Cheng CHEOK
>
> --- On Thu, 1/7/10, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
>
> > From: Greg Smith <greg@xxxxxxxxxxxxxxx>
> > Subject: Re: PostgreSQL Write Performance
> > To: "Yan Cheng Cheok" <yccheok@xxxxxxxxx>
> > Cc: "Dann Corbit" <DCorbit@xxxxxxxxx>, pgsql-general@xxxxxxxxxxxxxx
> > Date: Thursday, January 7, 2010, 12:49 PM
> >
> > Yan Cheng Cheok wrote:
> > > The time taken to perform measurement per unit is in terms of
> > > ~30 milliseconds. We need to record the measurement result for
> > > every single unit. Hence, the time taken to record the
> > > measurement result shall be far less than that, so that it has
> > > nearly zero impact on the machine speed. (If not, the machine
> > > needs to wait for the database to finish writing before
> > > performing the measurement on the next unit.)
> >
> > Saving a piece of data to a hard disk permanently takes a few
> > milliseconds. As pointed out already, exactly how many depends on
> > the drive, but it's probably going to be 8ms or longer on your
> > system. There are a few options here:
> >
> > 1) Add a battery-backed write caching controller to your system.
> > Then the battery will help make sure the data doesn't get lost even
> > if the power goes out before the drive writes it out. This will
> > cost you around $300.
> >
> > 2) Use some other type of faster storage, such as an SSD drive that
> > has a battery on it to cache any unfinished writes. Probably also
> > going to be around that price; the cheaper SSDs (and some of the
> > expensive ones) don't take data integrity very seriously.
> >
> > 3) Write the data to a flat file. Periodically import the results
> > into the database in a batch.
> >
> > The thing you should realize is that using (3) is going to put you
> > in a position where it's possible you've told the machine the
> > measurement was saved, but if the system crashes it won't actually
> > be in the database. If you're saving to a flat file now, you're
> > already in this position--you can't write to a flat file and make
> > sure the result is on disk in less than around 8ms either, you just
> > probably haven't tested that out yet. Just because the write has
> > returned successfully, that doesn't mean it's really stored
> > permanently. Power the system off in the window between that write
> > and when the memory cache goes out to disk, and you'll discover the
> > data missing from the file after the system comes back up.
> >
> > If you're OK with the possibility of losing a measurement in the
> > case of a system crash, then you should just write measurements to
> > a series of flat files, then have another process altogether (one
> > that isn't holding up the machine) load those files into the
> > database. The fact that it takes a few ms to write to disk is a
> > physical limitation you can't get around without using more
> > expensive hardware to improve the situation. If you haven't been
> > seeing that in your app already, I assure you it's just because you
> > haven't looked for the issue before--this limitation on disk write
> > speed has been there all along; the database is just forcing you to
> > address it.
> >
> > --
> > Greg Smith    2ndQuadrant   Baltimore, MD
> > PostgreSQL Training, Services and Support
> > greg@xxxxxxxxxxxxxxx   www.2ndQuadrant.com

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general