Re: fast read of binary data

On 12-11-2012 11:45, Eildert Groeneveld wrote:
> Dear All
>
> I am currently implementing a compressed binary storage scheme for
> genotyping data. These are basically vectors of binary data which may
> be megabytes in size.
>
> Our current implementation uses the data type bit varying.

Wouldn't 'bytea' be a more logical choice for binary data?
http://www.postgresql.org/docs/9.2/interactive/datatype-binary.html
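
Untested, but here is a minimal sketch of what I mean with plain libpq
(the database, table and column names genodb, samples and genotype are
made up). With an ordinary text-format result the server returns bytea
hex-encoded, and PQunescapeBytea() decodes it back into raw bytes:

/* sketch: fetch bytea via a normal text-format result and decode it */
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=genodb");   /* made-up database */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* made-up table and column */
    PGresult *res = PQexec(conn, "SELECT genotype FROM samples WHERE id = 42");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
    {
        size_t len;
        unsigned char *raw = PQunescapeBytea(
            (const unsigned char *) PQgetvalue(res, 0, 0), &len);
        printf("got %zu raw bytes\n", len);
        PQfreemem(raw);
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}

That still costs a hex encode on the server and a decode in the client,
though; see below for a way to avoid it altogether.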

> What we want to do is very simple: we want to retrieve such records
> from the database and transfer them unaltered to the client, which
> will do something (uncompressing) with them. As massive amounts of
> data are to be moved, speed is of great importance, precluding any
> to-and-fro conversions.
>
> Our current implementation uses Perl DBI; we can retrieve the data OK,
> but apparently there is some converting going on.
>
> Further, we would like to use ODBC from Fortran90 (wrapping the
> C library) for such transfers. However, all sorts of funny things
> happen here which look like conversion issues.
>
> In the old-fashioned network databases of a decade or two ago (in
> pre-SQL times) this was no problem. Maybe there is someone here who
> knows the PG internals sufficiently well to give advice on how big
> blocks of memory (i.e. bit varying records) can be transferred
> UNALTERED between backend and clients.

I have no idea whether bytea is treated differently in this context, though. Bit varying should be about as simple as it gets, given that it only contains 0's and 1's.
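
For what it's worth: if you want to avoid the text conversion entirely,
libpq can request binary result format via PQexecParams() (last
argument resultFormat = 1), in which case a bytea column arrives as the
raw bytes, untouched. Again only a sketch, with the same made-up names:

/* sketch: resultFormat = 1 asks for binary results, so the bytea
 * column arrives as raw bytes with no hex encoding/decoding at all */
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=genodb");   /* made-up database */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    const char *paramValues[1] = { "42" };
    PGresult *res = PQexecParams(conn,
        "SELECT genotype FROM samples WHERE id = $1",  /* made-up table */
        1,            /* one parameter */
        NULL,         /* let the server infer its type */
        paramValues,
        NULL, NULL,   /* text-format parameter, no length/format arrays */
        1);           /* resultFormat = 1: binary */
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
    {
        const char *raw = PQgetvalue(res, 0, 0);  /* points straight at the bytes */
        int len = PQgetlength(res, 0, 0);
        printf("got %d raw bytes\n", len);
        (void) raw;   /* hand this to the decompressor */
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}

Two caveats: in binary format every selected column comes back in its
internal representation (an int4 as four big-endian bytes, for
example), so this is easiest when you select only the blob itself; and
whether your ODBC driver exposes anything similar I don't know. If I
remember correctly, bytea's binary wire format is simply the bytes
themselves, which would be another point in its favour over bit
varying.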

Best regards,

Arjen



