On Wed, Nov 06, 2019 at 03:15:53PM -0800, Andres Freund wrote:
> Hi,
>
> On 2019-11-06 22:54:48 +0100, Tomas Vondra wrote:
> > If we're only talking about FPGA I/O acceleration, essentially an FPGA
> > between the database and storage, it's likely possible to get that
> > working without any extensive executor changes. Essentially create an
> > FPGA-aware variant of SeqScan and you're done. Or an FPGA-aware
> > tuplesort, or something like that. Neither of these should require
> > significant planner/executor changes, except for costing.
> I doubt that that is true. For one, you need to teach the FPGA enough
> about the intricacies of the postgres storage format to make sense of
> visibility information, so that it knows when it is safe to look at a
> tuple (you can't evaluate quals before checking visibility). It also
> needs to be fed a lot of information about the layout of the table,
> the involved operators etc. And even if you define those away somehow,
> you still need to make sure that the on-disk state is coherent with
> the in-memory state - which definitely requires reaching outside of
> just a replacement seqscan node.
That's true, of course - the new node would have to know a lot of
details about the on-disk format, the meaning of operators, etc. Not
trivial, that's for sure. (I think PGStrom does this.)
What I had in mind were extensive changes to how the executor works in
general, because the OP mentioned changing the executor from pull to
push, or abandoning the iterative executor design. And I think that
would not be necessary ...
> I've a hard time believing that the approach of performing qual
> evaluation at the storage level is actually useful for anything close
> to a general-purpose database, especially a row store - even though
> some storage vendors are pushing this model heavily.
I agree with this too - it's unlikely to be a huge win for "regular"
workloads; it's usually aimed at (some) analytical workloads.

And yes, a row store is not the most efficient format for this type of
accelerator (I don't have much experience with FPGAs, but for GPUs it's
very inefficient).
> It's more realistic to have a model where the FPGA is fed pre-processed
> data, and it streams out the processed results. That way there are no
> problems with coherency, one can transparently handle the parts of
> reading the data that the FPGA can't, etc.
Well, the whole idea is that the FPGA does a lot of "simple" filtering
before the data even get into RAM / CPU, etc. So I don't think this
model would perform well - I assume the necessary pre-processing could
easily cost more than the offload gains.
> But I admit I'm sceptical even the above model is relevant for
> postgres. The potential market seems likely to stay small, and there's
> so much more performance work that's applicable to everyone using PG,
> even without access to special purpose hardware.
Not sure. It certainly is irrelevant for everyone who does not have
access to systems with FPGAs, and useful only for some workloads. How
large the market is, I don't know.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services