* david@xxxxxxx (david@xxxxxxx) wrote:
> while I fully understand the 'benchmark your situation' need, this
> isn't that simple.

It really is.  You know your application, you know its primary use
cases, and you probably have some data to play with.  You're certainly
in a much better position to at least *try* to benchmark it than we
are.

> in this case we are trying to decide what API/interface to use in an
> infrastructure tool that will be distributed in common distros (it's
> now the default syslog package of debian and fedora), there are so
> many variables in hardware, software, and load that trying to
> benchmark it becomes effectively impossible.

You don't need to know how it will perform in every situation.  The
main question you have is whether using prepared queries is faster or
not, so pick a common structure, create a table, get some data, and
test (a rough sketch of such a test is below).  I can say that
prepared queries are more likely to give you a performance boost with
wider tables (more columns).

> based on Stephen's explanation of where binary could help, I think
> the easy answer is to decide not to bother with it (the postgres
> text-to-X converters get far more optimization attention than
> anything rsyslog could deploy)

While that's true, there's no substitute for not having to do a
conversion at all.  After all, it's a lot cheaper to do a bit of
byte-swapping on an integer value that's already an integer in memory
than to sprintf and atoi it (see the second sketch below).
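To make that concrete, here is a minimal sketch of the kind of test I
mean -- not rsyslog code; the connection string, table, column names,
and row count are all made up.  It times plain per-row PQexec()
INSERTs against the same INSERTs done with PQprepare()/PQexecPrepared():

/*
 * Rough benchmark sketch only -- assumes a reachable "dbname=test"
 * and a throwaway table.  Build with: cc bench.c -lpq -o bench
 */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

#define NROWS 10000

/* wall-clock seconds, since the client mostly waits on the server */
static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(PQexec(conn,
        "CREATE TEMP TABLE log (host text, pri int, msg text)"));

    /* plain text SQL: statement parsed and planned once per row */
    double t0 = now();
    for (int i = 0; i < NROWS; i++) {
        char sql[256];
        snprintf(sql, sizeof(sql),
            "INSERT INTO log VALUES ('host%d', %d, 'hello')", i, i % 8);
        PQclear(PQexec(conn, sql));
    }
    printf("plain:    %.2fs\n", now() - t0);

    /* prepared once, executed NROWS times with just the values */
    PQclear(PQprepare(conn, "ins",
        "INSERT INTO log VALUES ($1, $2, $3)", 3, NULL));
    t0 = now();
    for (int i = 0; i < NROWS; i++) {
        char host[32], pri[12];
        snprintf(host, sizeof(host), "host%d", i);
        snprintf(pri, sizeof(pri), "%d", i % 8);
        const char *vals[3] = { host, pri, "hello" };
        PQclear(PQexecPrepared(conn, "ins", 3, vals, NULL, NULL, 0));
    }
    printf("prepared: %.2fs\n", now() - t0);

    PQfinish(conn);
    return 0;
}

Swap in a schema closer to your actual log records; the gap should
widen as the table gets wider, since the statement is parsed once
rather than per-row.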
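And to illustrate the binary point -- again a hypothetical sketch with
the same made-up table -- passing an int4 in binary is just a
byte-swap and a pointer, instead of an snprintf() on our side and a
text-to-integer parse on the server's:

/*
 * Sketch of sending one integer parameter in binary format via
 * PQexecParams(); table and column names are invented.
 */
#include <stdint.h>
#include <arpa/inet.h>   /* htonl() */
#include <libpq-fe.h>

void insert_pri_binary(PGconn *conn, int pri)
{
    /* binary int4 on the wire is simply network byte order */
    uint32_t net_pri = htonl((uint32_t) pri);

    const char *values[1]  = { (const char *) &net_pri };
    const int   lengths[1] = { sizeof(net_pri) };
    const int   formats[1] = { 1 };   /* 1 = binary, 0 = text */

    /* 23 is the pg_type OID of int4; declaring it up front tells
       the server how to interpret the binary blob */
    const Oid types[1] = { 23 };

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO log (pri) VALUES ($1)",
                                 1, types, values, lengths, formats, 0);
    PQclear(res);
}

(In real code you'd use a named constant rather than the bare 23, of
course.)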
	Thanks,

		Stephen