On Mon, Oct 11, 2010 at 06:41:16AM +0800, Craig Ringer wrote:
> On 10/11/2010 01:14 AM, Mladen Gogala wrote:
>
>> I can provide measurements, but from Oracle RDBMS. Postgres doesn't
>> allow tuning of that aspect, so no measurement can be done. Would the
>> numbers from Oracle RDBMS be acceptable?
>
> Well, they'd tell me a lot about Oracle's performance as I/O chunk size
> scales, but almost nothing about the cost of small I/O operations vs
> larger ones in general.
>
> Typically dedicated test programs that simulate the database read
> patterns would be used for this sort of thing. I'd be surprised if
> nobody on -hackers has already done suitable testing; I was mostly
> asking because I was interested in how you were backing your assertions.

One thing a test program would have to take into account is multiple
concurrent users. What speeds up the single-user case may well hurt the
multi-user case, and the behaviors that hurt single-user cases may have
been put in place on purpose to allow decent multi-user performance. Of
course, all of that is "might" and "maybe", and I can't prove any
assertions about block size either. But the fact of multiple users needs
to be kept in mind.

It was asserted that reading bigger chunks would help performance; a
response suggested that, at least in Linux, setting readahead on a
device would essentially do the same thing. Or that's what I got from
the thread, anyway. I'm interested to know how similar performance
might be between the large-block-size case and the large-readahead
case. Comments, anyone?

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com
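[A minimal sketch of the kind of "dedicated test program" mentioned above: timing a sequential read of one file at several block sizes. The file name is hypothetical; a real harness would also evict the page cache between runs (e.g. via Linux's /proc/sys/vm/drop_caches), compare against different device readahead settings (adjustable on Linux with blockdev --setra), and add concurrent readers to cover the multi-user case discussed in the thread.]

```python
import os
import time


def time_sequential_read(path, block_size):
    """Read `path` start to finish in `block_size` chunks; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return time.perf_counter() - start


if __name__ == "__main__":
    # Create a throwaway 16 MB test file (hypothetical path, single-user case only).
    path = "readtest.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(16 * 1024 * 1024))
    try:
        # Compare an 8 kB (Postgres-like) block size against larger chunks.
        for bs in (8 * 1024, 64 * 1024, 1024 * 1024):
            print("%8d bytes/read: %.4fs" % (bs, time_sequential_read(path, bs)))
    finally:
        os.remove(path)
```

[Note that without dropping the OS page cache between runs, everything after the first pass is served from memory, so the numbers mostly reflect syscall overhead rather than disk behavior.]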