Cool - seems like the posters caught that "auto memory pick" problem before
you posted, but you got the 16GB/8k parameters right. Now we're looking at
realistic numbers: 790 seeks/second, 244 MB/s sequential write, but only
144 MB/s sequential read - perhaps 60% of what it should be. Seems like a
pretty good performer in general. If it were Linux, I'd play with the max
readahead in the I/O scheduler to improve the sequential reads.

- Luke

On 8/15/06 1:21 PM, "Bucky Jordan" <bjordan@xxxxxxxxxx> wrote:

> Luke,
>
> For some reason it looks like bonnie is picking a 300M file.
>
>> bonnie++ -d bonnie
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>              300M 179028  99 265358  41 270175  57 167989  99 +++++ +++ +++++ +++
>                    ------Sequential Create------ --------Random Create--------
>                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
> ,300M,179028,99,265358,41,270175,57,167989,99,+++++,+++,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
>
> So here are the results when I force it to use a 16GB file, which is
> twice the amount of physical RAM in the system:
>
>> bonnie++ -d bonnie -s 16000:8k
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>            16000M 158539  99 244430  50  58647  29  83252  61 144240  21 789.8   7
>                    ------Sequential Create------ --------Random Create--------
>                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                16  7203  54 +++++ +++ +++++ +++ 24555  42 +++++ +++ +++++ +++
> ,16000M,158539,99,244430,50,58647,29,83252,61,144240,21,789.8,7,16,7203,54,+++++,+++,+++++,+++,24555,42,+++++,+++,+++++,+++
>
> ... from Vivek ...
> which is an issue with freebsd and bonnie++ since it doesn't know
> that freebsd can use large files natively (ie, no large file hacks
> necessary). the freebsd port of bonnie takes care of this, if you
> use that instead of compiling your own.
> ...
>
> Unfortunately I had to download and build by hand, since only bonnie++
> 1.9x was available in the FreeBSD 6.1 ports when I checked.
>
> One other question - would the following also be mostly a test of RAM?
> I wouldn't think so, since it should force it to sync to disk...
>
> time bash -c "(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k && sync)"
>
> Oh, and while I'm thinking about it, I believe Postgres uses 8k data
> pages, correct? On the RAID, I'm using 128k stripes. I know there have
> been posts on this before, but is there any way to tell Postgres to
> use this in an effective way?
>
> Thanks,
>
> Bucky
>
> -----Original Message-----
> From: pgsql-performance-owner@xxxxxxxxxxxxxx
> [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Vivek Khera
> Sent: Tuesday, August 15, 2006 3:18 PM
> To: Pgsql-Performance ((E-mail))
> Subject: Re: [PERFORM] Dell PowerEdge 2950 performance
>
> On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:
>
>> I don't know why I missed this the first time - you need to let
>> bonnie++ pick the file size - it needs to be 2x memory or the results
>> you get will not be accurate.
>
> which is an issue with freebsd and bonnie++ since it doesn't know
> that freebsd can use large files natively (ie, no large file hacks
> necessary). the freebsd port of bonnie takes care of this, if you
> use that instead of compiling your own.
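
P.S. On the readahead comment - on Linux the knob I'd reach for is the
block device readahead, something like the following (device name is just
an example, substitute your array):

  blockdev --getra /dev/sda          # current readahead, in 512-byte sectors
  blockdev --setra 16384 /dev/sda    # bump it to 8MB and re-run the read test

On FreeBSD the closest equivalent I'm aware of is the vfs.read_max sysctl,
which caps cluster read-ahead - I haven't benchmarked it on this hardware,
but it's cheap to experiment with:

  sysctl vfs.read_max                # show the current read-ahead limit
  sysctl vfs.read_max=32             # raise it (as root), then re-run bonnie++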
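
P.P.S. On the dd question - 125000 x 8k is only about 1GB, which fits
easily in this box's 8GB of RAM, so most of those writes land in the OS
cache first. The trailing sync does flush everything to disk before the
timer stops, so it isn't purely a RAM test, but the same 2x-memory rule
applies if you want caching out of the picture entirely, e.g.:

  # ~16GB at 8k blocks - twice the 8GB of physical RAM in this machine
  time bash -c "(dd if=/dev/zero of=/data/bigfile count=2000000 bs=8k && sync)"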
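
P.P.P.S. Yes, Postgres uses 8k pages by default - that's the compile-time
BLCKSZ constant, not a runtime setting, so with 128k stripes you get 16
pages per stripe. As far as I know there's no setting to tell Postgres
about the stripe size directly. You can confirm the page size of an
existing cluster with pg_controldata (data directory path is an example):

  pg_controldata /usr/local/pgsql/data | grep 'block size'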