Re: High IOWAIT times, low iops? Need Help with configuration

We've tried benchmarking the array. The data array can write at
800MB/s for files smaller than 256MB (the RAID write cache), after which
it can sustain 300MB/s, and it also seems to handle 600-700 IOPS when
benchmarking. It seems to work as expected outside of Postgres, so I
guess we can look at the drivers next. Let me know if you guys have any
other suggestions. Thanks for your help, -evan
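
PS - on looking at the drivers: a quick way to see exactly what we're
running would be to pull the controller and driver info from the kernel
log and compare it against 3ware's release notes. Rough sketch below;
the module name 3w-9xxx is from memory for the 9650 series and may
differ on your kernel, so treat it as an assumption:

    # 3ware controller/firmware lines from the boot log
    dmesg | grep -i 3ware

    # kernel driver version (module name assumed to be 3w-9xxx)
    modinfo 3w-9xxx | grep -i version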

On 6/28/07, Richard Huxton <dev@xxxxxxxxxxxx> wrote:
Evan Reiser wrote:
> I was wondering if you guys have some suggested settings for our server. I
> think we are not hardware limited but the configuration is set up
> incorrectly. For some reason our database seems to have trouble handling
> 10+ inserts per second, which seems to be a pretty trivial load for this
> hardware. We're seeing very high %iowait; this is a pretty typical output
> for # iostat -m 5:
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.41    0.00    0.41   96.28    0.00    2.90
>
> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> sda              90.63         0.08         0.56          0          2
> sdc               0.00         0.00         0.00          0          0
> sdd              94.09         0.19         1.74          0          8
>
>
> sda = 2x320GB 7200rpm in RAID1
> sdc = 2x150GB 10krpm in RAID1    (transaction log is on this array)
> sdd = 6x150GB 10krpm in RAID 10 (database is on this array)

OK, so no write activity on the transaction log, and hardly any reading
on sdd. Your disks are practically idle, and yet iowait is at 96% - very
strange.
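
One thing that might narrow it down: plain iostat hides per-request
latency. Extended output (a sketch; the 5-second interval is arbitrary)
would show whether a device is slow per request even though it's moving
very little data:

    iostat -x 5
    # watch the await column (average ms per request) and %util for
    # sda/sdc/sdd - high await with tiny MB/s suggests lots of small
    # synchronous writes rather than a bandwidth problem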

> raid controller = 3ware 9650 12port - 256MB cache
>
> 8GB RAM, core 2 duo - quad core
>
> It would seem like the IO subsystem is the limiting factor, but I feel
> like we should be nowhere near a wall - you can see from the example
> it's writing < 2MB/s to the array.
> Here's some of our settings
>
> shared_buffers = 256MB                  # min 128kB or max_connections*16kB
> temp_buffers = 32MB                     # min 800kB
> max_prepared_transactions = 50          # can be 0 or more
> work_mem = 32MB                         # min 64kB
> maintenance_work_mem = 32MB             # min 1MB
> max_stack_depth = 7MB                   # min 100kB
>
> max_fsm_pages = 512000          # min max_fsm_relations*16, 6 bytes each

Well, you might want to tweak these, but they're not going to completely
kill your io.

> fsync = off                             # turns forced synchronization on or off

You'll be turning this back on in production, I take it?

Hmm - some ideas:
1. Run a VACUUM FULL on your database(s) and see what happens with your
io then.
2. Test a block copy, something like this (but to a directory on sdd):
    dd if=/dev/zero of=/tmp/empty count=1000000
    That should show an upper limit for your write speed (there's also a
    synchronous variant sketched below).
3. Google around and check there aren't any issues with your raid
controller and kernel/driver versions.
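
A caveat on the dd test: with defaults it measures buffered, sequential
writes, whereas Postgres with fsync on issues small writes that must hit
disk at commit time. A rough variant for that case (the 8kB block size
and the target path are just assumptions - point it at the sdd mount):

    # buffered sequential writes, as above
    dd if=/dev/zero of=/data/ddtest bs=8k count=100000

    # sync each 8kB block before issuing the next - much closer to
    # commit-by-commit WAL behaviour
    dd if=/dev/zero of=/data/ddtest bs=8k count=10000 oflag=dsync

If the second figure collapses to a few hundred kB/s, the array (or its
write cache setting) is the thing to look at rather than Postgres.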

--
   Richard Huxton
   Archonet Ltd


