Re: High IOWAIT times, low iops? Can't figure out what the bottleneck is...

Here is the table structure for the table with all the inserts

id (PK)       integer
dateinserted  timestamp without time zone
dateexamined  timestamp without time zone
lockedby      integer
done          boolean
failed        boolean
invalid       boolean


>select * from table limit 1;

   id    | done | failed |    dateinserted     | dateexamined | lockedby | invalid
---------+------+--------+---------------------+--------------+----------+---------
 3130902 | f    | f      | 1900-01-01 00:00:00 |              |        0 | f

There is a clustered index on dateinserted,
+ a non-clustered index on id,
+ a non-clustered index on (done, failed, invalid),
+ a non-clustered index on lockedby.

There are no triggers
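
For reference, here is a DDL sketch matching the description above (the real table and index names weren't shown, so "queue" and the index names are placeholders):

CREATE TABLE queue (
    id            integer PRIMARY KEY,  -- the PK provides the index on id
    dateinserted  timestamp without time zone,
    dateexamined  timestamp without time zone,
    lockedby      integer,
    done          boolean,
    failed        boolean,
    invalid       boolean
);

CREATE INDEX queue_dateinserted_idx ON queue (dateinserted);
CREATE INDEX queue_flags_idx        ON queue (done, failed, invalid);
CREATE INDEX queue_lockedby_idx     ON queue (lockedby);

-- Note that "clustered" in PostgreSQL is a one-time physical reorder,
-- not a maintained index property; it must be re-run to stay clustered:
CLUSTER queue_dateinserted_idx ON queue;  -- pre-8.3 syntax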



On 7/3/07, Evan Reiser <evan.reiser@xxxxxxxxx> wrote:
I was wondering if you guys have some suggested settings for our server; I think we are not hardware limited, but the configuration is set up incorrectly. For some reason our database seems to have trouble handling 5-10+ inserts per second, which seems like a pretty trivial load for this hardware, and we're seeing very high %iowait. This is pretty typical output from #iostat -m 5:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.97    0.00    0.82   89.69    0.00    8.52

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              85.40         0.18         0.72          0          3
sdb               0.00         0.00         0.00          0          0
sdc              30.43         0.00         0.76          0          3
sdd              93.91         0.71         0.12          3          0




sda = 2x320GB 7200rpm in RAID 1  (operating system)
sdc = 2x150GB 10krpm in RAID 1   (transaction log is on this array)
sdd = 6x150GB 10krpm in RAID 10  (database is on this array)

We're using ReiserFS as the file system on each of the arrays.
The OS is Debian Etch x64.

RAID controller = 3ware 9650, 12-port, 256MB cache

8GB RAM, Core 2 quad-core 2.4GHz, 8MB L2 cache

It would seem like the I/O subsystem is the limiting factor, but I feel like we should be nowhere near its limits: you can see from the example that it's writing < 2MB/s to the array. If I try to copy a big file (1-2GB) over to the array, the array hits 300MB/s sustained, and it can burst-write files < 256MB at 800MB/s.
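
For concreteness, here is a sketch of the insert pattern (the table name "queue" is a placeholder matching the schema above, not the real name). With fsync = on, each autocommitted INSERT waits for its own synchronous WAL write, so the per-row pattern is bounded by disk sync latency rather than bandwidth:

-- Under autocommit, every statement is its own transaction, and each
-- commit forces a synchronous flush of the WAL before returning:
INSERT INTO queue (dateinserted, lockedby, done, failed, invalid)
VALUES (now(), 0, false, false, false);

-- Batched into one transaction, many rows share a single WAL flush:
BEGIN;
INSERT INTO queue (dateinserted, lockedby, done, failed, invalid)
VALUES (now(), 0, false, false, false);
-- ... more rows ...
COMMIT;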

Here are some of our settings:


shared_buffers = 1024MB               
temp_buffers = 32MB                  
max_prepared_transactions = 50       
work_mem = 64MB                      
maintenance_work_mem = 256MB         
max_stack_depth = 7MB                  
max_fsm_pages = 512000         

vacuum_cost_delay = 20ms                      

fsync = on       
wal_buffers = 1MB 
checkpoint_segments = 32

effective_cache_size = 4128MB
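
To double-check what the running server is actually using, the values can be queried from pg_settings, for example:

SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_buffers', 'wal_buffers',
               'checkpoint_segments', 'fsync', 'work_mem');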

If you guys have any suggestions, it would be greatly appreciated.

-Evan Reiser

