Re: Disk IO issues

On Wed, 31 Dec 2008, Mike McGrath wrote:

> Let's pool some knowledge together, because at this point I'm missing
> something.
>
> I've been doing all measurements with sar, since bonnie etc. cause builds
> to time out.
>
> Problem: We're seeing slower-than-normal disk IO.  At least I think we
> are.  This is a PERC5/E and MD1000 array.
>
> When I try to do a normal copy "cp -adv /mnt/koji/packages /tmp/" I get
> around 4-6MBytes/s
>
> When I do a cp of a large file "cp /mnt/koji/out /tmp/" I get
> 30-40MBytes/s.
>
> When I "dd if=/dev/sde of=/dev/null" I get around 60-70 MBytes/s read.
>
> If I "cat /dev/sde > /dev/null" I get between 225-300MBytes/s read.
>
> The above tests are pretty consistent.  /dev/sde is a raid5 array,
> hardware raid.
>
> So my question here is, wtf?  I've been running a backup, which I would
> expect to max out either network utilization or disk IO.  I'm not seeing
> either.  Sar says the disks are 100% utilized, but I can cause major
> increases in actual disk reads and writes just by running additional
> commands.  Also, if the disks were really 100% utilized I'd expect to
> see a lot more iowait; instead, iowait on the box is only 0.06% today.
>
> So, long story short, we're seeing much better performance when just
> reading or writing lots of data (though dd is many times slower than cat).
> But with our real-world traffic, we're just seeing crappy, crappy IO.
>
> Thoughts, theories or opinions?  Some of the sysadmin noc guys have access
> to run diagnostic commands, if you want more info about a setting, let me
> know.
>
> I should also mention there's a lot going on with this box: it's hardware
> RAID plus LVM, and I've got Xen running on it (though the tests above
> were not run in a Xen guest).
>
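
One likely explanation for the dd-vs-cat gap (my guess, not confirmed in
the thread): dd defaults to 512-byte blocks, so it issues one read(2) per
512 bytes, while coreutils cat reads in much larger chunks. A quick sketch
to test the hypothesis on a scratch file (paths and sizes are illustrative):

```shell
# Hypothesis: dd's default 512-byte block size, not the array, explains
# why dd reads so much slower than cat.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=64 2>/dev/null  # 64 MB test file
time dd if="$scratch" of=/dev/null bs=512 2>/dev/null     # many tiny reads
time dd if="$scratch" of=/dev/null bs=1M  2>/dev/null     # few large reads
rm -f "$scratch"
```

If block size is the culprit, re-running the array test as
"dd if=/dev/sde of=/dev/null bs=1M count=1024" should land much closer to
the cat numbers. Running "iostat -x 1" alongside the tests would also show
the average request size and per-request latency (await) directly, which
may explain the 100%-utilized-but-low-iowait readings too.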

Also for the curious:

dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1342177280
Block count:              2684354560
Reserved block count:     134217728
Free blocks:              1407579323
Free inodes:              1336866363
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      384
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Thu Jan 17 14:52:03 2008
Last mount time:          Fri Dec  5 18:51:44 2008
Last write time:          Fri Dec  5 18:51:44 2008
Mount count:              17
Maximum mount count:      24
Last checked:             Sat May 24 03:14:41 2008
Check interval:           15552000 (6 months)
Next check after:         Thu Nov 20 03:14:41 2008
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      1b6393b1-472c-4005-ae87-9603eea9f45b
Journal backup:           inode blocks
Journal size:             128M
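
For scale, the dumpe2fs figures above work out as follows (a quick
back-of-the-envelope sketch; the TiB approximations are mine):

```shell
# Sizing from the dumpe2fs output: 4096-byte blocks,
# 2684354560 total blocks, 1407579323 free blocks.
block_size=4096
total_blocks=2684354560
free_blocks=1407579323
total_bytes=$((total_blocks * block_size))
free_bytes=$((free_blocks * block_size))
echo "total: $total_bytes bytes (10 TiB exactly)"
echo "free:  $free_bytes bytes (~5.2 TiB)"
```

So the filesystem is a 10 TiB ext3 volume that is roughly half full, which
is consistent with the backup of /mnt/koji taking a long time even at the
higher sequential-read rates.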

_______________________________________________
Fedora-infrastructure-list mailing list
Fedora-infrastructure-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
