RE: Multipath I/O stats

-----Original Message-----
From: Yong Huang [mailto:yong321@xxxxxxxxx] 
Sent: Friday, May 21, 2010 12:19 AM
To: Allen, Jack
Cc: redhat-list@xxxxxxxxxx
Subject: RE: Multipath I/O stats

> > With multipath set up to access a SAN with some number of LUNs
> > and for this question 2 paths set for round robin, how can the
> > I/O stats be seen/gathered to see the throughput on each path
> > and how balanced the I/O is?
> 
> I think we can do this. multipath -l tells you what disks are combined
> to form a mapper path. Then you can use iostat to check I/O stats of
> each disk along with each mapper. It won't be hard to write a shell
> script to re-print the lines of iostat nicely, grouping the lines of
> the disks under their respective mapper path.
> 
> Yong Huang
> 
> ===========================
> Thanks for the reply.
> 
> This is the output of just one of the mpaths that I monitored for a
> while.
> 
> mpath13 (360060e8005491000000049100000703c) dm-0 HP,OPEN-V
> [size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
> \_ 2:0:0:2  sdaa 65:160 [active][undef]
> \_ 2:0:1:2  sdam 66:96  [active][undef]
> \_ 1:0:0:2  sdc  8:32   [active][undef]
> \_ 1:0:1:2  sdo  8:224  [active][undef]
> 
> 
> Below is the command I used and the results. I know this is a small
> sampling, and I have eliminated the ones that had 0 I/O to save space
> here. But it appears the I/O is not really being done round-robin as I
> think it should be. You will notice sdam and sdb are the only ones
> that do any I/O. Now maybe this is because of some preferred path and
> controller relationship; I don't know. Any help understanding this
> would be appreciated.
> 
> iostat -d -p sdaa -p sdam -p sdc -p sdb -p dm-0 2 20 > /tmp/zzxx
> 
> Linux 2.6.18-164.el5PAE (h0009)         05/20/2010
> 
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sdaa              1.53        49.43        15.03   60932848   18523880
> sdam              1.53        49.35        15.10   60833016   18616608
> sdc               1.53        49.41        15.04   60905568   18542936
> sdb               1.38        57.21         3.68   70522704    4533144
> dm-0             32.23       197.56        60.24  243542080   74259264
> 
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sdaa              0.00         0.00         0.00          0          0
> sdam              4.50        72.00         0.00        144          0
> sdc               0.00         0.00         0.00          0          0
> sdb               4.50        72.00         0.00        144          0
> dm-0              9.00        72.00         0.00        144          0
> 
> ...

Jack,

You can look at the first iteration of your iostat output, which shows
the cumulative stats since bootup (the later iterations are each a
2-second sample). If your iostat command had the argument -p sdo instead
of -p sdb (it must be a typo, compared with the output of your multipath
command), you would see that all four paths have almost perfectly equal
I/O stats, because all your paths are active. The numbers below the
cumulative stats indicate your currently selected paths are sdam and
(likely) sdo (not shown due to the typo). After rr_min_io I/Os, I think,
they'll switch to the other two paths.
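
In case it helps, here is a rough sketch of the grouping script I
mentioned earlier. It is untested and assumes the multipath -l and
iostat output formats shown above (RHEL 5); the temp file name is just
an example:

#!/bin/sh
# Sketch: print each iostat device line grouped under its multipath map.
# Takes one since-boot iostat snapshot; loop it yourself for live stats.

iostat -d > /tmp/iostat.out

multipath -l | awk '
    /^[a-z]/ { map = $1; dm = $3 }     # map header: mpathN (wwid) dm-N ...
    /[0-9]+:[0-9]+:[0-9]+:[0-9]+ +sd/ { print map, dm, $3 }  # path: H:B:T:L sdX
' | while read map dm dev; do
    if [ "$map" != "$prev" ]; then     # new map: print its header and dm line
        echo "== $map ($dm) =="
        grep "^$dm " /tmp/iostat.out
        prev=$map
    fi
    grep "^$dev " /tmp/iostat.out      # path device line under its map
done

rm -f /tmp/iostat.out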

Your multipath -l output seems to have the 4 path lines missing their
leading spaces; they should be indented below the priority-group line.

Could you show me the first part of /etc/multipath.conf, i.e. the
uncommented lines before the actual multipaths section?

BTW, I used a wrong word in my last message. Instead of "disk", I really
should have said "device".

Yong Huang
==========

Thanks for the follow-up.

You are correct, I entered the wrong device name. I monitored again with
the correct device names, over a longer period of time, and it did
rotate through all the devices. But it seemed to take a few minutes to
rotate from one path to the next. So I added rr_min_io 2 in the defaults
section and ran multipathd -k, then reconfigure, but it did not have any
effect. I am reading the multipath.conf man page now to see if I can
find out anything.
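
For reference, the corrected command and the reload sequence I used were
roughly:

iostat -d -p sdaa -p sdam -p sdc -p sdo -p dm-0 2 20

# multipathd -k
multipathd> reconfigure
multipathd> quit

(multipathd -k drops into the interactive console, and reconfigure
rereads /etc/multipath.conf.)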

You are correct, there are 4 paths; in my original question I just used
2 as an example. Then you asked questions and I provided more
information. The lack of leading spaces in the output of multipath -l is
probably due to my copying and pasting.

multipath.conf
VVVVVVVVVVVVVV
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^hd[a-z]"
        devnode "^vg*"
        }

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names     yes
        polling_interval        10
        path_checker            readsector0
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        failback                5
        no_path_retry           5
        rr_min_io               2
        bindings_file           "/etc/multipath.bindings"
}
^^^^^^^^^^^^^^^^^^

Everything else is commented out. It is using the built-in multipath
rule/configuration:
        device {
                vendor (HITACHI|HP)
                product OPEN-.*
                path_checker tur
                failback immediate
                no_path_retry 12
                rr_min_io 1000
        }

While I was copying and pasting, I noticed the rr_min_io 1000, which is
probably why it is taking a while to rotate through the paths.
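
If I want my smaller value to take effect, I will probably try
overriding the built-in entry with something like this in
/etc/multipath.conf (not tested yet; the vendor/product regexes are
copied from the built-in entry above):

devices {
        device {
                vendor          "(HITACHI|HP)"
                product         "OPEN-.*"
                rr_min_io       2
        }
}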

-----
Jack Allen

