rbd loaded 100%

Hi!
I have a Ceph cluster (version 0.80.7, 10Gb links) with one pool backed by 5 OSDs. From another server I write data to this pool through an iscsi-target that exports the RBD device rbd3. The network throughput is only about 150 Mbit/s. In this situation iostat shows the rbd3 device at 100% utilization, while the disks backing the 5 OSDs (sdc, sdd, sde, sdf, sdg) are each only around 20% busy. Does anyone know why this happens, and which utility I can run to diagnose it?

iostat -x 1

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.80    0.00    1.46    0.71    0.00   96.03

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     9.00    0.00    6.00     0.00    68.00    22.67     0.00    0.67    0.00    0.67   0.67   0.40
sdc               0.00     2.00    0.00   33.00     0.00  7756.00   470.06     2.76   83.76    0.00   83.76   5.45  18.00
sdd               0.00     0.00    0.00   59.00     0.00  9236.00   313.08     0.57    9.69    0.00    9.69   6.58  38.80
sde               0.00     0.00    0.00   29.00     0.00  5112.00   352.55     0.43   13.93    0.00   13.93   7.03  20.40
sdf               0.00     0.00    0.00   28.00     0.00  4612.00   329.43     0.26    9.14    0.00    9.14   6.57  18.40
sdg               0.00     0.00    0.00   24.00     0.00  4032.00   336.00     0.22    8.67    0.00    8.67   6.67  16.00
rbd0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
rbd1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
rbd2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
rbd3              0.00     0.00    0.00  318.00     0.00 20045.00   126.07     7.28   28.29    0.00   28.29   3.13  99.60
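(For context, a quick arithmetic sanity check of the iostat sample above; this is my own sketch in Python, with the numbers copied verbatim from the output and kB taken as 1000 bytes for the network comparison.)

```python
# Sanity-check the iostat sample above (values copied verbatim).
rbd3_wkb = 20045.0                                   # wkB/s on rbd3
osd_wkb = [7756.0, 9236.0, 5112.0, 4612.0, 4032.0]   # sdc..sdg

# Client-side write rate in Mbit/s: kB/s * 8 bits, kB treated as 1000 bytes.
rbd3_mbit = rbd3_wkb * 8 / 1000

# Aggregate write rate across the five OSD disks, and the ratio to the
# client rate (extra writes from journaling/replication show up here).
osd_total_wkb = sum(osd_wkb)
amplification = osd_total_wkb / rbd3_wkb

print(f"rbd3 write rate: {rbd3_mbit:.0f} Mbit/s")
print(f"OSD disks total: {osd_total_wkb:.0f} kB/s ({amplification:.2f}x rbd3)")
```

So the rbd3 device is moving roughly 160 Mbit/s, consistent with the ~150 Mbit/s seen on the network, and the OSD disks together write about 1.5x that; bear in mind this is a single one-second iostat sample, so the ratio is noisy.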
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
