Re: extract disk usage stats from running ceph cluster

Hi,

I would like to understand why the OSD HDDs on node2 of my three identical ceph hosts claim to have processed 10 times more reads/writes than the other two nodes.
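
For reference, per-disk read/write counters like the ones I am comparing can be read straight from the kernel, for example (sda is just a placeholder device name here):

root@node2:~# cat /sys/block/sda/stat
root@node2:~# iostat -x sda

The first gives the raw read/write I/O counts since boot, the second shows the iostat view of the same counters.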

OSD weights are all similar, disk space usage is also similar, the disk sizes are the same, the reported hours of disk use are the same, etc. All data (QEMU VMs) is in one large 3/2 pool called "ceph-storage". This is on ceph version 12.2.10.
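
For completeness, the 3/2 replication setting of the pool can be double-checked like this:

root@node1:~# ceph osd pool get ceph-storage size
size: 3
root@node1:~# ceph osd pool get ceph-storage min_size
min_size: 2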

The spreadsheet with all the stats:
https://docs.google.com/spreadsheets/d/1n8aOC1tpPPMi2iALhxfHzSQTRmfIz6wCERlagNShVss/edit?usp=sharing

I hope the information is now both complete and readable; let me know if anything else is needed.

Curious to hear any insights.

The previously requested outputs are below.

root@node1:~# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0   hdd 3.64000  1.00000 3.64TiB 2.01TiB 1.62TiB 55.35 0.98 137
 1   hdd 3.64000  1.00000 3.64TiB 2.09TiB 1.54TiB 57.56 1.02 141
 2   hdd 3.63689  1.00000 3.64TiB 1.92TiB 1.72TiB 52.80 0.94 128
 3   hdd 3.64000  1.00000 3.64TiB 2.07TiB 1.57TiB 56.91 1.01 143
12   hdd 3.64000  1.00000 3.64TiB 2.15TiB 1.48TiB 59.19 1.05 138
13   hdd 3.64000  1.00000 3.64TiB 1.99TiB 1.64TiB 54.81 0.97 131
14   hdd 3.64000  1.00000 3.64TiB 1.93TiB 1.70TiB 53.14 0.94 127
15   hdd 3.64000  1.00000 3.64TiB 2.19TiB 1.45TiB 60.11 1.07 143
 4   hdd 3.64000  1.00000 3.64TiB 2.11TiB 1.53TiB 57.98 1.03 142
 5   hdd 3.64000  1.00000 3.64TiB 1.97TiB 1.67TiB 54.11 0.96 134
 6   hdd 3.64000  1.00000 3.64TiB 2.12TiB 1.51TiB 58.41 1.04 142
 7   hdd 3.64000  1.00000 3.64TiB 1.97TiB 1.66TiB 54.29 0.97 128
16   hdd 3.64000  1.00000 3.64TiB 2.00TiB 1.64TiB 54.90 0.98 133
17   hdd 3.64000  1.00000 3.64TiB 2.33TiB 1.30TiB 64.15 1.14 153
18   hdd 3.64000  1.00000 3.64TiB 1.97TiB 1.67TiB 54.08 0.96 132
19   hdd 3.64000  1.00000 3.64TiB 1.89TiB 1.75TiB 51.94 0.92 124
 8   hdd 3.64000  1.00000 3.64TiB 1.79TiB 1.85TiB 49.25 0.88 123
 9   hdd 3.64000  1.00000 3.64TiB 2.17TiB 1.46TiB 59.73 1.06 144
10   hdd 3.64000  1.00000 3.64TiB 2.40TiB 1.24TiB 65.89 1.17 157
11   hdd 3.64000  1.00000 3.64TiB 2.06TiB 1.58TiB 56.65 1.01 133
20   hdd 3.64000  1.00000 3.64TiB 2.19TiB 1.45TiB 60.24 1.07 148
21   hdd 3.64000  1.00000 3.64TiB 1.74TiB 1.90TiB 47.80 0.85 115
22   hdd 3.64000  1.00000 3.64TiB 2.05TiB 1.59TiB 56.28 1.00 138
23   hdd 3.63689  1.00000 3.64TiB 1.96TiB 1.67TiB 54.02 0.96 130
                    TOTAL 87.3TiB 49.1TiB 38.2TiB 56.23
MIN/MAX VAR: 0.85/1.17  STDDEV: 4.08

and

root@node1:~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       87.35376 root default
-2       29.11688     host node1
 0   hdd  3.64000         osd.0      up  1.00000 1.00000
 1   hdd  3.64000         osd.1      up  1.00000 1.00000
 2   hdd  3.63689         osd.2      up  1.00000 1.00000
 3   hdd  3.64000         osd.3      up  1.00000 1.00000
12   hdd  3.64000         osd.12     up  1.00000 1.00000
13   hdd  3.64000         osd.13     up  1.00000 1.00000
14   hdd  3.64000         osd.14     up  1.00000 1.00000
15   hdd  3.64000         osd.15     up  1.00000 1.00000
-3       29.12000     host node2
 4   hdd  3.64000         osd.4      up  1.00000 1.00000
 5   hdd  3.64000         osd.5      up  1.00000 1.00000
 6   hdd  3.64000         osd.6      up  1.00000 1.00000
 7   hdd  3.64000         osd.7      up  1.00000 1.00000
16   hdd  3.64000         osd.16     up  1.00000 1.00000
17   hdd  3.64000         osd.17     up  1.00000 1.00000
18   hdd  3.64000         osd.18     up  1.00000 1.00000
19   hdd  3.64000         osd.19     up  1.00000 1.00000
-4       29.11688     host node3
 8   hdd  3.64000         osd.8      up  1.00000 1.00000
 9   hdd  3.64000         osd.9      up  1.00000 1.00000
10   hdd  3.64000         osd.10     up  1.00000 1.00000
11   hdd  3.64000         osd.11     up  1.00000 1.00000
20   hdd  3.64000         osd.20     up  1.00000 1.00000
21   hdd  3.64000         osd.21     up  1.00000 1.00000
22   hdd  3.64000         osd.22     up  1.00000 1.00000
23   hdd  3.63689         osd.23     up  1.00000 1.00000

MJ
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
