Re: extract disk usage stats from running ceph cluster

>>> And it seems smartctl on our Seagate ST4000NM0034 drives does not give
>>> us data on total bytes written or read.

If it's a SAS device, it's not always obvious where to find this information.

You can use Seagate's openSeaChest toolset.

For any device (SAS or SATA, HDD or SSD), the --deviceInfo option will give
you some of the information you are looking for, e.g.:

sudo ./openSeaChest_Info -d /dev/sg1 --deviceInfo | grep Total
Total Bytes Read (TB): 82.46
Total Bytes Written (TB): 311.56
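
If you want the same numbers for every drive in a box, a quick loop over the
SCSI generic devices works. A minimal sketch, assuming openSeaChest_Info sits
in the current directory and your disks show up as /dev/sg*:

# Print lifetime read/write totals for each SCSI generic device.
# Adjust the path to openSeaChest_Info and the /dev/sg* glob for your nodes.
for dev in /dev/sg*; do
    echo "== $dev =="
    sudo ./openSeaChest_Info -d "$dev" --deviceInfo | grep -E 'Total Bytes (Read|Written)'
done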

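Also, depending on the drive firmware, smartctl may already report this for
SAS drives; it just isn't exposed as a SMART attribute. The SCSI "Error
counter log" printed by smartctl -x (or -a) has a "Gigabytes processed
[10^9 bytes]" column with separate read and write rows, e.g.:

sudo smartctl -x /dev/sg1 | grep -A 6 'Error counter log'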

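For the Ceph-side view per OSD (bytes read and written since each OSD daemon
last started, which also gives you the read/write ratio), the admin socket
perf counters are one option. A rough sketch to run on each OSD node, assuming
jq is installed and that your release exposes the op_r_out_bytes /
op_w_in_bytes counters; note the numbers reset whenever an OSD restarts:

# Dump cumulative per-OSD read/write byte counters from the admin sockets.
for sock in /var/run/ceph/ceph-osd.*.asok; do
    name=$(basename "$sock" .asok)
    # perf dump is JSON; pull the byte counters out of the "osd" section
    sudo ceph daemon "$sock" perf dump | \
        jq -r --arg name "$name" '.osd | "\($name) read_bytes=\(.op_r_out_bytes) write_bytes=\(.op_w_in_bytes)"'
done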

On Tue, Feb 11, 2020 at 3:10 AM lists <lists@xxxxxxxxxxxxx> wrote:
>
> Hi Joe and Mehmet!
>
> Thanks for your responses!
>
> The requested outputs are at the end of this message.
>
> But to make my question clearer:
>
> What we are actually after is not the CURRENT usage of our OSDs, but
> stats on total GB written in the cluster, per OSD, and the read/write ratio.
>
> With those numbers, we would be able to identify suitable replacement
> SSDs for our current HDDs and select specifically for OUR typical use
> (taking into account endurance, speed, price, etc.).
>
> And it seems smartctl on our Seagate ST4000NM0034 drives does not give us
> data on total bytes written or read. (...or are we simply not looking in
> the right place?)
>
> Requested outputs below:
>
> > root@node1:~# ceph osd df tree
> > ID CLASS WEIGHT   REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS TYPE NAME
> > -1       87.35376        - 87.3TiB 49.1TiB 38.2TiB 56.22 1.00   - root default
> > -2       29.11688        - 29.1TiB 16.4TiB 12.7TiB 56.23 1.00   -     host node1
> >  0   hdd  3.64000  1.00000 3.64TiB 2.01TiB 1.62TiB 55.34 0.98 137         osd.0
> >  1   hdd  3.64000  1.00000 3.64TiB 2.09TiB 1.54TiB 57.56 1.02 141         osd.1
> >  2   hdd  3.63689  1.00000 3.64TiB 1.92TiB 1.72TiB 52.79 0.94 128         osd.2
> >  3   hdd  3.64000  1.00000 3.64TiB 2.07TiB 1.57TiB 56.90 1.01 143         osd.3
> > 12   hdd  3.64000  1.00000 3.64TiB 2.15TiB 1.48TiB 59.18 1.05 138         osd.12
> > 13   hdd  3.64000  1.00000 3.64TiB 1.99TiB 1.64TiB 54.80 0.97 131         osd.13
> > 14   hdd  3.64000  1.00000 3.64TiB 1.93TiB 1.70TiB 53.13 0.94 127         osd.14
> > 15   hdd  3.64000  1.00000 3.64TiB 2.19TiB 1.45TiB 60.10 1.07 143         osd.15
> > -3       29.12000        - 29.1TiB 16.4TiB 12.7TiB 56.22 1.00   -     host node2
> >  4   hdd  3.64000  1.00000 3.64TiB 2.11TiB 1.53TiB 57.97 1.03 142         osd.4
> >  5   hdd  3.64000  1.00000 3.64TiB 1.97TiB 1.67TiB 54.11 0.96 134         osd.5
> >  6   hdd  3.64000  1.00000 3.64TiB 2.12TiB 1.51TiB 58.40 1.04 142         osd.6
> >  7   hdd  3.64000  1.00000 3.64TiB 1.97TiB 1.66TiB 54.28 0.97 128         osd.7
> > 16   hdd  3.64000  1.00000 3.64TiB 2.00TiB 1.64TiB 54.90 0.98 133         osd.16
> > 17   hdd  3.64000  1.00000 3.64TiB 2.33TiB 1.30TiB 64.14 1.14 153         osd.17
> > 18   hdd  3.64000  1.00000 3.64TiB 1.97TiB 1.67TiB 54.07 0.96 132         osd.18
> > 19   hdd  3.64000  1.00000 3.64TiB 1.89TiB 1.75TiB 51.93 0.92 124         osd.19
> > -4       29.11688        - 29.1TiB 16.4TiB 12.7TiB 56.22 1.00   -     host node3
> >  8   hdd  3.64000  1.00000 3.64TiB 1.79TiB 1.85TiB 49.24 0.88 123         osd.8
> >  9   hdd  3.64000  1.00000 3.64TiB 2.17TiB 1.47TiB 59.72 1.06 144         osd.9
> > 10   hdd  3.64000  1.00000 3.64TiB 2.40TiB 1.24TiB 65.88 1.17 157         osd.10
> > 11   hdd  3.64000  1.00000 3.64TiB 2.06TiB 1.58TiB 56.64 1.01 133         osd.11
> > 20   hdd  3.64000  1.00000 3.64TiB 2.19TiB 1.45TiB 60.23 1.07 148         osd.20
> > 21   hdd  3.64000  1.00000 3.64TiB 1.74TiB 1.90TiB 47.80 0.85 115         osd.21
> > 22   hdd  3.64000  1.00000 3.64TiB 2.05TiB 1.59TiB 56.27 1.00 138         osd.22
> > 23   hdd  3.63689  1.00000 3.64TiB 1.96TiB 1.67TiB 54.01 0.96 130         osd.23
> >                      TOTAL 87.3TiB 49.1TiB 38.2TiB 56.22
> > MIN/MAX VAR: 0.85/1.17  STDDEV: 4.08
> > root@node1:~# ceph osd status
> > +----+------+-------+-------+--------+---------+--------+---------+-----------+
> > | id | host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
> > +----+------+-------+-------+--------+---------+--------+---------+-----------+
> > | 0  | node1  | 2061G | 1663G |   38   |  5168k  |    3   |  1491k  | exists,up |
> > | 1  | node1  | 2143G | 1580G |    4   |  1092k  |    9   |  2243k  | exists,up |
> > | 2  | node1  | 1965G | 1758G |   20   |  3643k  |    5   |  1758k  | exists,up |
> > | 3  | node1  | 2119G | 1605G |   17   |  99.5k  |    4   |  3904k  | exists,up |
> > | 4  | node2  | 2158G | 1565G |   12   |   527k  |    1   |  2632k  | exists,up |
> > | 5  | node2  | 2014G | 1709G |   15   |   239k  |    0   |   889k  | exists,up |
> > | 6  | node2  | 2174G | 1549G |   11   |  1677k  |    5   |  1931k  | exists,up |
> > | 7  | node2  | 2021G | 1702G |    2   |   597k  |    0   |  1638k  | exists,up |
> > | 8  | node3  | 1833G | 1890G |    4   |   564k  |    4   |  5595k  | exists,up |
> > | 9  | node3  | 2223G | 1500G |    6   |  1124k  |   10   |  4864k  | exists,up |
> > | 10 | node3  | 2453G | 1270G |    8   |  1257k  |    3   |  1447k  | exists,up |
> > | 11 | node3  | 2109G | 1614G |   14   |  2889k  |    3   |  1449k  | exists,up |
> > | 12 | node1  | 2204G | 1520G |   17   |  1596k  |    4   |  1806k  | exists,up |
> > | 13 | node1  | 2040G | 1683G |   15   |  2526k  |    0   |   819k  | exists,up |
> > | 14 | node1  | 1978G | 1745G |   11   |  1713k  |    8   |  3489k  | exists,up |
> > | 15 | node1  | 2238G | 1485G |   25   |  5151k  |    5   |  2715k  | exists,up |
> > | 16 | node2  | 2044G | 1679G |    2   |  43.3k  |    1   |  3371k  | exists,up |
> > | 17 | node2  | 2388G | 1335G |   14   |  1736k  |    9   |  5315k  | exists,up |
> > | 18 | node2  | 2013G | 1710G |    8   |  1907k  |    2   |  2004k  | exists,up |
> > | 19 | node2  | 1934G | 1790G |   15   |  2115k  |    4   |  3248k  | exists,up |
> > | 20 | node3  | 2243G | 1481G |   15   |  3292k  |    1   |  1763k  | exists,up |
> > | 21 | node3  | 1780G | 1944G |    8   |  1636k  |    0   |  86.4k  | exists,up |
> > | 22 | node3  | 2095G | 1628G |   23   |  5012k  |    4   |  1654k  | exists,up |
> > | 23 | node3  | 2011G | 1712G |    9   |  1662k  |    1   |  2457k  | exists,up |
> > +----+------+-------+-------+--------+---------+--------+---------+-----------+
>
> Thanks!
>
> MJ
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


