Hi Martin,

We've already got the collection in place, and in retrospect we do see errors on the sub-interface in question. We'll be adding alerting for this specific scenario, as it was missed by our more general alerting: the bonded interfaces themselves don't show the errors, only the underlying interfaces do. We only discovered the issue when looking at the SendQs across the cluster and noticing that many pointed at a specific host, at which point we found errors on a sub-interface of a bonded NIC. Thank you for the suggestion!

I am curious, though, how one might have pinpointed a troublesome host/OSD prior to this. Looking back at the detail we gathered while attempting to diagnose, we do see some ops taking longer in sub_op_committed, but not much else. We'd get an occasional slow-operation warning on an OSD, but those OSDs were spread across various Ceph nodes, not just the one with issues - I'm assuming due to EC. Nothing we looked at gave any real clarity on where the 'jam' was happening.

I'm wondering if there's a better way to see what, specifically, is "slow" on a cluster. Even the OSD perf output wasn't helpful, because all of that looked fine - the slowness was likely in EC write operations to OSDs on the one affected node. Is there some way to look at a cluster and see which hosts are problematic/leading to slowness in an EC-based setup?

Thanks,
David

On Fri, Feb 26, 2021 at 1:16 PM Martin Verges <martin.verges@xxxxxxxx> wrote:
>
> Hello,
>
> Within croit, we have network latency monitoring that would have
> shown you the packet loss. We therefore suggest installing something
> like smokeping on your infrastructure to monitor the quality of your
> network.
>
> Why does it affect your cluster?
>
> The network is the central component of a Ceph cluster. If it does
> not function stably and reliably, Ceph cannot work properly either.
> It is practically the backbone of the scale-out cluster and cannot be
> replaced by anything. Even a single lost packet leads to retransmits,
> increased latency, and thus reduced data throughput. This in turn has
> a greater impact during replication work, which is particularly heavy
> with EC, since with EC not only writes but also reads must be served
> by several OSDs.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges@xxxxxxxx
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
> On Fri, Feb 26, 2021 at 8:00 PM David Orman <ormandj@xxxxxxxxxxxx> wrote:
> >
> > We figured this out - it was a leg of an LACP-based interface that was
> > misbehaving. Once we dropped it, everything went back to normal. Does
> > anybody know a good way to get a sense of what might be slowing down a
> > cluster in this regard, with EC? We didn't see any indication of a
> > single host as the problem until digging into the socket statistics
> > and seeing high sendqs to that host.
> >
> > On Thu, Feb 25, 2021 at 7:33 PM David Orman <ormandj@xxxxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > We've got an interesting issue we're running into on Ceph 15.2.9.
> > > We're experiencing VERY slow performance from the cluster and
> > > extremely slow misplaced-object correction, with very little
> > > CPU/disk/network utilization (almost idle) across all nodes in the
> > > cluster.
> > >
> > > We have 7 servers in this cluster, each with 24 rotational OSDs and
> > > two NVMes, each NVMe carrying the DB/WAL files for 12 of those OSDs.
The OSDs are all equal weighted, so > > > the tree is pretty straightforward: > > > > > > root@ceph01:~# ceph osd tree > > > > > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492 > > > > > > Inferring config > > > /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config > > > > > > Using recent ceph image > > > docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5 > > > > > > ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF > > > > > > -1 2149.39062 root default > > > > > > -2 2149.39062 rack rack1 > > > > > > -5 307.05579 host ceph01 > > > > > > 0 hdd 12.79399 osd.0 up 1.00000 1.00000 > > > > > > 1 hdd 12.79399 osd.1 up 1.00000 1.00000 > > > > > > 2 hdd 12.79399 osd.2 up 1.00000 1.00000 > > > > > > 3 hdd 12.79399 osd.3 up 1.00000 1.00000 > > > > > > 4 hdd 12.79399 osd.4 up 1.00000 1.00000 > > > > > > 5 hdd 12.79399 osd.5 up 1.00000 1.00000 > > > > > > 6 hdd 12.79399 osd.6 up 1.00000 1.00000 > > > > > > 7 hdd 12.79399 osd.7 up 1.00000 1.00000 > > > > > > 8 hdd 12.79399 osd.8 up 1.00000 1.00000 > > > > > > 9 hdd 12.79399 osd.9 up 1.00000 1.00000 > > > > > > 10 hdd 12.79399 osd.10 up 1.00000 1.00000 > > > > > > 11 hdd 12.79399 osd.11 up 1.00000 1.00000 > > > > > > 12 hdd 12.79399 osd.12 up 1.00000 1.00000 > > > > > > 13 hdd 12.79399 osd.13 up 1.00000 1.00000 > > > > > > 14 hdd 12.79399 osd.14 up 1.00000 1.00000 > > > > > > 15 hdd 12.79399 osd.15 up 1.00000 1.00000 > > > > > > 16 hdd 12.79399 osd.16 up 1.00000 1.00000 > > > > > > 17 hdd 12.79399 osd.17 up 1.00000 1.00000 > > > > > > 18 hdd 12.79399 osd.18 up 1.00000 1.00000 > > > > > > 19 hdd 12.79399 osd.19 up 1.00000 1.00000 > > > > > > 20 hdd 12.79399 osd.20 up 1.00000 1.00000 > > > > > > 21 hdd 12.79399 osd.21 up 1.00000 1.00000 > > > > > > 22 hdd 12.79399 osd.22 up 1.00000 1.00000 > > > > > > 23 hdd 12.79399 osd.23 up 1.00000 1.00000 > > > > > > -7 307.05579 host ceph02 > > > > > > 24 hdd 12.79399 osd.24 up 1.00000 1.00000 > > > > > > 25 hdd 12.79399 osd.25 
up 1.00000 1.00000 > > > > > > 26 hdd 12.79399 osd.26 up 1.00000 1.00000 > > > > > > 27 hdd 12.79399 osd.27 up 1.00000 1.00000 > > > > > > 28 hdd 12.79399 osd.28 up 1.00000 1.00000 > > > > > > 29 hdd 12.79399 osd.29 up 1.00000 1.00000 > > > > > > 30 hdd 12.79399 osd.30 up 1.00000 1.00000 > > > > > > 31 hdd 12.79399 osd.31 up 1.00000 1.00000 > > > > > > 32 hdd 12.79399 osd.32 up 1.00000 1.00000 > > > > > > 33 hdd 12.79399 osd.33 up 1.00000 1.00000 > > > > > > 34 hdd 12.79399 osd.34 up 1.00000 1.00000 > > > > > > 35 hdd 12.79399 osd.35 up 1.00000 1.00000 > > > > > > 36 hdd 12.79399 osd.36 up 1.00000 1.00000 > > > > > > 37 hdd 12.79399 osd.37 up 1.00000 1.00000 > > > > > > 38 hdd 12.79399 osd.38 up 1.00000 1.00000 > > > > > > 39 hdd 12.79399 osd.39 up 1.00000 1.00000 > > > > > > 40 hdd 12.79399 osd.40 up 1.00000 1.00000 > > > > > > 41 hdd 12.79399 osd.41 up 1.00000 1.00000 > > > > > > 42 hdd 12.79399 osd.42 up 1.00000 1.00000 > > > > > > 43 hdd 12.79399 osd.43 up 1.00000 1.00000 > > > > > > 44 hdd 12.79399 osd.44 up 1.00000 1.00000 > > > > > > 45 hdd 12.79399 osd.45 up 1.00000 1.00000 > > > > > > 46 hdd 12.79399 osd.46 up 1.00000 1.00000 > > > > > > 47 hdd 12.79399 osd.47 up 1.00000 1.00000 > > > > > > -9 307.05579 host ceph03 > > > > > > 48 hdd 12.79399 osd.48 up 1.00000 1.00000 > > > > > > 49 hdd 12.79399 osd.49 up 1.00000 1.00000 > > > > > > 50 hdd 12.79399 osd.50 up 1.00000 1.00000 > > > > > > 51 hdd 12.79399 osd.51 up 1.00000 1.00000 > > > > > > 52 hdd 12.79399 osd.52 up 1.00000 1.00000 > > > > > > 53 hdd 12.79399 osd.53 up 1.00000 1.00000 > > > > > > 54 hdd 12.79399 osd.54 up 1.00000 1.00000 > > > > > > 55 hdd 12.79399 osd.55 up 1.00000 1.00000 > > > > > > 56 hdd 12.79399 osd.56 up 1.00000 1.00000 > > > > > > 57 hdd 12.79399 osd.57 up 1.00000 1.00000 > > > > > > 58 hdd 12.79399 osd.58 up 1.00000 1.00000 > > > > > > 59 hdd 12.79399 osd.59 up 1.00000 1.00000 > > > > > > 60 hdd 12.79399 osd.60 up 1.00000 1.00000 > > > > > > 61 hdd 12.79399 osd.61 up 1.00000 1.00000 
> > > > > > 62 hdd 12.79399 osd.62 up 1.00000 1.00000 > > > > > > 63 hdd 12.79399 osd.63 up 1.00000 1.00000 > > > > > > 64 hdd 12.79399 osd.64 up 1.00000 1.00000 > > > > > > 65 hdd 12.79399 osd.65 up 1.00000 1.00000 > > > > > > 66 hdd 12.79399 osd.66 up 1.00000 1.00000 > > > > > > 67 hdd 12.79399 osd.67 up 1.00000 1.00000 > > > > > > 68 hdd 12.79399 osd.68 up 1.00000 1.00000 > > > > > > 69 hdd 12.79399 osd.69 up 1.00000 1.00000 > > > > > > 70 hdd 12.79399 osd.70 up 1.00000 1.00000 > > > > > > 71 hdd 12.79399 osd.71 up 1.00000 1.00000 > > > > > > -11 307.05579 host ceph04 > > > > > > 72 hdd 12.79399 osd.72 up 1.00000 1.00000 > > > > > > 73 hdd 12.79399 osd.73 up 1.00000 1.00000 > > > > > > 74 hdd 12.79399 osd.74 up 1.00000 1.00000 > > > > > > 75 hdd 12.79399 osd.75 up 1.00000 1.00000 > > > > > > 76 hdd 12.79399 osd.76 up 1.00000 1.00000 > > > > > > 77 hdd 12.79399 osd.77 up 1.00000 1.00000 > > > > > > 78 hdd 12.79399 osd.78 up 1.00000 1.00000 > > > > > > 79 hdd 12.79399 osd.79 up 1.00000 1.00000 > > > > > > 80 hdd 12.79399 osd.80 up 1.00000 1.00000 > > > > > > 81 hdd 12.79399 osd.81 up 1.00000 1.00000 > > > > > > 82 hdd 12.79399 osd.82 up 1.00000 1.00000 > > > > > > 83 hdd 12.79399 osd.83 up 1.00000 1.00000 > > > > > > 84 hdd 12.79399 osd.84 up 1.00000 1.00000 > > > > > > 85 hdd 12.79399 osd.85 up 1.00000 1.00000 > > > > > > 86 hdd 12.79399 osd.86 up 1.00000 1.00000 > > > > > > 87 hdd 12.79399 osd.87 up 1.00000 1.00000 > > > > > > 88 hdd 12.79399 osd.88 up 1.00000 1.00000 > > > > > > 89 hdd 12.79399 osd.89 up 1.00000 1.00000 > > > > > > 90 hdd 12.79399 osd.90 up 1.00000 1.00000 > > > > > > 91 hdd 12.79399 osd.91 up 1.00000 1.00000 > > > > > > 92 hdd 12.79399 osd.92 up 1.00000 1.00000 > > > > > > 93 hdd 12.79399 osd.93 up 1.00000 1.00000 > > > > > > 94 hdd 12.79399 osd.94 up 1.00000 1.00000 > > > > > > 95 hdd 12.79399 osd.95 up 1.00000 1.00000 > > > > > > -13 307.05579 host ceph05 > > > > > > 96 hdd 12.79399 osd.96 up 1.00000 1.00000 > > > > > > 97 hdd 12.79399 
osd.97 up 1.00000 1.00000 > > > > > > 98 hdd 12.79399 osd.98 up 1.00000 1.00000 > > > > > > 99 hdd 12.79399 osd.99 up 1.00000 1.00000 > > > > > > 100 hdd 12.79399 osd.100 up 1.00000 1.00000 > > > > > > 101 hdd 12.79399 osd.101 up 1.00000 1.00000 > > > > > > 102 hdd 12.79399 osd.102 up 1.00000 1.00000 > > > > > > 103 hdd 12.79399 osd.103 up 1.00000 1.00000 > > > > > > 104 hdd 12.79399 osd.104 up 1.00000 1.00000 > > > > > > 105 hdd 12.79399 osd.105 up 1.00000 1.00000 > > > > > > 106 hdd 12.79399 osd.106 up 1.00000 1.00000 > > > > > > 107 hdd 12.79399 osd.107 up 1.00000 1.00000 > > > > > > 108 hdd 12.79399 osd.108 up 1.00000 1.00000 > > > > > > 109 hdd 12.79399 osd.109 up 1.00000 1.00000 > > > > > > 110 hdd 12.79399 osd.110 up 1.00000 1.00000 > > > > > > 111 hdd 12.79399 osd.111 up 1.00000 1.00000 > > > > > > 112 hdd 12.79399 osd.112 up 1.00000 1.00000 > > > > > > 113 hdd 12.79399 osd.113 up 1.00000 1.00000 > > > > > > 114 hdd 12.79399 osd.114 up 1.00000 1.00000 > > > > > > 115 hdd 12.79399 osd.115 up 1.00000 1.00000 > > > > > > 116 hdd 12.79399 osd.116 up 1.00000 1.00000 > > > > > > 117 hdd 12.79399 osd.117 up 1.00000 1.00000 > > > > > > 118 hdd 12.79399 osd.118 up 1.00000 1.00000 > > > > > > 119 hdd 12.79399 osd.119 up 1.00000 1.00000 > > > > > > -15 307.05579 host ceph06 > > > > > > 120 hdd 12.79399 osd.120 up 1.00000 1.00000 > > > > > > 121 hdd 12.79399 osd.121 up 1.00000 1.00000 > > > > > > 122 hdd 12.79399 osd.122 up 1.00000 1.00000 > > > > > > 123 hdd 12.79399 osd.123 up 1.00000 1.00000 > > > > > > 124 hdd 12.79399 osd.124 up 1.00000 1.00000 > > > > > > 125 hdd 12.79399 osd.125 up 1.00000 1.00000 > > > > > > 126 hdd 12.79399 osd.126 up 1.00000 1.00000 > > > > > > 127 hdd 12.79399 osd.127 up 1.00000 1.00000 > > > > > > 128 hdd 12.79399 osd.128 up 1.00000 1.00000 > > > > > > 129 hdd 12.79399 osd.129 up 1.00000 1.00000 > > > > > > 130 hdd 12.79399 osd.130 up 1.00000 1.00000 > > > > > > 131 hdd 12.79399 osd.131 up 1.00000 1.00000 > > > > > > 132 hdd 12.79399 
osd.132 up 1.00000 1.00000 > > > > > > 133 hdd 12.79399 osd.133 up 1.00000 1.00000 > > > > > > 134 hdd 12.79399 osd.134 up 1.00000 1.00000 > > > > > > 135 hdd 12.79399 osd.135 up 1.00000 1.00000 > > > > > > 136 hdd 12.79399 osd.136 up 1.00000 1.00000 > > > > > > 137 hdd 12.79399 osd.137 up 1.00000 1.00000 > > > > > > 138 hdd 12.79399 osd.138 up 1.00000 1.00000 > > > > > > 139 hdd 12.79399 osd.139 up 1.00000 1.00000 > > > > > > 140 hdd 12.79399 osd.140 up 1.00000 1.00000 > > > > > > 141 hdd 12.79399 osd.141 up 1.00000 1.00000 > > > > > > 142 hdd 12.79399 osd.142 up 1.00000 1.00000 > > > > > > 143 hdd 12.79399 osd.143 up 1.00000 1.00000 > > > > > > -17 307.05579 host ceph07 > > > > > > 144 hdd 12.79399 osd.144 up 1.00000 1.00000 > > > > > > 145 hdd 12.79399 osd.145 up 1.00000 1.00000 > > > > > > 146 hdd 12.79399 osd.146 up 1.00000 1.00000 > > > > > > 147 hdd 12.79399 osd.147 up 1.00000 1.00000 > > > > > > 148 hdd 12.79399 osd.148 up 1.00000 1.00000 > > > > > > 149 hdd 12.79399 osd.149 up 1.00000 1.00000 > > > > > > 150 hdd 12.79399 osd.150 up 1.00000 1.00000 > > > > > > 151 hdd 12.79399 osd.151 up 1.00000 1.00000 > > > > > > 152 hdd 12.79399 osd.152 up 1.00000 1.00000 > > > > > > 153 hdd 12.79399 osd.153 up 1.00000 1.00000 > > > > > > 154 hdd 12.79399 osd.154 up 1.00000 1.00000 > > > > > > 155 hdd 12.79399 osd.155 up 1.00000 1.00000 > > > > > > 156 hdd 12.79399 osd.156 up 1.00000 1.00000 > > > > > > 157 hdd 12.79399 osd.157 up 1.00000 1.00000 > > > > > > 158 hdd 12.79399 osd.158 up 1.00000 1.00000 > > > > > > 159 hdd 12.79399 osd.159 up 1.00000 1.00000 > > > > > > 160 hdd 12.79399 osd.160 up 1.00000 1.00000 > > > > > > 161 hdd 12.79399 osd.161 up 1.00000 1.00000 > > > > > > 162 hdd 12.79399 osd.162 up 1.00000 1.00000 > > > > > > 163 hdd 12.79399 osd.163 up 1.00000 1.00000 > > > > > > 164 hdd 12.79399 osd.164 up 1.00000 1.00000 > > > > > > 165 hdd 12.79399 osd.165 up 1.00000 1.00000 > > > > > > 166 hdd 12.79399 osd.166 up 1.00000 1.00000 > > > > > > 167 hdd 12.79399 
osd.167 up 1.00000 1.00000 > > > > > > root@ceph01:~# > > > > > > The data distribution looks relatively even (we use upmap as the balancer): > > > > > > root@ceph01:~# ceph osd df > > > > > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492 > > > > > > Inferring config > > > /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config > > > > > > Using recent ceph image > > > docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5 > > > > > > ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META > > > AVAIL %USE VAR PGS STATUS > > > > > > 0 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 5.0 MiB 14 > > > GiB 7.1 TiB 44.73 0.93 35 up > > > > > > 1 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 1.8 MiB 13 > > > GiB 7.2 TiB 43.46 0.90 33 up > > > > > > 2 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.4 MiB 16 > > > GiB 6.2 TiB 51.27 1.07 39 up > > > > > > 3 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 4.9 MiB 15 > > > GiB 6.7 TiB 47.38 0.99 36 up > > > > > > 4 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 812 KiB 13 > > > GiB 7.2 TiB 43.37 0.90 35 up > > > > > > 5 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 31 KiB 16 > > > GiB 6.2 TiB 51.27 1.07 40 up > > > > > > 6 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 506 KiB 13 > > > GiB 7.3 TiB 43.32 0.90 34 up > > > > > > 7 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 904 KiB 15 > > > GiB 6.4 TiB 49.93 1.04 38 up > > > > > > 8 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 3.6 MiB 14 > > > GiB 6.7 TiB 47.35 0.98 37 up > > > > > > 9 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 5.5 MiB 16 > > > GiB 6.1 TiB 52.58 1.09 40 up > > > > > > 10 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.6 TiB 75 MiB 16 > > > GiB 6.1 TiB 52.42 1.09 41 up > > > > > > 11 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 1.4 MiB 13 > > > GiB 7.2 TiB 43.36 0.90 34 up > > > > > > 12 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 586 KiB 13 > > > GiB 7.3 TiB 43.31 0.90 33 up > > > > > > 13 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 222 KiB 
14 > > > GiB 7.1 TiB 44.71 0.93 35 up > > > > > > 14 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 1.0 MiB 16 > > > GiB 6.1 TiB 52.58 1.09 41 up > > > > > > 15 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.2 MiB 14 > > > GiB 7.1 TiB 44.73 0.93 34 up > > > > > > 16 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 1.0 MiB 16 > > > GiB 6.1 TiB 52.53 1.09 40 up > > > > > > 17 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 3.3 MiB 15 > > > GiB 6.4 TiB 49.92 1.04 38 up > > > > > > 18 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 1.0 MiB 16 > > > GiB 6.1 TiB 52.61 1.09 45 up > > > > > > 19 hdd 12.79399 1.00000 13 TiB 5.4 TiB 5.3 TiB 4.0 MiB 13 > > > GiB 7.4 TiB 42.15 0.88 34 up > > > > > > 20 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 705 KiB 16 > > > GiB 6.1 TiB 52.53 1.09 40 up > > > > > > 21 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 2.5 MiB 16 > > > GiB 6.1 TiB 52.58 1.09 40 up > > > > > > 22 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.7 MiB 16 > > > GiB 6.1 TiB 52.59 1.09 40 up > > > > > > 23 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.9 MiB 16 > > > GiB 6.1 TiB 52.58 1.09 41 up > > > > > > 24 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 3.0 MiB 14 > > > GiB 6.7 TiB 47.35 0.98 36 up > > > > > > 25 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.1 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 41 up > > > > > > 26 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 592 KiB 14 > > > GiB 7.1 TiB 44.75 0.93 34 up > > > > > > 27 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 2.8 MiB 14 > > > GiB 6.9 TiB 46.07 0.96 36 up > > > > > > 28 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 5.4 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 40 up > > > > > > 29 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 340 KiB 15 > > > GiB 6.4 TiB 49.92 1.04 39 up > > > > > > 30 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 93 KiB 14 > > > GiB 6.9 TiB 46.03 0.96 35 up > > > > > > 31 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 2.1 MiB 13 > > > GiB 7.2 TiB 43.57 0.91 35 up > > > > > > 32 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 5.0 MiB 15 > 
> > GiB 6.6 TiB 48.68 1.01 38 up > > > > > > 33 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 175 KiB 13 > > > GiB 7.2 TiB 43.44 0.90 33 up > > > > > > 34 hdd 12.79399 1.00000 13 TiB 6.9 TiB 6.8 TiB 77 MiB 16 > > > GiB 5.9 TiB 53.86 1.12 43 up > > > > > > 35 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 2.3 MiB 15 > > > GiB 6.6 TiB 48.68 1.01 37 up > > > > > > 36 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 1.2 MiB 16 > > > GiB 6.2 TiB 51.25 1.07 41 up > > > > > > 37 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 5.0 MiB 14 > > > GiB 6.7 TiB 47.37 0.99 36 up > > > > > > 38 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 3.7 MiB 13 > > > GiB 7.2 TiB 43.44 0.90 35 up > > > > > > 39 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 341 KiB 15 > > > GiB 6.7 TiB 47.38 0.99 37 up > > > > > > 40 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.1 MiB 14 > > > GiB 7.1 TiB 44.73 0.93 34 up > > > > > > 41 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 22 KiB 16 > > > GiB 6.1 TiB 52.59 1.09 40 up > > > > > > 42 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 1.9 MiB 14 > > > GiB 6.7 TiB 47.33 0.98 36 up > > > > > > 43 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 485 KiB 16 > > > GiB 6.1 TiB 52.59 1.09 40 up > > > > > > 44 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 2.1 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 40 up > > > > > > 45 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 2.3 MiB 16 > > > GiB 6.1 TiB 52.53 1.09 40 up > > > > > > 46 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 124 KiB 14 > > > GiB 6.9 TiB 46.08 0.96 37 up > > > > > > 47 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 1.2 MiB 13 > > > GiB 7.2 TiB 43.37 0.90 33 up > > > > > > 48 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 141 KiB 16 > > > GiB 6.1 TiB 52.58 1.09 40 up > > > > > > 49 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 1.9 MiB 13 > > > GiB 7.2 TiB 43.64 0.91 33 up > > > > > > 50 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 745 KiB 16 > > > GiB 6.1 TiB 52.52 1.09 41 up > > > > > > 51 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 186 KiB 16 > > > 
GiB 6.1 TiB 52.55 1.09 41 up > > > > > > 52 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.0 MiB 16 > > > GiB 6.2 TiB 51.24 1.07 41 up > > > > > > 53 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 737 KiB 14 > > > GiB 6.7 TiB 47.33 0.98 38 up > > > > > > 54 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.3 MiB 14 > > > GiB 7.1 TiB 44.66 0.93 34 up > > > > > > 55 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 97 KiB 14 > > > GiB 7.1 TiB 44.73 0.93 34 up > > > > > > 56 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.2 MiB 14 > > > GiB 7.1 TiB 44.76 0.93 35 up > > > > > > 57 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 3.5 MiB 14 > > > GiB 7.1 TiB 44.78 0.93 35 up > > > > > > 58 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 80 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 41 up > > > > > > 59 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 4.5 MiB 14 > > > GiB 6.9 TiB 46.08 0.96 36 up > > > > > > 60 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 903 KiB 14 > > > GiB 7.1 TiB 44.73 0.93 34 up > > > > > > 61 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1 KiB 14 > > > GiB 7.1 TiB 44.65 0.93 34 up > > > > > > 62 hdd 12.79399 1.00000 13 TiB 6.9 TiB 6.8 TiB 425 KiB 16 > > > GiB 5.9 TiB 53.87 1.12 41 up > > > > > > 63 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 4.1 MiB 14 > > > GiB 7.2 TiB 43.56 0.91 34 up > > > > > > 64 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 578 KiB 16 > > > GiB 6.2 TiB 51.27 1.07 39 up > > > > > > 65 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 3.6 MiB 13 > > > GiB 7.2 TiB 43.44 0.90 33 up > > > > > > 66 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 288 KiB 13 > > > GiB 7.2 TiB 43.40 0.90 34 up > > > > > > 67 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 413 KiB 14 > > > GiB 6.7 TiB 47.34 0.98 37 up > > > > > > 68 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.3 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 41 up > > > > > > 69 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 1.0 MiB 13 > > > GiB 7.2 TiB 43.44 0.90 33 up > > > > > > 70 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 3.2 MiB 16 > > > GiB 6.1 
TiB 52.56 1.09 40 up > > > > > > 71 hdd 12.79399 1.00000 13 TiB 5.5 TiB 5.5 TiB 1.3 MiB 14 > > > GiB 7.2 TiB 43.36 0.90 35 up > > > > > > 72 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.7 MiB 16 > > > GiB 6.2 TiB 51.29 1.07 39 up > > > > > > 73 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.8 MiB 14 > > > GiB 7.1 TiB 44.73 0.93 34 up > > > > > > 74 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.6 TiB 1.2 MiB 14 > > > GiB 7.1 TiB 44.64 0.93 34 up > > > > > > 75 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 1.5 MiB 15 > > > GiB 6.4 TiB 49.95 1.04 38 up > > > > > > 76 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 3.8 MiB 15 > > > GiB 6.6 TiB 48.68 1.01 37 up > > > > > > 77 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 5.0 MiB 14 > > > GiB 7.1 TiB 44.65 0.93 34 up > > > > > > 78 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.8 MiB 14 > > > GiB 7.1 TiB 44.68 0.93 35 up > > > > > > 79 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 984 KiB 14 > > > GiB 6.7 TiB 47.36 0.98 36 up > > > > > > 80 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 2.6 MiB 13 > > > GiB 7.2 TiB 43.45 0.90 34 up > > > > > > 81 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 1.2 MiB 16 > > > GiB 6.2 TiB 51.26 1.07 41 up > > > > > > 82 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 4.9 MiB 15 > > > GiB 6.6 TiB 48.61 1.01 37 up > > > > > > 83 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 827 KiB 14 > > > GiB 6.7 TiB 47.33 0.98 36 up > > > > > > 84 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 2.7 MiB 16 > > > GiB 6.2 TiB 51.22 1.07 39 up > > > > > > 85 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 868 KiB 15 > > > GiB 6.4 TiB 49.94 1.04 40 up > > > > > > 86 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.1 MiB 16 > > > GiB 6.1 TiB 52.57 1.09 42 up > > > > > > 87 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 702 KiB 16 > > > GiB 6.1 TiB 52.54 1.09 41 up > > > > > > 88 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 12 KiB 14 > > > GiB 7.1 TiB 44.76 0.93 35 up > > > > > > 89 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 2.9 MiB 14 > > > GiB 6.9 TiB 
46.07 0.96 35 up > > > > > > 90 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 1.8 MiB 13 > > > GiB 7.2 TiB 43.48 0.90 34 up > > > > > > 91 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 836 KiB 16 > > > GiB 6.1 TiB 52.55 1.09 40 up > > > > > > 92 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 4.0 MiB 13 > > > GiB 7.2 TiB 43.43 0.90 33 up > > > > > > 93 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 1.5 MiB 16 > > > GiB 6.1 TiB 52.57 1.09 41 up > > > > > > 94 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 459 KiB 13 > > > GiB 7.2 TiB 43.44 0.90 34 up > > > > > > 95 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.1 MiB 16 > > > GiB 6.2 TiB 51.28 1.07 39 up > > > > > > 96 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.1 MiB 14 > > > GiB 7.1 TiB 44.75 0.93 35 up > > > > > > 97 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 662 KiB 14 > > > GiB 7.1 TiB 44.75 0.93 34 up > > > > > > 98 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 2.9 MiB 14 > > > GiB 6.9 TiB 45.97 0.96 35 up > > > > > > 99 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 3.8 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 40 up > > > > > > 100 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.8 MiB 14 > > > GiB 7.1 TiB 44.76 0.93 36 up > > > > > > 101 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 3.6 MiB 15 > > > GiB 6.4 TiB 49.95 1.04 40 up > > > > > > 102 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.9 MiB 14 > > > GiB 7.1 TiB 44.65 0.93 34 up > > > > > > 103 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 643 KiB 16 > > > GiB 6.1 TiB 52.57 1.09 41 up > > > > > > 104 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 5.5 MiB 15 > > > GiB 6.4 TiB 49.98 1.04 38 up > > > > > > 105 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 830 KiB 13 > > > GiB 7.2 TiB 43.46 0.90 33 up > > > > > > 106 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 1.0 MiB 14 > > > GiB 6.7 TiB 47.34 0.98 36 up > > > > > > 107 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 286 KiB 16 > > > GiB 6.2 TiB 51.23 1.07 39 up > > > > > > 108 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 961 KiB 15 > > > GiB 6.6 
TiB 48.68 1.01 37 up > > > > > > 109 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 291 KiB 16 > > > GiB 6.1 TiB 52.56 1.09 42 up > > > > > > 110 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 1.0 MiB 14 > > > GiB 6.9 TiB 46.02 0.96 36 up > > > > > > 111 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 2.7 MiB 16 > > > GiB 6.2 TiB 51.22 1.07 39 up > > > > > > 112 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 631 KiB 15 > > > GiB 6.6 TiB 48.67 1.01 37 up > > > > > > 113 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 4.5 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 41 up > > > > > > 114 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 4.0 MiB 15 > > > GiB 6.4 TiB 49.94 1.04 39 up > > > > > > 115 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 1.5 MiB 14 > > > GiB 6.9 TiB 46.05 0.96 35 up > > > > > > 116 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 4.1 MiB 15 > > > GiB 6.4 TiB 49.93 1.04 38 up > > > > > > 117 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 194 KiB 13 > > > GiB 7.2 TiB 43.44 0.90 35 up > > > > > > 118 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 3.1 MiB 15 > > > GiB 6.4 TiB 49.96 1.04 38 up > > > > > > 119 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 3.9 MiB 14 > > > GiB 7.2 TiB 43.46 0.90 33 up > > > > > > 120 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 1.8 MiB 16 > > > GiB 6.1 TiB 52.55 1.09 40 up > > > > > > 121 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 3.7 MiB 16 > > > GiB 6.1 TiB 52.54 1.09 40 up > > > > > > 122 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 2.4 MiB 15 > > > GiB 6.4 TiB 49.97 1.04 38 up > > > > > > 123 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 1003 KiB 14 > > > GiB 6.9 TiB 46.02 0.96 35 up > > > > > > 124 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 526 KiB 14 > > > GiB 6.7 TiB 47.35 0.98 36 up > > > > > > 125 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 92 KiB 15 > > > GiB 6.7 TiB 47.37 0.98 37 up > > > > > > 126 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.9 MiB 14 > > > GiB 7.1 TiB 44.74 0.93 34 up > > > > > > 127 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 5.5 MiB 16 
> > > GiB 6.2 TiB 51.26 1.07 39 up > > > > > > 128 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 5.4 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 40 up > > > > > > 129 hdd 12.79399 1.00000 13 TiB 6.2 TiB 6.2 TiB 3.6 MiB 15 > > > GiB 6.6 TiB 48.66 1.01 38 up > > > > > > 130 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 2.8 MiB 16 > > > GiB 6.1 TiB 52.56 1.09 41 up > > > > > > 131 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 5.1 MiB 16 > > > GiB 6.2 TiB 51.28 1.07 40 up > > > > > > 132 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 1.7 MiB 16 > > > GiB 6.2 TiB 51.25 1.07 39 up > > > > > > 133 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 230 KiB 16 > > > GiB 6.2 TiB 51.25 1.07 39 up > > > > > > 134 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 2.0 MiB 14 > > > GiB 7.2 TiB 43.45 0.90 34 up > > > > > > 135 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 4.4 MiB 13 > > > GiB 7.2 TiB 43.46 0.90 33 up > > > > > > 136 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 5.6 MiB 16 > > > GiB 6.2 TiB 51.29 1.07 39 up > > > > > > 137 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 3.6 MiB 14 > > > GiB 7.1 TiB 44.75 0.93 36 up > > > > > > 138 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 4.9 MiB 15 > > > GiB 6.4 TiB 49.98 1.04 40 up > > > > > > 139 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 21 KiB 16 > > > GiB 6.2 TiB 51.23 1.07 39 up > > > > > > 140 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 5.4 MiB 14 > > > GiB 7.1 TiB 44.74 0.93 36 up > > > > > > 141 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 359 KiB 13 > > > GiB 7.2 TiB 43.42 0.90 34 up > > > > > > 142 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 27 KiB 13 > > > GiB 7.2 TiB 43.46 0.90 33 up > > > > > > 143 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 395 KiB 14 > > > GiB 6.9 TiB 46.07 0.96 35 up > > > > > > 144 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 1.2 MiB 14 > > > GiB 7.1 TiB 44.75 0.93 34 up > > > > > > 145 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 3.3 MiB 14 > > > GiB 7.1 TiB 44.66 0.93 35 up > > > > > > 146 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 
TiB 4.0 MiB 14 > > > GiB 6.9 TiB 46.06 0.96 35 up > > > > > > 147 hdd 12.79399 1.00000 13 TiB 6.7 TiB 6.7 TiB 46 KiB 16 > > > GiB 6.1 TiB 52.55 1.09 40 up > > > > > > 148 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.8 MiB 16 > > > GiB 6.2 TiB 51.23 1.07 39 up > > > > > > 149 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.4 MiB 14 > > > GiB 7.1 TiB 44.70 0.93 35 up > > > > > > 150 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 4.4 MiB 14 > > > GiB 7.1 TiB 44.65 0.93 35 up > > > > > > 151 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 3.7 MiB 14 > > > GiB 7.1 TiB 44.65 0.93 35 up > > > > > > 152 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 5.3 MiB 15 > > > GiB 6.2 TiB 51.26 1.07 39 up > > > > > > 153 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 2.3 MiB 15 > > > GiB 6.4 TiB 49.94 1.04 39 up > > > > > > 154 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 3.0 MiB 14 > > > GiB 6.7 TiB 47.37 0.99 36 up > > > > > > 155 hdd 12.79399 1.00000 13 TiB 6.1 TiB 6.0 TiB 3.0 MiB 14 > > > GiB 6.7 TiB 47.35 0.98 38 up > > > > > > 156 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 4.0 MiB 16 > > > GiB 6.2 TiB 51.26 1.07 41 up > > > > > > 157 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 2.8 MiB 14 > > > GiB 6.9 TiB 46.07 0.96 36 up > > > > > > 158 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 3.0 MiB 16 > > > GiB 6.2 TiB 51.25 1.07 40 up > > > > > > 159 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 3.9 MiB 15 > > > GiB 6.4 TiB 49.93 1.04 38 up > > > > > > 160 hdd 12.79399 1.00000 13 TiB 5.9 TiB 5.8 TiB 2.4 MiB 14 > > > GiB 6.9 TiB 46.03 0.96 35 up > > > > > > 161 hdd 12.79399 1.00000 13 TiB 6.6 TiB 6.5 TiB 1.1 MiB 16 > > > GiB 6.2 TiB 51.25 1.07 39 up > > > > > > 162 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.6 TiB 4.1 MiB 13 > > > GiB 7.1 TiB 44.64 0.93 34 up > > > > > > 163 hdd 12.79399 1.00000 13 TiB 5.7 TiB 5.7 TiB 445 KiB 14 > > > GiB 7.1 TiB 44.69 0.93 34 up > > > > > > 164 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 4.9 MiB 15 > > > GiB 6.4 TiB 49.97 1.04 38 up > > > > > > 165 hdd 12.79399 1.00000 13 
TiB 6.6 TiB 6.5 TiB 5.5 MiB 16 > > > GiB 6.2 TiB 51.25 1.07 40 up > > > > > > 166 hdd 12.79399 1.00000 13 TiB 5.6 TiB 5.5 TiB 2.5 MiB 13 > > > GiB 7.2 TiB 43.45 0.90 33 up > > > > > > 167 hdd 12.79399 1.00000 13 TiB 6.4 TiB 6.3 TiB 4.0 MiB 15 > > > GiB 6.4 TiB 49.98 1.04 38 up > > > > > > TOTAL 2.1 PiB 1.0 PiB 1023 TiB 630 MiB 2.4 > > > TiB 1.1 PiB 48.09 > > > > > > MIN/MAX VAR: 0.88/1.12 STDDEV: 3.50 > > > > > > root@ceph01:~# > > > > > > > > > The cluster status, however, shows we have many misplaced objects and a > > > few remapped PGs. We had been performing maintenance (upgrade to Ceph > > > 15.2.9), and initially the misplaced object count was dropping rapidly at a > > > rate of about 400-500MB/s and 100-200 objects/s. However, once it hit a > > > certain point, it dropped to 0-4MB/s and 0 objects/s (it's not 0, it is > > > progressing, but it's below 1 object/s): > > > > > > > > > root@ceph01:~# ceph -s > > > > > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492 > > > > > > Inferring config > > > /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config > > > > > > Using recent ceph image > > > docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5 > > > > > > cluster: > > > > > > id: 41bb9256-c3bf-11ea-85b9-9e07b0435492 > > > > > > health: HEALTH_WARN > > > > > > 5 slow ops, oldest one blocked for 84 sec, daemons > > > [osd.141,osd.46] have slow ops. 
> > >
> > >   services:
> > >     mon: 5 daemons, quorum ceph01,ceph04,ceph02,ceph03,ceph05 (age 25h)
> > >     mgr: ceph05.yropto(active, since 11h), standbys: ceph01.aqkgbl,
> > >          ceph03.ytkuyr, ceph04.smbdew, ceph02.ndynmo
> > >     osd: 168 osds: 168 up (since 9h), 168 in (since 6M); 10 remapped pgs
> > >
> > >   task status:
> > >
> > >   data:
> > >     pools:   3 pools, 1057 pgs
> > >     objects: 166.90M objects, 632 TiB
> > >     usage:   1.0 PiB used, 1.1 PiB / 2.1 PiB avail
> > >     pgs:     583177/1001370417 objects misplaced (0.058%)
> > >              1039 active+clean
> > >              11   active+clean+scrubbing
> > >              5    active+remapped+backfilling
> > >              1    active+clean+scrubbing+deep
> > >              1    active+clean+laggy
> > >
> > >   io:
> > >     client:   4.5 MiB/s rd, 31 MiB/s wr, 75 op/s rd, 59 op/s wr
> > >     recovery: 2.7 MiB/s, 0 objects/s
> > >
> > > root@ceph01:~#
> > >
> > > root@ceph01:~# ceph pg dump | grep remapped
> > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492
> > > Inferring config /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config
> > > Using recent ceph image docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5
> > > 2.3d1  163805  0  0  0       0  682273001472  0  0  7462  7462  active+remapped+backfilling  2021-02-26T01:28:06.212904+0000  53782'2773494  53782:5593409  [48,122,99,92,158,31]   48   [48,122,99,92,158,34]   48   50907'2762333  2021-02-23T20:13:50.501757+0000  50907'2762333  2021-02-23T20:13:50.501757+0000  0
> > > 2.341  163180  0  0  136844  0  679698268160  0  0  8569  8569  active+remapped+backfilling  2021-02-26T01:27:30.693520+0000  53782'2694466  53782:5497907  [65,125,79,147,45,100]  65   [65,18,79,147,45,100]   65   51073'2688628  2021-02-24T09:38:41.588350+0000  51073'2688628  2021-02-24T09:38:41.588350+0000  0
> > > 2.1e1  162402  0  0  150624  0  676505321472  0  0  7893  7893  active+remapped+backfilling  2021-02-25T15:36:29.121352+0000  53782'3105605  53782:5885140  [112,63,126,0,147,25]   112  [112,50,126,0,147,25]   112  50907'3086340  2021-02-22T23:14:49.510535+0000  47263'2883486  2021-02-01T15:43:32.089129+0000  0
> > > 2.268  163459  0  0  136459  0  680743612416  0  0  7337  7337  active+remapped+backfilling  2021-02-25T15:27:58.957497+0000  53782'2713107  53782:5536140  [93,98,150,49,26,135]   93   [93,98,150,62,26,135]   93   50907'2698319  2021-02-23T14:16:11.171370+0000  50907'2698319  2021-02-23T14:16:11.171370+0000  0
> > > 2.285  163380  0  0  158866  0  680507670528  0  0  8148  8148  active+remapped+backfilling  2021-02-26T01:26:31.442018+0000  53782'3191803  53782:5986426  [67,166,37,90,114,138]  67   [67,147,37,90,114,138]  67   51073'3177251  2021-02-23T23:51:51.751625+0000  47597'2917541  2021-02-03T20:14:38.762384+0000  0
> > > dumped all
> > > root@ceph01:~#
> > >
> > > Even if we stop the scrubs/deep scrubs, things don't improve re: recovery
> > > rate or cluster performance. We've monitored the HDDs/NVMe devices backing
> > > these OSDs, and with the scrubs stopped they are nearly idle. CPU is almost
> > > idle. There's nothing we can find to indicate any problem with the
> > > underlying hardware, at least from a load/failure perspective.
> > >
> > > Also, interestingly, rados bench on a newly created **replicated** pool
> > > is also extremely slow, as in < 50 MB/s.
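
For scale, it is worth converting the misplaced-object count and the observed recovery rates quoted above into a rough time-to-completion. A back-of-the-envelope sketch (the function name and the two rate estimates are mine, approximated from the figures in this thread, not from any Ceph tooling):

```python
# Back-of-the-envelope ETA for the rebalance, using figures quoted in this
# thread. Illustrative arithmetic only; the rates are rough approximations.

def backfill_eta_hours(misplaced_objects: int, objects_per_sec: float) -> float:
    """Hours until backfill completes at a given steady recovery rate."""
    if objects_per_sec <= 0:
        raise ValueError("recovery has stalled; ETA is unbounded")
    return misplaced_objects / objects_per_sec / 3600

# From "583177/1001370417 objects misplaced (0.058%)" in the ceph -s output.
misplaced = 583_177

# ~100-200 objects/s early in the rebalance; below 1 object/s after the slowdown.
print(f"at ~150 obj/s: {backfill_eta_hours(misplaced, 150.0):.1f} h")
print(f"at ~0.5 obj/s: {backfill_eta_hours(misplaced, 0.5):.1f} h")
```

At the healthy rate the remaining backfill is roughly an hour of work; at the stalled rate it stretches to nearly two weeks, which matches how dramatic the slowdown felt.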
> > > Here is the crush rule for our EC (4+2) pool, though we see the
> > > performance issues even on a newly created triple-replicated pool:
> > >
> > > root@ceph01:~# ceph osd crush rule dump ecpool
> > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492
> > > Inferring config /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config
> > > Using recent ceph image docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5
> > > {
> > >     "rule_id": 1,
> > >     "rule_name": "ecpool",
> > >     "ruleset": 1,
> > >     "type": 3,
> > >     "min_size": 3,
> > >     "max_size": 6,
> > >     "steps": [
> > >         {
> > >             "op": "set_chooseleaf_tries",
> > >             "num": 5
> > >         },
> > >         {
> > >             "op": "set_choose_tries",
> > >             "num": 100
> > >         },
> > >         {
> > >             "op": "take",
> > >             "item": -1,
> > >             "item_name": "default"
> > >         },
> > >         {
> > >             "op": "chooseleaf_indep",
> > >             "num": 0,
> > >             "type": "host"
> > >         },
> > >         {
> > >             "op": "emit"
> > >         }
> > >     ]
> > > }
> > >
> > > Cluster/pool space:
> > >
> > > root@ceph01:~# ceph df
> > > Inferring fsid 41bb9256-c3bf-11ea-85b9-9e07b0435492
> > > Inferring config /var/lib/ceph/41bb9256-c3bf-11ea-85b9-9e07b0435492/mon.ceph01/config
> > > Using recent ceph image docker.io/ceph/ceph@sha256:4e710662986cf366c282323bfb4c4ca507d7e117c5ccf691a8273732073297e5
> > > --- RAW STORAGE ---
> > > CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> > > hdd    2.1 PiB  1.1 PiB  1.0 PiB  1.0 PiB   48.09
> > > TOTAL  2.1 PiB  1.1 PiB  1.0 PiB  1.0 PiB   48.09
> > >
> > > --- POOLS ---
> > > POOL                   ID  PGS   STORED   OBJECTS  USED      %USED  MAX AVAIL
> > > device_health_metrics  1   1     78 MiB   182      233 MiB   0      295 TiB
> > > ecpool                 2   1024  632 TiB  166.90M  1023 TiB  53.65  589 TiB
> > > rbd                    3   32    59 MiB   5        178 MiB   0      295 TiB
> > > root@ceph01:~#
> > >
> > > We're at a loss for why this cluster has suddenly started behaving this
> > > way. With the disks (all of them, on all nodes) effectively idle with all
> > > the scrubs disabled, plenty of CPU available, and no apparent network
> > > issues (iperf on all paths is full line rate - ~10GB/s - and we have
> > > dedicated public and cluster networks), it's befuddling.
> > >
> > > We appreciate any assistance you can provide, and are happy to provide
> > > any information that might be useful.
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
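
[Editor's note on the eventual diagnosis: per the follow-ups earlier in this thread, the culprit - a misbehaving leg of an LACP bond - was found by inspecting TCP socket statistics and noticing that large Send-Q values clustered toward one host. A minimal sketch of that kind of check, assuming `ss -tn`-style columns (State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port); the sample text and helper name are illustrative, not taken from the affected cluster:]

```python
# Aggregate TCP Send-Q bytes per peer host from `ss -tn`-style output.
# A persistently large total toward a single peer suggests packets are
# backing up on the path to that host. Sample input is illustrative.
from collections import defaultdict

def sendq_by_peer(ss_output: str) -> dict:
    """Sum Send-Q bytes per remote address from `ss -tn` text."""
    totals = defaultdict(int)
    for line in ss_output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        sendq = int(fields[2])                   # third column is Send-Q
        peer = fields[4].rsplit(":", 1)[0]       # strip the port from the peer
        totals[peer] += sendq
    return dict(totals)

sample = """\
State  Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
ESTAB  0       0       10.0.0.1:6800       10.0.0.2:51422
ESTAB  0       213440  10.0.0.1:6802       10.0.0.7:49218
ESTAB  0       998120  10.0.0.1:6804       10.0.0.7:49220
ESTAB  0       64      10.0.0.1:6806       10.0.0.3:51388
"""

for peer, total in sorted(sendq_by_peer(sample).items(), key=lambda kv: -kv[1]):
    print(peer, total)
```

Running something like this against `ss -tn` output on each node (or feeding it the collected output from across the cluster) makes the skewed host stand out immediately, which is hard to see from per-OSD metrics alone in an EC setup.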