Hey, Tim. Visualization is a great way to get a better sense of OSD fillage than a table of numbers. A Grafana panel works, or a quick script: grab histogram.py from CERN:

https://gitlab.cern.ch/ceph/ceph-scripts/-/blob/master/tools/histogram.py

Here's a wrapper:

#!/bin/bash

case $(uname -n) in
    *admin*) echo "Be sure that CEPH_ARGS is set to the proper cluster or you may get unexpected results" ;;
    *) ;;
esac

if [ -z "$CEPH_ARGS" ]; then
    export CLUSTER=$(basename /etc/ceph/*conf | sed -e 's/.conf//')
    export CEPH_ARGS="--cluster $CLUSTER"
fi

# Pick the column holding %USE, which moved between releases.
case $(ceph -v | awk '{print $3}') in
    9*)
        ceph $CEPH_ARGS osd df | egrep -v 'WEIGHT|TOTAL|MIN|ID|nan' | sed -e 's/ssd//' -e 's/hdd//' | awk '{print 1, $7}' | /opt/storage/ceph-scripts/tools/histogram.py -a -b 200 -m 0 -x 100 -p --no-mvsd
        ;;
    14*)
        ceph $CEPH_ARGS osd df | egrep -v 'WEIGHT|TOTAL|MIN|ID|nan' | sed -e 's/ssd//' -e 's/hdd//' | awk '{print 1, $16}' | /opt/storage/ceph-scripts/tools/histogram.py -a -b 200 -m 0 -x 100 -p --no-mvsd
        ;;
    *)
        echo "Update this script to handle this Ceph version"
        exit 1
        ;;
esac

Since you're on Pacific, you probably don't want to use reweight-by-utilization. It works, but there are better ways. Left to itself, OSD fillage will look roughly like a bell curve, with a few under-full and over-full outliers. This is the nature of the CRUSH hash/algorithm, which Sage has called "probabilistic".

As Josh mentions, the ceph-mgr balancer module should be your go-to here. With your uniform cluster it should do well; most likely it is simply turned off. Note that this is not the same as the PG autoscaler; the two are often confused.

> On Oct 24, 2022, at 09:11, Tim Bishop <tim-lists@xxxxxxxxxxx> wrote:
>
> Hi all,
>
> ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable)
>
> We're having an issue with the spread of data across our OSDs.
> We have 108 OSDs in our cluster, all identical disk size, same number in each
> server, and the same number of servers in each rack. So I'd hoped we'd
> end up with a pretty balanced distribution of data across the disks.
> However, the fullest is at 85% full and the most empty is at 40% full.
>
> I've included the osd df output below, along with pool and crush rules.
>
> I've also looked at the reweight-by-utilization command which would
> apparently help:
>
> # ceph osd test-reweight-by-utilization
> moved 16 / 5715 (0.279965%)
> avg 52.9167
> stddev 7.20998 -> 7.15325 (expected baseline 7.24063)
> min osd.45 with 31 -> 31 pgs (0.585827 -> 0.585827 * mean)
> max osd.22 with 70 -> 68 pgs (1.32283 -> 1.28504 * mean)
>
> oload 120
> max_change 0.05
> max_change_osds 4
> average_utilization 0.6229
> overload_utilization 0.7474
> osd.22 weight 1.0000 -> 0.9500
> osd.23 weight 1.0000 -> 0.9500
> osd.53 weight 1.0000 -> 0.9500
> osd.78 weight 1.0000 -> 0.9500
> no change
>
> But I'd like to make sure I understand why the problem is occurring
> first so I can rule out a configuration issue, since it feels like the
> cluster shouldn't be getting into this state in the first place.
>
> I have some suspicions that the number of PGs may be a bit low on some
> pools, but autoscale-status is set to "on" or "warn" for every pool, so
> it's happy with the current numbers. Does it play nice with CephFS?
>
> Thanks for any advice.
> Tim.
>
> ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
> 22 hdd 3.63199 1.00000 3.6 TiB 3.1 TiB 3.1 TiB 450 MiB 7.6 GiB 557 GiB 85.04 1.37 70 up
> 23 hdd 3.63199 1.00000 3.6 TiB 2.9 TiB 2.9 TiB 459 MiB 7.5 GiB 759 GiB 79.64 1.28 64 up
> 53 hdd 3.63199 1.00000 3.6 TiB 2.8 TiB 2.8 TiB 703 MiB 8.0 GiB 823 GiB 77.91 1.25 66 up
> 78 hdd 3.63799 1.00000 3.6 TiB 2.8 TiB 2.8 TiB 187 MiB 5.9 GiB 851 GiB 77.15 1.24 61 up
> 26 hdd 3.63199 1.00000 3.6 TiB 2.8 TiB 2.8 TiB 432 MiB 7.7 GiB 854 GiB 77.07 1.24 61 up
> 39 hdd 3.63199 1.00000 3.6 TiB 2.8 TiB 2.8 TiB 503 MiB 7.2 GiB 874 GiB 76.55 1.23 65 up
> 42 hdd 3.63199 1.00000 3.6 TiB 2.8 TiB 2.7 TiB 439 MiB 6.6 GiB 909 GiB 75.59 1.21 60 up
> 101 hdd 3.63820 1.00000 3.6 TiB 2.7 TiB 2.7 TiB 306 MiB 7.1 GiB 913 GiB 75.50 1.21 61 up
> 87 hdd 3.63820 1.00000 3.6 TiB 2.7 TiB 2.7 TiB 539 MiB 7.5 GiB 921 GiB 75.27 1.21 61 up
> 59 hdd 3.63799 1.00000 3.6 TiB 2.7 TiB 2.7 TiB 721 MiB 7.9 GiB 957 GiB 74.30 1.19 64 up
> 79 hdd 3.63799 1.00000 3.6 TiB 2.7 TiB 2.7 TiB 950 MiB 9.0 GiB 970 GiB 73.95 1.19 58 up
> 34 hdd 3.63199 1.00000 3.6 TiB 2.7 TiB 2.7 TiB 202 MiB 6.0 GiB 974 GiB 73.85 1.19 57 up
> 60 hdd 3.63799 1.00000 3.6 TiB 2.7 TiB 2.6 TiB 668 MiB 7.2 GiB 1009 GiB 72.91 1.17 59 up
> 18 hdd 3.63199 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 453 MiB 6.5 GiB 1021 GiB 72.59 1.17 60 up
> 74 hdd 3.63799 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 693 MiB 7.5 GiB 1.0 TiB 72.12 1.16 62 up
> 19 hdd 3.63199 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 655 MiB 7.9 GiB 1.0 TiB 71.71 1.15 63 up
> 69 hdd 3.63799 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 445 MiB 6.2 GiB 1.0 TiB 71.70 1.15 65 up
> 43 hdd 3.63199 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 170 MiB 4.7 GiB 1.0 TiB 71.62 1.15 63 up
> 97 hdd 3.63820 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 276 MiB 5.7 GiB 1.0 TiB 71.33 1.15 66 up
> 67 hdd 3.63799 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 430 MiB 6.3 GiB 1.0 TiB 71.22 1.14 54 up
> 68 hdd 3.63799 1.00000 3.6 TiB 2.6 TiB 2.6 TiB 419 MiB 6.6 GiB 1.1 TiB 70.68 1.13 58 up
> 31 hdd 3.63199 1.00000 3.6 TiB 2.6 TiB 2.5 TiB 419 MiB 5.2 GiB 1.1 TiB 70.16 1.13 63 up
> 48 hdd 3.63199 1.00000 3.6 TiB 2.6 TiB 2.5 TiB 211 MiB 5.0 GiB 1.1 TiB 70.13 1.13 56 up
> 73 hdd 3.63799 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 765 MiB 7.1 GiB 1.1 TiB 69.52 1.12 57 up
> 98 hdd 3.63820 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 552 MiB 7.1 GiB 1.1 TiB 68.72 1.10 60 up
> 58 hdd 3.63799 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 427 MiB 6.3 GiB 1.2 TiB 68.39 1.10 53 up
> 14 hdd 3.63199 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 409 MiB 6.0 GiB 1.2 TiB 68.06 1.09 65 up
> 47 hdd 3.63199 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 166 MiB 5.5 GiB 1.2 TiB 67.84 1.09 55 up
> 9 hdd 3.63199 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 419 MiB 5.9 GiB 1.2 TiB 67.78 1.09 58 up
> 90 hdd 3.63820 1.00000 3.6 TiB 2.5 TiB 2.5 TiB 277 MiB 6.3 GiB 1.2 TiB 67.56 1.08 57 up
> 12 hdd 3.63199 1.00000 3.6 TiB 2.5 TiB 2.4 TiB 649 MiB 6.6 GiB 1.2 TiB 67.41 1.08 63 up
> 49 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 169 MiB 5.1 GiB 1.2 TiB 66.95 1.07 54 up
> 41 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 203 MiB 5.5 GiB 1.2 TiB 66.95 1.07 54 up
> 55 hdd 3.63799 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 179 MiB 5.1 GiB 1.2 TiB 66.79 1.07 47 up
> 2 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 207 MiB 5.2 GiB 1.2 TiB 66.79 1.07 59 up
> 36 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 194 MiB 5.4 GiB 1.2 TiB 66.71 1.07 52 up
> 62 hdd 3.63799 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 167 MiB 5.1 GiB 1.2 TiB 66.66 1.07 50 up
> 17 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 200 MiB 5.0 GiB 1.2 TiB 66.53 1.07 53 up
> 52 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 404 MiB 5.0 GiB 1.2 TiB 66.13 1.06 61 up
> 1 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 438 MiB 5.4 GiB 1.2 TiB 66.12 1.06 64 up
> 96 hdd 3.63820 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 15 MiB 4.2 GiB 1.2 TiB 65.92 1.06 53 up
> 54 hdd 3.63799 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 666 MiB 6.4 GiB 1.3 TiB 65.47 1.05 63 up
> 37 hdd 3.63199 1.00000 3.6 TiB 2.4 TiB 2.4 TiB 412 MiB 6.6 GiB 1.3 TiB 64.87 1.04 54 up
> 93 hdd 3.63820 1.00000 3.6 TiB 2.4 TiB 2.3 TiB 304 MiB 6.0 GiB 1.3 TiB 64.63 1.04 57 up
> 15 hdd 3.63199 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 200 MiB 5.8 GiB 1.3 TiB 64.22 1.03 51 up
> 66 hdd 3.63799 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 427 MiB 5.2 GiB 1.3 TiB 63.98 1.03 50 up
> 28 hdd 3.63199 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 169 MiB 4.6 GiB 1.3 TiB 63.75 1.02 54 up
> 30 hdd 3.63199 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 400 MiB 5.6 GiB 1.3 TiB 63.28 1.02 53 up
> 81 hdd 3.63820 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 55 MiB 5.6 GiB 1.3 TiB 63.02 1.01 47 up
> 21 hdd 3.63199 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 180 MiB 4.2 GiB 1.3 TiB 63.00 1.01 51 up
> 64 hdd 3.63799 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 667 MiB 7.0 GiB 1.4 TiB 62.73 1.01 56 up
> 57 hdd 3.63799 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 167 MiB 4.9 GiB 1.4 TiB 62.49 1.00 48 up
> 8 hdd 3.63199 1.00000 3.6 TiB 2.3 TiB 2.3 TiB 682 MiB 6.9 GiB 1.4 TiB 62.25 1.00 52 up
> 95 hdd 3.63820 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 791 MiB 7.8 GiB 1.4 TiB 61.74 0.99 57 up
> 75 hdd 3.63799 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 1.1 GiB 8.1 GiB 1.4 TiB 61.30 0.98 53 up
> 103 hdd 3.63820 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 34 MiB 4.5 GiB 1.4 TiB 61.29 0.98 50 up
> 89 hdd 3.63820 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 25 MiB 4.3 GiB 1.4 TiB 60.99 0.98 53 up
> 71 hdd 3.63799 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 421 MiB 5.3 GiB 1.4 TiB 60.95 0.98 56 up
> 5 hdd 3.63199 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 158 MiB 3.8 GiB 1.4 TiB 60.69 0.97 50 up
> 61 hdd 3.63799 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 166 MiB 4.6 GiB 1.4 TiB 60.29 0.97 55 up
> 94 hdd 3.63820 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 302 MiB 6.1 GiB 1.4 TiB 60.26 0.97 55 up
> 72 hdd 3.63799 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 170 MiB 4.0 GiB 1.5 TiB 59.91 0.96 51 up
> 91 hdd 3.63820 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 18 MiB 3.9 GiB 1.5 TiB 59.75 0.96 51 up
> 38 hdd 3.63199 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 429 MiB 5.4 GiB 1.5 TiB 59.42 0.95 49 up
> 33 hdd 3.63199 1.00000 3.6 TiB 2.2 TiB 2.2 TiB 980 MiB 6.9 GiB 1.5 TiB 59.36 0.95 52 up
> 3 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 174 MiB 4.5 GiB 1.5 TiB 58.79 0.94 49 up
> 13 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 177 MiB 3.9 GiB 1.5 TiB 58.75 0.94 54 up
> 0 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 179 MiB 4.7 GiB 1.5 TiB 58.63 0.94 50 up
> 77 hdd 3.63799 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 429 MiB 5.3 GiB 1.5 TiB 58.59 0.94 50 up
> 46 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 406 MiB 5.8 GiB 1.5 TiB 58.25 0.94 49 up
> 84 hdd 3.63820 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 849 MiB 7.1 GiB 1.5 TiB 57.97 0.93 52 up
> 63 hdd 3.63799 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 144 MiB 4.3 GiB 1.5 TiB 57.94 0.93 55 up
> 25 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 153 MiB 4.8 GiB 1.5 TiB 57.87 0.93 53 up
> 4 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 682 MiB 6.4 GiB 1.5 TiB 57.76 0.93 47 up
> 102 hdd 3.63820 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 49 MiB 4.9 GiB 1.6 TiB 57.25 0.92 45 up
> 10 hdd 3.63199 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 446 MiB 4.5 GiB 1.6 TiB 57.24 0.92 53 up
> 88 hdd 3.63820 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 550 MiB 6.6 GiB 1.6 TiB 56.57 0.91 50 up
> 92 hdd 3.63820 1.00000 3.6 TiB 2.1 TiB 2.1 TiB 39 MiB 4.5 GiB 1.6 TiB 56.53 0.91 44 up
> 56 hdd 3.63799 1.00000 3.6 TiB 2.1 TiB 2.0 TiB 156 MiB 4.0 GiB 1.6 TiB 56.40 0.91 54 up
> 85 hdd 3.63820 1.00000 3.6 TiB 2.1 TiB 2.0 TiB 35 MiB 4.3 GiB 1.6 TiB 56.38 0.91 47 up
> 40 hdd 3.63199 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 156 MiB 3.9 GiB 1.6 TiB 56.22 0.90 47 up
> 76 hdd 3.63799 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 166 MiB 4.6 GiB 1.6 TiB 56.18 0.90 46 up
> 83 hdd 3.63820 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 35 MiB 4.3 GiB 1.6 TiB 56.17 0.90 49 up
> 50 hdd 3.63199 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 419 MiB 5.2 GiB 1.6 TiB 55.29 0.89 51 up
> 35 hdd 3.63199 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 415 MiB 5.5 GiB 1.6 TiB 55.27 0.89 45 up
> 106 hdd 3.63820 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 266 MiB 4.8 GiB 1.6 TiB 55.19 0.89 47 up
> 44 hdd 3.63199 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 378 MiB 5.2 GiB 1.6 TiB 55.11 0.88 45 up
> 29 hdd 3.63199 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 416 MiB 4.8 GiB 1.6 TiB 54.87 0.88 52 up
> 86 hdd 3.63820 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 559 MiB 6.7 GiB 1.6 TiB 54.80 0.88 47 up
> 82 hdd 3.63820 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 298 MiB 5.2 GiB 1.6 TiB 54.67 0.88 49 up
> 65 hdd 3.63799 1.00000 3.6 TiB 2.0 TiB 2.0 TiB 918 MiB 6.7 GiB 1.7 TiB 54.15 0.87 51 up
> 70 hdd 3.63799 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 675 MiB 5.7 GiB 1.7 TiB 53.49 0.86 50 up
> 11 hdd 3.63199 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 165 MiB 4.6 GiB 1.7 TiB 52.97 0.85 46 up
> 24 hdd 3.63199 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 156 MiB 4.2 GiB 1.7 TiB 52.80 0.85 44 up
> 105 hdd 3.63820 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 38 MiB 4.4 GiB 1.7 TiB 52.45 0.84 47 up
> 104 hdd 3.63820 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 288 MiB 5.5 GiB 1.7 TiB 52.09 0.84 38 up
> 107 hdd 3.63820 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 290 MiB 5.5 GiB 1.8 TiB 51.57 0.83 41 up
> 20 hdd 3.63199 1.00000 3.6 TiB 1.9 TiB 1.9 TiB 381 MiB 5.2 GiB 1.8 TiB 51.48 0.83 51 up
> 32 hdd 3.63199 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 116 MiB 4.1 GiB 1.8 TiB 50.37 0.81 48 up
> 100 hdd 3.63820 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 539 MiB 6.1 GiB 1.8 TiB 50.21 0.81 47 up
> 7 hdd 3.63199 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 173 MiB 3.9 GiB 1.8 TiB 50.14 0.81 45 up
> 51 hdd 3.63199 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 640 MiB 5.9 GiB 1.8 TiB 49.97 0.80 46 up
> 16 hdd 3.63199 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 158 MiB 3.9 GiB 1.8 TiB 49.87 0.80 41 up
> 99 hdd 3.63820 1.00000 3.6 TiB 1.8 TiB 1.8 TiB 258 MiB 4.4 GiB 1.8 TiB 49.81 0.80 44 up
> 6 hdd 3.63199 1.00000 3.6 TiB 1.7 TiB 1.7 TiB 149 MiB 3.8 GiB 1.9 TiB 47.96 0.77 42 up
> 80 hdd 3.63799 1.00000 3.6 TiB 1.7 TiB 1.7 TiB 640 MiB 5.1 GiB 1.9 TiB 46.48 0.75 37 up
> 27 hdd 3.63199 1.00000 3.6 TiB 1.6 TiB 1.6 TiB 338 MiB 3.9 GiB 2.1 TiB 42.72 0.69 36 up
> 45 hdd 3.63199 1.00000 3.6 TiB 1.5 TiB 1.5 TiB 150 MiB 3.2 GiB 2.2 TiB 40.28 0.65 31 up
> TOTAL 393 TiB 245 TiB 244 TiB 38 GiB 605 GiB 148 TiB 62.28
> MIN/MAX VAR: 0.65/1.37 STDDEV: 8.55
>
>
> --- RAW STORAGE ---
> CLASS SIZE AVAIL USED RAW USED %RAW USED
> hdd 393 TiB 148 TiB 245 TiB 245 TiB 62.29
> TOTAL 393 TiB 148 TiB 245 TiB 245 TiB 62.29
>
> --- POOLS ---
> POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
> pool28 28 256 9.9 TiB 2.61M 30 TiB 43.28 13 TiB
> pool29 29 256 9.5 TiB 2.48M 28 TiB 42.13 13 TiB
> pool36 36 128 6.0 TiB 1.58M 18 TiB 31.67 13 TiB
> pool39 39 256 20 TiB 5.20M 60 TiB 60.37 13 TiB
> pool43 43 32 1.9 TiB 503.92k 5.7 TiB 12.79 13 TiB
> pool44 44 16 385 GiB 133.10k 1.1 TiB 2.80 13 TiB
> pool45 45 32 9.1 KiB 0 27 KiB 0 13 TiB
> pool46 46 32 1.3 TiB 236.34k 3.9 TiB 9.14 13 TiB
> pool47 47 128 4.0 TiB 1.04M 12 TiB 23.35 13 TiB
> pool48 48 32 6.0 MiB 0 18 MiB 0 13 TiB
> pool49 49 32 8.3 GiB 1.49M 25 GiB 0.06 13 TiB
> pool50 50 32 3.0 MiB 3 9.2 MiB 0 13 TiB
> pool51 51 128 32 TiB 32.47M 55 TiB 58.30 26 TiB
> pool53 52 1 56 KiB 108 169 KiB 0 13 TiB
> pool53 53 128 3.3 TiB 864.88k 9.9 TiB 20.21 13 TiB
> pool54 54 32 2.2 MiB 4 7.0 MiB 0 13 TiB
> pool57 57 128 14 TiB 3.55M 21 TiB 34.80 26 TiB
>
>
> pool 28 'pool28' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 150301 flags hashpspool,nearfull stripe_width 0 application rbd
> pool 29 'pool29' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 150301 flags hashpspool,nearfull stripe_width 0 application rbd
> pool 36 'pool36' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 150301 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 39 'pool39' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 150301 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 43 'pool43' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/140049/140047 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 44 'pool44' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 150301 lfor 0/66919/66917 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 45 'pool45' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/58489/67317 flags hashpspool,nearfull stripe_width 0 application rbd
> pool 46 'pool46' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/66943/66941 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 47 'pool47' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 150301 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 48 'pool48' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/64382/67327 flags hashpspool,nearfull stripe_width 0 application cephfs
> pool 49 'pool49' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/66981/67337 flags hashpspool,nearfull stripe_width 0 application cephfs
> pool 50 'pool50' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 150301 lfor 0/66733/67322 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application benchmarks
> pool 51 'pool51' erasure profile ec42 size 6 min_size 4 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 150301 lfor 0/80117/81361 flags hashpspool,ec_overwrites,nearfull,selfmanaged_snaps stripe_width 16384 application cephfs
> pool 52 'pool52' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode warn last_change 150301 flags hashpspool,nearfull stripe_width 0 pg_num_min 1 application mgr_devicehealth
> pool 53 'pool53' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 150301 lfor 0/92746/101633 flags hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application rbd
> pool 54 'pool54' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 150301 lfor 0/96348/96346 flags hashpspool,nearfull stripe_width 0 application rbd
> pool 57 'pool57' erasure profile ec42 size 6 min_size 4 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 150301 lfor 0/96618/101635 flags hashpspool,ec_overwrites,nearfull,selfmanaged_snaps stripe_width 16384 application rbd
>
> [
>   {
>     "rule_id": 0,
>     "rule_name": "replicated_ruleset",
>     "ruleset": 0,
>     "type": 1,
>     "min_size": 1,
>     "max_size": 10,
>     "steps": [
>       {
>         "op": "take",
>         "item": -1,
>         "item_name": "default"
>       },
>       {
>         "op": "chooseleaf_firstn",
>         "num": 0,
>         "type": "rack"
>       },
>       {
>         "op": "emit"
>       }
>     ]
>   },
>   {
>     "rule_id": 1,
>     "rule_name": "ec42_ruleset",
>     "ruleset": 1,
>     "type": 3,
>     "min_size": 3,
>     "max_size": 6,
>     "steps": [
>       {
>         "op": "set_chooseleaf_tries",
>         "num": 5
>       },
>       {
>         "op": "set_choose_tries",
>         "num": 100
>       },
>       {
>         "op": "take",
>         "item": -1,
>         "item_name": "default"
>       },
>       {
>         "op": "choose_indep",
>         "num": 3,
>         "type": "rack"
>       },
>       {
>         "op": "chooseleaf_indep",
>         "num": 2,
>         "type": "host"
>       },
>       {
>         "op": "emit"
>       }
>     ]
>   },
>   {
>     "rule_id": 2,
>     "rule_name": "erasure-code",
>     "ruleset": 2,
>     "type": 3,
>     "min_size": 3,
>     "max_size": 3,
>     "steps": [
>       {
>         "op": "set_chooseleaf_tries",
>         "num": 5
>       },
>       {
>         "op": "set_choose_tries",
>         "num": 100
>       },
>       {
>         "op": "take",
>         "item": -1,
>         "item_name": "default"
>       },
>       {
>         "op": "chooseleaf_indep",
>         "num": 0,
>         "type": "host"
>       },
>       {
>         "op": "emit"
>       }
>     ]
>   }
> ]
>
>
> ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
> -1 392.58435 root default
> -8 130.86145 rack R1
> -4 43.62048 host C1
> 1 hdd 3.63199 osd.1 up 1.00000 1.00000
> 12 hdd 3.63199 osd.12 up 1.00000 1.00000
> 16 hdd 3.63199 osd.16 up 1.00000 1.00000
> 22 hdd 3.63199 osd.22 up 1.00000 1.00000
> 27 hdd 3.63199 osd.27 up 1.00000 1.00000
> 31 hdd 3.63199 osd.31 up 1.00000 1.00000
> 55 hdd 3.63799 osd.55 up 1.00000 1.00000
> 57 hdd 3.63799 osd.57 up 1.00000 1.00000
> 58 hdd 3.63799 osd.58 up 1.00000 1.00000
> 81 hdd 3.63820 osd.81 up 1.00000 1.00000
> 82 hdd 3.63820 osd.82 up 1.00000 1.00000
> 99 hdd 3.63820 osd.99 up 1.00000 1.00000
> -12 43.62048 host C2
> 37 hdd 3.63199 osd.37 up 1.00000 1.00000
> 40 hdd 3.63199 osd.40 up 1.00000 1.00000
> 44 hdd 3.63199 osd.44 up 1.00000 1.00000
> 47 hdd 3.63199 osd.47 up 1.00000 1.00000
> 50 hdd 3.63199 osd.50 up 1.00000 1.00000
> 53 hdd 3.63199 osd.53 up 1.00000 1.00000
> 56 hdd 3.63799 osd.56 up 1.00000 1.00000
> 59 hdd 3.63799 osd.59 up 1.00000 1.00000
> 60 hdd 3.63799 osd.60 up 1.00000 1.00000
> 83 hdd 3.63820 osd.83 up 1.00000 1.00000
> 84 hdd 3.63820 osd.84 up 1.00000 1.00000
> 100 hdd 3.63820 osd.100 up 1.00000 1.00000
> -2 43.62048 host C3
> 0 hdd 3.63199 osd.0 up 1.00000 1.00000
> 2 hdd 3.63199 osd.2 up 1.00000 1.00000
> 3 hdd 3.63199 osd.3 up 1.00000 1.00000
> 4 hdd 3.63199 osd.4 up 1.00000 1.00000
> 5 hdd 3.63199 osd.5 up 1.00000 1.00000
> 6 hdd 3.63199 osd.6 up 1.00000 1.00000
> 54 hdd 3.63799 osd.54 up 1.00000 1.00000
> 61 hdd 3.63799 osd.61 up 1.00000 1.00000
> 62 hdd 3.63799 osd.62 up 1.00000 1.00000
> 87 hdd 3.63820 osd.87 up 1.00000 1.00000
> 88 hdd 3.63820 osd.88 up 1.00000 1.00000
> 101 hdd 3.63820 osd.101 up 1.00000 1.00000
> -9 130.86145 rack R2
> -3 43.62048 host C4
> 13 hdd 3.63199 osd.13 up 1.00000 1.00000
> 20 hdd 3.63199 osd.20 up 1.00000 1.00000
> 23 hdd 3.63199 osd.23 up 1.00000 1.00000
> 28 hdd 3.63199 osd.28 up 1.00000 1.00000
> 33 hdd 3.63199 osd.33 up 1.00000 1.00000
> 35 hdd 3.63199 osd.35 up 1.00000 1.00000
> 63 hdd 3.63799 osd.63 up 1.00000 1.00000
> 65 hdd 3.63799 osd.65 up 1.00000 1.00000
> 67 hdd 3.63799 osd.67 up 1.00000 1.00000
> 93 hdd 3.63820 osd.93 up 1.00000 1.00000
> 94 hdd 3.63820 osd.94 up 1.00000 1.00000
> 103 hdd 3.63820 osd.103 up 1.00000 1.00000
> -11 43.62048 host C5
> 36 hdd 3.63199 osd.36 up 1.00000 1.00000
> 39 hdd 3.63199 osd.39 up 1.00000 1.00000
> 42 hdd 3.63199 osd.42 up 1.00000 1.00000
> 45 hdd 3.63199 osd.45 up 1.00000 1.00000
> 48 hdd 3.63199 osd.48 up 1.00000 1.00000
> 51 hdd 3.63199 osd.51 up 1.00000 1.00000
> 69 hdd 3.63799 osd.69 up 1.00000 1.00000
> 70 hdd 3.63799 osd.70 up 1.00000 1.00000
> 71 hdd 3.63799 osd.71 up 1.00000 1.00000
> 95 hdd 3.63820 osd.95 up 1.00000 1.00000
> 96 hdd 3.63820 osd.96 up 1.00000 1.00000
> 104 hdd 3.63820 osd.104 up 1.00000 1.00000
> -7 43.62048 host C6
> 10 hdd 3.63199 osd.10 up 1.00000 1.00000
> 15 hdd 3.63199 osd.15 up 1.00000 1.00000
> 19 hdd 3.63199 osd.19 up 1.00000 1.00000
> 25 hdd 3.63199 osd.25 up 1.00000 1.00000
> 30 hdd 3.63199 osd.30 up 1.00000 1.00000
> 34 hdd 3.63199 osd.34 up 1.00000 1.00000
> 64 hdd 3.63799 osd.64 up 1.00000 1.00000
> 66 hdd 3.63799 osd.66 up 1.00000 1.00000
> 68 hdd 3.63799 osd.68 up 1.00000 1.00000
> 91 hdd 3.63820 osd.91 up 1.00000 1.00000
> 92 hdd 3.63820 osd.92 up 1.00000 1.00000
> 102 hdd 3.63820 osd.102 up 1.00000 1.00000
> -10 130.86145 rack R3
> -5 43.62048 host C7
> 9 hdd 3.63199 osd.9 up 1.00000 1.00000
> 14 hdd 3.63199 osd.14 up 1.00000 1.00000
> 17 hdd 3.63199 osd.17 up 1.00000 1.00000
> 24 hdd 3.63199 osd.24 up 1.00000 1.00000
> 29 hdd 3.63199 osd.29 up 1.00000 1.00000
> 32 hdd 3.63199 osd.32 up 1.00000 1.00000
> 73 hdd 3.63799 osd.73 up 1.00000 1.00000
> 76 hdd 3.63799 osd.76 up 1.00000 1.00000
> 79 hdd 3.63799 osd.79 up 1.00000 1.00000
> 97 hdd 3.63820 osd.97 up 1.00000 1.00000
> 98 hdd 3.63820 osd.98 up 1.00000 1.00000
> 105 hdd 3.63820 osd.105 up 1.00000 1.00000
> -13 43.62048 host C8
> 38 hdd 3.63199 osd.38 up 1.00000 1.00000
> 41 hdd 3.63199 osd.41 up 1.00000 1.00000
> 43 hdd 3.63199 osd.43 up 1.00000 1.00000
> 46 hdd 3.63199 osd.46 up 1.00000 1.00000
> 49 hdd 3.63199 osd.49 up 1.00000 1.00000
> 52 hdd 3.63199 osd.52 up 1.00000 1.00000
> 72 hdd 3.63799 osd.72 up 1.00000 1.00000
> 75 hdd 3.63799 osd.75 up 1.00000 1.00000
> 78 hdd 3.63799 osd.78 up 1.00000 1.00000
> 85 hdd 3.63820 osd.85 up 1.00000 1.00000
> 86 hdd 3.63820 osd.86 up 1.00000 1.00000
> 107 hdd 3.63820 osd.107 up 1.00000 1.00000
> -6 43.62048 host C9
> 7 hdd 3.63199 osd.7 up 1.00000 1.00000
> 8 hdd 3.63199 osd.8 up 1.00000 1.00000
> 11 hdd 3.63199 osd.11 up 1.00000 1.00000
> 18 hdd 3.63199 osd.18 up 1.00000 1.00000
> 21 hdd 3.63199 osd.21 up 1.00000 1.00000
> 26 hdd 3.63199 osd.26 up 1.00000 1.00000
> 74 hdd 3.63799 osd.74 up 1.00000 1.00000
> 77 hdd 3.63799 osd.77 up 1.00000 1.00000
> 80 hdd 3.63799 osd.80 up 1.00000 1.00000
> 89 hdd 3.63820 osd.89 up 1.00000 1.00000
> 90 hdd 3.63820 osd.90 up 1.00000 1.00000
> 106 hdd 3.63820 osd.106 up 1.00000 1.00000
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
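P.S. If histogram.py isn't handy, the same bucketing takes only a few lines of Python against `ceph osd df -f json`. This is a minimal sketch, assuming the `nodes[].utilization` field layout that recent releases emit in their JSON output; the inline `sample` data below is a hypothetical stand-in for output from a live cluster:

```python
import json
from collections import Counter

def utilization_histogram(osd_df_json: str, bucket_pct: int = 5) -> dict:
    """Bucket per-OSD %USE values from `ceph osd df -f json` output."""
    nodes = json.loads(osd_df_json)["nodes"]
    buckets = Counter()
    for osd in nodes:
        pct = osd["utilization"]                    # %USE as reported per OSD
        buckets[int(pct // bucket_pct) * bucket_pct] += 1
    return dict(sorted(buckets.items()))

# Stand-in for live output; a real cluster has one entry per OSD.
sample = json.dumps({"nodes": [
    {"id": 22, "utilization": 85.04},
    {"id": 23, "utilization": 79.64},
    {"id": 45, "utilization": 40.28},
]})

# Print one bar per 5% bucket, e.g. " 85- 89%  #"
for lo, n in utilization_histogram(sample).items():
    print(f"{lo:3d}-{lo + bucket if (bucket := 4) else 4:3d}%  {'#' * n}")
```

Pipe in `ceph osd df -f json` via stdin or subprocess in practice; the JSON route avoids the column-position guessing the awk wrapper above has to do per release.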