Hi David,
Thanks,
Pardhiv Karri
On Tue, May 22, 2018 at 9:42 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
Your PG counts per pool per OSD don't show any PGs on osd.38. That definitely matches what you're seeing, but I've never seen this happen before. The OSD doesn't seem to be misconfigured at all.

Does anyone have any ideas what could be happening here? I expected to see something wrong in one of those outputs, but it all looks good. Possibly something with straw vs straw2 or crush tunables.
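A quick way to check the straw-vs-straw2 and tunables angle is to look at the active tunables profile and at the bucket algorithm in the decompiled CRUSH map. A minimal sketch; the file names crushmap.bin and crushmap.txt are just illustrative:

# Show the CRUSH tunables currently in effect (profile, chooseleaf settings, ...)
ceph osd crush show-tunables

# Dump and decompile the CRUSH map, then check which bucket algorithm each bucket uses
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
grep 'alg ' crushmap.txt

Buckets created under Hammer defaults are usually "alg straw"; straw2 requires Hammer or newer throughout the cluster, and switching bucket algorithms or tunables will move some data around.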
On Tue, May 22, 2018, 12:33 PM Pardhiv Karri <meher4india@xxxxxxxxx> wrote:

Hi David,

root@or1010051251044:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    79793G     56832G       22860G         28.65
POOLS:
    NAME        ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd         0          0         0        14395G           0
    compute     1          0         0        14395G           0
    volumes     2      7605G     28.60        14395G     1947372
    images      4          0         0        14395G           0
root@or1010051251044:~#

pool :     4     0     1     2  | SUM
---------------------------------------
osd.10     8    10    44    96  | 158
osd.11    14     8    58   100  | 180
osd.12    12     6    50    95  | 163
osd.13    14     4    49   121  | 188
osd.14     9     8    54    86  | 157
osd.15    12     5    55   103  | 175
osd.16    23     5    56    99  | 183
osd.30     6     4    31    47  |  88
osd.17     8     8    50   114  | 180
osd.31     7     1    23    35  |  66
osd.18    15     5    42    94  | 156
osd.32    12     6    24    54  |  96
osd.19    13     5    54   116  | 188
osd.33     4     2    28    49  |  83
osd.34     7     5    18    62  |  92
osd.35    10     2    21    56  |  89
osd.36     5     1    34    35  |  75
osd.37     4     4    24    45  |  77
osd.39    14     8    48   106  | 176
osd.0     12     3    27    67  | 109
osd.1      8     3    27    43  |  81
osd.2      4     5    27    45  |  81
osd.3      4     3    19    50  |  76
osd.4      4     1    23    54  |  82
osd.5      4     2    23    56  |  85
osd.6      1     5    32    50  |  88
osd.7      9     1    32    66  | 108
osd.8      7     4    27    49  |  87
osd.9      6     4    24    55  |  89
osd.20     7     4    43   122  | 176
osd.21    14     5    46    95  | 160
osd.22    13     8    51   107  | 179
osd.23    11     7    54   105  | 177
osd.24    11     6    52   112  | 181
osd.25    16     6    36    98  | 156
osd.26    15     7    59   101  | 182
osd.27     7     9    58   101  | 175
osd.28    16     5    60    89  | 170
osd.29    18     7    53    94  | 172
---------------------------------------
SUM :    384   192  1536  3072

root@or1010051251044:~# for i in `rados lspools`; do echo "================="; echo Working on pool: $i; ceph osd pool get $i pg_num; ceph osd pool get $i pgp_num; done
=================
Working on pool: rbd
pg_num: 64
pgp_num: 64
=================
Working on pool: compute
pg_num: 512
pgp_num: 512
=================
Working on pool: volumes
pg_num: 1024
pgp_num: 1024
=================
Working on pool: images
pg_num: 128
pgp_num: 128
root@or1010051251044:~#

Thanks,
Pardhiv Karri

On Tue, May 22, 2018 at 9:16 AM, David Turner <drakonstein@xxxxxxxxx> wrote:

This is all weird. Maybe it just doesn't have any PGs with data on them. Can you share `ceph df`, how many PGs you have in each pool, and which PGs are on osd 38?
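For the "which PGs are on osd.38" part, both the per-OSD listing and the per-pool counts can be pulled out of `ceph pg dump`. A rough sketch, assuming the plain-text dump format with a header row starting with pg_stat and an "up" column written as [a,b,c] (column positions vary between releases, so the script locates the column by name); 38 is just the OSD of interest in this thread:

# Print every PG whose "up" set contains the given OSD id.
ceph pg dump 2>/dev/null | awk -v osd=38 '
  /^pg_stat/ { for (i = 1; i <= NF; i++) if ($i == "up") upcol = i }   # find the "up" column by name
  /^[0-9]+\.[0-9a-f]+/ {                                               # PG rows look like 2.3ff
    s = $upcol; gsub(/\[|\]/, "", s)
    n = split(s, a, ",")
    for (j = 1; j <= n; j++) if (a[j] == osd) { print $1; next }
  }'

If your release has it, `ceph pg ls-by-osd osd.38` gives the same list directly; for osd.38 it would presumably come back empty here, matching the table above.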
On Tue, May 22, 2018, 11:19 AM Pardhiv Karri <meher4india@xxxxxxxxx> wrote:

Hi David,

root@or1010051251044:~# ceph osd tree
ID  WEIGHT   TYPE NAME                     UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
 -1 80.00000 root default
 -2 40.00000     rack rack_A1
 -3 20.00000         host or1010051251040
  0  2.00000             osd.0                  up   1.00000           1.00000
  1  2.00000             osd.1                  up   1.00000           1.00000
  2  2.00000             osd.2                  up   1.00000           1.00000
  3  2.00000             osd.3                  up   1.00000           1.00000
  4  2.00000             osd.4                  up   1.00000           1.00000
  5  2.00000             osd.5                  up   1.00000           1.00000
  6  2.00000             osd.6                  up   1.00000           1.00000
  7  2.00000             osd.7                  up   1.00000           1.00000
  8  2.00000             osd.8                  up   1.00000           1.00000
  9  2.00000             osd.9                  up   1.00000           1.00000
 -8 20.00000         host or1010051251044
 30  2.00000             osd.30                 up   1.00000           1.00000
 31  2.00000             osd.31                 up   1.00000           1.00000
 32  2.00000             osd.32                 up   1.00000           1.00000
 33  2.00000             osd.33                 up   1.00000           1.00000
 34  2.00000             osd.34                 up   1.00000           1.00000
 35  2.00000             osd.35                 up   1.00000           1.00000
 36  2.00000             osd.36                 up   1.00000           1.00000
 37  2.00000             osd.37                 up   1.00000           1.00000
 38  2.00000             osd.38                 up   1.00000           1.00000
 39  2.00000             osd.39                 up   1.00000           1.00000
 -4 20.00000     rack rack_B1
 -5 20.00000         host or1010051251041
 10  2.00000             osd.10                 up   1.00000           1.00000
 11  2.00000             osd.11                 up   1.00000           1.00000
 12  2.00000             osd.12                 up   1.00000           1.00000
 13  2.00000             osd.13                 up   1.00000           1.00000
 14  2.00000             osd.14                 up   1.00000           1.00000
 15  2.00000             osd.15                 up   1.00000           1.00000
 16  2.00000             osd.16                 up   1.00000           1.00000
 17  2.00000             osd.17                 up   1.00000           1.00000
 18  2.00000             osd.18                 up   1.00000           1.00000
 19  2.00000             osd.19                 up   1.00000           1.00000
 -9        0         host or1010051251045
 -6 20.00000     rack rack_C1
 -7 20.00000         host or1010051251042
 20  2.00000             osd.20                 up   1.00000           1.00000
 21  2.00000             osd.21                 up   1.00000           1.00000
 22  2.00000             osd.22                 up   1.00000           1.00000
 23  2.00000             osd.23                 up   1.00000           1.00000
 24  2.00000             osd.24                 up   1.00000           1.00000
 25  2.00000             osd.25                 up   1.00000           1.00000
 26  2.00000             osd.26                 up   1.00000           1.00000
 27  2.00000             osd.27                 up   1.00000           1.00000
 28  2.00000             osd.28                 up   1.00000           1.00000
 29  2.00000             osd.29                 up   1.00000           1.00000
-10        0         host or1010051251046
-11        0         host or1010051251023
root@or1010051251044:~#

root@or1010051251044:~# ceph -s
    cluster 6eacac66-087a-464d-94cb-9ca2585b98d5
     health HEALTH_OK
     monmap e3: 3 mons at {or1010051251037=10.51.251.37:6789/0,or1010051251038=10.51.251.38:6789/0,or1010051251039=10.51.251.39:6789/0}
            election epoch 144, quorum 0,1,2 or1010051251037,or1010051251038,or1010051251039
     osdmap e1814: 40 osds: 40 up, 40 in
      pgmap v446581: 1728 pgs, 4 pools, 7389 GB data, 1847 kobjects
            22221 GB used, 57472 GB / 79793 GB avail
                1728 active+clean
  client io 61472 kB/s wr, 30 op/s
root@or1010051251044:~#

Thanks,
Pardhiv Karri

On Tue, May 22, 2018 at 5:01 AM, David Turner <drakonstein@xxxxxxxxx> wrote:

What are your `ceph osd tree` and `ceph status` as well?

On Tue, May 22, 2018, 3:05 AM Pardhiv Karri <meher4india@xxxxxxxxx> wrote:

Hi,

We are using Ceph Hammer 0.94.9. Some of our OSDs never get any data or PGs even though they are at full crush weight, up, and running; the rest of the OSDs are at 50% full. Is there a bug in Hammer that is causing this? Does upgrading to Jewel or Luminous fix it? I have tried deleting and recreating this OSD N times and still hit the same issue, and I am seeing it in 3 of our 4 Ceph clusters in different datacenters. We are using HDDs as OSDs with SSDs as journal drives.

The output below is from our lab, and OSD 38 is the one that never fills.

ID  WEIGHT   REWEIGHT  SIZE    USE     AVAIL   %USE   VAR   TYPE NAME
 -1 80.00000        -       0       0       0      0     0  root default
 -2 40.00000        -  39812G   6190G  33521G  15.55  0.68      rack rack_A1
 -3 20.00000        -  19852G   3718G  16134G  18.73  0.82          host or1010051251040
  0  2.00000  1.00000   1861G    450G   1410G  24.21  1.07              osd.0
  1  2.00000  1.00000   1999G    325G   1673G  16.29  0.72              osd.1
  2  2.00000  1.00000   1999G    336G   1662G  16.85  0.74              osd.2
  3  2.00000  1.00000   1999G    386G   1612G  19.35  0.85              osd.3
  4  2.00000  1.00000   1999G    385G   1613G  19.30  0.85              osd.4
  5  2.00000  1.00000   1999G    364G   1634G  18.21  0.80              osd.5
  6  2.00000  1.00000   1999G    319G   1679G  15.99  0.70              osd.6
  7  2.00000  1.00000   1999G    434G   1564G  21.73  0.96              osd.7
  8  2.00000  1.00000   1999G    352G   1646G  17.63  0.78              osd.8
  9  2.00000  1.00000   1999G    362G   1636G  18.12  0.80              osd.9
 -8 20.00000        -  19959G   2472G  17387G  12.39  0.55          host or1010051251044
 30  2.00000  1.00000   1999G    362G   1636G  18.14  0.80              osd.30
 31  2.00000  1.00000   1999G    293G   1705G  14.66  0.65              osd.31
 32  2.00000  1.00000   1999G    202G   1796G  10.12  0.45              osd.32
 33  2.00000  1.00000   1999G    215G   1783G  10.76  0.47              osd.33
 34  2.00000  1.00000   1999G    192G   1806G   9.61  0.42              osd.34
 35  2.00000  1.00000   1999G    337G   1661G  16.90  0.74              osd.35
 36  2.00000  1.00000   1999G    206G   1792G  10.35  0.46              osd.36
 37  2.00000  1.00000   1999G    266G   1732G  13.33  0.59              osd.37
 38  2.00000  1.00000   1999G  55836k   1998G   0.00     0              osd.38
 39  2.00000  1.00000   1968G    396G   1472G  20.12  0.89              osd.39
 -4 20.00000        -       0       0       0      0     0      rack rack_B1
 -5 20.00000        -  19990G   5978G  14011G  29.91  1.32          host or1010051251041
 10  2.00000  1.00000   1999G    605G   1393G  30.27  1.33              osd.10
 11  2.00000  1.00000   1999G    592G   1406G  29.62  1.30              osd.11
 12  2.00000  1.00000   1999G    539G   1460G  26.96  1.19              osd.12
 13  2.00000  1.00000   1999G    684G   1314G  34.22  1.51              osd.13
 14  2.00000  1.00000   1999G    510G   1488G  25.56  1.13              osd.14
 15  2.00000  1.00000   1999G    590G   1408G  29.52  1.30              osd.15
 16  2.00000  1.00000   1999G    595G   1403G  29.80  1.31              osd.16
 17  2.00000  1.00000   1999G    652G   1346G  32.64  1.44              osd.17
 18  2.00000  1.00000   1999G    544G   1454G  27.23  1.20              osd.18
 19  2.00000  1.00000   1999G    665G   1333G  33.27  1.46              osd.19
 -9        0        -       0       0       0      0     0          host or1010051251045
 -6 20.00000        -       0       0       0      0     0      rack rack_C1
 -7 20.00000        -  19990G   5956G  14033G  29.80  1.31          host or1010051251042
 20  2.00000  1.00000   1999G    701G   1297G  35.11  1.55              osd.20
 21  2.00000  1.00000   1999G    573G   1425G  28.70  1.26              osd.21
 22  2.00000  1.00000   1999G    652G   1346G  32.64  1.44              osd.22
 23  2.00000  1.00000   1999G    612G   1386G  30.62  1.35              osd.23
 24  2.00000  1.00000   1999G    614G   1384G  30.74  1.35              osd.24
 25  2.00000  1.00000   1999G    561G   1437G  28.11  1.24              osd.25
 26  2.00000  1.00000   1999G    558G   1440G  27.93  1.23              osd.26
 27  2.00000  1.00000   1999G    610G   1388G  30.52  1.34              osd.27
 28  2.00000  1.00000   1999G    515G   1483G  25.81  1.14              osd.28
 29  2.00000  1.00000   1999G    555G   1443G  27.78  1.22              osd.29
-10        0        -       0       0       0      0     0          host or1010051251046
-11        0        -       0       0       0      0     0          host or1010051251023
                TOTAL  79793G  18126G  61566G  22.72
MIN/MAX VAR: 0/1.55  STDDEV: 8.26

Thanks,
Pardhiv Karri
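Since osd.38 is up, in, and weighted like its peers but never receives data, one more check worth making is to replay placements through the CRUSH map outside the cluster and see whether CRUSH ever selects osd.38 at all. A sketch, not taken from this thread, assuming the compiled map is saved as crushmap.bin and the pools use replicated ruleset 0 with size 3; adjust --rule and --num-rep to match `ceph osd pool get <pool> crush_ruleset` and `size`:

# Grab the current CRUSH map and replay 1024 sample placements through it.
ceph osd getcrushmap -o crushmap.bin

# Per-device tally of how many simulated mappings land on each OSD;
# a device that stays at 0 is one this rule never reaches.
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 \
          --min-x 0 --max-x 1023 --show-utilization

# Or list the individual mappings and grep for the OSD in question.
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 \
          --min-x 0 --max-x 1023 --show-mappings | grep -w 38

If the simulation also never picks osd.38, the CRUSH map or tunables are the place to look; if it does pick it, the map is fine and something else is keeping data off that OSD.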
--
Pardhiv Karri
"Rise and Rise again until LAMBS become LIONS"
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com