On Tue, May 22, 2018, 11:19 AM Pardhiv Karri <meher4india@xxxxxxxxx> wrote:
Hi David,

root@or1010051251044:~# ceph osd tree
ID  WEIGHT   TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 80.00000 root default
 -2 40.00000     rack rack_A1
 -3 20.00000         host or1010051251040
  0  2.00000             osd.0                   up  1.00000          1.00000
  1  2.00000             osd.1                   up  1.00000          1.00000
  2  2.00000             osd.2                   up  1.00000          1.00000
  3  2.00000             osd.3                   up  1.00000          1.00000
  4  2.00000             osd.4                   up  1.00000          1.00000
  5  2.00000             osd.5                   up  1.00000          1.00000
  6  2.00000             osd.6                   up  1.00000          1.00000
  7  2.00000             osd.7                   up  1.00000          1.00000
  8  2.00000             osd.8                   up  1.00000          1.00000
  9  2.00000             osd.9                   up  1.00000          1.00000
 -8 20.00000         host or1010051251044
 30  2.00000             osd.30                  up  1.00000          1.00000
 31  2.00000             osd.31                  up  1.00000          1.00000
 32  2.00000             osd.32                  up  1.00000          1.00000
 33  2.00000             osd.33                  up  1.00000          1.00000
 34  2.00000             osd.34                  up  1.00000          1.00000
 35  2.00000             osd.35                  up  1.00000          1.00000
 36  2.00000             osd.36                  up  1.00000          1.00000
 37  2.00000             osd.37                  up  1.00000          1.00000
 38  2.00000             osd.38                  up  1.00000          1.00000
 39  2.00000             osd.39                  up  1.00000          1.00000
 -4 20.00000     rack rack_B1
 -5 20.00000         host or1010051251041
 10  2.00000             osd.10                  up  1.00000          1.00000
 11  2.00000             osd.11                  up  1.00000          1.00000
 12  2.00000             osd.12                  up  1.00000          1.00000
 13  2.00000             osd.13                  up  1.00000          1.00000
 14  2.00000             osd.14                  up  1.00000          1.00000
 15  2.00000             osd.15                  up  1.00000          1.00000
 16  2.00000             osd.16                  up  1.00000          1.00000
 17  2.00000             osd.17                  up  1.00000          1.00000
 18  2.00000             osd.18                  up  1.00000          1.00000
 19  2.00000             osd.19                  up  1.00000          1.00000
 -9        0         host or1010051251045
 -6 20.00000     rack rack_C1
 -7 20.00000         host or1010051251042
 20  2.00000             osd.20                  up  1.00000          1.00000
 21  2.00000             osd.21                  up  1.00000          1.00000
 22  2.00000             osd.22                  up  1.00000          1.00000
 23  2.00000             osd.23                  up  1.00000          1.00000
 24  2.00000             osd.24                  up  1.00000          1.00000
 25  2.00000             osd.25                  up  1.00000          1.00000
 26  2.00000             osd.26                  up  1.00000          1.00000
 27  2.00000             osd.27                  up  1.00000          1.00000
 28  2.00000             osd.28                  up  1.00000          1.00000
 29  2.00000             osd.29                  up  1.00000          1.00000
-10        0         host or1010051251046
-11        0         host or1010051251023
root@or1010051251044:~#

root@or1010051251044:~# ceph -s
    cluster 6eacac66-087a-464d-94cb-9ca2585b98d5
     health HEALTH_OK
     monmap e3: 3 mons at {or1010051251037=10.51.251.37:6789/0,or1010051251038=10.51.251.38:6789/0,or1010051251039=10.51.251.39:6789/0}
            election epoch 144, quorum 0,1,2 or1010051251037,or1010051251038,or1010051251039
     osdmap e1814: 40 osds: 40 up, 40 in
      pgmap v446581: 1728 pgs, 4 pools, 7389 GB data, 1847 kobjects
            22221 GB used, 57472 GB / 79793 GB avail
                1728 active+clean
  client io 61472 kB/s wr, 30 op/s
root@or1010051251044:~#

Thanks,
Pardhiv Karri

On Tue, May 22, 2018 at 5:01 AM, David Turner <drakonstein@xxxxxxxxx> wrote:

What are your `ceph osd tree` and `ceph status` as well?

On Tue, May 22, 2018, 3:05 AM Pardhiv Karri <meher4india@xxxxxxxxx> wrote:

Hi,

We are using Ceph Hammer 0.94.9. Some of our OSDs never get any data or PGs, even though they are up, running, and at their full CRUSH weight, while the rest of the OSDs are at around 50% full. Is there a bug in Hammer that causes this, and does upgrading to Jewel or Luminous fix it? I have deleted and recreated the affected OSD a number of times and still hit the same issue. I am seeing this in 3 of our 4 Ceph clusters in different datacenters.
We are using HDDs as OSDs and SSDs as journal drives. The output below is from our lab; OSD 38 is the one that never fills.

ID  WEIGHT   REWEIGHT SIZE   USE    AVAIL  %USE  VAR  TYPE NAME
 -1 80.00000        -      0      0      0     0    0 root default
 -2 40.00000        - 39812G  6190G 33521G 15.55 0.68     rack rack_A1
 -3 20.00000        - 19852G  3718G 16134G 18.73 0.82         host or1010051251040
  0  2.00000  1.00000  1861G   450G  1410G 24.21 1.07             osd.0
  1  2.00000  1.00000  1999G   325G  1673G 16.29 0.72             osd.1
  2  2.00000  1.00000  1999G   336G  1662G 16.85 0.74             osd.2
  3  2.00000  1.00000  1999G   386G  1612G 19.35 0.85             osd.3
  4  2.00000  1.00000  1999G   385G  1613G 19.30 0.85             osd.4
  5  2.00000  1.00000  1999G   364G  1634G 18.21 0.80             osd.5
  6  2.00000  1.00000  1999G   319G  1679G 15.99 0.70             osd.6
  7  2.00000  1.00000  1999G   434G  1564G 21.73 0.96             osd.7
  8  2.00000  1.00000  1999G   352G  1646G 17.63 0.78             osd.8
  9  2.00000  1.00000  1999G   362G  1636G 18.12 0.80             osd.9
 -8 20.00000        - 19959G  2472G 17387G 12.39 0.55         host or1010051251044
 30  2.00000  1.00000  1999G   362G  1636G 18.14 0.80             osd.30
 31  2.00000  1.00000  1999G   293G  1705G 14.66 0.65             osd.31
 32  2.00000  1.00000  1999G   202G  1796G 10.12 0.45             osd.32
 33  2.00000  1.00000  1999G   215G  1783G 10.76 0.47             osd.33
 34  2.00000  1.00000  1999G   192G  1806G  9.61 0.42             osd.34
 35  2.00000  1.00000  1999G   337G  1661G 16.90 0.74             osd.35
 36  2.00000  1.00000  1999G   206G  1792G 10.35 0.46             osd.36
 37  2.00000  1.00000  1999G   266G  1732G 13.33 0.59             osd.37
 38  2.00000  1.00000  1999G 55836k  1998G  0.00    0             osd.38
 39  2.00000  1.00000  1968G   396G  1472G 20.12 0.89             osd.39
 -4 20.00000        -      0      0      0     0    0     rack rack_B1
 -5 20.00000        - 19990G  5978G 14011G 29.91 1.32         host or1010051251041
 10  2.00000  1.00000  1999G   605G  1393G 30.27 1.33             osd.10
 11  2.00000  1.00000  1999G   592G  1406G 29.62 1.30             osd.11
 12  2.00000  1.00000  1999G   539G  1460G 26.96 1.19             osd.12
 13  2.00000  1.00000  1999G   684G  1314G 34.22 1.51             osd.13
 14  2.00000  1.00000  1999G   510G  1488G 25.56 1.13             osd.14
 15  2.00000  1.00000  1999G   590G  1408G 29.52 1.30             osd.15
 16  2.00000  1.00000  1999G   595G  1403G 29.80 1.31             osd.16
 17  2.00000  1.00000  1999G   652G  1346G 32.64 1.44             osd.17
 18  2.00000  1.00000  1999G   544G  1454G 27.23 1.20             osd.18
 19  2.00000  1.00000  1999G   665G  1333G 33.27 1.46             osd.19
 -9        0        -      0      0      0     0    0         host or1010051251045
 -6 20.00000        -      0      0      0     0    0     rack rack_C1
 -7 20.00000        - 19990G  5956G 14033G 29.80 1.31         host or1010051251042
 20  2.00000  1.00000  1999G   701G  1297G 35.11 1.55             osd.20
 21  2.00000  1.00000  1999G   573G  1425G 28.70 1.26             osd.21
 22  2.00000  1.00000  1999G   652G  1346G 32.64 1.44             osd.22
 23  2.00000  1.00000  1999G   612G  1386G 30.62 1.35             osd.23
 24  2.00000  1.00000  1999G   614G  1384G 30.74 1.35             osd.24
 25  2.00000  1.00000  1999G   561G  1437G 28.11 1.24             osd.25
 26  2.00000  1.00000  1999G   558G  1440G 27.93 1.23             osd.26
 27  2.00000  1.00000  1999G   610G  1388G 30.52 1.34             osd.27
 28  2.00000  1.00000  1999G   515G  1483G 25.81 1.14             osd.28
 29  2.00000  1.00000  1999G   555G  1443G 27.78 1.22             osd.29
-10        0        -      0      0      0     0    0         host or1010051251046
-11        0        -      0      0      0     0    0         host or1010051251023
               TOTAL 79793G 18126G 61566G 22.72
MIN/MAX VAR: 0/1.55  STDDEV: 8.26

Thanks,
Pardhiv Karri
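A quick way to confirm that CRUSH really never selects osd.38 (rather than the OSD simply not having received writes yet) is to count how often each OSD id appears in the PGs' up sets. A minimal sketch, assuming the `ceph pg dump pgs_brief` layout where the up set (e.g. [3,17,28]) is the third column; adjust the awk field number if your output differs:

    # Count, per OSD, how many PGs list that OSD in their "up" set.
    ceph pg dump pgs_brief 2>/dev/null | awk '
        $3 ~ /^\[/ {
            gsub(/[\[\]]/, "", $3)            # strip the brackets around the up set
            n = split($3, osds, ",")          # "3,17,28" -> individual OSD ids
            for (i = 1; i <= n; i++) count[osds[i]]++
        }
        END { for (o in count) printf "osd.%s %d\n", o, count[o] }' | sort -t. -k2 -n

If osd.38 does not appear in that list at all, CRUSH is never choosing it, which points at the CRUSH map or its tunables rather than at the OSD daemon itself.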
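The CRUSH map can also be tested offline against simulated inputs, which shows whether the compiled map is able to place data on osd.38 at all. A sketch using illustrative paths under /tmp; --num-rep 3 is an assumption and should match the affected pool's replica size, and --rule can be added to test a specific ruleset:

    # Pull the compiled CRUSH map from the cluster and decompile it for inspection
    # (check osd.38's weight and its host/rack bucket membership in the text dump).
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

    # Simulate placements and show how many inputs land on each device;
    # a device that gets ~0 here is being skipped by CRUSH itself.
    crushtool -i /tmp/crushmap --test --show-utilization --num-rep 3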
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Pardhiv Karri
"Rise and Rise again until LAMBS become LIONS"