Forgot to reply to the list!
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, January 17, 2019 8:32 AM, David Young <funkypenguin@xxxxxxxxxxxxxx> wrote:
Thanks David,"ceph osd df" looks like this:---------root@node1:~# ceph osd dfID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS9 hdd 7.27698 1.00000 7.3 TiB 6.3 TiB 1008 GiB 86.47 1.22 12210 hdd 7.27698 1.00000 7.3 TiB 4.9 TiB 2.4 TiB 66.90 0.94 9411 hdd 7.27739 0.90002 7.3 TiB 5.4 TiB 1.9 TiB 74.29 1.05 10412 hdd 7.27698 0.95001 7.3 TiB 5.8 TiB 1.5 TiB 79.64 1.12 11513 hdd 0 0 0 B 0 B 0 B 0 0 1840 hdd 7.27739 1.00000 7.3 TiB 6.1 TiB 1.2 TiB 83.32 1.17 12041 hdd 7.27739 0.90002 7.3 TiB 5.6 TiB 1.7 TiB 76.88 1.08 11342 hdd 7.27739 0.80005 7.3 TiB 6.3 TiB 1.0 TiB 85.98 1.21 12343 hdd 0 0 0 B 0 B 0 B 0 0 3244 hdd 7.27739 0 0 B 0 B 0 B 0 0 2745 hdd 7.27739 1.00000 7.3 TiB 5.1 TiB 2.2 TiB 69.44 0.98 9846 hdd 0 0 0 B 0 B 0 B 0 0 3847 hdd 7.27739 1.00000 7.3 TiB 4.4 TiB 2.9 TiB 60.24 0.85 8448 hdd 7.27739 1.00000 7.3 TiB 4.5 TiB 2.8 TiB 61.66 0.87 8549 hdd 7.27739 1.00000 7.3 TiB 4.7 TiB 2.5 TiB 65.07 0.92 9050 hdd 7.27739 1.00000 7.3 TiB 4.7 TiB 2.6 TiB 64.39 0.91 8751 hdd 7.27739 1.00000 7.3 TiB 5.1 TiB 2.2 TiB 70.22 0.99 9552 hdd 7.27739 1.00000 7.3 TiB 4.9 TiB 2.4 TiB 66.69 0.94 9853 hdd 7.27739 1.00000 7.3 TiB 4.8 TiB 2.5 TiB 66.33 0.93 9754 hdd 7.27739 1.00000 7.3 TiB 4.3 TiB 3.0 TiB 59.20 0.83 820 hdd 7.27699 1.00000 7.3 TiB 3.8 TiB 3.5 TiB 52.34 0.74 711 hdd 7.27699 1.00000 7.3 TiB 4.9 TiB 2.4 TiB 67.62 0.95 892 hdd 7.27699 0.90002 7.3 TiB 4.9 TiB 2.4 TiB 66.69 0.94 813 hdd 7.27699 1.00000 7.3 TiB 4.7 TiB 2.5 TiB 65.21 0.92 884 hdd 7.27699 0.90002 7.3 TiB 4.9 TiB 2.4 TiB 67.25 0.95 935 hdd 7.27739 0.95001 7.3 TiB 4.2 TiB 3.0 TiB 58.39 0.82 786 hdd 7.27739 1.00000 7.3 TiB 5.7 TiB 1.6 TiB 78.35 1.10 1057 hdd 7.27739 0.95001 7.3 TiB 5.2 TiB 2.1 TiB 71.65 1.01 988 hdd 7.27739 1.00000 7.3 TiB 5.1 TiB 2.2 TiB 69.92 0.98 9414 hdd 7.27739 0.95001 7.3 TiB 5.3 TiB 2.0 TiB 72.46 1.02 10015 hdd 7.27739 0.85004 7.3 TiB 6.0 TiB 1.2 TiB 82.93 1.17 11916 hdd 7.27739 1.00000 7.3 TiB 6.3 TiB 1.0 TiB 86.11 1.21 11717 hdd 7.27739 0.85004 7.3 TiB 5.2 TiB 2.1 TiB 71.48 1.01 10318 hdd 7.27739 1.00000 7.3 TiB 5.2 TiB 2.1 TiB 71.43 1.00 10019 hdd 7.27739 1.00000 7.3 TiB 5.2 TiB 2.0 TiB 72.14 1.01 10320 hdd 7.27739 1.00000 7.3 TiB 5.7 TiB 1.6 TiB 78.13 1.10 11021 hdd 7.27739 1.00000 7.3 TiB 6.2 TiB 1.0 TiB 85.58 1.20 12522 hdd 7.27739 1.00000 7.3 TiB 5.2 TiB 2.1 TiB 71.71 1.01 10323 hdd 7.27739 0.95001 7.3 TiB 6.0 TiB 1.2 TiB 83.04 1.17 11024 hdd 0 1.00000 7.3 TiB 831 GiB 6.5 TiB 11.15 0.16 1325 hdd 7.27739 1.00000 7.3 TiB 6.3 TiB 978 GiB 86.87 1.22 12126 hdd 7.27739 1.00000 7.3 TiB 5.2 TiB 2.1 TiB 70.86 1.00 10027 hdd 7.27739 1.00000 7.3 TiB 5.9 TiB 1.4 TiB 80.92 1.14 11528 hdd 7.27739 1.00000 7.3 TiB 6.5 TiB 826 GiB 88.91 1.25 12129 hdd 7.27739 1.00000 7.3 TiB 5.2 TiB 2.1 TiB 70.99 1.00 9530 hdd 0 1.00000 7.3 TiB 2.0 TiB 5.3 TiB 26.99 0.38 3331 hdd 7.27739 1.00000 7.3 TiB 4.6 TiB 2.7 TiB 62.61 0.88 9032 hdd 7.27739 0.90002 7.3 TiB 5.5 TiB 1.8 TiB 75.65 1.06 10733 hdd 7.27739 1.00000 7.3 TiB 5.7 TiB 1.6 TiB 77.99 1.10 11134 hdd 7.27739 0 0 B 0 B 0 B 0 0 1035 hdd 7.27739 1.00000 7.3 TiB 5.3 TiB 2.0 TiB 73.16 1.03 10636 hdd 7.27739 0.95001 7.3 TiB 6.6 TiB 694 GiB 90.68 1.28 12637 hdd 7.27739 1.00000 7.3 TiB 5.5 TiB 1.8 TiB 75.83 1.07 10638 hdd 7.27739 0.95001 7.3 TiB 6.2 TiB 1.1 TiB 85.02 1.20 11539 hdd 7.27739 1.00000 7.3 TiB 4.9 TiB 2.4 TiB 67.16 0.94 94TOTAL 400 TiB 266 TiB 134 TiB 71.08MIN/MAX VAR: 0.16/1.28 STDDEV: 13.96root@node1:~#------------The drives that are weighted zero are "out" pending the completion of the remaining degraded objects after an OSD failure:-----------data:pools: 2 pools, 1028 
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, January 17, 2019 7:23 AM, David C <dcsysengineer@xxxxxxxxx> wrote:

On Wed, 16 Jan 2019, 02:20 David Young <funkypenguin@xxxxxxxxxxxxxx> wrote:

Hi folks,

My ceph cluster is used exclusively for cephfs, as follows:

---
root@node1:~# grep ceph /etc/fstab
node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret
root@node1:~#
---

"rados df" shows me the following:

---
root@node1:~# rados df
POOL_NAME       USED    OBJECTS  CLONES COPIES    MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD      WR_OPS    WR
cephfs_metadata 197 MiB 49066    0      98132     0                  0       0        9934744   55 GiB  57244243  232 GiB
media           196 TiB 51768595 0      258842975 0                  1       203534   477915206 509 TiB 165167618 292 TiB

total_objects   51817661
total_used      266 TiB
total_avail     135 TiB
total_space     400 TiB
root@node1:~#
---

But "df" on the mounted cephfs volume shows me:

---
root@node1:~# df -h /ceph
Filesystem          Size  Used  Avail  Use%  Mounted on
10.20.30.22:6789:/  207T  196T  11T    95%   /ceph
root@node1:~#
---

And "ceph -s" shows me:

---
  data:
    pools:   2 pools, 1028 pgs
    objects: 51.82 M objects, 196 TiB
    usage:   266 TiB used, 135 TiB / 400 TiB avail
---

"media" is an EC pool with a size of 5 (4+1), so I can expect 1TB of data to consume 1.25TB of raw space.

My question is: why does "df" show me I have 11TB free, when "rados df" shows me I have 135TB (raw) available?

Probably because your OSDs are quite unbalanced. What does your 'ceph osd df' look like?

Thanks!
D
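P.S. To put some rough numbers on the imbalance point above: as far as I understand it, the free space that "df" reports on a cephfs mount comes from the pool's MAX AVAIL, which is effectively capped by the fullest OSD rather than by the cluster-wide average. The back-of-the-envelope below is only a sketch; it assumes the default full ratio of 0.95 (worth confirming with "ceph osd dump | grep full_ratio") and that new writes spread across OSDs in roughly the same proportions as the existing data.

---
# Rough sketch only; the figures are taken from the outputs above.
awk 'BEGIN {
    k = 4; m = 1                  # "media" is an EC 4+1 pool
    printf "raw-space factor: %.2f\n", (k + m) / k   # 1 TiB of data ~ 1.25 TiB raw

    fullest    = 90.68 / 100      # most-used OSD (osd.36), from "ceph osd df"
    full_ratio = 0.95             # assumed default; writes stop around this point
    data       = 196              # TiB currently stored in the cephfs data pool

    # If new data lands in the same proportions as the old, the fullest OSD
    # reaches the full ratio once the pool has grown by full_ratio/fullest:
    printf "approx. writable headroom: %.1f TiB\n", data * (full_ratio / fullest - 1)
}'
---

That comes out to roughly 9 TiB of writable headroom, which is in the same ballpark as the ~11T that "df" reports, even though "rados df" still shows ~135 TiB of raw space free.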
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com