Re: Ceph - incorrect output of ceph osd tree

There is a config option "mon osd min up ratio" (default 0.3): if too many OSDs are already down, the monitors will not mark further OSDs down. Perhaps that's the culprit here?
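
If you want to verify what the monitors are actually running with, the effective value can be read from a monitor's admin socket, and a dead OSD can also be marked down by hand. A minimal sketch (run on the monitor host; "mon.osd01" is just a placeholder for your monitor's name):

[root@osd01 ~]# ceph daemon mon.osd01 config get mon_osd_min_up_ratio   # effective value on that monitor
[root@osd01 ~]# ceph osd down 4   # manually mark osd.4 down; it stays down only if the daemon really is gone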

Andras


On 01/31/2018 02:21 PM, Marc Roos wrote:
Maybe the process is still responding on an active session?
If you can't ping a host, that only means you cannot ping it.
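
A failed ping by itself doesn't change the OSD map. To see what the cluster map actually records for osd.4 (up/down, in/out, and the address it last registered with), something like the following works (read-only commands, assuming a keyring with the usual mon caps):

[root@osd01 ~]# ceph osd dump | grep '^osd.4 '   # map entry for osd.4: state, weight, registered addresses
[root@osd01 ~]# ceph osd find 4                  # host / CRUSH location the cluster has on record for osd.4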


-----Original Message-----
From: Steven Vacaroaia [mailto:stef97@xxxxxxxxx]
Sent: Wednesday, 31 January 2018 19:47
To: ceph-users
Subject:  Ceph - incorrect output of ceph osd tree

Hi,

Why does ceph osd tree report that osd.4 is up when the server on which
osd.4 is running is actually down?

Any help will be appreciated

[root@osd01 ~]# ping -c 2 osd02
PING osd02 (10.10.30.182) 56(84) bytes of data.
From osd01 (10.10.30.181) icmp_seq=1 Destination Host Unreachable
From osd01 (10.10.30.181) icmp_seq=2 Destination Host Unreachable


[root@osd01 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
  -9             0 root ssds
-10             0     host osd01-ssd
-11             0     host osd02-ssd
-12             0     host osd04-ssd
  -1       4.22031 root default
  -3       1.67967     host osd01
   0   hdd 0.55989         osd.0        down        0 1.00000
   3   hdd 0.55989         osd.3        down        0 1.00000
   6   hdd 0.55989         osd.6          up  1.00000 1.00000
  -5       1.67967     host osd02
   1   hdd 0.55989         osd.1        down  1.00000 1.00000
   4   hdd 0.55989         osd.4          up  1.00000 1.00000
   7   hdd 0.55989         osd.7        down  1.00000 1.00000
  -7       0.86096     host osd04
   2   hdd 0.28699         osd.2        down        0 1.00000
   5   hdd 0.28699         osd.5        down  1.00000 1.00000
   8   hdd 0.28699         osd.8        down  1.00000 1.00000
[root@osd01 ~]# ceph tell osd.4 bench
^CError EINTR: problem getting command descriptions from osd.4
[root@osd01 ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE VAR  PGS
  0   hdd 0.55989        0     0      0     0    0    0   0
  3   hdd 0.55989        0     0      0     0    0    0   0
  6   hdd 0.55989  1.00000  573G 16474M  557G 2.81 0.84   0
  1   hdd 0.55989  1.00000  573G 16516M  557G 2.81 0.84   0
  4   hdd 0.55989  1.00000  573G 16465M  557G 2.80 0.84   0
  7   hdd 0.55989  1.00000  573G 16473M  557G 2.81 0.84   0
  2   hdd 0.28699        0     0      0     0    0    0   0
  5   hdd 0.28699  1.00000  293G 16466M  277G 5.47 1.63   0
  8   hdd 0.28699  1.00000  293G 16461M  277G 5.47 1.63   0
                     TOTAL 2881G 98857M 2784G 3.35
MIN/MAX VAR: 0.84/1.63  STDDEV: 1.30
[root@osd01 ~]# ceph osd df tree
ID  CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE VAR  PGS TYPE NAME
 -9             0        -     0      0     0    0    0   - root ssds
-10             0        -     0      0     0    0    0   -     host osd01-ssd
-11             0        -     0      0     0    0    0   -     host osd02-ssd
-12             0        -     0      0     0    0    0   -     host osd04-ssd
 -1       4.22031        - 2881G 98857M 2784G 3.35 1.00   - root default
 -3       1.67967        -  573G 16474M  557G 2.81 0.84   -     host osd01
  0   hdd 0.55989        0     0      0     0    0    0   0         osd.0
  3   hdd 0.55989        0     0      0     0    0    0   0         osd.3
  6   hdd 0.55989  1.00000  573G 16474M  557G 2.81 0.84   0         osd.6
 -5       1.67967        - 1720G 49454M 1671G 2.81 0.84   -     host osd02
  1   hdd 0.55989  1.00000  573G 16516M  557G 2.81 0.84   0         osd.1
  4   hdd 0.55989  1.00000  573G 16465M  557G 2.80 0.84   0         osd.4
  7   hdd 0.55989  1.00000  573G 16473M  557G 2.81 0.84   0         osd.7
 -7       0.86096        -  587G 32928M  555G 5.47 1.63   -     host osd04
  2   hdd 0.28699        0     0      0     0    0    0   0         osd.2
  5   hdd 0.28699  1.00000  293G 16466M  277G 5.47 1.63   0         osd.5
  8   hdd 0.28699  1.00000  293G 16461M  277G 5.47 1.63   0         osd.8
                      TOTAL 2881G 98857M 2784G 3.35
MIN/MAX VAR: 0.84/1.63  STDDEV: 1.30



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




