It works! Re: [ceph-users] ceph-mon is blocked after shutting down and ip address changed


 



It works after I removed the v2 address from ceph.conf. Hope this helps. Thank you!
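For reference, the change amounts to listing the monitor with a v1-only address. A minimal ceph.conf sketch (the IP below is a placeholder, not this cluster's actual address):

```ini
# Before: client resolves the v2 endpoint (3300) first; if nothing is
# listening there after the IP change, "ceph -s" can hang or time out.
[global]
mon_host = [v2:192.168.1.10:3300,v1:192.168.1.10:6789]

# After: v1-only address, so clients connect directly on 6789.
# mon_host = 192.168.1.10:6789
```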
[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     e384e8e6-94d5-4812-bfbb-d1b0468bdef5
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            noout,nobackfill,norecover flag(s) set
            9 osds down
            no active mgr
            Reduced data availability: 102 pgs inactive, 128 pgs down, 8 pgs stale
            Degraded data redundancy: 3664/29810955 objects degraded (0.012%), 28 pgs degraded, 29 pgs undersized
            1 monitors have not enabled msgr2
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 13h)
    mgr: no daemons active (since 3d)
    mds: cephfs:1 {0=ceph-node1=up:active(laggy or crashed)}
    osd: 12 osds: 3 up (since 3d), 12 in (since 3d)
         flags noout,nobackfill,norecover
    rgw: 1 daemon active (ceph-node1.rgw0)
 
  data:
    pools:   6 pools, 168 pgs
    objects: 2.49M objects, 9.3 TiB
    usage:   14 TiB used, 74 TiB / 88 TiB avail
    pgs:     97.024% pgs not active
             3664/29810955 objects degraded (0.012%)
             128 down
             13  stale+undersized+degraded+peered
             11  undersized+degraded+peered
             6   stale+undersized+peered
             5   undersized+peered
             4   active+undersized+degraded
             1   active+undersized
 



------------------ Original message ------------------
From: "Stefan Kooman" <stefan@xxxxxx>
Sent: Wednesday, 11 December 2019, 21:45
To: "Chu" <occj@xxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] ceph-mon is blocked after shutting down and ip address changed

Quoting Cc君 (occj@xxxxxx):
> Hi, the daemon is running when queried via the admin socket:
> [root@ceph-node1 ceph]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
> {
>     "name": "ceph-node1",
>     "rank": 0,
>     "state": "leader",
>     "election_epoch": 63,
>     "quorum": [
>         0
>     ],
>     "quorum_age": 40839,

Your ceph.conf shows that the messenger should listen on 3300 (v2) and 6789
(v1). If only 6789 is actually listening ... and the client tries to
connect to 3300 ... you might get a timeout as well. I'm not sure whether
the messenger falls back to v1.

What happens when you change ceph.conf (first without restarting the
mon) and try a "ceph -s" again with a ceph client on the monitor node?
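One quick way to confirm which messenger ports the mon is actually reachable on is a plain TCP connect test against 3300 (v2) and 6789 (v1). A minimal sketch (the monitor IP below is a placeholder; substitute the mon's new address):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

MON = "192.168.1.10"  # placeholder: the monitor's new IP address
for name, port in (("msgr2 (v2)", 3300), ("msgr1 (v1)", 6789)):
    state = "open" if port_open(MON, port) else "closed"
    print(f"{name} port {port}: {state}")
```

If 3300 shows closed while 6789 is open, that matches the timeout behavior described above: clients attempting v2 first would stall.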

Gr. Stefan

--
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
