Re: As mon should be deployed in odd numbers, and I have a fourth node, can I deploy a fourth mds only? - 14.2.7

Thanks Nathan,

Having worked on this a bit since then, I did make some progress:

[prdceph04][DEBUG ] connected to host: prdceph04
[prdceph04][DEBUG ] detect platform information from remote host
[prdceph04][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to prdceph04
[prdceph04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[prdceph04][WARNIN] mds keyring does not exist yet, creating one
[prdceph04][DEBUG ] create a keyring file
[prdceph04][DEBUG ] create path if it doesn't exist
[prdceph04][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.prdceph04 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-prdceph04/keyring
[prdceph04][INFO  ] Running command: systemctl enable ceph-mds@prdceph04
[prdceph04][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@prdceph04.service to /usr/lib/systemd/system/ceph-mds@.service.
[prdceph04][INFO  ] Running command: systemctl start ceph-mds@prdceph04
[prdceph04][INFO  ] Running command: systemctl enable ceph.target
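
The unit enabled and started without errors in the ceph-deploy output. For a quick local sanity check on prdceph04 I could run something like the following (unit name taken from the log above; the journalctl time window is just an example):

    # confirm the MDS service is actually running on the new node
    systemctl status ceph-mds@prdceph04

    # look at the daemon's recent log output for errors (e.g. keyring/auth problems)
    journalctl -u ceph-mds@prdceph04 --since "1 hour ago"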


However, I don't see it in the dashboard or in ceph -s:

    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            Degraded data redundancy: 1351072/1169191146 objects degraded (0.116%), 39 pgs degraded, 39 pgs undersized
            108 pgs not deep-scrubbed in time
            12 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum prdceph01,prdceph02,prdceph03 (age 2h)
    mgr: prdceph01(active, since 2h), standbys: prdceph03, prdceph04, prdceph02
    mds: ArchiveRepository:2 {0=prdceph03=up:active,1=prdceph02=up:active} 1 up:standby
    osd: 240 osds: 240 up (since 2h), 240 in; 323 remapped pgs

  data:
    pools:   7 pools, 8383 pgs
    objects: 389.73M objects, 460 TiB
    usage:   1.4 PiB used, 763 TiB / 2.1 PiB avail
    pgs:     1351072/1169191146 objects degraded (0.116%)
             11416275/1169191146 objects misplaced (0.976%)
             8054 active+clean
             242  active+remapped+backfill_wait
             42   active+remapped+backfilling
             29   active+undersized+degraded+remapped+backfill_wait
             10   active+undersized+degraded+remapped+backfilling
             6    active+clean+scrubbing+deep
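
For what it's worth, a few cluster-side checks I could try to see whether mds.prdceph04 has registered with the monitors at all (standard Ceph CLI; the filesystem name ArchiveRepository is taken from the output above):

    # summary of active and standby MDS daemons
    ceph mds stat

    # per-rank view for the filesystem, including standbys
    ceph fs status ArchiveRepository

    # full MDS map; a registered but unused daemon should show up as a standby
    ceph fs dump

    # compare the key ceph-deploy created with what the cluster has for this daemon
    ceph auth get mds.prdceph04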