Re: Issue when executing "ceph fs new"

Did you enable multiple filesystems? Can you please share 'ceph fs dump'? Port 6789 is the MON port (v1; the v2 port is 3300). If you haven't enabled the multiple-filesystems flag yet, run:

ceph fs flag set enable_multiple true
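
Once the flag is set, 'ceph fs dump' should confirm it and a second filesystem can then be created. A minimal sketch of the follow-up (the filesystem and pool names below are placeholders; the pools must already exist):

# Confirm the flag; recent releases print an enable_multiple field
# near the top of 'ceph fs dump' output.
ceph fs dump | grep -i enable_multiple

# With the flag set, 'ceph fs new' should accept an additional filesystem.
# 'secondfs' and its pools are placeholder names.
ceph fs new secondfs secondfs-metadata secondfs-data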

Quoting elite_stu@xxxxxxx:

I tried removing the default fs and then it worked, but port 6789 is still not reachable via telnet.

ceph fs fail myfs
ceph fs rm myfs --yes-i-really-mean-it
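
To narrow down the 6789 question, one rough check is to see which addresses the MONs actually advertise and probe those ports directly (10.0.0.1 below is a placeholder; substitute an address from the dump):

# List the advertised MON addresses; expect v2 on 3300 and v1 on 6789.
# If only v2 addresses are listed, nothing will answer on 6789.
ceph mon dump

# Probe both ports against one of the listed addresses.
nc -vz 10.0.0.1 3300
nc -vz 10.0.0.1 6789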

bash-4.4$
bash-4.4$ ceph fs ls

name: kingcephfs, metadata pool: cephfs-king-metadata, data pools: [cephfs-king-data ]
bash-4.4$
bash-4.4$
bash-4.4$ ceph -s
  cluster:
    id:     de9af3fe-d3b1-4a4b-bf61-929a990295f6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,d (age 90m)
    mgr: a(active, since 5d), standbys: b
    mds: 1/1 daemons up, 5 standby
    osd: 3 osds: 3 up (since 100m), 3 in (since 6d)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   14 pools, 233 pgs
    objects: 633 objects, 450 MiB
    usage:   2.0 GiB used, 208 GiB / 210 GiB avail
    pgs:     233 active+clean

bash-4.4$
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

