Hi,
you need to deploy more MDS daemons, because your current active MDS is
already responsible for the existing CephFS. There are several ways to
do this; I like the yaml file approach of increasing the number of MDS
daemons in the service spec. As an example from a test cluster with one
CephFS, I added the line "count_per_host: 2" to get two more daemons
(one active, one standby for the new FS):
cat mds.yaml
service_type: mds
service_id: cephfs
placement:
  hosts:
    - host5
    - host6
  count_per_host: 2
Then apply:
ceph orch apply -i mds.yaml
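If you prefer a one-liner over a spec file, the orchestrator can do
much the same thing directly; the exact daemon count to request depends
on your setup, so take this as a sketch rather than a drop-in command
(it uses the same service id "cephfs" as the spec above):

ceph orch apply mds cephfs --placement="4 host5 host6"

That asks for four MDS daemons spread across the two hosts, i.e. two
per host.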
As soon as there are more daemons up, your second FS should become active.
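You can verify with something like this (output will of course differ
on your cluster):

ceph orch ps --daemon-type mds
ceph fs status kingcephfs

The first should list the additional daemons, the second should show
rank 0 of kingcephfs as active.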
Regards
Eugen
Quoting elite_stu@xxxxxxx:
Everything went fine except executing "ceph fs new kingcephfs
cephfs-king-metadata cephfs-king-data"; it shows 1 filesystem is
offline and 1 filesystem is online with fewer MDS than max_mds.
But I see there is one MDS service running, please help me fix
the issue, thanks a lot.
bash-4.4$
bash-4.4$ ceph fs new kingcephfs cephfs-king-metadata cephfs-king-data
new fs with metadata pool 7 and data pool 8
bash-4.4$
bash-4.4$ ceph -s
  cluster:
    id:     de9af3fe-d3b1-4a4b-bf61-929a990295f6
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds

  services:
    mon: 3 daemons, quorum a,b,c (age 2d)
    mgr: a(active, since 2d), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 2d), 3 in (since 2d)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 2/2 healthy
    pools:   14 pools, 233 pgs
    objects: 592 objects, 450 MiB
    usage:   1.5 GiB used, 208 GiB / 210 GiB avail
    pgs:     233 active+clean

  io:
    client: 921 B/s rd, 1 op/s rd, 0 op/s wr
bash-4.4$
bash-4.4$ ceph fs status
myfs - 0 clients
====
RANK      STATE         MDS      ACTIVITY     DNS    INOS   DIRS   CAPS
 0        active       myfs-a   Reqs:    0 /s    10     13     12      0
0-s   standby-replay   myfs-b   Evts:    0 /s     0      3      2      0
      POOL          TYPE     USED  AVAIL
 myfs-metadata    metadata   180k  65.9G
myfs-replicated     data        0  65.9G
kingcephfs - 0 clients
==========
        POOL             TYPE     USED  AVAIL
cephfs-king-metadata   metadata      0  65.9G
  cephfs-king-data       data        0  65.9G
MDS version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
bash-4.4$
bash-4.4$ ceph mds stat
myfs:1 kingcephfs:0 {myfs:0=myfs-a=up:active} 1 up:standby-replay
bash-4.4$
bash-4.4$ ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-replicated ]
name: kingcephfs, metadata pool: cephfs-king-metadata, data pools: [cephfs-king-data ]
bash-4.4$
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx