Hi,
your subject says "...two monitors per host????", but I guess you're
asking about MDS daemons per host. ;-) What's the output of 'ceph orch
ls mds --export'? You're running 3 active MDS daemons; maybe you set
"count_per_host: 2" to have enough standby daemons? I don't think an
upgrade would do that, but I haven't tested Reef yet, so who knows.
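
For reference, a spec with "count_per_host: 2" would look roughly like
this in the export (just a sketch, your actual placement may use a
count or a label instead of an explicit host list):

service_type: mds
service_id: home
placement:
  # example placement only - adjust to what your export actually shows
  count_per_host: 2
  hosts:
  - cube
  - rhel1
  - hiho
  - story

With 4 hosts that would explain 8 daemons (3 active + 5 standby). If
that's what you find, you could remove the count_per_host line (or set
it to 1) and re-apply the spec with 'ceph orch apply -i <spec file>'.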
Regards,
Eugen
Quoting "Robert W. Eckert" <rob@xxxxxxxxxxxxxxx>:
Hi - I have a 4-node cluster, and started to have some odd access
issues to my file system "Home".
When I started investigating, I saw the message "1 MDSs behind on
trimming", but I also noticed that I seem to have 2 MDSs running on
each server: 3 daemons up, with 5 standby. Is this expected
behavior after the upgrade to 18.2, or did something go wrong?
[root@cube ~]# ceph status
  cluster:
    id:     fe3a7cb0-69ca-11eb-8d45-c86000d08867
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs behind on trimming

  services:
    mon: 3 daemons, quorum rhel1,cube,hiho (age 23m)
    mgr: hiho.bphqff(active, since 23m), standbys: rhel1.owrvaz, cube.sdhftu
    mds: 3/3 daemons up, 5 standby
    osd: 16 osds: 16 up (since 23m), 16 in (since 26h)
    rgw: 4 daemons active (4 hosts, 1 zones)

  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   12 pools, 769 pgs
    objects: 3.64M objects, 3.1 TiB
    usage:   17 TiB used, 49 TiB / 65 TiB avail
    pgs:     765 active+clean
             4   active+clean+scrubbing+deep

  io:
    client: 154 MiB/s rd, 38 op/s rd, 0 op/s wr
[root@cube ~]# ceph health detail
HEALTH_WARN 1 filesystem is degraded; 1 MDSs behind on trimming
[WRN] FS_DEGRADED: 1 filesystem is degraded
    fs home is degraded
[WRN] MDS_TRIM: 1 MDSs behind on trimming
    mds.home.story.sodtjs(mds.0): Behind on trimming (5546/128)
    max_segments: 128, num_segments: 5546
[root@cube ~]# ceph fs status home
home - 10 clients
====
RANK  STATE    MDS                  ACTIVITY  DNS    INOS   DIRS   CAPS
 0    replay   home.story.sodtjs              802k   766k   36.7k     0
 1    resolve  home.cube.xljmfz               735k   680k   39.0k     0
 2    resolve  home.rhel1.nwpmbg              322k   316k   17.5k     0
     POOL        TYPE     USED   AVAIL
  home.meta    metadata    361G  14.9T
  home.data      data     9206G  14.9T
STANDBY MDS
home.rhel1.ffrufi
home.hiho.mssdyh
home.cube.kmpbku
home.hiho.cfuswn
home.story.gmieio
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx