Re: Degraded FS on 18.2.0 - two monitors per host????


Hi - it settled back to having 4 MDS daemons, and the file system is up and running.  However, the 4 daemons are now spread across only 2 of the hosts (a quick way to check the orchestrator's MDS placement follows the output below):

[root@story ~]# ceph fs status home
home - 6 clients
====
RANK  STATE         MDS            ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  home.hiho.mssdyh  Reqs:    0 /s   232k   224k  13.0k   433
 1    active  home.cube.xljmfz  Reqs:    1 /s  2037k  1913k   123k  1530
 2    active  home.cube.kmpbku  Reqs:    0 /s  18.3k  1041    317    523
   POOL      TYPE     USED  AVAIL
home.meta  metadata  10.8G  14.9T
home.data    data    9206G  14.9T
  STANDBY MDS
home.hiho.cfuswn
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
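
For reference, assuming the cluster is managed by cephadm (the daemon names like home.cube.xljmfz look like cephadm-generated ones), the current placement and the spec behind it can be checked as sketched below. The last command is only an illustration, using the four host names that appear in this thread; it is not taken from any existing spec:

# where each MDS daemon is actually running
ceph orch ps --daemon-type mds

# the placement spec the orchestrator is applying to the MDS service
ceph orch ls --service-type mds --export

# illustrative only: spread 4 MDS daemons across all 4 hosts named in this thread
ceph orch apply mds home --placement="4 cube hiho rhel1 story"

Note that re-applying a placement spec makes the orchestrator redeploy daemons, so it is best done once the filesystem is healthy again.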




-----Original Message-----
From: Robert W. Eckert <rob@xxxxxxxxxxxxxxx> 
Sent: Friday, August 18, 2023 12:48 AM
To: ceph-users@xxxxxxx
Subject:  Degraded FS on 18.2.0 - two monitors per host????

Hi - I have a 4-node cluster and started to have some odd access issues to my file system "home".

When I started investigating, I saw the message "1 MDSs behind on trimming", but I also noticed that I seem to have 2 MDS daemons running on each server - 3 daemons up, with 5 on standby.  Is this expected behavior after the upgrade to 18.2, or did something go wrong?
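
For reference, 3 up plus 5 standby is 8 MDS daemons in total, i.e. two per host on a 4-node cluster. Whether that matches what was actually requested should be visible from the filesystem's max_mds and from the MDS service spec; a minimal sketch, assuming a cephadm-managed cluster:

# how many active ranks the filesystem is configured for
ceph fs get home | grep max_mds

# how many MDS daemons the orchestrator is told to deploy, and on which hosts
ceph orch ls --service-type mds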




[root@cube ~]# ceph status
  cluster:
    id:     fe3a7cb0-69ca-11eb-8d45-c86000d08867
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs behind on trimming

  services:
    mon: 3 daemons, quorum rhel1,cube,hiho (age 23m)
    mgr: hiho.bphqff(active, since 23m), standbys: rhel1.owrvaz, cube.sdhftu
    mds: 3/3 daemons up, 5 standby
    osd: 16 osds: 16 up (since 23m), 16 in (since 26h)
    rgw: 4 daemons active (4 hosts, 1 zones)

  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   12 pools, 769 pgs
    objects: 3.64M objects, 3.1 TiB
    usage:   17 TiB used, 49 TiB / 65 TiB avail
    pgs:     765 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   154 MiB/s rd, 38 op/s rd, 0 op/s wr



[root@cube ~]# ceph health detail
HEALTH_WARN 1 filesystem is degraded; 1 MDSs behind on trimming
[WRN] FS_DEGRADED: 1 filesystem is degraded
    fs home is degraded
[WRN] MDS_TRIM: 1 MDSs behind on trimming
    mds.home.story.sodtjs(mds.0): Behind on trimming (5546/128) max_segments: 128, num_segments: 5546
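
The 128 in that warning is the mds_log_max_segments threshold (128 is the default), and num_segments shows how far the journal of rank 0 has grown beyond it; while that rank is still in replay the journal generally cannot be trimmed, so the count is only expected to drop once the rank goes active again. To confirm which threshold is in effect (the daemon name below is simply taken from the health output above):

# cluster-wide setting for the MDS journal trimming threshold
ceph config get mds mds_log_max_segments

# value in effect on the daemon named in the warning
ceph config show mds.home.story.sodtjs mds_log_max_segments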

[root@cube ~]# ceph fs status home
home - 10 clients
====
RANK   STATE          MDS         ACTIVITY   DNS    INOS   DIRS   CAPS
 0     replay   home.story.sodtjs             802k   766k  36.7k     0
 1    resolve    home.cube.xljmfz             735k   680k  39.0k     0
 2    resolve   home.rhel1.nwpmbg             322k   316k  17.5k     0
   POOL      TYPE     USED  AVAIL
home.meta  metadata   361G  14.9T
home.data    data    9206G  14.9T
   STANDBY MDS
home.rhel1.ffrufi
home.hiho.mssdyh
home.cube.kmpbku
home.hiho.cfuswn
home.story.gmieio
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


