Re: MDS not becoming active after migrating to cephadm

Hi,

I just migrated my two-node Octopus cluster to cephadm, and I have the
same problem: the MDS daemons started in containers are not available
to Ceph. I had to keep the old systemd MDS running to keep the fs
available.
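
What I would expect is the containerised daemons showing up in the fs
map next to (or instead of) the old ones. A quick check, assuming
ceph fs status / ceph mds stat are the right tools here:

[root@s0 ~]# ceph fs status fs   # I'd expect mds.fs.s0.khuhto to show up here once it joins
[root@s0 ~]# ceph mds stat       # currently only the legacy s0/s1 daemons appear (see ceph -s below)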

Some outputs:

============================================================================
[root@s0 ~]# ceph health detail
HEALTH_WARN 2 stray daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm
    stray daemon mds.s0 on host s0.harlan.de not managed by cephadm
    stray daemon mds.s1 on host s1.harlan.de not managed by cephadm
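
As far as I can tell, the two stray daemons are just the legacy systemd
MDS I kept running. I assume that once the containerised MDS actually
join the fs, this warning would be cleared by retiring the old units,
roughly like this (not done yet, because right now it takes the fs down):

[root@s0 ~]# systemctl disable --now ceph-mds@s0
[root@s1 ~]# systemctl disable --now ceph-mds@s1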

============================================================================
[root@s0 ~]# ceph -s
  cluster:
    id:     86bbd6c5-ae96-4c78-8a5e-50623f0ae524
    health: HEALTH_WARN
            2 stray daemon(s) not managed by cephadm

  services:
    mon: 3 daemons, quorum s0,s1,r1 (age 2h)
    mgr: s1(active, since 2h), standbys: s0
    mds: fs:1 {0=s0=up:active} 1 up:standby
    osd: 10 osds: 10 up (since 2h), 10 in (since 11h)

  data:
    pools:   6 pools, 289 pgs
    objects: 1.85M objects, 1.7 TiB
    usage:   3.6 TiB used, 13 TiB / 16 TiB avail
    pgs:     289 active+clean

  io:
    client:   85 B/s rd, 855 KiB/s wr, 0 op/s rd, 110 op/s wr


============================================================================
root@r1:/tmp# ceph fs ls
name: fs, metadata pool: cfs_md, data pools: [cfs ]

============================================================================
root@r1:/tmp# ceph orch ps --daemon-type mds
NAME              HOST          STATUS        REFRESHED  AGE  VERSION  IMAGE NAME               IMAGE ID      CONTAINER ID
mds.fs.s0.khuhto  s0.harlan.de  running (2h)  3m ago     2h   15.2.13  docker.io/ceph/ceph:v15  2cf504fded39  0a65ce57d168
mds.fs.s1.ajxyaf  s1.harlan.de  running (2h)  3m ago     2h   15.2.13  docker.io/ceph/ceph:v15  2cf504fded39  407bd3bdb334

Both WORKING MDS are running via systemctl, not cephadm. When I stop
them, the fs is no longer available.
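
I'm not sure where to look next. My guess is to check the auth entries
for the new daemon names and to bounce them through the orchestrator,
something along these lines (no idea whether that actually helps):

[root@s0 ~]# ceph auth get mds.fs.s0.khuhto              # does the cephadm-created key exist with mds caps?
[root@s0 ~]# ceph orch daemon restart mds.fs.s0.khuhto   # restart the containerised daemon via cephadm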

============================================================================
[root@s0 qemu]# systemctl status ceph-mds@s0
● ceph-mds@s0.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-10-25 18:31:11 CEST; 2h 34min ago
 Main PID: 326528 (ceph-mds)
    Tasks: 23
   Memory: 1.1G
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@s0.service
           └─326528 /usr/bin/ceph-mds -f --cluster ceph --id s0 --setuser ceph --setgroup ceph

Okt 25 18:31:11 s0.harlan.de systemd[1]: Started Ceph metadata server daemon.
Okt 25 18:31:11 s0.harlan.de ceph-mds[326528]: starting mds.s0 at

============================================================================
[root@s1 ceph]# systemctl status ceph-mds@s1
● ceph-mds@s1.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-10-25 20:58:17 CEST; 6min ago
 Main PID: 266482 (ceph-mds)
    Tasks: 15
   Memory: 15.2M
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@s1.service
           └─266482 /usr/bin/ceph-mds -f --cluster ceph --id s1 --setuser ceph --setgroup ceph

Oct 25 20:58:17 s1.harlan.de systemd[1]: Started Ceph metadata server daemon.
Oct 25 20:58:17 s1.harlan.de ceph-mds[266482]: starting mds.s1 at

============================================================================
[root@s0 qemu]# podman ps
CONTAINER ID  IMAGE                    COMMAND               CREATED      STATUS          PORTS       NAMES
4a66b2a1b9d1  docker.io/ceph/ceph:v15  -n mon.s0 -f --se...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mon.s0
4319d986bbfc  docker.io/ceph/ceph:v15  -n mgr.s0 -f --se...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mgr.s0
58bd2d0b1f3d  docker.io/ceph/ceph:v15  -n osd.1 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.1
14f80276cb4a  docker.io/ceph/ceph:v15  -n osd.3 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.3
37f51999a723  docker.io/ceph/ceph:v15  -n osd.4 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.4
fda7ef3bd7ea  docker.io/ceph/ceph:v15  -n osd.5 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.5
d390a53b3d29  docker.io/ceph/ceph:v15  -n osd.9 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.9
0a65ce57d168  docker.io/ceph/ceph:v15  -n mds.fs.s0.khuh...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mds.fs.s0.khuhto

============================================================================
[root@s1 ceph]# podman ps
CONTAINER ID  IMAGE                    COMMAND               CREATED      STATUS          PORTS       NAMES
69bf42cb521b  docker.io/ceph/ceph:v15  -n mon.s1 -f --se...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mon.s1
3e685ed6d16d  docker.io/ceph/ceph:v15  -n mgr.s1 -f --se...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mgr.s1
7d7c27522504  docker.io/ceph/ceph:v15  -n osd.0 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.0
301e45f7b2a1  docker.io/ceph/ceph:v15  -n osd.2 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.2
927a20f667d4  docker.io/ceph/ceph:v15  -n osd.6 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.6
5ba0c8429422  docker.io/ceph/ceph:v15  -n osd.7 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.7
80ba3cd5826d  docker.io/ceph/ceph:v15  -n osd.8 -f --set...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-osd.8
407bd3bdb334  docker.io/ceph/ceph:v15  -n mds.fs.s1.ajxy...  3 hours ago  Up 3 hours ago              ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524-mds.fs.s1.ajxyaf

There is no MDS-related logfile inside the mds container!

ps inside the mds container:
============================================================================
[root@s0 ceph]# ps auxwww
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
ceph           1  0.0  0.0 354204 38056 ?        Ssl  16:29   0:03 /usr/bin/ceph-mds -n mds.fs.s0.khuhto -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug 
root          43  0.0  0.0  12128  3352 pts/0    Ss   19:07   0:00 /bin/bash
root          61  0.0  0.0  44636  3480 pts/0    R+   19:08   0:00 ps auxwww
[root@s0 ceph]# 
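
If I read the ps line above correctly, the containerised MDS runs with
--default-log-to-file=false and --default-log-to-stderr=true, so there
will never be a logfile inside the container and everything should end
up in journald on the host. That is where I am looking instead
(assuming I got the unit name right):

[root@s0 ~]# cephadm logs --name mds.fs.s0.khuhto
[root@s0 ~]# journalctl -u ceph-86bbd6c5-ae96-4c78-8a5e-50623f0ae524@mds.fs.s0.khuhto
[root@s0 ~]# podman logs 0a65ce57d168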

Thanx for any help,

Magnus

-- 
Dr. Magnus Harlander
Mail: harlan@xxxxxxxxx
Web: www.harlan.de
Stiftung: www.harlander-stiftung.de
Ceterum censeo bitcoin esse delendam!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
