Re: mds container dies during deployment

Hello,


I wasn't reading the right documentation:
https://docs.ceph.com/docs/master/cephadm/install/#deploy-mdss
It explains how to do it correctly.

The command I was using only adds an additional mds daemon if you already have one.
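For reference, the documented way is an orchestrator apply (fs name and host are taken from my setup below; adjust the placement to your cluster):
-> ceph orch apply mds ec-data_fs --placement="1 magma01"
cephadm then creates and manages the mds daemons for the fs by itself.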


Hope this helps others.

Cheers, Simon

________________________________
From: Simon Sutter <ssutter@xxxxxxxxxxx>
Sent: Monday, 25 May 2020 16:44:54
To: ceph-users@xxxxxxx
Subject:  mds container dies during deployment

Hello everyone


I've got a fresh Ceph Octopus installation and I'm trying to set up a CephFS with an erasure-coded data pool.
The metadata pool was created with default settings.
The erasure-coded pool was created with this command:
-> ceph osd pool create ec-data_fs 128 erasure default
Then I enabled overwrites:
-> ceph osd pool set ec-data_fs allow_ec_overwrites true
And created the fs:
-> ceph fs new ec-data_fs meta_fs ec-data_fs --force
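For reference, the fs and the pools can be double-checked at this point with:
-> ceph fs ls
-> ceph osd pool ls detail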


Then I tried deploying the mds, but this fails:
-> ceph orch daemon add mds ec-data_fs magma01
which returns:
-> Deployed mds.ec-data_fs.magma01.ujpcly on host 'magma01'

However, the mds daemon is not there.
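The daemons cephadm actually runs can be listed with:
-> ceph orch ps
and that is where the new mds should show up.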

Apparently the container dies without any information, as can be seen in the journal:

May 25 16:11:56 magma01 podman[9348]: 2020-05-25 16:11:56.670510456 +0200 CEST m=+0.186462913 container create 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:56 magma01 systemd[1]: Started libpod-conmon-0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.scope.
May 25 16:11:56 magma01 systemd[1]: Started libcontainer container 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.112182262 +0200 CEST m=+0.628134873 container init 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.137011897 +0200 CEST m=+0.652964354 container start 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.137110412 +0200 CEST m=+0.653062853 container attach 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 systemd[1]: libpod-0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.scope: Consumed 327ms CPU time
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.182968802 +0200 CEST m=+0.698921275 container died 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.413743787 +0200 CEST m=+0.929696266 container remove 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)

Can someone help me debug this?
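So far the only idea left is to look at the container's own output; I assume something like this should show it (the daemon name is taken from the deploy message above):
-> cephadm logs --name mds.ec-data_fs.magma01.ujpcly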

Cheers
Simon

hosttech GmbH | Simon Sutter
hosttech.ch<https://www.hosttech.ch>

WE LOVE TO HOST YOU.

create your own website!
more information & online-demo: www.website-creator.ch<http://www.website-creator.ch>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


