Re: mds not starting ?

Hello John,

That was the info I missed (both creating the pools and the filesystem). It works now.
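
For anyone else hitting this: the steps from the createfs documentation boil down to creating a data pool and a metadata pool and then running "ceph fs new". The commands look roughly like this (the pool names, pg count and filesystem name here are just example values):

  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data

After that "ceph mds stat" should pick up the daemon instead of staying stuck at "0/0/0 up".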

Thank you very much.

Kind regards
  Petric

> -----Original Message-----
> From: John Spray [mailto:jspray@xxxxxxxxxx]
> Sent: Monday, 21 September 2015 14:41
> To: Frank, Petric (Petric)
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  mds not starting ?
> 
> Follow the instructions here to set up a filesystem:
> http://docs.ceph.com/docs/master/cephfs/createfs/
> 
> It looks like you haven't done "ceph fs new".
> 
> Cheers,
> John
> 
> On Mon, Sep 21, 2015 at 1:34 PM, Frank, Petric (Petric)
> <Petric.Frank@xxxxxxxxxxxxxxxxxx> wrote:
> > Hello,
> >
> > I'm facing a problem where the MDS does not seem to start.
> >
> > I started the MDS in debug mode with "ceph-mds -f -i storage08 --debug_mds 10", which outputs the following in the log:
> >
> > ---------------------- cut ---------------------------------
> > 2015-09-21 14:12:14.313534 7ff47983d780  0 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b), process ceph-mds, pid 24787 starting mds.storage08 at :/0
> > 2015-09-21 14:12:14.316062 7ff47983d780 10 mds.-1.0 168  MDSCacheObject
> > 2015-09-21 14:12:14.316074 7ff47983d780 10 mds.-1.0 2408 CInode
> > 2015-09-21 14:12:14.316075 7ff47983d780 10 mds.-1.0 16   elist<>::item   *7=112
> > 2015-09-21 14:12:14.316077 7ff47983d780 10 mds.-1.0 480  inode_t
> > 2015-09-21 14:12:14.316079 7ff47983d780 10 mds.-1.0 48    nest_info_t
> > 2015-09-21 14:12:14.316081 7ff47983d780 10 mds.-1.0 32    frag_info_t
> > 2015-09-21 14:12:14.316082 7ff47983d780 10 mds.-1.0 40   SimpleLock   *5=200
> > 2015-09-21 14:12:14.316083 7ff47983d780 10 mds.-1.0 48   ScatterLock  *3=144
> > 2015-09-21 14:12:14.316085 7ff47983d780 10 mds.-1.0 480 CDentry
> > 2015-09-21 14:12:14.316086 7ff47983d780 10 mds.-1.0 16   elist<>::item
> > 2015-09-21 14:12:14.316096 7ff47983d780 10 mds.-1.0 40   SimpleLock
> > 2015-09-21 14:12:14.316097 7ff47983d780 10 mds.-1.0 952 CDir
> > 2015-09-21 14:12:14.316098 7ff47983d780 10 mds.-1.0 16   elist<>::item   *2=32
> > 2015-09-21 14:12:14.316099 7ff47983d780 10 mds.-1.0 176  fnode_t
> > 2015-09-21 14:12:14.316100 7ff47983d780 10 mds.-1.0 48    nest_info_t *2
> > 2015-09-21 14:12:14.316101 7ff47983d780 10 mds.-1.0 32    frag_info_t *2
> > 2015-09-21 14:12:14.316103 7ff47983d780 10 mds.-1.0 264 Capability
> > 2015-09-21 14:12:14.316104 7ff47983d780 10 mds.-1.0 32   xlist<>::item   *2=64
> > 2015-09-21 14:12:14.316665 7ff47983d780 -1 mds.-1.0 log_to_monitors {default=true}
> > 2015-09-21 14:12:14.320840 7ff4740c8700  7 mds.-1.server handle_osd_map: full = 0 epoch = 20
> > 2015-09-21 14:12:14.320984 7ff47983d780 10 mds.beacon.storage08 _send up:boot seq 1
> > 2015-09-21 14:12:14.321060 7ff47983d780 10 mds.-1.0 create_logger
> > 2015-09-21 14:12:14.321234 7ff4740c8700  5 mds.-1.0 handle_mds_map epoch 1 from mon.1
> > 2015-09-21 14:12:14.321256 7ff4740c8700 10 mds.-1.0      my compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table}
> > 2015-09-21 14:12:14.321264 7ff4740c8700 10 mds.-1.0  mdsmap compat compat={},rocompat={},incompat={}
> > 2015-09-21 14:12:14.321267 7ff4740c8700 10 mds.-1.-1 map says i am 192.168.0.178:6802/24787 mds.-1.-1 state down:dne
> > 2015-09-21 14:12:14.321272 7ff4740c8700 10 mds.-1.-1 not in map yet
> > 2015-09-21 14:12:14.321305 7ff4740c8700  7 mds.-1.server handle_osd_map: full = 0 epoch = 20
> > 2015-09-21 14:12:14.321443 7ff4740c8700  5 mds.-1.-1 handle_mds_map epoch 1 from mon.1
> > 2015-09-21 14:12:14.321447 7ff4740c8700  5 mds.-1.-1  old map epoch 1 <= 1, discarding
> > 2015-09-21 14:12:18.321061 7ff4707c0700 10 mds.beacon.storage08 _send up:boot seq 2
> > 2015-09-21 14:12:19.321093 7ff470fc1700 10 MDSInternalContextBase::complete: N3MDS10C_MDS_TickE
> > 2015-09-21 14:12:22.321119 7ff4707c0700 10 mds.beacon.storage08 _send up:boot seq 3
> > 2015-09-21 14:12:24.321169 7ff470fc1700 10 MDSInternalContextBase::complete: N3MDS10C_MDS_TickE
> > ...
> > ---------------------- cut ---------------------------------
> >
> > "cheph -s" shows:
> >
> >     cluster 982924a3-32e7-401f-9975-018bb697d717
> >      health HEALTH_OK
> >      monmap e1: 3 mons at {0=192.168.0.176:6789/0,1=192.168.0.177:6789/0,2=192.168.0.178:6789/0}
> >             election epoch 6, quorum 0,1,2 0,1,2
> >      osdmap e20: 3 osds: 3 up, 3 in
> >       pgmap v39: 64 pgs, 1 pools, 0 bytes data, 0 objects
> >             15541 MB used, 388 GB / 403 GB avail
> >                   64 active+clean
> >
> > As you see MONs and OSDs seem to be happy.
> > I'm missing the "mdsmap" entry here. Trying to verify with the command "ceph mds stat" gives:
> >
> >   e1: 0/0/0 up
> >
> > The section of ceph.conf regarding mds reads:
> >   [mds]
> >     mds data = /var/lib/ceph/mds/ceph-$id
> >     keyring = /var/lib/ceph/mds/ceph-$id/keyring
> >   [mds.storage08]
> >     host = storage08
> >     mds addr = 192.168.0.178
> >
> >
> > Configuration:
> >   3 Hosts
> >   Gentoo Linux (kernel 4.0.5)
> >   Ceph 0.94.3
> >   All have MON and OSD
> >   One has MDS additionally.
> >
> >
> > Any idea on how to get the MDS running?
> >
> >
> > Kind regards
> >   Petric
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


