Hi Gregory,

Thanks for your response. I installed Ceph v0.80.5 on a single node, and my MDS status is always "creating". The output of "ceph -s" is as follows:

root@ubuntu165:~# ceph -s
    cluster 3cd658c3-34ca-43f3-93c7-786e5162e412
     health HEALTH_WARN 200 pgs incomplete; 200 pgs stuck inactive; 200 pgs stuck unclean; 50 requests are blocked > 32 sec
     monmap e1: 1 mons at {ubuntu165=10.62.170.165:6789/0}, election epoch 1, quorum 0 ubuntu165
     mdsmap e19: 1/1/1 up {0=ubuntu165=up:creating}
     osdmap e32: 1 osds: 1 up, 1 in
      pgmap v64: 200 pgs, 4 pools, 0 bytes data, 0 objects
            1059 MB used, 7448 GB / 7449 GB avail
                 200 creating+incomplete

root@ubuntu165:~# ceph -v
ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)

Thanks.

2014-09-18 1:22 GMT+08:00 Gregory Farnum <greg at inktank.com>:
> That looks like the beginning of an mds creation to me. What's your
> problem in more detail, and what's the output of "ceph -s"?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Sep 15, 2014 at 5:34 PM, Shun-Fa Yang <shunfa at gmail.com> wrote:
> > Hi all,
> >
> > I installed Ceph v0.80.5 on Ubuntu 14.04 server by using apt-get...
> >
> > The MDS log shows the following:
> >
> > 2014-09-15 17:24:58.291305 7fd6f6d47800  0 ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6), process ceph-mds, pid 10487
> > 2014-09-15 17:24:58.302164 7fd6f6d47800 -1 mds.-1.0 *** no OSDs are up as of epoch 8, waiting
> > 2014-09-15 17:25:08.302930 7fd6f6d47800 -1 mds.-1.-1 *** no OSDs are up as of epoch 8, waiting
> > 2014-09-15 17:25:19.322092 7fd6f1938700  1 mds.-1.0 handle_mds_map standby
> > 2014-09-15 17:25:19.325024 7fd6f1938700  1 mds.0.3 handle_mds_map i am now mds.0.3
> > 2014-09-15 17:25:19.325026 7fd6f1938700  1 mds.0.3 handle_mds_map state change up:standby --> up:creating
> > 2014-09-15 17:25:19.325196 7fd6f1938700  0 mds.0.cache creating system inode with ino:1
> > 2014-09-15 17:25:19.325377 7fd6f1938700  0 mds.0.cache creating system inode with ino:100
> > 2014-09-15 17:25:19.325381 7fd6f1938700  0 mds.0.cache creating system inode with ino:600
> > 2014-09-15 17:25:19.325449 7fd6f1938700  0 mds.0.cache creating system inode with ino:601
> > 2014-09-15 17:25:19.325489 7fd6f1938700  0 mds.0.cache creating system inode with ino:602
> > 2014-09-15 17:25:19.325538 7fd6f1938700  0 mds.0.cache creating system inode with ino:603
> > 2014-09-15 17:25:19.325564 7fd6f1938700  0 mds.0.cache creating system inode with ino:604
> > 2014-09-15 17:25:19.325603 7fd6f1938700  0 mds.0.cache creating system inode with ino:605
> > 2014-09-15 17:25:19.325627 7fd6f1938700  0 mds.0.cache creating system inode with ino:606
> > 2014-09-15 17:25:19.325655 7fd6f1938700  0 mds.0.cache creating system inode with ino:607
> > 2014-09-15 17:25:19.325682 7fd6f1938700  0 mds.0.cache creating system inode with ino:608
> > 2014-09-15 17:25:19.325714 7fd6f1938700  0 mds.0.cache creating system inode with ino:609
> > 2014-09-15 17:25:19.325738 7fd6f1938700  0 mds.0.cache creating system inode with ino:200
> >
> > Could someone tell me how to solve it?
> >
> > Thanks.
> >
> > --
> > yang shun-fa
> >
> > _______________________________________________
> > Ceph-community mailing list
> > Ceph-community at lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com
> >

--
yang shun-fa
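P.S. From what I've read so far, the 200 "creating+incomplete" PGs on a single OSD are usually caused by the default pool replication size of 3, which one OSD can't satisfy, so the MDS keeps waiting in up:creating until the metadata pool's PGs become active. I'm not sure this is the right fix, but this is a rough sketch of what I plan to try (size/min_size of 1 only makes sense on a throwaway test node; pool names are taken from "rados lspools"):

    # For a fresh single-node cluster, set in ceph.conf [global] before deploying:
    #   osd pool default size = 1
    #   osd crush chooseleaf type = 0
    #
    # For the pools that already exist, drop the replication requirement to 1:
    for p in $(rados lspools); do
        ceph osd pool set "$p" size 1
        ceph osd pool set "$p" min_size 1
    done

If the PGs then go active+clean, I'd expect the mdsmap to move from up:creating to up:active.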