Below is the detailed output after raising the debug level.

# ceph -c /home/ceph/ceph_conf/ceph.conf start
10.07.30_10:53:39.747643 2b7b527bca20 -- :/0 register_entity client?
10.07.30_10:53:39.747779 2b7b527bca20 -- :/0 register_entity client? at :/0
10.07.30_10:53:39.747793 2b7b527bca20 -- :/0 ready :/0
10.07.30_10:53:39.747871 2b7b527bca20 -- :/2233 messenger.start
10.07.30_10:53:39.748003 2b7b527bca20 -- :/2233 --> mon1 192.168.0.3:6789/0 -- auth(proto 0 30 bytes) v1 -- ?+0 0x11408b0
10.07.30_10:53:39.756916 2b7b527bca20 -- :/2233 submit_message auth(proto 0 30 bytes) v1 remote, 192.168.0.3:6789/0, new pipe.
10.07.30_10:53:39.756989 2b7b527bca20 -- :/2233 connect_rank to 192.168.0.3:6789/0, creating pipe and registering
10.07.30_10:53:39.757128 2b7b527bca20 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).register_pipe
10.07.30_10:53:39.757260 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).writer: state = 1 policy.server=0
10.07.30_10:53:39.757306 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).connect 0
10.07.30_10:53:39.757369 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connecting to 192.168.0.3:6789/0
10.07.30_10:53:39.757551 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connect error 192.168.0.3:6789/0, 111: Connection refused
10.07.30_10:53:39.757617 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).fault 111: Connection refused
10.07.30_10:53:39.757697 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).fault first fault
10.07.30_10:53:39.757740 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).writer: state = 1 policy.server=0
10.07.30_10:53:39.757777 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).connect 0
10.07.30_10:53:39.757831 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connecting to 192.168.0.3:6789/0
10.07.30_10:53:39.757977 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connect error 192.168.0.3:6789/0, 111: Connection refused
10.07.30_10:53:39.758039 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).fault 111: Connection refused
10.07.30_10:53:39.758097 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).fault waiting 0.200000
10.07.30_10:53:39.959700 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).fault done waiting or woke up
10.07.30_10:53:39.959753 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).writer: state = 1 policy.server=0
10.07.30_10:53:39.959810 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).connect 0
10.07.30_10:53:39.959856 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connecting to 192.168.0.3:6789/0
10.07.30_10:53:39.959998 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connect error 192.168.0.3:6789/0, 111: Connection refused
10.07.30_10:53:39.960054 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).fault 111: Connection refused
10.07.30_10:53:39.960112 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).fault waiting 0.400000
10.07.30_10:53:40.361707 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).fault done waiting or woke up
10.07.30_10:53:40.361727 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).writer: state = 1 policy.server=0
10.07.30_10:53:40.361742 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=-1 pgs=0 cs=0 l=0).connect 0
10.07.30_10:53:40.361763 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connecting to 192.168.0.3:6789/0
10.07.30_10:53:40.361880 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).connect error 192.168.0.3:6789/0, 111: Connection refused
10.07.30_10:53:40.361902 42493940 -- :/2233 >> 192.168.0.3:6789/0 pipe(0x11422e0 sd=5 pgs=0 cs=0 l=0).fault
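Every connect attempt above ends in errno 111 (Connection refused), i.e. nothing was accepting TCP connections on the monitor port at 192.168.0.3:6789 when the client tried. The checks I plan to run are sketched below; the daemon name is my assumption (I believe the monitor daemon is called cmon in this release, so adjust if your build names it differently).

On the monitor host (192.168.0.3):

# ps aux | grep '[c]mon'          (is a monitor process running at all?)
# netstat -tlnp | grep 6789       (is anything listening on the mon port?)

And from the client host:

# nc -zv 192.168.0.3 6789         (is the port reachable, or blocked by a firewall?)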
On Tue, Aug 31, 2010 at 10:34 AM, Min Zhou <coderplay@xxxxxxxxx> wrote:
> Hi all,
> After creating a new file system by following the wiki, I entered a
> command to start ceph, but ran into a failure:
>
> # ceph -c /home/ceph/ceph_conf/ceph.conf start
> 10.07.30_10:27:52.183028 42640940 -- :/31251 >> 192.168.0.3:6789/0
> pipe(0x1c015ff0 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:27:55.184319 42741940 -- :/31251 >> 192.168.0.2:6789/0
> pipe(0x1c014c90 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:27:58.186492 42842940 -- :/31251 >> 192.168.0.3:6789/0
> pipe(0x1c015b40 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:01.188484 42943940 -- :/31251 >> 192.168.0.2:6789/0
> pipe(0x1c015860 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:04.190674 42a44940 -- :/31251 >> 192.168.0.3:6789/0
> pipe(0x1c011060 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:07.192592 42b45940 -- :/31251 >> 192.168.0.2:6789/0
> pipe(0x1c01a200 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:10.194772 42c46940 -- :/31251 >> 192.168.0.3:6789/0
> pipe(0x1c01a660 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:13.196713 42d47940 -- :/31251 >> 192.168.0.2:6789/0
> pipe(0x1c01c010 sd=-1 pgs=0 cs=0 l=0).fault first fault
> 10.07.30_10:28:16.198918 42e48940 -- :/31251 >> 192.168.0.3:6789/0
> pipe(0x1c01c8d0 sd=-1 pgs=0 cs=0 l=0).fault first fault
>
> Can anyone help me with this?
>
> Thanks,
> Min
> --
> My research interests are distributed systems, parallel computing and
> bytecode based virtual machine.
>
> My profile:
> http://www.linkedin.com/in/coderplay
> My blog:
> http://coderplay.javaeye.com
>

--
My research interests are distributed systems, parallel computing and
bytecode based virtual machine.

My profile:
http://www.linkedin.com/in/coderplay
My blog:
http://coderplay.javaeye.com
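P.S. For anyone comparing configs: the monitors are declared in ceph.conf roughly as in the sketch below. This is only illustrative; the hostnames and data path are made up, and only the mon addr values mirror the addresses the client keeps retrying in the logs above. If these addresses do not match where the monitors actually bind (or the monitors were never started on those hosts), the client sees exactly this kind of refusal.

[mon]
        mon data = /data/mon$id

[mon0]
        host = node-a                  ; hypothetical hostname
        mon addr = 192.168.0.2:6789

[mon1]
        host = node-b                  ; hypothetical hostname
        mon addr = 192.168.0.3:6789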