Hi Christoph,

As Zenon pointed out, you have to install src/init-ceph yourself (as
/etc/init.d/ceph); the RPM package should do this step for you. We
considered having "make install" perform it, but unfortunately every Linux
distribution puts its init scripts in a slightly different place. And some
distributions are moving to alternate, non-SysV init systems, of course.
A sketch of the manual steps is below.

P.S. If you're using CentOS, you should either compile a new kernel (with
an up-to-date Ceph kernel module) or use FUSE; see the second sketch below.
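The "fault first fault" spew in your log just means the client can't reach
any monitor, which is what you'd expect if the daemons were never started.
Something like the following should get the init script in place on a
SysV-style distro; this is only a sketch, and the chkconfig/update-rc.d
choice is an assumption about your setup:

    # from the top of the ceph source tree, copy the init script into place
    install -m 0755 src/init-ceph /etc/init.d/ceph

    # register it with the SysV runlevels:
    chkconfig --add ceph          # CentOS/RHEL
    # update-rc.d ceph defaults   # Debian/Ubuntu

    # after that, the invocation you tried works:
    /etc/init.d/ceph -a start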
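And if you go the FUSE route instead of the kernel client, the tree also
builds a FUSE client, cfuse. Assuming one of the monitors from your
ceph.conf (10.1.9.45) and an arbitrary mount point, a mount looks roughly
like:

    mkdir -p /mnt/ceph                  # mount point is arbitrary
    cfuse -m 10.1.9.45:6789 /mnt/ceph   # -m points the client at a monitor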
cheers,
Colin

On Wed, Apr 13, 2011 at 11:56 PM, Christoph Raible
<c.raible@xxxxxxxxxxxxxxxxxxxx> wrote:
> Hi all,
>
> I compiled & configured Ceph 0.26 today, and now I want to start it with
> "/etc/init.d/ceph -a start" or "service ceph -a start", but the service
> is unknown and there is no such file in /etc/init.d!
>
> I compiled Ceph with the following commands:
>
> ######################
>
> ./autogen.sh
> CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc
> make
> make install
>
> ######################
>
> My ceph.conf in /etc/ceph/ looks like this:
>
> #######################
>
> ; global
> [global]
>         ; enable secure authentication
>         auth supported = cephx
>
> ; monitors
> ; You need at least one. You need at least three if you want to
> ; tolerate any node failures. Always create an odd number.
> [mon]
>         mon data = /data/mon$id
>
>         ; some minimal logging (just message traffic) to aid debugging
>         debug ms = 1
>
> [mon0]
>         host = ceph0
>         mon addr = 10.1.9.45:6789
>
> [mon1]
>         host = ceph1
>         mon addr = 10.1.9.46:6789
>
> ....
>
> ; mds
> ; You need at least one. Define two to get a standby.
> [mds]
>         ; where the mds keeps its secret encryption keys
>         keyring = /data/keyring.$name
>
> [mds.0]
>         host = ceph0
>         mds standby replay = true
>         mds standby for name = ceph1
>
> [mds.1]
>         host = ceph1
>         mds standby replay = true
>         mds standby for name = ceph2
>
> ....
>
> ; osd
> ; You need at least one. Two if you want data to be replicated.
> ; Define as many as you like.
> [osd]
>         ; This is where the btrfs volume will be mounted.
>         osd data = /data/osd$id
>
>         ; Ideally, make this a separate disk or partition. A few GB
>         ; is usually enough; more if you have fast disks. You can use
>         ; a file under the osd data dir if need be
>         ; (e.g. /data/osd$id/journal), but it will be slower than a
>         ; separate disk or partition.
>         osd journal = /data/osd$id/journal
>
>         ; If the OSD journal is a file, you need to specify the size.
>         ; This is specified in MB.
>         osd journal size = 512
>
> [osd0]
>         ; if 'btrfs devs' is not specified, you're responsible for
>         ; setting up the 'osd data' dir. if it is not btrfs, things
>         ; will behave up until you try to recover from a crash (which
>         ; is usually fine for basic testing).
>         host = ceph0
>         btrfs devs = /dev/sdb1
>
> [osd1]
>         host = ceph1
>         btrfs devs = /dev/sdb1
>
> ....
>
> #######################
>
> When I try to start Ceph with "ceph -c /etc/ceph/ceph.conf" I get the
> following messages:
>
> #######################
>
> 2011-04-14 08:59:59.236688 4049d940 -- :/10014 >> 10.1.9.45:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:02.237107 41190940 -- :/10014 >> 10.1.9.48:6789/0 pipe(0x7f1ff4001570 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:05.237271 4049d940 -- :/10014 >> 10.1.9.47:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:08.237502 41190940 -- :/10014 >> 10.1.9.48:6789/0 pipe(0x7f1ff4001570 sd=4 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:11.237709 4049d940 -- :/10014 >> 10.1.9.47:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:14.237554 41190940 -- :/10014 >> 10.1.9.46:6789/0 pipe(0x7f1ff4001570 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:17.238122 4049d940 -- :/10014 >> 10.1.9.47:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:20.238162 41190940 -- :/10014 >> 10.1.9.46:6789/0 pipe(0x7f1ff4001570 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:23.238641 4049d940 -- :/10014 >> 10.1.9.45:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:26.238939 41190940 -- :/10014 >> 10.1.9.48:6789/0 pipe(0x7f1ff4001570 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:29.239232 4049d940 -- :/10014 >> 10.1.9.45:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:32.239554 41190940 -- :/10014 >> 10.1.9.47:6789/0 pipe(0x7f1ff4001570 sd=3 pgs=0 cs=0 l=0).fault first fault
> 2011-04-14 09:00:35.239555 4049d940 -- :/10014 >> 10.1.9.46:6789/0 pipe(0x7f1ff4000ad0 sd=3 pgs=0 cs=0 l=0).fault first fault
>
> #######################
>
> I really hope someone can help me....
>
> Best regards,
>
> Christoph Raible