If I remember mkcephfs correctly, it deliberately does not create the directories for each store (you'll notice that http://ceph.com/docs/master/start/quick-start/#deploy-the-configuration includes creating the directory for each daemon). Does /data/1/osd0 exist yet?

On Fri, Jul 20, 2012 at 2:45 PM, Joe Landman <landman@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi folks:
>
> Setting up a test cluster. Simple ceph.conf:
>
> [global]
> auth supported = cephx
>
> [mon]
> mon data = /data/mon$id
> debug ms = 1
>
> [mon.0]
> host = n01
> mon addr = 10.202.1.142:6789
>
> [mon.1]
> host = n02
> mon addr = 10.202.1.141:6789
>
> [mon.2]
> host = siflash-ssd
> mon addr = 10.202.1.128:6789
>
> [mds]
> keyring = /data/keyring.$name
>
> [mds.a]
> host = siflash-ssd-1
>
> [osd]
> osd data = /data/1/osd$id
> osd journal = /data/2/osd$id/journal
> osd journal size = 512
>
> [osd.0]
> host = dv4-1
>
> [osd.1]
> host = dv4-2
>
> Following along on the directions:
> http://ceph.com/docs/master/start/quick-start/
>
> mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
>
> [root@n01 ceph]# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
> temp dir is /tmp/mkcephfs.W73fq9MYo0
> preparing monmap in /tmp/mkcephfs.W73fq9MYo0/monmap
> /usr/bin/monmaptool --create --clobber --add 0 10.202.1.142:6789 --add 1 10.202.1.141:6789 --add 2 10.202.1.128:6789 --print /tmp/mkcephfs.W73fq9MYo0/monmap
> /usr/bin/monmaptool: monmap file /tmp/mkcephfs.W73fq9MYo0/monmap
> /usr/bin/monmaptool: generated fsid 3ffcf9f3-f589-4dae-a534-f53daa3bc12c
> epoch 0
> fsid 3ffcf9f3-f589-4dae-a534-f53daa3bc12c
> last_changed 2012-07-19 17:35:14.024091
> created 2012-07-19 17:35:14.024091
> 0: 10.202.1.128:6789/0 mon.2
> 1: 10.202.1.141:6789/0 mon.1
> 2: 10.202.1.142:6789/0 mon.0
> /usr/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.W73fq9MYo0/monmap (3 monitors)
> === osd.0 ===
> pushing conf and monmap to dv4-1:/tmp/mkfs.ceph.26215
> 2012-07-20 17:38:12.605503 7f30d6d41760 -1 ** ERROR: error creating empty
> object store in /data/1/osd0: (2) No such file or directory
> failed: 'ssh root@dv4-1 /sbin/mkcephfs -d /tmp/mkfs.ceph.26215 --init-daemon osd.0'
>
> Not sure what it's having trouble with; the code path suggests that an
> mkfs failed (we are using an xfs backing store already built and mounted).
>
> OS is CentOS 6.3 with a 3.2.[12]3 kernel. Any clues on what I should look
> for, or try by hand?
>
> Thanks!
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman@xxxxxxxxxxxxxxxxxxxxxxx
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
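If the missing directories are indeed the problem, the likely fix is to pre-create the "osd data" and "osd journal" parent directories on each OSD host before re-running mkcephfs. A minimal sketch, with the caveat that the temp root below is only a stand-in so the snippet is safe to run anywhere; on dv4-1 (osd.0) and dv4-2 (osd.1) you would use the real /data/1 and /data/2 mount points, as root:

```shell
# Assumption: mkcephfs does not create the per-daemon directories itself,
# so they must exist before the "--init-daemon osd.N" step runs.
# "root" is a throwaway stand-in for /data; substitute the real mounts.
root=$(mktemp -d)

for id in 0 1; do
  mkdir -p "${root}/1/osd${id}"   # matches: osd data = /data/1/osd$id
  mkdir -p "${root}/2/osd${id}"   # parent dir for: osd journal = /data/2/osd$id/journal
done

ls -R "${root}"
```

With the real directories in place on both hosts, re-running the same `mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring` command should get past the "error creating empty object store" step.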