Hi,

I'm a Ceph newbie setting up some trial installs for evaluation, using Debian stable (Wheezy) with Ceph Firefly from backports (0.80.7-1~bpo70+1). I've been following the instructions at http://docs.ceph.com/docs/firefly/install/manual-deployment/ and the first time through went well, using a partition on the same drive as the OS. I then migrated to having data on separate hard drives and that worked too.

I'm currently trying to get an OSD set up with the journal on an SSD partition that's separate from the data drive. ceph-disk is not playing ball and I've been getting various forms of failure. My greatest success was getting the OSD created, but it would never go "up". I'm struggling to find anything useful in the logs, or really to know what to look for. I purged the ceph package and wiped the storage drives to give myself a blank slate, then tried again.

Steps performed:

camel (MON server):

$ apt-get install ceph
$ uuidgen
#= 8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2
# created /etc/ceph/ceph.conf, attached
$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. \
      --cap mon 'allow *'
$ ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key \
      -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' \
      --cap mds 'allow'
$ ceph-authtool /tmp/ceph.mon.keyring --import-keyring \
      /etc/ceph/ceph.client.admin.keyring
$ monmaptool --create --add a 10.1.0.3 --fsid \
      8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2 /tmp/monmap
$ ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
$ /etc/init.d/ceph start mon
$ ceph osd lspools
#= 0 data,1 metadata,2 rbd,

storage node 1:

$ apt-get install ceph
$ rsync -a camel:/etc/ceph/ceph.conf /etc/ceph/
$ rsync -a camel:/var/lib/ceph/bootstrap-osd/ceph.keyring \
      /var/lib/ceph/bootstrap-osd/
$ ceph-disk prepare --cluster ceph --cluster-uuid \
      8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2 /dev/sdb /dev/sdc

Output:

cannot read partition index; assume it isn't present (Error: Command '/sbin/parted' returned non-zero exit status 1)
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=15262347 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=61049385, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=29809, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
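(Side note for anyone reproducing this: the checks below are just my own sanity checks of what prepare did, not something from the docs. The partition numbers are assumed from the prepare output above; sdb1 is the data partition and sdc1 should be the journal partition on the SSD.)

$ sgdisk -i 1 /dev/sdc          # unique partition GUID and type code of the journal partition
$ ls -l /dev/disk/by-partuuid/  # udev should have created a symlink for that GUID -> sdc1
$ blkid /dev/sdb1               # should report an xfs filesystem on the data partition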
$ ceph-disk activate /dev/sdb1

This hangs. Looking at ps -efH I can see that ceph-disk launched:

/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 \
    --monmap /var/lib/ceph/tmp/mnt.ST6Kz_/activate.monmap \
    --osd-data /var/lib/ceph/tmp/mnt.ST6Kz_ \
    --osd-journal /var/lib/ceph/tmp/mnt.ST6Kz_/journal \
    --osd-uuid 636f694a-3677-44f0-baaf-4d74195b1806 \
    --keyring /var/lib/ceph/tmp/mnt.ST6Kz_/keyring

/var/lib/ceph/tmp/mnt.ST6Kz_ contains:

activate.monmap  current/  journal  magic  superblock
ceph_fsid  fsid  journal_uuid  store_version  whoami

journal is a symlink to /dev/disk/by-partuuid/798fa1c5-9751-403c-9d5a-5f7665a60d4b (sdc1)

ceph osd tree:

# id    weight  type name       up/down reweight
-1      0       root default
0       0               osd.0   down    0

If I Ctrl-C ceph-disk, kill the ceph-osd process, and try again, it still hangs.

Please can somebody help? I've also attached the ceph-osd.0.log.

Dan
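P.S. In case it's useful to anyone looking at this: a way to see what the hung ceph-osd --mkfs process is actually blocked on (the PID is whatever ps shows at the time; this is just my own guess at a sensible next step, not something from the docs):

$ strace -f -p <ceph-osd PID>           # syscall it is currently sitting in
$ cat /proc/<ceph-osd PID>/wchan; echo  # kernel function it is sleeping in
$ ls -l /proc/<ceph-osd PID>/fd         # open file descriptors, including the journal device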
ceph.conf:

[global]
fsid = 8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2
public network = 10.1.0.0/24
cluster network = 10.1.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
cephx cluster require signatures = true
cephx service require signatures = false
# Replication level, number of data copies
osd pool default size = 3
# Replication level in degraded state
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128

[mon]
mon initial members = a

[mon.a]
host = camel
mon addr = 10.1.0.3:6789

[osd]
osd mkfs type = xfs
osd mount options xfs = noatime,nodiratime,noexec,nodev,barrier=0,inode64
osd journal size = 10240
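(A note on the [osd] section above: I don't know whether the xfs mount options matter for the hang, but they can be sanity-checked by hand, independently of ceph-disk; /dev/sdb1 is the data partition from the prepare step and /mnt/test is just a throwaway mount point.)

$ mkdir -p /mnt/test
$ mount -t xfs -o noatime,nodiratime,noexec,nodev,barrier=0,inode64 /dev/sdb1 /mnt/test
$ mount | grep sdb1    # shows the options the kernel actually applied
$ umount /mnt/test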
ceph-osd.0.log:

2015-04-22 11:59:30.080205 7fa3486a6780  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 2566
2015-04-22 11:59:30.098311 7fa3486a6780  1 journal _open /dev/sdc1 fd 4: 10736386048 bytes, block size 4096 bytes, directio = 0, aio = 0
2015-04-22 11:59:30.311427 7f2060413780  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 2593
2015-04-22 11:59:30.324928 7f2060413780  1 journal _open /dev/sdc1 fd 4: 10736386048 bytes, block size 4096 bytes, directio = 0, aio = 0
2015-04-22 11:59:38.419139 7f865ab11780  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 2725
2015-04-22 11:59:38.421651 7f865ab11780  1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mkfs in /var/lib/ceph/tmp/mnt.ST6Kz_
2015-04-22 11:59:38.421693 7f865ab11780  1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mkfs fsid is already set to 636f694a-3677-44f0-baaf-4d74195b1806
2015-04-22 11:59:38.613817 7f865ab11780  1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) leveldb db exists/created
2015-04-22 11:59:38.627942 7f865ab11780  1 journal _open /var/lib/ceph/tmp/mnt.ST6Kz_/journal fd 10: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 11:59:38.628277 7f865ab11780 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 636f694a-3677-44f0-baaf-4d74195b1806, invalid (someone else's?) journal
2015-04-22 11:59:38.637852 7f865ab11780  1 journal _open /var/lib/ceph/tmp/mnt.ST6Kz_/journal fd 10: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 11:59:38.639002 7f865ab11780  0 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mkjournal created journal on /var/lib/ceph/tmp/mnt.ST6Kz_/journal
2015-04-22 11:59:38.639093 7f865ab11780  1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mkfs done in /var/lib/ceph/tmp/mnt.ST6Kz_
2015-04-22 11:59:38.639174 7f865ab11780  0 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mount detected xfs (libxfs)
2015-04-22 11:59:38.639181 7f865ab11780  1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) disabling 'filestore replica fadvise' due to known issues with fadvise(DONTNEED) on xfs
2015-04-22 11:59:38.730318 7f865ab11780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.ST6Kz_) detect_features: FIEMAP ioctl is supported and appears to work
2015-04-22 11:59:38.730334 7f865ab11780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.ST6Kz_) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-04-22 11:59:38.776982 7f865ab11780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.ST6Kz_) detect_features: syscall(SYS_syncfs, fd) fully supported
2015-04-22 11:59:38.777063 7f865ab11780  0 xfsfilestorebackend(/var/lib/ceph/tmp/mnt.ST6Kz_) detect_feature: extsize is disabled by conf
2015-04-22 11:59:38.811068 7f865ab11780  0 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-04-22 11:59:38.824965 7f865ab11780  1 journal _open /var/lib/ceph/tmp/mnt.ST6Kz_/journal fd 16: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 11:59:38.834966 7f865ab11780  1 journal _open /var/lib/ceph/tmp/mnt.ST6Kz_/journal fd 16: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 11:59:38.835640 7f865ab11780 -1 filestore(/var/lib/ceph/tmp/mnt.ST6Kz_) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2015-04-22 12:33:20.172681 7fd9d5257780  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 3077
2015-04-22 12:33:20.175002 7fd9d5257780  1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) mkfs in /var/lib/ceph/tmp/mnt.BSNzXv
2015-04-22 12:33:20.175029 7fd9d5257780  1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) mkfs fsid is already set to 636f694a-3677-44f0-baaf-4d74195b1806
2015-04-22 12:33:20.247276 7fd9d5257780  1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) leveldb db exists/created
2015-04-22 12:33:20.261320 7fd9d5257780  1 journal _open /var/lib/ceph/tmp/mnt.BSNzXv/journal fd 10: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 12:33:20.261759 7fd9d5257780  1 journal check: header looks ok
2015-04-22 12:33:20.262104 7fd9d5257780  1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) mkfs done in /var/lib/ceph/tmp/mnt.BSNzXv
2015-04-22 12:33:20.262193 7fd9d5257780  0 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) mount detected xfs (libxfs)
2015-04-22 12:33:20.262200 7fd9d5257780  1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) disabling 'filestore replica fadvise' due to known issues with fadvise(DONTNEED) on xfs
2015-04-22 12:33:20.355342 7fd9d5257780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.BSNzXv) detect_features: FIEMAP ioctl is supported and appears to work
2015-04-22 12:33:20.355363 7fd9d5257780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.BSNzXv) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-04-22 12:33:20.406353 7fd9d5257780  0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.BSNzXv) detect_features: syscall(SYS_syncfs, fd) fully supported
2015-04-22 12:33:20.406427 7fd9d5257780  0 xfsfilestorebackend(/var/lib/ceph/tmp/mnt.BSNzXv) detect_feature: extsize is disabled by conf
2015-04-22 12:33:20.440407 7fd9d5257780  0 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-04-22 12:33:20.454422 7fd9d5257780  1 journal _open /var/lib/ceph/tmp/mnt.BSNzXv/journal fd 16: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 12:33:20.464431 7fd9d5257780  1 journal _open /var/lib/ceph/tmp/mnt.BSNzXv/journal fd 16: 10736386048 bytes, block size 4096 bytes, directio = 1, aio = 1
2015-04-22 12:33:20.465540 7fd9d5257780 -1 filestore(/var/lib/ceph/tmp/mnt.BSNzXv) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
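(If I wipe and retry yet again, my assumption is that the old journal header on the SSD partition should be cleared as well, to avoid the "invalid (someone else's?) journal" complaint seen above. Something like the following, which is of course destructive to both devices:)

$ ceph-disk zap /dev/sdb
$ ceph-disk zap /dev/sdc
# or, to clear only the existing journal partition's header:
$ dd if=/dev/zero of=/dev/sdc1 bs=1M count=100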