Re: ceph-disk activate hangs with external journal device

Sorry, I was reading too fast. That key isn't from a previous attempt, correct? I doubt that's the problem, though, as you would see an access denied message in the logs if the key were wrong.
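
A quick way to check that (assuming the default keyring path) is to compare the key on the OSD node with what the monitor has for client.bootstrap-osd:

$ ceph-authtool -l /var/lib/ceph/bootstrap-osd/ceph.keyring   # on the OSD node
$ ceph auth get client.bootstrap-osd                          # on the monitor, with the admin key

The two should show the same key.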

Try running ceph-disk zap and recreating the OSD. Also remove the auth key and the OSD (ceph osd rm <id>), then do a ceph-disk prepare. I don't think the first start-up should be trying to create file systems; that should have been done during prepare.
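
Roughly, and assuming the stale OSD is osd.0 and the devices are /dev/sdb and /dev/sdc as in your steps, something like:

$ ceph osd rm 0
$ ceph osd crush remove osd.0    # if it still appears in the crush map
$ ceph auth del osd.0
$ ceph-disk zap /dev/sdb
$ ceph-disk zap /dev/sdc
$ ceph-disk prepare --cluster ceph --cluster-uuid \
  8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2 /dev/sdb /dev/sdc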

Robert LeBlanc

Sent from a mobile device; please excuse any typos.

On Apr 23, 2015 3:43 AM, "Daniel Piddock" <dgp-ceph@xxxxxxxxxxxxxxxx> wrote:
On 22/04/15 20:32, Robert LeBlanc wrote:
I believe your problem is that you haven't created bootstrap-osd key and distributed it to your OSD node in /var/lib/ceph/bootstrap-osd/.

Hi Robert,

Thank you for your reply.

The steps performed in my original post did include copying over the bootstrap-osd key. Also, "ceph-disk activate" fails with an obvious error when that file is missing:

2015-04-23 10:16:47.245951 7fccc5a9c700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-04-23 10:16:47.245955 7fccc5a9c700  0 librados: client.bootstrap-osd initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
ERROR:ceph-disk:Failed to activate
ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:

This is not the source of my issue.
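
As an extra sanity check, the bootstrap key can be exercised directly against the monitor. Assuming it was created with the usual 'allow profile bootstrap-osd' cap, fetching the monmap is about the extent of what it can do:

$ ceph --name client.bootstrap-osd \
  --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
  mon getmap -o /dev/null

If that returns cleanly, the key and connectivity to the monitor are fine.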

Dan



On Wed, Apr 22, 2015 at 5:41 AM, Daniel Piddock <dgp-ceph@xxxxxxxxxxxxxxxx> wrote:
Hi,

I'm a ceph newbie setting up some trial installs for evaluation.

Using Debian stable (Wheezy) with Ceph Firefly from backports
(0.80.7-1~bpo70+1).

I've been following the instructions at
http://docs.ceph.com/docs/firefly/install/manual-deployment/ and the first
time through went well, using a partition on the same drive as the OS. I
then migrated to having the data on separate hard drives and that worked too.

I'm currently trying to get an OSD set up with the journal on an SSD
partition separate from the data drive. ceph-disk is not playing
ball and I've been getting various forms of failure. My greatest success
was getting the OSD created, but it would never go "up". I'm struggling
to find anything useful in the logs, or even to know what to look for.

I purged the ceph package and wiped the storage drives to give myself a
blank slate, then tried again.

Steps performed:

camel (MON server):
$ apt-get install ceph
$ uuidgen #= 8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2
# created /etc/ceph/ceph.conf, attached
$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. \
  --cap mon 'allow *'
$ ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
  --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' \
  --cap osd 'allow *' --cap mds 'allow'
$ ceph-authtool /tmp/ceph.mon.keyring --import-keyring \
  /etc/ceph/ceph.client.admin.keyring
$ monmaptool --create --add a 10.1.0.3 --fsid \
  8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2 /tmp/monmap
$ ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
$ /etc/init.d/ceph start mon
$ ceph osd lspools #= 0 data,1 metadata,2 rbd,
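
A quick sanity check that the monitor came up cleanly before moving on (this uses the admin keyring from /etc/ceph):

$ ceph -s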

storage node 1:
$ apt-get install ceph
$ rsync -a camel:/etc/ceph/ceph.conf /etc/ceph/
$ rsync -a camel:/var/lib/ceph/bootstrap-osd/ceph.keyring \
  /var/lib/ceph/bootstrap-osd/
$ ceph-disk prepare --cluster ceph --cluster-uuid \
  8c9ff7b5-904a-4f9a-8c9e-d2f8b05b55d2 /dev/sdb /dev/sdc

Output:
cannot read partition index; assume it isn't present
 (Error: Command '/sbin/parted' returned non-zero exit status 1)
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
same device as the osd data
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=""             isize=2048   agcount=4, agsize=15262347
blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=61049385, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=29809, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
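
At this point the partition layout that prepare created can be double-checked (ceph-disk also has a list subcommand):

$ ceph-disk list
$ parted /dev/sdb print
$ parted /dev/sdc print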

$ ceph-disk activate /dev/sdb1
Hangs

Looking at ps -efH, I can see that ceph-disk launched:
/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap
/var/lib/ceph/tmp/mnt.ST6Kz_/activate.monmap --osd-data
/var/lib/ceph/tmp/mnt.ST6Kz_ --osd-journal
/var/lib/ceph/tmp/mnt.ST6Kz_/journal --osd-uuid
636f694a-3677-44f0-baaf-4d74195b1806 --keyring
/var/lib/ceph/tmp/mnt.ST6Kz_/keyring
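
One way to narrow down where that process blocks (while the temporary mount is still in place) is to re-run the same ceph-osd --mkfs command by hand with verbose logging, or to attach strace to the hung process:

$ /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 \
  --monmap /var/lib/ceph/tmp/mnt.ST6Kz_/activate.monmap \
  --osd-data /var/lib/ceph/tmp/mnt.ST6Kz_ \
  --osd-journal /var/lib/ceph/tmp/mnt.ST6Kz_/journal \
  --osd-uuid 636f694a-3677-44f0-baaf-4d74195b1806 \
  --keyring /var/lib/ceph/tmp/mnt.ST6Kz_/keyring \
  --debug_osd 20 --debug_journal 20 --debug_filestore 20
$ strace -f -p <pid of the hung ceph-osd>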

/var/lib/ceph/tmp/mnt.ST6Kz_ contains:
activate.monmap  current/ journal       magic          superblock
ceph_fsid        fsid     journal_uuid  store_version  whoami

journal is a symlink to
/dev/disk/by-partuuid/798fa1c5-9751-403c-9d5a-5f7665a60d4b (sdc1)
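
It's also worth confirming that the symlink target exists and that the journal partition carries the Ceph journal partition type GUID (45b0969e-9b03-4f30-b4c6-b4b80ceff106), which ceph-disk prepare should have set:

$ ls -lL /var/lib/ceph/tmp/mnt.ST6Kz_/journal
$ sgdisk --info=1 /dev/sdc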

ceph osd tree:
# id    weight  type name       up/down reweight
-1      0       root default
0       0       osd.0   down    0

If I Ctrl-C ceph-disk, kill the ceph-osd process, and try again, it still
hangs.

Please can somebody help? I've also attached the ceph-osd.0.log.

Dan




