Re: installation help

On Fri, Dec 6, 2013 at 5:13 AM, Wojciech Giel
<wojciech.giel@xxxxxxxxxxxxxx> wrote:
> Hello,
> I am trying to install Ceph but can't get it working; the documentation is not
> clear and I find it confusing.
> I have cloned 3 machines with a minimal Ubuntu 12.04 system. I'm trying to
> follow the docs
>
> http://ceph.com/docs/master/start/quick-start-preflight/
>
> but got some questions:
>
> step 4. Configure your ceph-deploy admin node with password-less SSH access.
> This should be done from the ceph account, shouldn't it?

If you are following the quickstart to the letter, yes. If you are
using the latest version of ceph-deploy (1.3.3), this is taken care of
for you automatically when you run `ceph-deploy new {hosts}`.

It will make sure it can SSH without a password prompt to the hosts
you are setting up.
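
If you do want to set it up by hand as the preflight page describes, it
is roughly a matter of generating a key on the admin node and copying it
to each node you will deploy to. The hostnames below are just
placeholders for your own nodes:

$ ssh-keygen
$ ssh-copy-id ceph@ceph0
$ ssh-copy-id ceph@ceph1
$ ssh-copy-id ceph@ceph2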

>
> next:
> http://ceph.com/docs/master/start/quick-ceph-deploy/
>
> should the directory for maintaining the ceph-deploy configuration be created
> under the ceph account or as root?

ceph-deploy creates ceph.conf and the other files in whatever directory
it is run from, owned by the user executing ceph-deploy.

So it depends on which user you are calling ceph-deploy as.
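
For example, the quickstart keeps everything in a dedicated directory
created by the user doing the deploying, so all the generated files end
up owned by that same user. Something like this (the directory name is
just an example):

$ mkdir my-cluster
$ cd my-cluster
$ ceph-deploy new ceph0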

> should all the following steps in the docs be done under the ceph account or as root?

All steps in the quickstart assume that you have created a ceph user
and you are connecting to remote hosts
with a ceph user.

In the end it doesn't matter. But if you want to get things right from
the get-go, I would try to match what the quickstart uses so you can
troubleshoot more easily.
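
If you haven't created that user on the nodes yet, the preflight page
does it with something along these lines on each node (the `ceph`
username and the passwordless sudo entry are just what the quickstart
happens to use):

$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph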


> if done from the ceph account, step 3 (creating mons on the 3 machines) gives
> these errors on the two remote machines:
>
>
> [ceph1][DEBUG ] locating the `service` executable...
> [ceph1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph
> id=ceph1
> [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
> /var/run/ceph/ceph-mon.ceph1.asok mon_status
> [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
> 2] No such file or directory
> [ceph1][WARNIN] monitor: mon.ceph1, might not be running yet
> [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
> /var/run/ceph/ceph-mon.ceph1.asok mon_status
> [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
> 2] No such file or directory
> [ceph1][WARNIN] ceph1 is not defined in `mon initial members`
> [ceph1][WARNIN] monitor ceph1 does not exist in monmap
> [ceph1][WARNIN] neither `public_addr` nor `public_network` keys are defined
> for monitors
> [ceph1][WARNIN] monitors may not be able to form quorum


It looks like you've tried a few things on that server and ended up in
a broken state. If you are deploying the `ceph1` mon, it should've been
defined in your ceph.conf, and that would have been done automatically
for you when you called `ceph-deploy new ceph1`.

This is an example of creating a new config file for a server I have
called `node1`:


$ ceph-deploy new node1
[ceph_deploy.cli][INFO  ] Invoked (1.3.3):
/Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.111.100
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: papaya.local
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.111.100']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

$ cat ceph.conf
[global]
fsid = 4e04aeaf-7025-4d33-bbcb-b27e75749b97
mon_initial_members = node1
mon_host = 192.168.111.100
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

See how `mon_initial_members` has `node1` in it?
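
Since your output shows that ceph1 is not in `mon initial members`, the
config on that node is out of step with what you are trying to deploy.
One way back to a clean slate, assuming there is no data you care about
yet, is to purge what is there and redo the initial steps with all the
monitor hosts listed up front, along these lines (again, the hostnames
are whatever yours actually are):

$ ceph-deploy purgedata ceph0 ceph1 ceph2
$ ceph-deploy forgetkeys
$ ceph-deploy new ceph0 ceph1 ceph2
$ ceph-deploy mon create ceph0 ceph1 ceph2
$ ceph-deploy gatherkeys ceph0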

>
> step 4, gathering keys: should it be done from all mon servers?
>
> checking status on ceph account gives:
> $ ceph health
> 2013-12-06 09:48:41.550270 7f16b6eea700 -1 monclient(hunting): ERROR:
> missing keyring, cannot use cephx for authentication
> 2013-12-06 09:48:41.550278 7f16b6eea700  0 librados: client.admin
> initialization error (2) No such file or directory
> Error connecting to cluster: ObjectNotFound

That happens because you need to call `ceph` with sudo (something that
ceph-deploy takes care of for you).
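
In other words, on the node where `ceph status` works as root, running
the same command through sudo from the ceph account should also work:

$ sudo ceph health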
>
> on root account:
> # ceph status
>     cluster 5ee9b196-ef36-46dd-870e-6ef1824b1cd0
>      health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no
> osds
>      monmap e1: 1 mons at {ceph0=192.168.45.222:6789/0}, election epoch 2,
> quorum 0 ceph0
>      osdmap e1: 0 osds: 0 up, 0 in
>       pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             0 kB used, 0 kB / 0 kB avail
>                  192 creating
>
> Should Ceph management after installation be done as root or under the ceph
> account?


It doesn't matter, just as long as you have superuser permissions,
e.g. either with sudo or with a root account.

>
> I've attached a typescript of what I have done.
> thanks
> Wojciech
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



