Re: ceph configuration


 



I've managed to start and mount the cluster by redoing the whole process from scratch. The other thing I'm looking for is documentation on how to add another node (or hard drives) to a running cluster without affecting the mount point or the running service. Can you point me to this?
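(For what it's worth, the manual OSD-add sequence from the docs of roughly this era looks like the sketch below. This is untested here; the node name ceph3, the id 2, and the weight are placeholders, and the exact CRUSH command syntax varies between releases, so check the current "Adding OSDs" docs before relying on it. It needs a live cluster, so treat it as an ops fragment, not a script.)

```shell
# 1) Add an [osd.2] section for the new node ceph3 to ceph.conf on every node.
# 2) Register the new OSD with the cluster (prints the allocated id):
ceph osd create
# 3) Create its data directory, then initialize the data dir and key:
mkdir -p /var/lib/ceph/osd/ceph-2
ceph-osd -i 2 --mkfs --mkkey
# 4) Register the OSD's key with the monitors:
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-2/keyring
# 5) Put the OSD into the CRUSH map so data starts migrating to it,
#    then start the daemon:
ceph osd crush add osd.2 1.0 host=ceph3
service ceph start osd.2
```

Because CRUSH rebalances incrementally, existing clients and mount points keep working while data moves; the cluster will report degraded/recovering PGs until it settles.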



On 06/05/2013 11:20 AM, Igor Laskovy wrote:
>and I'm unable to mount the cluster with the following command:
>root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt

So, what does it say?

I'd also recommend starting from my Russian doc: http://habrahabr.ru/post/179823


On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов <ymarinov@xxxxxxxxxxx> wrote:
That's the exact documentation I'm using. The directory on ceph2 is created, and the service starts without any problems on both nodes. However, the cluster health goes to WARN and I wasn't able to mount the cluster.




On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
Yavor,

I would highly recommend taking a look at the quick install guide: http://ceph.com/docs/next/start/quick-start/

As per the guide, you need to precreate the directories prior to starting ceph.
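Concretely, for the config in Yavor's mail that means something like the following (paths assume the default /var/lib/ceph/{osd,mon}/ceph-<id> layout; adjust if your "osd data"/"mon data" settings differ):

```shell
# On ceph1 (monitor mon.a + osd.0):
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/osd/ceph-0
# On ceph2 (osd.1) -- this is the directory the df error complains about:
mkdir -p /var/lib/ceph/osd/ceph-1
```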

Andrei

From: "Явор Маринов" <ymarinov@xxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 4 June, 2013 11:03:52 AM
Subject: ceph configuration


Hello,

I'm new to the Ceph mailing list, and I need some advice on our
testing cluster. I have two servers, each with two hard disks. On the first
server I configured a monitor and an OSD, and on the second server only an OSD.
The configuration looks as follows:

[mon.a]

         host = ceph1
         mon addr = 192.168.2.170:6789

[osd.0]
         host = ceph1
         addr = 192.168.2.170
         devs = /dev/sdb

[osd.1]
         host = ceph2
         addr = 192.168.2.114
         devs = /dev/sdb

Once I run 'service ceph -a start' I keep getting the following error:

Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory

and I'm unable to mount the cluster with the following command:
root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
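(One common gotcha with the kernel client, assuming cephx auth is enabled and the admin keyring is in its standard location: the mount needs credentials. A hedged example; this may not be the cause of the failure here, since the mount can also fail while the cluster is unhealthy:)

```shell
# Extract the admin key from the keyring and pass it to the kernel client:
mount -t ceph 192.168.2.170:6789:/ /mnt \
    -o name=admin,secret=$(ceph-authtool -p /etc/ceph/ceph.client.admin.keyring)
```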

Also, when executing 'ceph health' I get this response:
HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122
degraded (12.295%)
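(With only two OSDs and default 2x replication, degraded/unclean PGs usually mean one OSD never came up and in; the failed mount of /var/lib/ceph/osd/ceph-1 earlier would cause exactly that. These standard status commands, which need a running cluster, show which OSD is missing:)

```shell
ceph health detail   # lists the stuck/degraded PGs and why
ceph osd tree        # shows which OSDs are up and in
ceph -s              # overall cluster status summary
```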

This is a fresh install, and no nodes have been added or removed.

Any help will be much appreciated.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







