ceph configuration

Hello,

I'm new to the Ceph mailing list, and I need some advice for our test cluster. I have two servers, each with two hard disks. On the first server I configured a monitor and an OSD, and on the second server only an OSD. The configuration looks as follows:

[mon.a]

        host = ceph1
        mon addr = 192.168.2.170:6789

[osd.0]
        host = ceph1
        addr = 192.168.2.170
        devs = /dev/sdb

[osd.1]
        host = ceph2
        addr = 192.168.2.114
        devs = /dev/sdb
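
For reference, I did not override the OSD data path anywhere, so I assume the default from the Ceph documentation applies (this [osd] snippet is not in my file, it is just my understanding of the default the startup script would use):

[osd]
        osd data = /var/lib/ceph/osd/ceph-$id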

Once I run 'service ceph -a start', I keep getting the following error:

Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory
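
The error suggests the OSD data directory simply does not exist on ceph2. This is roughly the preparation step I expected 'service ceph -a start' to handle itself (the mkfs/mkkey flags are my assumption from the ceph-osd man page, not something I have run yet):

        # on ceph2, as root -- create the mount point the startup script expects,
        # then initialize the OSD data directory for osd.1
        mkdir -p /var/lib/ceph/osd/ceph-1
        ceph-osd -i 1 --mkfs --mkkey

Please correct me if the startup script is supposed to create this directory on its own.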

and I'm unable to mount the cluster with the following command:
root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt

Also, when executing 'ceph health' I get this response:
HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122 degraded (12.295%)

This is a fresh install, and no nodes have been added or removed.

Any help will be much appreciated.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
