Yavor,
I would highly recommend taking a look at the quick install guide: http://ceph.com/docs/next/start/quick-start/
As per the guide, you need to pre-create the data directories before starting Ceph.
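For example, here is a rough sketch based on the path in your error output; the mon and osd.0 paths are the Ceph defaults and are assumed, so adjust them if your config points elsewhere:

# on ceph1 - monitor and osd.0 data directories (default Ceph paths, assumed)
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/osd/ceph-0

# on ceph2 - osd.1 data directory (the path your df error complains about)
mkdir -p /var/lib/ceph/osd/ceph-1

# then re-run the cluster initialisation steps from the quick start guide
# and start the daemons again
service ceph -a start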
Andrei
From: "Явор Маринов" <ymarinov@xxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 4 June, 2013 11:03:52 AM
Subject: ceph configuration
Hello,
I'm new to the Ceph mailing list, and I need some advice for our
test cluster. I have two servers, each with two hard disks. On the first
server I configured a monitor and an OSD, and on the second server only an OSD.
The configuration looks as follows:
[mon.a]
host = ceph1
mon addr = 192.168.2.170:6789
[osd.0]
host = ceph1
addr = 192.168.2.170
devs = /dev/sdb
[osd.1]
host = ceph2
addr = 192.168.2.114
devs = /dev/sdb
Once I run 'service ceph -a start', I keep getting the following error:
Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory
I'm also unable to mount the cluster with the following command:
root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
Also, when executing 'ceph health' I get this response:
HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122
degraded (12.295%)
This is a fresh install and no nodes have been added or removed.
Any help will be much appreciated.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com