Hi,
see here:
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg15546.html
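In short: since Giant, only the "rbd" pool is created by default; the
"data" and "metadata" pools were only needed for CephFS and are no
longer created automatically. Creating them by hand is safe if you plan
to use CephFS, but for a plain rados test you can just use the pool you
already have (the one from your lspools output), e.g.:

cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt --pool=rbd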
Udo
On 16.12.2014 05:39, Benjamin wrote:
I increased the OSDs to 10.5GB each and now I have
a different issue...
cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt --pool=data
error opening pool data: (2) No such file or directory
cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
0 rbd,
Here's ceph -w:
cephy@ceph-admin0:~/ceph-cluster$ ceph -w
    cluster b3e15af-SNIP
     health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk
            space; mon.ceph2 low disk space; clock skew detected on
            mon.ceph0, mon.ceph1, mon.ceph2
     monmap e3: 4 mons at {ceph-admin0=10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0},
            election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19781 MB used, 7050 MB / 28339 MB avail
                  64 active+clean
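(I assume the clock skew warning just means the mons aren't time-synced;
would installing ntp on each mon node, something like

sudo apt-get install ntp    # assuming the nodes run Ubuntu/Debian
ntpq -p                     # check that a time source is actually selected

be enough to clear it? The low disk space warning is probably just my
small VM disks filling up.)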
Are there any other commands I could run that would be helpful? Is it
safe to simply create the "data" and "metadata" pools myself?
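For example, would something like this be reasonable (64 PGs, matching
the existing pool)?

cephy@ceph-admin0:~/ceph-cluster$ ceph osd pool create data 64 64
cephy@ceph-admin0:~/ceph-cluster$ ceph osd pool create metadata 64 64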
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com