Hi Udo,
Thanks! Creating the MDS did not add data and metadata pools for me, but I was able to simply create them myself.
The tutorials also suggest making new pools, cephfs_data and cephfs_metadata - would simply using "data" and "metadata" work better?
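In case it's useful to anyone following along, by "create them myself" I mean roughly the following (the pg count of 64 is just a placeholder for a tiny test cluster, and ceph fs new needs Giant or later):

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph mds stat

I haven't tuned the pg numbers at all, so treat them as a starting point rather than a recommendation.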
- B
On Mon, Dec 15, 2014, 10:37 PM Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
On 16.12.2014 05:39, Benjamin wrote:
I increased the OSDs to 10.5GB each and now I have a different issue...
cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt --pool=data
error opening pool data: (2) No such file or directory
cephy@ceph-admin0:~/ceph-cluster$ ceph osd lspools
0 rbd,
Here's ceph -w:
cephy@ceph-admin0:~/ceph-cluster$ ceph -w
    cluster b3e15af-SNIP
     health HEALTH_WARN mon.ceph0 low disk space; mon.ceph1 low disk space; mon.ceph2 low disk space; clock skew detected on mon.ceph0, mon.ceph1, mon.ceph2
     monmap e3: 4 mons at {ceph-admin0=10.0.1.10:6789/0,ceph0=10.0.1.11:6789/0,ceph1=10.0.1.12:6789/0,ceph2=10.0.1.13:6789/0}, election epoch 10, quorum 0,1,2,3 ceph-admin0,ceph0,ceph1,ceph2
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19781 MB used, 7050 MB / 28339 MB avail
                  64 active+clean
Are there any other commands I could run that would be helpful? Is it safe to simply create the "data" and "metadata" pools manually myself?
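(By "manually create" I mean something along the lines of the following - the pg count of 64 is just a guess for a cluster this small:

    ceph osd pool create data 64
    ceph osd pool create metadata 64
    ceph osd lspools

and then retrying the rados put against the data pool.)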
On Mon, Dec 15, 2014 at 5:07 PM, Benjamin <zorlin@xxxxxxxxx> wrote:
Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
- B
On Dec 15, 2014 5:06 PM, "Craig Lewis" <clewis@xxxxxxxxxxxxxxxxxx> wrote:
On Sun, Dec 14, 2014 at 6:31 PM, Benjamin <zorlin@xxxxxxxxx> wrote:
The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of disk. They sit between 10% and 30% disk utilization, so all of them have free disk space, which means I have no idea what the heck is causing Ceph to complain.
Each OSD is 8GB? You need to make them at least 10 GB.
Ceph weights each disk as its size in TiB, and it truncates to two decimal places. So your 8 GiB disks have a weight of 0.00. Bump them up to 10 GiB, and they'll get a weight of 0.01.
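To see what I'm describing, check the WEIGHT column in ceph osd tree. Resizing the disks is the cleaner fix, but purely as an illustration you could also override the CRUSH weight by hand (osd.0 here is just an example id):

    ceph osd tree                        # 8 GiB OSDs show up with WEIGHT 0.00
    ceph osd crush reweight osd.0 0.01   # manual override, for illustration only

With a weight of 0, CRUSH never maps any placement groups to that OSD, which is why the PGs can't go active+clean.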
You should have 3 OSDs, one for each of ceph0,ceph1,ceph2.
If that doesn't fix the problem, go ahead and post the things Udo mentioned.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com