I hate to bug everyone, but I truly hope someone has an answer to the question below.
Thank you kindly!
---------- Forwarded message ----------
From: Marek Dohojda <mdohojda@xxxxxxxxxxxxxxxxxxx>
Date: Wed, Jun 10, 2015 at 7:49 AM
Subject: Too many PGs
To: ceph-users-request@xxxxxxxxxxxxxx
Hello
I am running “Hammer” Ceph and I am getting the following:
health HEALTH_WARN
too many PGs per OSD (438 > max 300)
Now I realize that this is because I have too few OSDs for the number of pools I have. Currently I have 14 OSDs, split 7 each between SSD and LVM. I created another pool for CephFS, which is what triggered this error.
I think the reason I have this error is that I created the last CephFS pool with 512 PGs, which in retrospect was a mistake. I am using this pool strictly as an easy way to back up my libvirt XML files, and hence do not need much in the way of redundancy. Even if it were to fail, I am not really worried about it.
I will be adding more OSDs soon, so this error will go away in the long term.
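As a sanity check on where the 438 comes from: as I understand it, the per-OSD figure is the sum over all pools of pg_num times the replica size, divided by the OSD count. A minimal sketch with made-up pool values (not my actual pools, just numbers that happen to land near 438):

```python
def pgs_per_osd(pools, num_osds):
    """Each PG is counted once per replica it places on an OSD."""
    total_pg_replicas = sum(pg_num * size for pg_num, size in pools)
    return total_pg_replicas // num_osds

# Hypothetical pools: (pg_num, replica size) -- illustrative only.
pools = [(512, 3), (512, 3), (512, 3), (512, 3)]
print(pgs_per_osd(pools, 14))  # -> 438
```

So even one extra 512-PG pool moves the average a long way on a 14-OSD cluster.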
For now: is there any way to either suppress this error, or adjust the PG count down on the CephFS pool I created? I tried:
ceph osd pool set <pool-name> pg_num 64
However, that didn’t actually do it; perhaps it needs a restart?
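For reference, the knob I suspect controls the warning threshold is mon_pg_warn_max_per_osd (I believe it defaults to 300 on Hammer, matching the message above); something along these lines might suppress it, though I haven't verified this on my cluster:

```shell
# Check what pg_num the pool is currently set to:
ceph osd pool get <pool-name> pg_num

# Raise (or disable, with 0) the warning threshold on the running monitors.
# <pool-name> is a placeholder; this is an untested sketch.
ceph tell mon.* injectargs '--mon-pg-warn-max-per-osd 0'
```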
I wouldn’t be opposed to deleting this pool and recreating it, but when I try that, I get an error that there must be at least one pool in the MDS.
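In case it clarifies what I'm attempting: my understanding is that CephFS pools can't be deleted while a filesystem still references them, so the sequence would be roughly the following (untested on my cluster, pool and filesystem names are placeholders, and the exact flags may differ on Hammer):

```shell
# Remove the filesystem first so the pools are no longer referenced,
# then delete the pool itself (name must be given twice to confirm).
ceph fs rm <fs-name> --yes-i-really-mean-it
ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
```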
Thank you!
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com