On 05-01-15 11:04, ivan babrou wrote:
> Hi!
>
> I have a cluster with 106 OSDs and disk usage varies from 166 GB to
> 316 GB. Disk usage is highly correlated with the number of PGs per OSD
> (no surprise there). Is there a reason for Ceph to allocate more PGs
> on some nodes?
>
> The biggest OSDs are 30, 42 and 69 (300 GB+ each) and the smallest are
> 87, 33 and 55 (170 GB each). The biggest pool has 2048 PGs; pools with
> very little data have only 8 PGs. PG size in the biggest pool is ~6 GB
> (5.1 to 6.3, actually).
With 106 OSDs you need about 3000 PGs, so with 2048 you are a bit low.
Also, what kind of data are you storing? RBD? RGW? Or raw RADOS?
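For reference, the usual rule of thumb from the Ceph docs is roughly 100 PGs per OSD, divided by the replication factor and rounded up to a power of two. A minimal sketch (assuming the common replica count of 3, which the original post doesn't state):

```python
def recommended_pgs(num_osds, pgs_per_osd=100, replicas=3):
    """Rule-of-thumb PG count: ~100 PGs per OSD divided by the
    replica count, rounded up to the next power of two."""
    raw = num_osds * pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# 106 OSDs: 106 * 100 / 3 ≈ 3533, next power of two is 4096
print(recommended_pgs(106))  # → 4096
```

With 3 replicas that suggests rounding up to 4096 PGs rather than 2048, which matches the "a bit low" comment above.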
Keep in mind that PG selection happens based on object name and that
object size is NOT taken into account by CRUSH, so you'll never see a
100% even distribution of data.
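To see why name-based placement can't equalize bytes, here is a toy model (not Ceph's actual CRUSH code): hashing object names spreads object *counts* almost evenly across PGs, but when object sizes vary, the *bytes* per PG still spread out.

```python
import hashlib
import random

# Toy model: map each object to a PG by hashing its name, as a stand-in
# for real CRUSH placement. Object sizes are random (1 B .. 4 MiB).
random.seed(1)
num_pgs = 8
pg_counts = [0] * num_pgs
pg_bytes = [0] * num_pgs

for i in range(10000):
    name = "obj-%d" % i
    pg = int(hashlib.md5(name.encode()).hexdigest(), 16) % num_pgs
    pg_counts[pg] += 1
    pg_bytes[pg] += random.randint(1, 4 << 20)

# Object counts per PG are close to equal; bytes per PG are not,
# because placement never looks at object size.
print(max(pg_counts) / min(pg_counts))
print(max(pg_bytes) / min(pg_bytes))
```

More PGs per OSD averages this variance out, which is another reason the low PG count above makes the imbalance worse.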
> The lack of balance in disk usage prevents me from using all the
> available disk space: when the biggest OSD is full, the cluster stops
> accepting writes.
>
> Here's a gist with info about my cluster:
> https://gist.github.com/bobrik/fb8ad1d7c38de0ff35ae
>
> --
> Regards, Ian Babrou
> http://bobrik.name http://twitter.com/ibobrik skype:i.babrou
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com