Re: Different disk usage on different OSDs


 



On Mon, 5 Jan 2015 14:04:28 +0400 ivan babrou wrote:

> Hi!
> 
> I have a cluster with 106 OSDs and disk usage varying from 166 GB to
> 316 GB. Disk usage is highly correlated with the number of PGs per OSD
> (no surprise here). Is there a reason for Ceph to allocate more PGs on
> some nodes?
> 
In essence what Wido said, you're a bit low on PGs.

Also, given your current utilization, pool 14 is totally oversized with 1024
PGs. You might want to re-create it with a smaller PG count, and double pool 0
to 512 PGs and pool 10 to 4096.
I assume you raised pgp_num as well when you changed pg_num, right?
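For reference, raising a pool's PG count is a two-step change; until pgp_num
catches up with pg_num, the new PGs exist but data isn't rebalanced onto them.
A minimal sketch ("rbd" is just a placeholder pool name, substitute your own):

```shell
# Bump the PG count for a pool; pgp_num must follow pg_num,
# otherwise the new PGs are not used for data placement.
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512

# Verify that both values match afterwards:
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
```

Note that pg_num can only be increased, never decreased, which is why
oversized pools like your pool 14 have to be re-created rather than shrunk.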

And yeah, Ceph isn't particularly good at balancing things by itself, but
with sufficient PGs you ought to get the variance below or around 30%.

Christian

> The biggest OSDs are 30, 42 and 69 (300 GB+ each) and the smallest are
> 87, 33 and 55 (170 GB each). The biggest pool has 2048 PGs; pools with
> very little data have only 8 PGs. PG size in the biggest pool is ~6 GB
> (5.1..6.3 actually).
> 
> The lack of balanced disk usage prevents me from using all the disk
> space. When the biggest OSD is full, the cluster no longer accepts
> writes.
> 
> Here's gist with info about my cluster:
> https://gist.github.com/bobrik/fb8ad1d7c38de0ff35ae
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


