Uneven OSD usage

Hello,

On Thu, 28 Aug 2014 19:49:59 -0400 J David wrote:

> On Thu, Aug 28, 2014 at 7:00 PM, Robert LeBlanc <robert at leblancnet.us>
> wrote:
> > How many PGs do you have in your pool? This should be about 100/OSD.
> 
> There are 1328 PG's in the pool, so about 110 per OSD.
> 
And just to be pedantic, the PGP_NUM is the same?
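
For reference, something like this (with <poolname> being a placeholder for
your actual pool name) will show both values:

  # verify that pg_num and pgp_num are set to the same value
  ceph osd pool get <poolname> pg_num
  ceph osd pool get <poolname> pgp_num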

The formula is actually OSDs * 100 / replication, so in your case
12*100/2=600. Now with small clusters it is better to err on the large
side; see the next paragraph.

Now 1328 (just out of curiosity, how did you arrive at that number?) isn't
a power of 2 and the happy documentation says:
---
The result should be rounded up to the nearest power of two. Rounding up is optional, but recommended if you want to ensure that all placement groups are roughly the same size.
---

Since you can't go down, the only way is up: to 2048.
See it as an early preparation step towards the time when you reach 48
OSDs. ^o^
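
A rough sketch of the rounding, using plain shell arithmetic (1328 being your
current pg_num):

  # smallest power of two >= the current pg_num
  n=1328; p=1
  while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
  echo "$p"    # prints 2048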

Increasing pg_num (and pgp_num) will cause data movement of course, so do this
at an off-peak time and start with a small increment (firefly won't even let
you add more than 256 PGs at a time).
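
Something along these lines should do it (again, <poolname> is a placeholder
and the intermediate values are just one example of <=256 steps):

  # bump pg_num and pgp_num in steps of at most 256, letting the cluster
  # settle (watch ceph -s) between steps, e.g. 1328 -> 1584 -> 1840 -> 2048
  ceph osd pool set <poolname> pg_num 1584
  ceph osd pool set <poolname> pgp_num 1584
  ceph osd pool set <poolname> pg_num 1840
  ceph osd pool set <poolname> pgp_num 1840
  ceph osd pool set <poolname> pg_num 2048
  ceph osd pool set <poolname> pgp_num 2048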

This should give you a much smoother distribution. 

Regards,

Christian

> Thanks!
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

