Re: Weighting question

On Thu, 01 Jan 2015 13:59:57 +1000 Lindsay Mathieson wrote:

> As mentioned before :) we have two OSD nodes with one 3TB OSD each.
> (replica 2)
> 
> About to add a smaller (1TB) faster drive to each node
> 
> From the docs, normal practice would be to weight it in accordance with
> size, i.e. 3 for the 3TB OSD and 1 for the 1TB OSD.
> 
> But I'd like to spread it 50/50 to take better advantage of the faster
> drive, so weight them all at 1. Bad idea?
> 
Other than the wasted space (with equal weights the 3TB drives will never
hold more data than the 1TB ones can), no. It should achieve what you want.
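
If you go that route, adjusting the CRUSH weight of each OSD is all that's
needed. A rough sketch, the OSD IDs below are just placeholders for your
four OSDs:
---
# Give every OSD the same CRUSH weight regardless of its size.
# Placeholder OSD IDs; substitute your own.
ceph osd crush reweight osd.0 1.0
ceph osd crush reweight osd.1 1.0
ceph osd crush reweight osd.2 1.0
ceph osd crush reweight osd.3 1.0
---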

> We only have 1TB of data so I'm presuming the 1TB drives would get 500GB
> each.
> 

Expect a good deal of variance; Ceph still isn't very good at evenly
distributing data (PGs, actually):
---
Filesystem      1K-blocks      Used  Available Use% Mounted on
/dev/sdi1      2112738204 211304052 1794043640  11% /var/lib/ceph/osd/ceph-19
/dev/sdk1      2112738204 140998368 1864349324   8% /var/lib/ceph/osd/ceph-21
---

OSD 19 holds 157 PGs, OSD 21 just 105, which perfectly explains the roughly
33% difference in used space.

That's on a Firefly cluster with 24 OSDs and a more than adequate number of
PGs per OSD (128).
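
If you want to verify the PG distribution on your own cluster, something
along these lines should do. A rough sketch only, since the pg dump output
format varies a bit between releases (newer releases also grew "ceph osd df",
which shows a per-OSD PG count directly, but that's not in Firefly IIRC):
---
# Count the PGs that have a given OSD in their up/acting sets.
# Sketch only; adjust the OSD IDs and expect the column layout of
# "ceph pg dump pgs_brief" to differ between releases.
for osd in 19 21; do
    echo -n "osd.$osd: "
    ceph pg dump pgs_brief 2>/dev/null | grep -cE "\[([0-9]+,)*${osd}(,[0-9]+)*\]"
done
---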

Christian 
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


