Re: Weighting question

On 2015-01-01 08:27, Dyweni - Ceph-Users wrote:
Hi, I'm going to take a stab at this, since I've recently been dealing
with something similar myself.


On 2014-12-31 21:59, Lindsay Mathieson wrote:
As mentioned before :) we have two OSD nodes with one 3TB OSD each
(replica 2).

We're about to add a smaller (1TB) but faster drive to each node.

From the docs, normal practice would be to weight each OSD according to its
size, i.e. 3 for the 3TB OSD and 1 for the 1TB OSD.

But I'd like to spread it 50/50 to take better advantage of the faster drive,
so weight them all at 1. Bad idea?


As long as your total data used (ceph df) divided by the number of OSDs
stays below your smallest drive's capacity, you should be fine.
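
To make that concrete, here's a rough sketch of that check and of the
50/50 weighting. The osd IDs are assumptions (say osd.0/osd.1 are the
3TB spinners and osd.2/osd.3 the new 1TB drives), and the ceph df figure
is made up:

  # total raw usage across the cluster (output format varies by release)
  ceph df
  # e.g. RAW USED ~ 2000G over 4 OSDs -> ~500G per OSD, which is below
  # the smallest (1TB) drive, so equal weights should be safe

  # weight everything equally for the 50/50 split
  ceph osd crush reweight osd.0 1.0
  ceph osd crush reweight osd.1 1.0
  ceph osd crush reweight osd.2 1.0
  ceph osd crush reweight osd.3 1.0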

I suspect a better configuration would be to leave your weights alone and
instead change your primary affinity so that the OSD on the SSD is used
as the primary. You might see a little improvement on writes (since the
spinners still have to do their share), but reads should see the biggest
improvement (since Ceph only has to read from the SSD).

http://ceph.com/docs/master/rados/operations/crush-map/#primary-affinity
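
For example, a sketch only: the osd IDs are assumptions (osd.2/osd.3
being the fast drives), and on Firefly you may first need to allow
primary affinity, as the links here describe:

  # allow primary-affinity changes (ceph.conf [mon] section, or inject live)
  #   mon osd allow primary affinity = true
  ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'

  # prefer the fast drives as primaries, deprefer the spinners
  ceph osd primary-affinity osd.2 1.0
  ceph osd primary-affinity osd.3 1.0
  ceph osd primary-affinity osd.0 0.5
  ceph osd primary-affinity osd.1 0.5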



This may help you too:

http://cephnotes.ksperis.com/blog/2014/08/20/ceph-primary-affinity

We only have 1TB of data, so I'm presuming the 1TB drives would get 500GB each.
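(Rough arithmetic, assuming all four OSDs are weighted 1: 1TB of data x 2
replicas = ~2TB raw, spread over 4 equally weighted OSDs = ~500GB per OSD,
so the 1TB drives would sit at about half full.)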

--
Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com