Re: SSD Primary Affinity

Hi Maxime,

This is a very interesting concept. Instead of using primary affinity to place the primary copy on an SSD, you set the CRUSH rule to first choose an OSD from the 'ssd-root', then fill the remaining replicas from the 'hdd-root'.

And with 'step chooseleaf firstn {num}':
> If {num} > 0 && < pool-num-replicas, choose that many buckets.
So 1 chooses one bucket from that root.
> If {num} < 0, it means pool-num-replicas - |{num}|.
And -1 means it fills the remaining replicas from that root.
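
So with your rule below and a size 3 pool: firstn 1 against the ssd-root picks one host for the primary copy, and firstn -1 against the hdd-root picks 3 - 1 = 2 hosts for the remaining copies, i.e. every PG ends up with one SSD copy and two HDD copies.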

It is an approach I had not considered.
Really appreciate the feedback.

Thanks,

Reed

> On Apr 19, 2017, at 12:15 PM, Maxime Guyot <Maxime.Guyot@xxxxxxxxx> wrote:
> 
> Hi,
> 
>>> Assuming production level, we would keep a pretty close 1:2 SSD:HDD ratio,
>> 1:4-5 is common but depends on your needs and the devices in question, i.e. assuming LFF drives and that you aren't using crummy journals.
> 
> You might be speaking about different ratios here. I think Anthony is speaking about the journal:OSD ratio, and Reed about the capacity ratio between the HDD and SSD tiers/roots.
> 
> I have been experimenting with hybrid setups (1 copy on SSD + 2 copies on HDD). As Richard says, you'll get much better random read performance with the primary OSD on SSD, but write performance won't be amazing since you still have 2 HDD copies to write before the ACK.
> 
> I know the doc suggests using primary affinity, but since it is an OSD-level setting it does not play well with multiple storage tiers, so I looked for other options. From what I have tested, a rule that selects the first/primary OSD from the ssd-root and the rest of the copies from the hdd-root works, though I am not sure it is *guaranteed* that the first OSD selected will be primary.
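> 
> For reference, the primary-affinity approach would look something like the sketch below (hypothetical OSD IDs; pre-Luminous releases may also need mon_osd_allow_primary_affinity=true on the monitors):
> 
>  # keep the HDD OSDs from being chosen as primary, so the SSD OSDs serve reads
>  ceph osd primary-affinity osd.3 0
>  ceph osd primary-affinity osd.4 0
> 
> The rule I tested instead: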
> 
> rule hybrid {
>  ruleset 2
>  type replicated
>  min_size 1
>  max_size 10
>  step take ssd-root
>  step chooseleaf firstn 1 type host
>  step emit
>  step take hdd-root
>  step chooseleaf firstn -1 type host
>  step emit
> }
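> 
> To try it out, something like this should work (a sketch: the pool name and PG count are hypothetical, and on pre-Luminous releases the pool option is crush_ruleset rather than crush_rule):
> 
>  # create a replicated pool that uses the hybrid rule
>  ceph osd pool create hybrid-test 128 128 replicated hybrid
> 
>  # check which OSD ends up primary for an object: the 'p' entry in the acting set
>  ceph osd map hybrid-test some-object
>  # -> ... up ([12,3,4], p12) acting ([12,3,4], p12)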
> 
> Cheers,
> Maxime




