Re: Simple question about primary-affinity

Thank you, I will look into cache-tiering then ;)
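For reference, the cache-tiering suggestion quoted below can be sketched with the Ceph tier commands of that era. The pool names ("rbd" as the base pool, "ssd-cache" as the SSD pool) and all tuning values are placeholders to adapt to the actual cluster:

```shell
# Assumes an SSD-backed pool "ssd-cache" already exists (e.g. via a
# CRUSH rule that targets the SSD OSDs). Pool names and values below
# are illustrative, not a recommendation.

ceph osd tier add rbd ssd-cache             # attach cache pool to base pool
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbd ssd-cache     # route client I/O through the cache

# Hit-set and sizing knobs so the tiering agent can flush/evict;
# tune these for your workload:
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache hit_set_count 1
ceph osd pool set ssd-cache hit_set_period 3600
ceph osd pool set ssd-cache target_max_bytes 600000000000   # ~600 GB of SSD
```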


On 08/19/2016 02:50 AM, Christian Balzer wrote:
>
> Hello,
>
> completely ignoring your question about primary-affinity (which always
> struck me as a corner case thing). ^o^
>
> If you're adding SSDs to your cluster you will want to:
>
> a) use them for OSD journals (if you're not doing so already)
> b) create dedicated pools for high-speed data (e.g. RBD images for DB
> storage)
> c) use them for cache-tiering.
>
> The last one is a much more efficient approach than primary-affinity,
> since hot objects will wind up on the SSDs, as opposed to random ones.
>
> Christian
> On Thu, 18 Aug 2016 11:07:50 +0200 Florent B wrote:
>
>> Hi everyone,
>>
>> I am starting to add some SSD disks to my Ceph setup.
>>
>> For now I only have 600 GB on SSD (out of 14,000 GB total).
>>
>> So my SSDs can't hold *every* PG in my setup, for now.
>>
>> If I set primary-affinity to 0 on non-SSD disks, will I get a problem
>> for PGs stored on standard spinning disks?
>>
>> For example, if a PG is on OSDs 4, 15 and 18, and they all have a
>> primary-affinity of 0.00000, will it be a problem to elect a primary?
>>
>> Do I have to set primary-affinity to 0.00001 on non-SSD disks instead?
>>
>> Thank you ;)
>>
>> Flo
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
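For reference, primary-affinity is set per OSD with the Ceph CLI; a minimal sketch, using the OSD ids from the example in the question (values are illustrative):

```shell
# Requires "mon osd allow primary affinity = true" on the monitors
# (ceph.conf or injectargs) on Ceph releases of this era.
# Values range from 0 (avoid being primary) to 1 (default).
ceph osd primary-affinity osd.4 0
ceph osd primary-affinity osd.15 0
ceph osd primary-affinity osd.18 0
```

Primary-affinity is a relative weight used when choosing the primary among a PG's replicas, so one OSD is still elected primary even if every replica has affinity 0.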
