Re: luminous/bluestore osd memory requirements

hi david,

sure i understand that. but how bad does it get when you oversubscribe
OSDs? if context switching itself is dominant, then using HT should
allow running twice as many OSDs on the same CPU (one OSD per HT core);
but if the issue is actual cpu cycles, HT won't help that much either (1
OSD per HT core vs 2 OSDs per physical core).

i guess the reason for this is that OSD processes have lots of threads?
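if you want to check that yourself, the per-process thread count is
visible under /proc on Linux; a minimal sketch (looking up a real
ceph-osd pid is left out, the function name is mine):

```python
import os

def count_threads(pid: int) -> int:
    """Count the threads of a process by listing /proc/<pid>/task (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/task"))

# example: this process's own thread count; for an OSD you would pass
# the pid of a running ceph-osd daemon instead (e.g. via pgrep ceph-osd)
print(count_threads(os.getpid()))
```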

maybe i can run some tests on a ceph test cluster myself ;)
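in the meantime, david's rule of thumb below boils down to simple
arithmetic; a sketch (the reserve-one-core choice follows his example,
the function name and defaults are mine):

```python
def max_osds(physical_cores: int, hyperthreading: bool = True,
             reserve_cores: int = 1) -> int:
    """Rule-of-thumb OSD count: one OSD per (HT) core, minus cores kept for the OS."""
    logical = physical_cores * (2 if hyperthreading else 1)
    return max(logical - reserve_cores, 0)

# david's example: quad-core with HT -> at most 8 OSDs, 7 with one core reserved
print(max_osds(4))                    # 7
print(max_osds(4, reserve_cores=0))   # 8
```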

stijn


On 08/12/2017 03:13 PM, David Turner wrote:
> The reason for an entire core per osd is that it's trying to avoid context
> switching your CPU to death. If you have a quad-core processor with HT, I
> wouldn't recommend more than 8 osds on the box. I probably would go with 7
> myself to keep one core available for system operations. This
> recommendation has nothing to do with GHz. Higher GHz per core will likely
> improve your cluster latency. Of course if your use case says that you only
> need very minimal throughput... There is no need to hit or exceed the
> recommendation. The number of cores recommendation is not changing for
> bluestore. It might add a recommendation of how fast your processor should
> be... But making it based on how much GHz per TB is an invitation to
> context switch to death.
> 
> On Sat, Aug 12, 2017, 8:40 AM Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
> wrote:
> 
>> hi all,
>>
>> thanks for all the feedback. it's clear we should stick to the 1GB/TB
>> for the memory.
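
the 1GB-of-RAM-per-TB rule above turns into a one-liner; a sketch
assuming uniform drive sizes per OSD (names are mine):

```python
def ram_gb_needed(num_osds: int, tb_per_osd: float, gb_per_tb: float = 1.0) -> float:
    """RAM budget from the 1 GB of RAM per TB rule of thumb."""
    return num_osds * tb_per_osd * gb_per_tb

# e.g. 12 OSDs of 8 TB each -> 96 GB of RAM for the OSDs alone
print(ram_gb_needed(12, 8))  # 96.0
```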
>>
>> any (changes to) recommendation for the CPU? in particular, is it still
>> the rather vague "1 HT core per OSD" (or was it "1 1GHz HT core per
>> OSD")? it would be nice if we had some numbers like required SPECint per
>> TB and/or per Gb/s. also any indication how much more cpu EC uses (10%,
>> 100%, ...)?
>>
>> i'm aware that this also depends on the use case, but i'll take any
>> pointers i can get. we will probably end up overprovisioning, but it
>> would be nice if we could avoid a whole extra cpu (32GB dimms are cheap,
>> so lots of ram with a single socket is really possible these days).
>>
>> stijn
>>
>> On 08/10/2017 05:30 PM, Gregory Farnum wrote:
>>> This has been discussed a lot in the performance meetings so I've added
>>> Mark to discuss. My naive recollection is that the per-terabyte
>>> recommendation will be more realistic than it was in the past (an
>>> effective increase in memory needs), but also that it will be under much
>>> better control than previously.
>>>
>>> On Thu, Aug 10, 2017 at 1:35 AM Stijn De Weirdt <stijn.deweirdt@xxxxxxxx
>>>
>>> wrote:
>>>
>>>> hi all,
>>>>
>>>> we are planning to purchase new OSD hardware, and we are wondering if for
>>>> upcoming luminous with bluestore OSDs, anything wrt the hardware
>>>> recommendations from
>>>> http://docs.ceph.com/docs/master/start/hardware-recommendations/
>>>> will be different, esp. the memory/cpu part. i understand from colleagues
>>>> that the async messenger makes a big difference in memory usage (maybe
>>>> also cpu load?); but we are also interested in the "1GB of RAM per TB"
>>>> recommendation/requirement.
>>>>
>>>> many thanks,
>>>>
>>>> stijn
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>
> 


