Re: How to use cgroup to bind ceph-osd to a specific cpu core?

On 27-07-15 15:28, Jan Schermer wrote:
> Cool! Any immediate effect you noticed? Did you partition it into 2 cpusets corresponding to NUMA nodes or more?
> 

Not yet. The cluster is still being built; I will run benchmarks with
and without the pinning set.

Currently the setup is indeed 2 cpusets corresponding to the 2 NUMA nodes.
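
For anyone who wants to try the same, a minimal sketch of such a
partition might look like the following (this is not our exact setup;
it assumes a cgroup v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset,
and the 0-23 / 24-47 core split and the ceph-osd-node* names are
placeholders, so check `numactl --hardware` for your real topology):

    #!/usr/bin/env python3
    # Sketch: partition ceph-osd processes into two cpusets,
    # one per NUMA node, via the cgroup v1 cpuset controller.
    import os

    CPUSET_ROOT = "/sys/fs/cgroup/cpuset"

    # (cpuset name, cpu list, memory node) -- placeholder values
    NODES = [("ceph-osd-node0", "0-23", "0"),
             ("ceph-osd-node1", "24-47", "1")]

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    for name, cpus, mems in NODES:
        path = os.path.join(CPUSET_ROOT, name)
        os.makedirs(path, exist_ok=True)
        # Both cpuset.cpus and cpuset.mems must be set before any
        # task can be attached to the cpuset.
        write(os.path.join(path, "cpuset.cpus"), cpus)
        write(os.path.join(path, "cpuset.mems"), mems)

    # An OSD is then pinned by writing its PID to the tasks file, e.g.:
    #   write(os.path.join(CPUSET_ROOT, "ceph-osd-node0", "tasks"),
    #         str(osd_pid))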

Wido

> Jan
> 
>> On 27 Jul 2015, at 15:21, Wido den Hollander <wido@xxxxxxxx> wrote:
>>
>>
>>
>> On 27-07-15 14:56, Dan van der Ster wrote:
>>> On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
>>>> I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This
>>>> is a +/- 20PB Ceph cluster and I'm trying to see how much we would
>>>> benefit from it.
>>>
>>> Cool. How many OSDs total?
>>>
>>
>> 50 hosts, 4500 OSDs at start. Plain RADOS only, no RBD or anything else.
>>
>> Wido
>>
>>> Cheers, Dan
>>>
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


