Re: [ceph-users] Intel power tuning - 30% throughput performance increase

It just came to my attention that Intel has advised Red Hat never to
lock the CPUs in C0, as doing so may affect the life expectancy of
server components such as fans and the CPUs themselves.

FYI, YMMV.
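
For anyone wanting to experiment: the knob in question is the Linux PM
QoS interface at /dev/cpu_dma_latency. A minimal sketch of how it is
typically driven (assuming the standard binary s32 microseconds write
format; note the constraint only holds while the file descriptor stays
open):

/* Sketch: register a CPU latency constraint via the PM QoS device node.
 * Writing 0 locks out all idle states (the "C0" case discussed above);
 * writing 1 still permits states with <= 1us exit latency (typically C1). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int32_t target_us = 1;               /* latency target in microseconds */
    int fd = open("/dev/cpu_dma_latency", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/cpu_dma_latency");
        return 1;
    }
    if (write(fd, &target_us, sizeof target_us) != (ssize_t)sizeof target_us) {
        perror("write");
        return 1;
    }
    /* The kernel drops the request as soon as the fd is closed, so keep
     * the process (and fd) alive for as long as the tuning should apply. */
    pause();
    return 0;
}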

On Fri, May 19, 2017 at 5:53 PM, xiaoguang fan
<fanxiaoguang008@xxxxxxxxx> wrote:
> I ran a test with C-states disabled, but performance didn't increase.
>
>
>
> 2017-05-19 15:23 GMT+08:00 Xiaoxi Chen <superdebuger@xxxxxxxxx>:
>>
>> Would it be better to document this first under "Known system-wide
>> tuning knobs" in the docs?
>>
>>
>> 2017-05-05 8:28 GMT+08:00 Brad Hubbard <bhubbard@xxxxxxxxxx>:
>> > On Thu, May 4, 2017 at 10:58 AM, Haomai Wang <haomai@xxxxxxxx> wrote:
>> >> refer to https://github.com/ceph/ceph/pull/5013
>> >
>> > How about we issue a warning about possible performance implications
>> > if we detect this is not set to 1 *or* 0 at startup?
>> >
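
For concreteness, a rough sketch of what such a startup check could look
like. The function name is hypothetical (nothing like it exists in the
Ceph tree); it assumes that reading the device returns the current
aggregate constraint as a binary s32 in microseconds:

/* Hypothetical check: warn if /dev/cpu_dma_latency is neither 0 nor 1. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void check_cpu_dma_latency(void)
{
    int32_t cur_us;
    int fd = open("/dev/cpu_dma_latency", O_RDONLY);

    if (fd < 0)
        return;  /* interface not present on this platform; skip the check */
    if (read(fd, &cur_us, sizeof cur_us) == (ssize_t)sizeof cur_us &&
        cur_us != 0 && cur_us != 1)
        fprintf(stderr, "warning: cpu_dma_latency is %d us; deep C-states "
                "are reachable and may hurt OSD performance\n", (int)cur_us);
    close(fd);
}
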
>> >>
>> >> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard@xxxxxxxxxx>
>> >> wrote:
>> >>> +ceph-devel to get input on whether we want/need to check the value of
>> >>> /dev/cpu_dma_latency (platform dependent) at startup and issue a
>> >>> warning, or whether documenting this would suffice?
>> >>>
>> >>> Any doc contribution would be welcomed.
>> >>>
>> >>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>> >>> <blair.bethwaite@xxxxxxxxx> wrote:
>> >>>> On 3 May 2017 at 19:07, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>> >>>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>> >>>>> your 30% boost was when going from throughput-performance to
>> >>>>> dma_latency=0, right? I'm trying to understand what the
>> >>>>> incremental improvement from 1 to 0 is.
>> >>>>
>> >>>> Probably minimal, given that a value of 1 represents a state
>> >>>> transition latency of only 1us. Presumably the main issue is when
>> >>>> the CPU can drop into the deeper C-states, and the compounding
>> >>>> impact of that over time. I will do some simple characterisation
>> >>>> of that over the next couple of weeks and report back...
>> >>>>
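
One way to get at those numbers would be to diff the kernel's per-state
residency counters before and after a benchmark run. A sketch, assuming
the standard cpuidle sysfs layout (paths can vary by platform and idle
driver):

/* Sketch: print name, exit latency and cumulative residency (the "time"
 * file, in microseconds) for each idle state of CPU0. Run it before and
 * after a test and diff the residency values. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[128], name[32];

    for (int s = 0; ; s++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", s);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                               /* no more idle states */
        if (!fgets(name, sizeof name, f))
            name[0] = '\0';
        fclose(f);
        name[strcspn(name, "\n")] = '\0';

        long latency = 0, time_us = 0;
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/latency", s);
        if ((f = fopen(path, "r"))) { fscanf(f, "%ld", &latency); fclose(f); }
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/time", s);
        if ((f = fopen(path, "r"))) { fscanf(f, "%ld", &time_us); fclose(f); }

        printf("state%d %-10s exit_latency=%ldus residency=%ldus\n",
               s, name, latency, time_us);
    }
    return 0;
}
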
>> >>>> --
>> >>>> Cheers,
>> >>>> ~Blairo
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Cheers,
>> >>> Brad
>> >
>> >
>> >
>> > --
>> > Cheers,
>> > Brad
>
>



-- 
Cheers,
Brad


