Re: Ceph Hackathon: More Memory Allocator Testing

On Thu, Aug 20, 2015 at 2:35 PM, Dałek, Piotr
<Piotr.Dalek@xxxxxxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
>> owner@xxxxxxxxxxxxxxx] On Behalf Of Blinick, Stephen L
>> Sent: Wednesday, August 19, 2015 6:58 PM
>>
>> [...]
>> Regarding the all-HDD or high-density HDD nodes, is it certain these issues
>> with tcmalloc don't apply because of the lower performance, or could they
>> potentially manifest over a longer period of running (weeks/months)? I know
>> we've seen some weirdness attributed to tcmalloc on our 10-disk, 20-node
>> cluster with HDDs and SSD journals, but it took a few weeks to show up.
>
> And it takes me just a few minutes with rados bench to reproduce this issue on a mixed-storage node (SSDs, SAS disks, high-capacity SATA disks, etc).
> See here: http://ceph.predictor.org.pl/cpu_usage_over_time.xlsx
> It gets even worse when rebalancing starts...
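(For reference, a minimal reproduction along the lines Piotr describes might
look like the following; the pool name, duration, block size, and thread
count are placeholders I've chosen, not values from his test:

    rados bench -p testpool 300 write -b 4096 -t 16 --no-cleanup

run against the mixed-storage node while sampling the OSD processes' CPU
usage over time, e.g. with top or pidstat.)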

Cool, that matches my thinking. I guess the only way to ease the memory
problem is to solve it for each heavy memory-allocation use case individually.
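As one concrete knob, and assuming the tcmalloc thread-cache pressure
discussed in this thread is what these nodes are hitting (an assumption on my
part, not something measured here), gperftools lets you enlarge tcmalloc's
total thread cache via an environment variable set before starting the OSDs:

    # sketch: 128 MB as an example value, not a verified recommendation
    export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728

compared to the 32 MB default. That is a global mitigation, though; per-use-case
fixes would mean reducing allocator churn in each hot path itself.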

>
> With best regards / Pozdrawiam
> Piotr Dałek



-- 
Best Regards,

Wheat


