Re: Mimic BlueStore memory optimization

Hi Glen,

On 2/24/19 9:21 PM, Glen Baars wrote:
> I am tracking down a performance issue with some of our Mimic 13.2.4 OSDs. It feels like a lack of memory, but I have no real proof of the issue. I have used memory profiling (the pprof tool) and the OSDs are staying within their 4GB allocated limit.

What are the symptoms? Does performance drop at a certain point? Did it
drop compared to a previous configuration? You're saying that only
*some* OSDs have a performance issue?
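
If you want to see where an OSD's memory is actually going, you can
dump its mempool and heap stats via the admin socket on the OSD host
(osd.0 here is just a placeholder for one of your OSD ids):

    ceph daemon osd.0 dump_mempools   # per-pool memory accounting
    ceph tell osd.0 heap stats        # tcmalloc heap statistics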

> My questions are:
>
> 1. How do you know if the allocated memory is enough for an OSD? My 1TB disks and 12TB disks use the same amount of memory, and I wonder if OSDs should have memory allocated based on the size of the disks?
> 2. In the past, SSD disks needed 3 times the memory of HDDs and now they don't; why is that? (1GB RAM per HDD and 3GB RAM per SSD both went to 4GB.)

I think you're talking about the BlueStore caching settings for SSDs and
HDDs. You should take a look at the memory autotuning (notably
osd_memory_target):

http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#automatic-cache-sizing
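
As a rough sketch (example values, not recommendations): with
autotuning enabled, which is the Mimic default, you just set a
per-daemon target and BlueStore grows or shrinks its caches to stay
under it:

    [osd]
    osd_memory_target = 4294967296    # 4 GiB per OSD daemon (the default)
    bluestore_cache_autotune = true   # default in Mimic

    # or adjust a single OSD at runtime:
    ceph config set osd.0 osd_memory_target 6442450944

Keep in mind the target is per OSD daemon, so with 14 OSDs on a 72GB
host a 4GB target already commits roughly 56GB before the OS and page
cache get anything.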

> 3. I have read that the number of placement groups per OSD is a significant factor in memory usage. I generally have ~200 placement groups per OSD; this is at the higher end of the recommended values and I wonder if it's causing high memory usage?
>
> For reference, the hosts are 1 x 6-core CPU, 72GB RAM, 14 OSDs, 2 x 10Gbit NICs. LSI CacheCade / write-back cache for the HDDs and LSI JBOD for the SSDs. There are 9 hosts in this cluster.
>
> Kind regards,
> Glen Baars

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


