Re: ceph program uses lots of memory

We may also be interested in your cluster's configuration:

 # ceph --show-config > $(hostname).$(date +%Y%m%d).ceph_conf.txt

On Fri, Dec 30, 2016 at 7:48 AM, David Turner <david.turner@xxxxxxxxxxxxxxxx> wrote:

Another thing I need to confirm is that the number of PGs in the pool with 90% of the data is a power of 2 (256, 512, 1024, 2048, etc.).  If that is the case, then I need the following information.

1) Pool replica size
2) The number of the pool with the data
3) A copy of your osdmap (ceph osd getmap -o osd_map.bin)
4) Full output of (ceph osd tree)
5) Full output of (ceph osd df)

With that I can generate a new crushmap, balanced for your cluster, to equalize the % used across all of the osds.

Our clusters have more than 1k osds, and in those clusters the difference between the most used osd and the least used osd is within 2%.  We have 99.9% of our data in 1 pool.
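As a quick sanity check before gathering the data above, whether pg_num is a power of two can be tested with plain shell arithmetic (a power of two has exactly one bit set, so n & (n-1) is zero).  The literal value here is just an example; the real one comes from `ceph osd pool get <pool> pg_num`:

```shell
# Example pg_num; substitute the value reported by
# "ceph osd pool get <pool> pg_num" for your data pool.
pg_num=1024

# A positive power of two has exactly one bit set, so n & (n-1) == 0.
if [ "$pg_num" -gt 0 ] && [ $(( pg_num & (pg_num - 1) )) -eq 0 ]; then
    echo "$pg_num is a power of two"
else
    echo "$pg_num is NOT a power of two"
fi
```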


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.


________________________________________
From: ceph-users [ceph-users-bounces@lists.ceph.com] on behalf of Bryan Henderson [bryanh@xxxxxxxxxxxxxxxx]
Sent: Thursday, December 29, 2016 3:31 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: ceph program uses lots of memory


Does anyone know why the 'ceph' program uses so much memory?  If I run it with
an address space rlimit of less than 300M, it usually dies with messages about
not being able to allocate memory.

I'm curious as to what it could be doing that requires so much address space.

It doesn't matter what specific command I'm running, and it does this even when
there is no ceph cluster running, so it must be something pretty basic.
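For anyone wanting to reproduce this, the rlimit can be applied in a subshell so it only affects the command under test.  This is a minimal sketch of the mechanism; the echo is a stand-in, and the idea is to replace it with "ceph -s" (or any ceph subcommand) and lower the cap until the allocation failure appears:

```shell
# Cap the address space (ulimit -v takes KiB; 307200 KiB = 300 MiB)
# inside a subshell so the limit does not leak into the login shell.
# Substitute "ceph -s" for the echo to probe the real binary.
( ulimit -v 307200; echo "ran under a 300 MiB address-space cap" )
```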

--
Bryan Henderson                                   San Jose, California
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


