Re: Memory usage of ceph-mds

On Wed, Sep 19, 2012 at 1:48 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> On Wed, 19 Sep 2012, Tren Blackburn wrote:
>> Hey List;
>>
>> I'm in the process of rsyncing in about 7TB of data to Ceph across
>> approximately 58565475 files (okay, so I guess that's not so
>> approximate). It's only managed to copy a small portion of this so far
>> (about 35GB) and the server that currently is the mds master shows the
>> ceph-mds process closing in on 2GB of RAM used. I'm getting a little
>> nervous about this steep increase.
>>
>> fern ceph # ps wwaux | grep ceph-mds
>> root     29943  3.7  0.9 2473884 1915468 ?     Ssl  08:42  11:22
>> /usr/bin/ceph-mds -i 1 --pid-file /var/run/ceph/mds.1.pid -c
>> /etc/ceph/ceph.conf
>>
>> Thanks in advance!
>
> That's not necessarily a problem.  The mds memory usage is controlled via
> the 'mds cache size' knob which defaults to 100,000.  You might try
> halving that and restarting the MDS.
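For reference, a minimal sketch of what halving that knob looks like in ceph.conf (the `[mds]` section is the standard place for it; 50,000 is simply half the 100,000 default, not a tuned recommendation):

```ini
[mds]
    ; default is 100000 cached inodes/dentries; halving it
    ; roughly halves the MDS cache's memory footprint
    mds cache size = 50000
```

The setting takes effect after the MDS is restarted, as Sage suggests.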

Last time we counted, those inodes were ~1KB each, so at the default
cache size of 100,000 you'd expect on the order of 100MB; getting to
2GB of memory used is an awful lot.

Tren, what does your tree look like — are there very large directories?
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

