Re: ceph mds memory usage 20GB : is it normal ?

Hi
>> You could use "mds_cache_size" to limit the number of caps until you have this fixed, but I'd say for your number of caps and inodes, 20GB is normal.

The Luminous documentation says:

"
mds cache size

Description:	The number of inodes to cache. A value of 0 indicates an unlimited number. It is recommended to use mds_cache_memory_limit to limit the amount of memory the MDS cache uses.
Type:	32-bit Integer
Default:	0
"

And my mds_cache_memory_limit is currently set to 5GB.
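
For what it's worth, a minimal sketch of how that limit can be set, both in ceph.conf and at runtime through the admin socket. The value is in bytes (5 GiB here), and the daemon id "mds0" is an assumption:

# ceph.conf on the MDS host: cache memory limit in bytes (5 GiB)
[mds]
mds_cache_memory_limit = 5368709120

# or at runtime through the admin socket (daemon id "mds0" assumed):
ceph daemon mds.mds0 config set mds_cache_memory_limit 5368709120
ceph daemon mds.mds0 config get mds_cache_memory_limit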





----- Original Message -----
From: "Webert de Souza Lima" <webert.boss@xxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, May 11, 2018 20:18:27
Subject: Re: ceph mds memory usage 20GB : is it normal ?

You could use "mds_cache_size" to limit the number of caps until you have this fixed, but I'd say for your number of caps and inodes, 20GB is normal.
This MDS (Jewel) here is consuming 24GB of RAM:

{
  "mds": {
    "request": 7194867047,
    "reply": 7194866688,
    "reply_latency": {
      "avgcount": 7194866688,
      "sum": 27779142.611775008
    },
    "forward": 0,
    "dir_fetch": 179223482,
    "dir_commit": 1529387896,
    "dir_split": 0,
    "inode_max": 3000000,
    "inodes": 3001264,
    "inodes_top": 160517,
    "inodes_bottom": 226577,
    "inodes_pin_tail": 2614170,
    "inodes_pinned": 2770689,
    "inodes_expired": 2920014835,
    "inodes_with_caps": 2743194,
    "caps": 2803568,
    "subtrees": 2,
    "traverse": 8255083028,
    "traverse_hit": 7452972311,
    "traverse_forward": 0,
    "traverse_discover": 0,
    "traverse_dir_fetch": 180547123,
    "traverse_remote_ino": 122257,
    "traverse_lock": 5957156,
    "load_cent": 18446743934203149911,
    "q": 54,
    "exported": 0,
    "exported_inodes": 0,
    "imported": 0,
    "imported_inodes": 0
  }
}
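
For reference, a dump like the one above can be pulled from the MDS admin socket; a minimal sketch, assuming the daemon id is "mds0" and you run it on the MDS host:

# dump all perf counters; the "mds" section holds the counters shown above
ceph daemon mds.mds0 perf dump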


Regards, 
Webert Lima 
DevOps Engineer at MAV Tecnologia 
Belo Horizonte - Brasil 
IRC NICK - WebertRLZ 


On Fri, May 11, 2018 at 3:13 PM Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:


Hi, 

I'm still seeing a memory leak with 12.2.5.

It seems to leak a few MB every 5 minutes.

I'll try to resend some stats next weekend.
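
In the meantime, a minimal sketch of the commands that could be used to gather those stats; the daemon id "mds0" is an assumption:

# resident set size (KB) of the ceph-mds process
ps -o rss= -C ceph-mds

# tcmalloc heap statistics for the daemon
ceph tell mds.mds0 heap stats

# per-pool memory accounting via the admin socket (luminous and later)
ceph daemon mds.mds0 dump_mempools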


----- Original Message -----
From: "Patrick Donnelly" <pdonnell@xxxxxxxxxx>
To: "Brady Deetz" <bdeetz@xxxxxxxxx>
Cc: "Alexandre Derumier" <aderumier@xxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, May 10, 2018 21:11:19
Subject: Re: ceph mds memory usage 20GB : is it normal ?

On Thu, May 10, 2018 at 12:00 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> [ceph-admin@mds0 ~]$ ps aux | grep ceph-mds 
> ceph 1841 3.5 94.3 133703308 124425384 ? Ssl Apr04 1808:32 
> /usr/bin/ceph-mds -f --cluster ceph --id mds0 --setuser ceph --setgroup ceph 
> 
> 
> [ceph-admin@mds0 ~]$ sudo ceph daemon mds.mds0 cache status 
> {
>   "pool": {
>     "items": 173261056,
>     "bytes": 76504108600
>   }
> }
> 
> So, 80GB is my configured limit for the cache and it appears the mds is 
> following that limit. But, the mds process is using over 100GB RAM in my 
> 128GB host. I thought I was playing it safe by configuring at 80. What other 
> things consume a lot of RAM for this process? 
> 
> Let me know if I need to create a new thread. 

The cache size measurement is imprecise pre-12.2.5 [1]. You should upgrade ASAP. 

[1] https://tracker.ceph.com/issues/22972
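
Once upgraded, you can confirm what each daemon is running; a minimal sketch ("ceph versions" is available from Luminous on; the daemon id "mds0" is an assumption):

# per-daemon-type version summary
ceph versions

# or query a single MDS directly
ceph tell mds.mds0 version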

-- 
Patrick Donnelly 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com