Re: ceph mds memory usage 20GB : is it normal ?

>>Can you also share `ceph daemon mds.2 cache status`, the full `ceph 
>>daemon mds.2 perf dump`, and `ceph status`? 

Sorry, too late; I had to restart the mds daemon because I was out of memory :(
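
For reference, a minimal sketch of the restart, assuming a systemd deployment where this daemon runs as the unit ceph-mds@2:

# restart the active MDS (assumes systemd and the unit name ceph-mds@2;
# a standby MDS, if configured, takes over during the restart)
systemctl restart ceph-mds@2

# confirm the filesystem is healthy again
ceph status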

It seems stable for now (around 500 MB).
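
A rough sketch of how to keep an eye on it from here, reusing the commands already mentioned in this thread (mds.2 is assumed to still be the active MDS):

# resident memory (RSS) of the MDS process
ps aux | grep '[c]eph-mds'

# cache status and the cache limit actually in effect
ceph daemon mds.2 cache status
ceph daemon mds.2 config get mds_cache_memory_limit

# the same counters as in the perf dump quoted below
ceph daemon mds.2 perf dump mds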

Not sure if it was related, but I had an NFS-Ganesha -> CephFS daemon running on this cluster (with no client connected to it).
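
To rule that in or out, a small sketch (the systemd unit name nfs-ganesha is an assumption; `session ls` lists the clients that still hold sessions and caps on the MDS):

# stop the unused NFS gateway so it no longer holds a CephFS client session
systemctl stop nfs-ganesha

# check which clients still have sessions open against the MDS
ceph daemon mds.2 session ls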


>>Note [1] will be in 12.2.5 and may help with your issue. 
>>[1] https://github.com/ceph/ceph/pull/20527 

OK, thanks!
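
Once 12.2.5 is out, a quick sketch for confirming what is actually running after the upgrade (assuming the same mds id as above):

# per-daemon version report for the whole cluster
ceph versions

# version of this particular MDS via its admin socket
ceph daemon mds.2 version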



----- Original Message -----
From: "Patrick Donnelly" <pdonnell@xxxxxxxxxx>
To: "Alexandre Derumier" <aderumier@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, March 27, 2018 20:35:08
Subject: Re: ceph mds memory usage 20GB : is it normal ?

Hello Alexandre, 

On Thu, Mar 22, 2018 at 2:29 AM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote: 
> Hi, 
> 
> I've been running CephFS for 2 months now, 
> 
> and my active mds memory usage is around 20G now (still growing). 
> 
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 
> ceph 1521539 10.8 31.2 20929836 20534868 ? Ssl janv.26 8573:34 /usr/bin/ceph-mds -f --cluster ceph --id 2 --setuser ceph --setgroup ceph 
> 
> 
> This is on Luminous 12.2.2. 
> 
> The only tuning done is: 
> 
> mds_cache_memory_limit = 5368709120 
> 
> 
> (5 GB). I know it's a soft limit, but 20 GB seems quite huge vs 5 GB .... 
> 
> 
> Is it normal? 

No, that's definitely not normal! 


> # ceph daemon mds.2 perf dump mds 
> { 
>     "mds": { 
>         "request": 1444009197, 
>         "reply": 1443999870, 
>         "reply_latency": { 
>             "avgcount": 1443999870, 
>             "sum": 1657849.656122933, 
>             "avgtime": 0.001148095 
>         }, 
>         "forward": 0, 
>         "dir_fetch": 51740910, 
>         "dir_commit": 9069568, 
>         "dir_split": 64367, 
>         "dir_merge": 58016, 
>         "inode_max": 2147483647, 
>         "inodes": 2042975, 
>         "inodes_top": 152783, 
>         "inodes_bottom": 138781, 
>         "inodes_pin_tail": 1751411, 
>         "inodes_pinned": 1824714, 
>         "inodes_expired": 7258145573, 
>         "inodes_with_caps": 1812018, 
>         "caps": 2538233, 
>         "subtrees": 2, 
>         "traverse": 1591668547, 
>         "traverse_hit": 1259482170, 
>         "traverse_forward": 0, 
>         "traverse_discover": 0, 
>         "traverse_dir_fetch": 30827836, 
>         "traverse_remote_ino": 7510, 
>         "traverse_lock": 86236, 
>         "load_cent": 144401980319, 
>         "q": 49, 
>         "exported": 0, 
>         "exported_inodes": 0, 
>         "imported": 0, 
>         "imported_inodes": 0 
>     } 
> } 

Can you also share `ceph daemon mds.2 cache status`, the full `ceph 
daemon mds.2 perf dump`, and `ceph status`? 

Note [1] will be in 12.2.5 and may help with your issue. 

[1] https://github.com/ceph/ceph/pull/20527 

-- 
Patrick Donnelly 




