Re: Ceph MDS and hard links

On Thu, Aug 2, 2018 at 3:36 AM Benjeman Meekhof <bmeekhof@xxxxxxxxx> wrote:
>
> I've lately been encountering much higher than expected memory usage
> on our MDS, which doesn't align with the cache memory limit even
> accounting for potential over-runs.  Our memory limit is 4GB, but the
> MDS process holds steady at around 11GB used.
>
> Coincidentally, we also have a new user relying heavily on hard links.
> This led me to the following (old) document, which says: "Hard links are
> also supported, although in their current implementation each link
> requires a small bit of MDS memory and so there is an implied limit
> based on your available memory."
> (https://ceph.com/geen-categorie/cephfs-mds-status-discussion/)
>
> Is that statement still correct, and could it potentially explain why
> our memory usage appears so high?  As far as I know this is a recent
> development, and it corresponds very closely to a new user doing a
> lot of hard linking.  We're on Ceph Mimic 13.2.1, though we first saw
> the issue while still running 13.2.0.
>

That statement is no longer correct. What is the output of "ceph
daemon mds.x dump_mempools" and "ceph tell mds.x heap stats"?
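For reference, a sketch of how those commands might be invoked, assuming
an MDS daemon named mds.a ("mds.a" is a placeholder; substitute your own
MDS id) and the admin socket in its default location:

    # dump_mempools queries the daemon's admin socket, so run it on the
    # host where the MDS is running
    ceph daemon mds.a dump_mempools

    # heap stats reports allocator statistics (assuming Ceph was built
    # with tcmalloc, the default) and can be run from any client node
    ceph tell mds.a heap stats

    # for comparison, the configured cache limit can be read the same way
    ceph daemon mds.a config get mds_cache_memory_limit

Comparing the mempool totals and the tcmalloc heap stats against the
configured limit should show whether the extra memory is cache overrun
or something the allocator is holding on to.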


> thanks,
> Ben
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


