Re: Nautilus: significant increase in cephfs metadata pool usage

I have seen significant increases (1GB -> 8GB), proportional to the
number of open inodes, much as the MDS cache grows. These went away
once the stat-heavy workloads (multiple parallel rsyncs) stopped. I
disabled the autoscale warnings on the metadata pools because of this
fluctuation.
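
For reference, one way to check this and to silence the autoscaler on
the metadata pool on a Nautilus cluster (the pool name "cephfs_metadata"
and the daemon name "mds.a" below are only placeholders, substitute your
own):

  # per-pool usage, including the cephfs metadata pool
  ceph df detail

  # how much the MDS cache currently holds (grows with open inodes)
  ceph daemon mds.a cache status

  # what the pg autoscaler currently recommends/warns about per pool
  ceph osd pool autoscale-status

  # stop the autoscaler (and its health warnings) for the metadata pool
  ceph osd pool set cephfs_metadata pg_autoscale_mode off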

On Thu, Jul 25, 2019 at 1:31 PM Dietmar Rieder
<dietmar.rieder@xxxxxxxxxxx> wrote:
>
> On 7/25/19 11:55 AM, Konstantin Shalygin wrote:
> >> we just recently upgraded our cluster from luminous 12.2.10 to nautilus
> >> 14.2.1 and I noticed a massive increase of the space used on the cephfs
> >> metadata pool although the used space in the 2 data pools basically did
> >> not change. See the attached graph (NOTE: log10 scale on y-axis)
> >>
> >> Is there any reason that explains this?
> >
> > Dietmar, how is your metadata usage now? Has it stopped growing?
>
> it is stable now and only changes as the number of files in the FS changes.
>
> Dietmar
>
> --
> _________________________________________
> D i e t m a r  R i e d e r, Mag.Dr.
> Innsbruck Medical University
> Biocenter - Division for Bioinformatics
> Innrain 80, 6020 Innsbruck
> Phone: +43 512 9003 71402
> Fax: +43 512 9003 73100
> Email: dietmar.rieder@xxxxxxxxxxx
> Web:   http://www.icbi.at
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


