Re: CephFS trim_lru performance issue

I tested this on 12.2.8 with debug_mds set to 10/10.
I did not configure mds_cache_memory_limit, so it should be the default of 1 GB.
Here is the log:
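For reference, if you do want to raise the cache limit explicitly, it is set in bytes in ceph.conf (or at runtime); a minimal sketch, assuming the Luminous default of 1 GB:

```ini
# ceph.conf sketch (illustrative values, not a recommendation):
# mds_cache_memory_limit is specified in bytes; 1073741824 = 1 GB,
# which is the default if the option is left unset.
[mds]
mds_cache_memory_limit = 1073741824
```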
....
2018-11-23 02:01:02.995067 7f40535fe700  7 src/mds/MDCache.cc:6517
mds.0.cache trim bytes_used=1GB limit=1GB reservation=0.05% count=0
2018-11-23 02:01:02.995084 7f40535fe700  7 src/mds/MDCache.cc:6456
mds.0.cache trim_lru trimming 0 items from LRU size=694884 mid=472796
pintail=4942 pinned=19460
2018-11-23 02:01:02.995409 7f404e7fb700 10 src/mds/MDSContext.cc:51
MDSIOContextBase::complete: 21C_IO_Dir_OMAP_Fetched
2018-11-23 02:01:04.207192 7f40535fe700  7 src/mds/MDCache.cc:6502
mds.0.cache trim_lru trimmed 262781 items
2018-11-23 02:01:04.207211 7f40535fe700 10 src/mds/MDCache.cc:7426
mds.0.cache trim_client_leases
....
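The log above shows a single trim pass evicting 262,781 items in roughly 1.2 s, during which the lock is held. As a toy illustration (not Ceph code; the class and names here are hypothetical), capping how many entries one pass may evict bounds the lock hold time, at the cost of needing repeated passes:

```python
# Illustrative sketch of the trim-under-lock problem: evicting every
# excess entry in one pass holds the lock for the whole trim, while a
# capped batch releases it between passes so queued work can run.
from collections import OrderedDict
from threading import Lock

class TrimLRU:
    def __init__(self, limit):
        self.limit = limit          # max entries to keep (stand-in for the cache limit)
        self.lock = Lock()          # stand-in for mds_lock
        self.cache = OrderedDict()  # oldest entries first

    def insert(self, key, value):
        with self.lock:
            self.cache[key] = value
            self.cache.move_to_end(key)  # mark as most recently used

    def trim_all(self):
        # One pass: the lock is held until every excess entry is evicted.
        with self.lock:
            trimmed = 0
            while len(self.cache) > self.limit:
                self.cache.popitem(last=False)  # evict the oldest entry
                trimmed += 1
            return trimmed

    def trim_batch(self, max_batch):
        # Capped pass: evict at most max_batch entries, then return so the
        # lock is released; the caller re-schedules further passes.
        with self.lock:
            trimmed = 0
            while len(self.cache) > self.limit and trimmed < max_batch:
                self.cache.popitem(last=False)
                trimmed += 1
            return trimmed

lru = TrimLRU(limit=100)
for i in range(1000):
    lru.insert(i, None)

# Batched trimming evicts the same 900 entries, but in short lock holds.
total = 0
while True:
    n = lru.trim_batch(max_batch=128)
    if n == 0:
        break
    total += n
print(total, len(lru.cache))  # 900 100
```

This only illustrates the latency trade-off being discussed; whether batching the real trim_lru() is safe depends on MDS invariants that this sketch does not model.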

On Mon, Nov 26, 2018 at 10:43 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> On Mon, Nov 26, 2018 at 7:00 AM Marvin Zhang <fanzier@xxxxxxxxx> wrote:
> >
> > Hi CephFs Experts,
> > I found that MDCache::trim_lru() can sometimes take more than 2 s. Is
> > that expected? It also appears to hold mds_lock for that time, which
> > blocks other client requests from being processed.
> > Thanks,
> > Marvin
>
> It's definitely not supposed to, but we've started noticing some
> performance issues with extremely large MDS cache sizes. How much
> memory did you give the cache, and what is your system configuration?


