Re: [BUG REPORT] missing memory counter introduced by xfs

Hi Dave,

Thank you for your fast reply; please look below.

On 09/08/2016 05:22 AM, Dave Chinner wrote:
> On Wed, Sep 07, 2016 at 06:36:19PM +0800, Lin Feng wrote:
>> Hi all nice xfs folks,
>>
>> I'm a rookie, really new to xfs, and I've currently run into the
>> same issue as the one described in the following link:
>> http://oss.sgi.com/archives/xfs/2014-04/msg00058.html
>>
>> On my box (running a cephfs osd on xfs, kernel 2.6.32-358) I summed
>> up every memory counter I could find, but nearly 26GB of memory
>> seems to have gone missing. It comes back after I echo 2 >
>> /proc/sys/vm/drop_caches, so it seems this memory can be reclaimed
>> by slab.

> It isn't "reclaimed by slab". The XFS metadata buffer cache is
> reclaimed by a memory shrinker, which is for reclaiming objects
> from caches that aren't the page cache. "echo 2 >
> /proc/sys/vm/drop_caches" runs the memory shrinkers rather than
> page cache reclaim. Many slab caches are backed by memory
> shrinkers, which is why "2" is commonly thought of as "slab
> reclaim"....
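Thanks, that makes it much clearer. For the record, this is the quick
check I now use on my box (just a rough sketch; needs root, standard
procfs paths assumed):

# xfs slab caches before reclaim
grep -E '^xfs_(buf|inode) ' /proc/slabinfo

# "2" runs the shrinkers (shrinker-backed caches such as the XFS
# buffer cache); "1" would drop only the page cache, "3" does both
echo 2 > /proc/sys/vm/drop_caches

# and after -- the active object counts should drop
grep -E '^xfs_(buf|inode) ' /proc/slabinfo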

>> And according to what David said in a reply on that thread:
>> ..
>> That's where your memory is - in metadata buffers. The xfs_buf slab
>> entries are just the handles - the metadata pages in the buffers
>> usually take much more space and it's not accounted to the slab
>> cache nor the page cache.
>>
>> That's exactly the case.
>>
>>   Minimum / Average / Maximum Object : 0.02K / 0.33K / 4096.00K
>>
>>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>> 4383036 4383014  99%    1.00K 1095759        4   4383036K xfs_inode
>> 5394610 5394544  99%    0.38K 539461       10   2157844K xfs_buf

> So, you have *5.4 million* active metadata buffers. Each buffer will
> hold 1 or 2 4k pages on your kernel, so simple math says 4M * 4k +
> 1.4M * 8k = 26G. There's no missing counter here....
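Thanks, the math adds up on my side too. For anyone who hits this
later, here is a rough one-liner I put together to bound the buffer
memory straight from /proc/slabinfo (only a sketch: it assumes each
active xfs_buf holds 1 page for the lower bound and 2 for the upper,
which slabinfo itself can't tell you):

# bound XFS metadata buffer memory from the active xfs_buf count,
# assuming 1 (lower) or 2 (upper) 4k pages per buffer
awk '$1 == "xfs_buf" {
    printf "lower: %.1f GB  upper: %.1f GB\n",
           $2 * 4096 / 2^30, $2 * 8192 / 2^30
}' /proc/slabinfo

With the numbers above that prints roughly 20.6 GB and 41.1 GB, which
brackets the ~26G you computed.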

Do xattrs contribute to these metadata buffers, or is it something
else? I asked my teammate, who told me that in our case the small
files (there are a lot of them, see below) always carry xattrs.
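To see what is actually attached, I dumped the xattrs on a handful of
object files like this (the path is only an example; substitute a real
file from your osd tree):

# dump all xattrs on one ceph object file (example path)
getfattr -d -m '-' --absolute-names /data/osd/osd.67/current/<some-object-file>

# count files carrying at least one xattr (slow on millions of files)
find /data/osd/osd.67/current -type f -print0 |
    xargs -0 getfattr -m '-' --absolute-names 2>/dev/null |
    grep -c '^# file:'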

Another thing: do we need to export such a counter, or do we have to
redo this computation every time to figure out whether we are leaking
memory? More importantly, this memory seems to have a low priority for
the memory reclaim mechanism; is that because most of the slab objects
are active?
>>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>> 4383036 4383014  99%    1.00K 1095759        4   4383036K xfs_inode
>> 5394610 5394544  99%    0.38K 539461       10   2157844K xfs_buf

In fact xfs eats a lot of my RAM, and I would never have known where
it goes without diving into the xfs source; at least I'm only the
second extreme user ;-)
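In the meantime I am watching the caches with a crude loop like the
one below, to see whether background reclaim ever shrinks them on its
own (just a sketch):

# sample the xfs slab caches every 5 seconds (Ctrl-C to stop)
while sleep 5; do
    printf '%s ' "$(date +%T)"
    grep -E '^xfs_(buf|inode) ' /proc/slabinfo |
        awk '{printf "%s active=%s total=%s  ", $1, $2, $3}'
    echo
done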


> Obviously your workload is doing something extremely metadata
> intensive to have a cache footprint like this - you have more cached
> buffers than inodes, dentries, etc. That in itself is very unusual -
> can you describe what is stored on that filesystem and how large the
> attributes being stored in each inode are?

The fs-user behavior is that the ceph-osd daemon intensively
pulls/synchronizes/updates files from other OSDs when the server comes
up. In our case the cephfs OSD stores a lot of small pictures in the
filesystem. I did some simple analysis: there are nearly 3,000,000
files on each disk, and there are 10 such disks.
[root@wzdx49 osd.670]# find current -type f -size -512k | wc -l
2668769
[root@wzdx49 ~]# find /data/osd/osd.67 -type f | wc -l
2682891
[root@wzdx49 ~]# find /data/osd/osd.67 -type d | wc -l
109760
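If the size distribution helps, this is a rough sketch I can run over
one osd tree to bucket file sizes (GNU find's -printf assumed; it
takes a while over ~2.7M files):

# histogram of file sizes in 4k-block buckets
find /data/osd/osd.67/current -type f -printf '%s\n' |
    awk '{ count[int($1 / 4096)]++ }
         END { for (b in count) printf "%6d blocks: %d files\n", b, count[b] }' |
    sort -n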

thanks,
linfeng

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


