On 24 Mar 2016 9:12 pm, "Dave Anderson" <anderson@xxxxxxxxxx> wrote:
>
> ----- Original Message -----
> > On Tue, Mar 15, 2016 at 8:08 PM, vinayak menon <vinayakm.list@xxxxxxxxx>
> > wrote:
> > >>
> > >> Although looking at it now, get_slabinfo() doesn't seem to take into account
> > >> the objects in the per_cpu caches?
> > >>
> > >> Anyway, 200 of 200 is clearly wrong.
> > >>
> > >> Dave
> > >>
> > >
> >
> > Added accounting for the free objects in the per-cpu free and partial
> > lists. Patch attached. I compared the results with the "kmem -S" output
> > for various caches, and the output looks fine.
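> >
> > For reference, the core of the counting is a walk of each per-cpu
> > freelist, following the next-free pointer that SLUB stores inside
> > every free object. A rough sketch (illustrative only, not the
> > attached patch verbatim; "offset" here is kmem_cache.offset):
> >
> > static long
> > count_freelist_objects(ulong freelist, ulong offset)
> > {
> >         long count = 0;
> >         ulong object = freelist;
> >
> >         while (object) {
> >                 count++;
> >                 /* the next-free pointer lives at object + offset */
> >                 if (!readmem(object + offset, KVADDR, &object,
> >                     sizeof(ulong), "SLUB freelist pointer",
> >                     RETURN_ON_ERROR))
> >                         return -1;
> >         }
> >         return count;
> > }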
>
> Vinayak,
>
> While this is an improvement over your last post, there are still issues
> with this patchset. I've been testing it with a wide range of stashed
> vmcores, and I see problems with all of the old 2.6.24 vmcores. Those
> kernels used an early version of CONFIG_SLUB which did not even have
> the kmem_cache_node.total_objects field. However, given that the
> kmem_cache_node.total_objects field is encapsulated within CONFIG_SLUB_DEBUG,
> presumably the same problem would occur with a more recent kernel
> if it were *not* configured with CONFIG_SLUB_DEBUG.
>
> Here's an example -- note the negative ALLOCATED counts:
>
> crash> kmem -s
> CACHE NAME OBJSIZE ALLOCATED TOTAL SLABS SSIZE
> ffff81001e0df568 rpc_buffers 2048 7 9 3 8k
> ffff81001e0decd8 rpc_tasks 360 8 8 1 4k
> ffff81001e0df120 rpc_inode_cache 1392 2 10 2 8k
> ffff81001e0de448 ip_fib_alias 48 -14 34 1 4k
> ffff81001e0de000 ip_fib_hash 40 -16 36 1 4k
> ffff81001e1a2890 fib6_nodes 56 12 32 1 4k
> ffff81001e1a2448 ip6_dst_cache 320 5 24 3 4k
> ffff81001e1a2000 ndisc_cache 360 -6 8 1 4k
> ffff81001f1f19b0 RAWv6 1384 3 5 1 8k
> ffff81001e4b0890 UDPLITEv6 1376 0 0 0 8k
> ffff81001f1019b0 UDPv6 1376 3 5 1 8k
> ffff81001f101120 tw_sock_TCPv6 272 0 0 0 4k
> ffff81001f100cd8 request_sock_TCPv6 144 0 0 0 4k
> ffff81001f100890 TCPv6 2400 -2 6 2 8k
> ffff81001f1f1568 xfs_icluster 80 -24 26 1 4k
> ffff81001f1f1120 xfs_ili 192 0 0 0 4k
> ffff81001f1f0cd8 xfs_inode 824 -2 4 1 4k
> ffff81001f1f0890 xfs_efi_item 352 0 0 0 4k
> ffff81001f1f0448 xfs_efd_item 360 0 0 0 4k
> ffff81001f1f0000 xfs_buf_item 184 0 0 0 4k
> ffff81001e982890 fstrm_item 24 0 0 0 4k
> ffff81001f07acd8 xfs_mru_cache_elem 32 0 0 0 4k
> ffff81001f07a000 xfs_acl 304 0 0 0 4k
> ffff81001f07a890 xfs_ifork 64 0 0 0 4k
> ffff81001f07a448 xfs_dabuf 24 0 0 0 4k
> ffff81001e58d568 xfs_da_state 488 0 0 0 4k
> ffff81001e58d9b0 xfs_trans 880 0 0 0 4k
> ffff81001e9c59b0 xfs_btree_cur 192 0 0 0 4k
> ffff81001e56f9b0 xfs_bmap_free_item 24 0 0 0 4k
> ffff81001e4b9568 xfs_buf 512 8 12 2 4k
> ffff81001e4b8448 xfs_ioend 128 24 40 2 4k
> ffff81001e4b9120 xfs_vnode 1088 -4 6 1 8k
> ffff81001e4b0448 dm_mpath_io 40 0 0 0 4k
> ffff81001e56f120 dm_snap_pending_exception 112 124 132 6 4k
> ffff81001e56ecd8 dm_snap_exception 32 0 0 0 4k
> ffff81001e56e000 dm_uevent 2608 0 0 0 8k
> ffff81001e4b19b0 dm_target_io 24 1238 1344 32 4k
> ffff81001e4b1568 dm_io 40 1244 1332 37 4k
> ffff81001e4b8000 scsi_cmd_cache 400 -4 8 1 4k
> ffff81001e4d5120 sgpool-128 5120 1 3 3 8k
> ffff81001e4d4cd8 sgpool-64 2560 -1 6 2 8k
> ffff81001e4d4890 sgpool-32 1280 -1 5 1 8k
> ffff81001e4d4448 sgpool-16 640 -1 5 1 4k
> ffff81001e4d4000 sgpool-8 320 -4 8 1 4k
> ffff81001e4539b0 scsi_io_context 112 0 0 0 4k
> ffff81001e453568 ext3_inode_cache 1488 24752 25410 5082 8k
> ffff81001e453120 ext3_xattr 88 38 50 2 4k
> ffff81001e452448 journal_handle 56 -32 32 1 4k
> ffff81001e452000 journal_head 96 -23 48 2 4k
> ffff81001e983120 revoke_table 16 -34 46 1 4k
> ffff81001e983568 revoke_record 32 -39 39 1 4k
> ffff81001e9839b0 uhci_urb_priv 56 0 0 0 4k
> ffff81001e982448 UNIX 1368 43 55 11 8k
> ffff81001e982000 flow_cache 104 0 0 0 4k
> ffff81001e8dccd8 cfq_io_context 152 8 54 3 4k
> ffff81001e8dc890 cfq_queue 136 17 38 2 4k
> ffff81001e8dc000 bsg_cmd 312 0 0 0 4k
> ffff81001e9279b0 mqueue_inode_cache 1472 -2 4 1 8k
> ffff81001e927568 isofs_inode_cache 1136 0 0 0 8k
> ffff81001e927120 hugetlbfs_inode_cache 1152 -4 6 1 8k
> ffff81001e926cd8 dnotify_cache 40 -32 36 1 4k
> ffff81001e926890 dquot 368 0 0 0 4k
> ffff81001e926448 inotify_event_cache 40 -36 36 1 4k
> ffff81001e926000 inotify_watch_cache 72 60 112 4 4k
> ffff81001f02f9b0 kioctx 512 0 0 0 4k
> ffff81001f02f568 kiocb 248 0 0 0 4k
> ffff81001f02f120 fasync_cache 24 0 0 0 4k
> ffff81001f02ecd8 shmem_inode_cache 1408 653 665 133 8k
> ffff81001f02e448 nsproxy 56 0 0 0 4k
> ffff81001f02e000 posix_timers_cache 248 0 0 0 4k
> ffff81001fbd19b0 uid_cache 344 6 8 1 4k
> ffff81001f989568 ip_mrt_cache 112 0 0 0 4k
> ffff81001f989120 UDP-Lite 1224 0 0 0 8k
> ffff81001f988cd8 tcp_bind_bucket 32 -31 39 1 4k
> ffff81001f988890 inet_peer_cache 64 0 0 0 4k
> ffff81001f988448 secpath_cache 56 0 0 0 4k
> ffff81001f988000 xfrm_dst_cache 376 0 0 0 4k
> ffff81001fa3d568 ip_dst_cache 328 10 24 3 4k
> ffff81001fa3d120 arp_cache 348 -6 8 1 4k
> ffff81001fa3ccd8 RAW 1200 -2 6 1 8k
> ffff81001fa3c890 UDP 1224 5 15 3 8k
> ffff81001fa3c448 tw_sock_TCP 240 -10 10 1 4k
> ffff81001fa3c000 request_sock_TCP 96 -16 16 1 4k
> ffff81001fa179b0 TCP 2248 1 6 2 8k
> ffff81001f9fc890 eventpoll_pwq 72 -26 28 1 4k
> ffff81001f9fc448 eventpoll_epi 128 -14 16 1 4k
> ffff81001f96c890 blkdev_ioc 64 -4 60 2 4k
> ffff81001f96c448 blkdev_queue 2176 22 24 8 8k
> ffff81001f96c000 blkdev_requests 288 1 22 2 4k
> ffff81001f9779b0 biovec-256 4096 82 82 82 8k
> ffff81001f977568 biovec-128 2048 79 87 29 8k
> ffff81001f977120 biovec-64 1024 75 91 13 8k
> ffff81001f976cd8 biovec-16 256 72 100 10 4k
> ffff81001f976890 biovec-4 64 52 120 4 4k
> ffff81001f976448 biovec-1 16 36 138 3 4k
> ffff81001f976000 bio 104 68 96 6 4k
> ffff81001f9659b0 utrace_engine_cache 64 0 0 0 4k
> ffff81001f965568 utrace_cache 96 0 0 0 4k
> ffff81001f909120 sock_inode_cache 1216 80 100 20 8k
> ffff81001f908cd8 skbuff_fclone_cache 452 -6 6 1 4k
> ffff81001f908890 skbuff_head_cache 224 63 70 7 4k
> ffff81001f908448 file_lock_cache 224 -3 13 1 4k
> ffff81001f9079b0 Acpi-Operand 64 740 750 25 4k
> ffff81001f907568 Acpi-ParseExt 64 -30 30 1 4k
> ffff81001f907120 Acpi-Parse 40 -36 36 1 4k
> ffff81001f906cd8 Acpi-State 80 -26 26 1 4k
> ffff81001f906890 Acpi-Namespace 32 78 156 4 4k
> ffff81001f906448 task_delay_info 120 53 105 5 4k
> ffff81001f906000 taskstats 312 -6 10 1 4k
> ffff81001f8a59b0 proc_inode_cache 1120 228 252 42 8k
> ffff81001f8a5568 sigqueue 160 -15 17 1 4k
> ffff81001f8a5120 radix_tree_node 552 2639 3576 596 4k
> ffff81001f8a4cd8 bdev_cache 1448 22 30 6 8k
> ffff81001f8a4890 sysfs_dir_cache 80 6928 6994 269 4k
> ffff81001f8a4448 mnt_cache 208 22 36 3 4k
> ffff81001f8a4000 inode_cache 1088 100 161 23 8k
> ffff81001f827568 dentry 256 27195 29004 2417 4k
> ffff81001f827120 filp 288 552 620 62 4k
> ffff81001f826cd8 names_cache 4096 -1 1 1 8k
> ffff81001f826890 avc_node 72 2 28 1 4k
> ffff81001f826448 selinux_inode_security 184 209 320 20 4k
> ffff81001f826000 key_jar 232 4 24 2 4k
> ffff81001f8259b0 idr_layer_cache 528 154 162 27 4k
> ffff81001f825568 buffer_head 104 9543 22747 989 4k
> ffff81001f825120 mm_struct 1152 28 42 7 8k
> ffff81001f824cd8 vm_area_struct 176 1694 1920 120 4k
> ffff81001f824890 fs_cache 120 12 63 3 4k
> ffff81001f824448 files_cache 768 30 40 10 4k
> ffff81001f824000 signal_cache 888 64 76 19 4k
> ffff81001f8039b0 sighand_cache 2184 65 72 24 8k
> ffff81001f803568 task_struct 4688 72 74 74 8k
> ffff81001f803120 anon_vma 72 587 644 23 4k
> ffff81001f802cd8 pid_namespace 2104 0 0 0 8k
> ffff81001f802890 pid_1 88 56 105 5 4k
> ffff81001f802448 shared_policy_node 48 0 0 0 4k
> ffff81001f802000 numa_policy 24 -42 42 1 4k
> ffffffff81481ca8 kmalloc-2048 2048 199 201 67 8k
> ffffffff814817b0 kmalloc-1024 1024 311 329 47 8k
> ffffffff814812b8 kmalloc-512 512 357 392 56 4k
> ffffffff81480dc0 kmalloc-256 256 308 348 29 4k
> ffffffff814808c8 kmalloc-128 128 208 220 11 4k
> ffffffff814803d0 kmalloc-64 64 1505 1740 58 4k
> ffffffff8147fed8 kmalloc-32 32 264 312 8 4k
> ffffffff8147f9e0 kmalloc-16 16 916 1012 22 4k
> ffffffff8147f4e8 kmalloc-8 8 1670 1734 34 4k
> ffffffff8147eff0 kmalloc-192 192 101 120 8 4k
> ffffffff8147eaf8 kmalloc-96 96 164 216 9 4k
> ffffffff8147e600 kmem_cache_node 104 0 0 0 4k
> crash>
>
> So more attention needs to be paid to kernels that do not have
> kmem_cache_node.total_objects. You've done a pretty good job of segregating
> the code based upon that field's existence, so I'm guessing this should be
> a simple fix.
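>
> As a rough sketch of the kind of guard I'd expect (VALID_MEMBER and
> OFFSET are the usual crash macros, but the fallback variables here are
> illustrative, and the non-debug math would need checking against that
> era's SLUB):
>
>         if (VALID_MEMBER(kmem_cache_node_total_objects)) {
>                 readmem(node + OFFSET(kmem_cache_node_total_objects),
>                     KVADDR, &total_objects, sizeof(ulong),
>                     "kmem_cache_node total_objects", FAULT_ON_ERROR);
>         } else {
>                 /* early SLUB without total_objects: every slab holds
>                  * a fixed number of objects, so derive the total from
>                  * the slab count instead */
>                 total_objects = nr_slabs * objects_per_slab;
>         }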
>
> Other nits:
>
> Please fix these:
>
> $ make warn
> ... [ cut ] ...
> cc -c -g -DX86_64 -DLZO -DSNAPPY -DGDB_7_6 memory.c -Wall -O2 -Wstrict-prototypes -Wmissing-prototypes -fstack-protector -Wformat-security
> memory.c:17916:7: warning: no previous prototype for 'count_cpu_partial' [-Wmissing-prototypes]
> short count_cpu_partial(struct meminfo *si, int cpu)
> ^
> memory.c: In function 'get_kmem_cache_slub_data':
> memory.c:17982:15: warning: unused variable 'free' [-Wunused-variable]
> short inuse, free;
> ...
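>
> Both should be one-liners -- roughly the following, assuming
> count_cpu_partial() has no callers outside memory.c:
>
> -short count_cpu_partial(struct meminfo *si, int cpu)
> +static short count_cpu_partial(struct meminfo *si, int cpu)
>
> -        short inuse, free;
> +        short inuse;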
>
>
> In defs.h, please put the kmem_cache_node_total_objects offset_table entry at the end
> of the structure so that it won't break extension modules that haven't been recompiled.
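>
> That is, append it as the final member -- a sketch, with the existing
> entries elided:
>
> struct offset_table {
>         ...
>         /* existing entries above, order unchanged */
>         long kmem_cache_node_total_objects;     /* new: goes last */
> };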
>
> And lastly, add the display of the kmem_cache_node_total_objects offset to the
> dump_offset_table() function in symbols.c, which is used by "help -o". That would
> be especially helpful in this case so a user can verify whether the field even
> exists in the kernel being analyzed. You can put its output line next to the other
> kmem_cache_node offsets.
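>
> Something along these lines, following the pattern of the surrounding
> entries (the exact format string and indentation should match the
> neighboring lines):
>
>         fprintf(fp, "  kmem_cache_node_total_objects: %ld\n",
>                 OFFSET(kmem_cache_node_total_objects));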
>
> Thanks,
> Dave
Thanks for the comments, Dave. I am on vacation and will be back in a week; I will submit a fix for these then.
Vinayak
--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility