Hi Bob,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on v5.17-rc2]
[also build test WARNING on next-20220131]
[cannot apply to rdma/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:      https://github.com/0day-ci/linux/commits/Bob-Pearson/Move-two-object-pools-to-rxe_mcast-c/20220201-061231
base:     26291c54e111ff6ba87a164d85d4a4e134b7315c
config:   ia64-allyesconfig (https://download.01.org/0day-ci/archive/20220201/202202010836.EmoG3Ot8-lkp@xxxxxxxxx/config)
compiler: ia64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/f9d560658bbbd5a17cc3c62e566cb9bb77697530
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Bob-Pearson/Move-two-object-pools-to-rxe_mcast-c/20220201-061231
        git checkout f9d560658bbbd5a17cc3c62e566cb9bb77697530
        # save the config file to the linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=ia64 SHELL=/bin/bash drivers/infiniband/sw/rxe/

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@xxxxxxxxx>

All warnings (new ones prefixed by >>):

>> drivers/infiniband/sw/rxe/rxe_mcast.c:57:6: warning: no previous prototype for '__rxe_destroy_mcg' [-Wmissing-prototypes]
      57 | void __rxe_destroy_mcg(struct rxe_mcg *grp)
         |      ^~~~~~~~~~~~~~~~~

vim +/__rxe_destroy_mcg +57 drivers/infiniband/sw/rxe/rxe_mcast.c

    55
    56	/* caller is holding a ref from lookup and mcg->mcg_lock */
  > 57	void __rxe_destroy_mcg(struct rxe_mcg *grp)
    58	{
    59		rxe_drop_key(grp);
    60		rxe_drop_ref(grp);
    61
    62		rxe_mcast_delete(grp->rxe, &grp->mgid);
    63	}
    64

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@xxxxxxxxxxxx
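
For reference, gcc raises -Wmissing-prototypes when a non-static function is
defined without a prior declaration in scope. A minimal sketch of the two usual
ways such a warning is resolved, assuming the function body stays as shown in
the report above; whether the external declaration belongs in rxe_loc.h is an
assumption, not something stated in the report:

        /* Option 1: if __rxe_destroy_mcg is only called from rxe_mcast.c,
         * making it static removes the need for any external prototype.
         */
        static void __rxe_destroy_mcg(struct rxe_mcg *grp)
        {
        	rxe_drop_key(grp);
        	rxe_drop_ref(grp);

        	rxe_mcast_delete(grp->rxe, &grp->mgid);
        }

        /* Option 2: if other rxe source files need to call it, a declaration
         * visible to rxe_mcast.c could be added to a shared header (for
         * example rxe_loc.h), keeping the definition non-static.
         */
        void __rxe_destroy_mcg(struct rxe_mcg *grp);

Either change makes a prototype visible before (or unnecessary for) the
definition at rxe_mcast.c:57, which is what the W=1 build is checking for.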