Sorry, the ceph version is 15.0.0

On Tue, Jun 4, 2019 at 4:13 PM, 韦皓诚 <whc0000001@xxxxxxxxx> wrote:
>
> Hi~
> I recently tried to use SPDK in version 14.2.1 to speed up access
> to NVMe, but I encountered the following error when starting the OSD:
>
> EAL: Detected 72 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: PCI device 0000:1a:00.0 on NUMA socket 0
> EAL: probe driver: 1c5f:550 spdk_nvme
> EAL: PCI device 0000:1b:00.0 on NUMA socket 0
> EAL: probe driver: 1c5f:550 spdk_nvme
> EAL: PCI device 0000:3e:00.0 on NUMA socket 0
> EAL: probe driver: 1c5f:550 spdk_nvme
> /data/weihaocheng/ceph-rpm/rpmbuild/BUILD/ceph-15.0.0-1494-ged2ce0e/src/common/PriorityCache.cc:
> In function 'void PriorityCache::Manager::balance()' thread
> 7f8e8bffc700 time 2019-06-04T15:41:00.171667+0800
> /data/weihaocheng/ceph-rpm/rpmbuild/BUILD/ceph-15.0.0-1494-ged2ce0e/src/common/PriorityCache.cc:
> 288: FAILED ceph_assert(mem_avail >= 0)
> ceph version 15.0.0-1494-ged2ce0e
> (ed2ce0efad31b2b953c49be957fd2f46199e84b1) octopus (dev)
> 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x14a) [0x7f9e97084f59]
> 2: (()+0x4c5121) [0x7f9e97085121]
> 3: (PriorityCache::Manager::balance()+0x421) [0x7f9e9771b2c1]
> 4: (BlueStore::MempoolThread::entry()+0x501) [0x7f9e97655601]
> 5: (()+0x7e25) [0x7f9e937e4e25]
> 6: (clone()+0x6d) [0x7f9e926a5bad]
> *** Caught signal (Aborted) **
>
> Is my system configured incorrectly, are the dependency versions
> wrong, or is there a bug in the code?
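
For anyone skimming the trace: the assert says the BlueStore cache
balancer computed a negative amount of memory available to hand out to
its caches. Below is a minimal C++ sketch of that invariant, not
Ceph's actual PriorityCache code; the names (target_bytes,
mapped_bytes, Cache) and the target-minus-mapped accounting are
illustrative assumptions, meant only to show how large pre-mapped
regions (such as DPDK/SPDK hugepages, if they are counted against the
memory target) could drive mem_avail negative and trip the assert:

#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-in for a cache the balancer manages.
struct Cache {
  const char* name;
  int64_t assigned = 0;
};

// Sketch of a balance pass: whatever memory is left after accounting
// for what the process has already mapped gets split among the caches.
void balance(std::vector<Cache>& caches,
             int64_t target_bytes,   // overall memory target (assumed)
             int64_t mapped_bytes) { // memory already mapped (assumed)
  int64_t mem_avail = target_bytes - mapped_bytes;

  // The invariant that fails in the report above (PriorityCache.cc:288).
  // If mapped_bytes exceeds target_bytes, for example because large
  // hugepage mappings are counted against the target, mem_avail is
  // negative before any cache gets a share.
  assert(mem_avail >= 0);

  int64_t share = mem_avail / static_cast<int64_t>(caches.size());
  for (auto& c : caches) {
    c.assigned = share;
    std::cout << c.name << " gets " << c.assigned << " bytes\n";
  }
}

int main() {
  std::vector<Cache> caches = {{"kv"}, {"meta"}, {"data"}};
  // 4 GiB target but 6 GiB already mapped: the assert fires, which is
  // the same abort-in-the-mempool-thread pattern as the trace above.
  balance(caches, int64_t{4} << 30, int64_t{6} << 30);
}

If the accounting in the sketch matches what is happening here, it
would point at the SPDK/hugepage setup rather than at the OSD's cache
settings themselves, but that is a guess, not a diagnosis.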