As the log showed:

EAL: open shared lib /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1
EAL: /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1: undefined symbol: rte_eth_devices
PANIC in rte_eal_init():

It looks like you failed to init the dpdk env; please ensure you have successfully built the dpdk library. spdk doesn't require any pmd driver.

On Thu, Apr 21, 2016 at 11:29 PM, 李 天祥 <lanceflee@xxxxxxx> wrote:
>
> Hi all,
>
> When I tried to use SPDK, some problems occurred as follows.
> It looks like something goes wrong in the libs of dpdk-devel.
> Maybe the same as BUG#15386.
> How to fix it? Any help?
>
> --
> Best Regards!
>
>
> EAL: TSC frequency is ~2394457 KHz
> EAL: open shared lib /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1
> EAL: /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1: undefined symbol: rte_eth_devices
> PANIC in rte_eal_init():
> Cannot init plugins
> 8: [/lib64/libc.so.6(clone+0x6d) [0x7f428051528d]]
> 7: [/lib64/libpthread.so.0(+0x7dc5) [0x7f4281e89dc5]]
> 6: [/lib64/libstdc++.so.6(+0xb5220) [0x7f4280dad220]]
> 5: [./ceph-osd(+0x77f58d) [0x7f428474758d]]
> 4: [./ceph-osd(+0x77eee5) [0x7f4284746ee5]]
> 3: [/lib64/librte_eal.so.2(rte_eal_init+0xecb) [0x7f42836d4f8b]]
> 2: [/lib64/librte_eal.so.2(__rte_panic+0xd0) [0x7f42836d3460]]
> 1: [/lib64/librte_eal.so.2(rte_dump_stack+0x2d) [0x7f42836db5fd]]
>
> *** Caught signal (Aborted) **
>  in thread 7f427c226700 thread_name:ceph-osd
>
>  ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>  1: (()+0x8fb342) [0x7f42848c3342]
>  2: (()+0xf100) [0x7f4281e91100]
>  3: (gsignal()+0x37) [0x7f42804545f7]
>  4: (abort()+0x148) [0x7f4280455ce8]
>  5: (rte_log()+0) [0x7f42836d346a]
>  6: (rte_eal_init()+0xecb) [0x7f42836d4f8b]
>  7: (()+0x77eee5) [0x7f4284746ee5]
>  8: (()+0x77f58d) [0x7f428474758d]
>  9: (()+0xb5220) [0x7f4280dad220]
>  10: (()+0x7dc5) [0x7f4281e89dc5]
>  11: (clone()+0x6d) [0x7f428051528d]
>
> 2016-04-22 10:45:31.345324 7f427c226700 -1 *** Caught signal (Aborted) **
>  in thread 7f427c226700 thread_name:ceph-osd
>
>  ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>  1: (()+0x8fb342) [0x7f42848c3342]
>  2: (()+0xf100) [0x7f4281e91100]
>  3: (gsignal()+0x37) [0x7f42804545f7]
>  4: (abort()+0x148) [0x7f4280455ce8]
>  5: (rte_log()+0) [0x7f42836d346a]
>  6: (rte_eal_init()+0xecb) [0x7f42836d4f8b]
>  7: (()+0x77eee5) [0x7f4284746ee5]
>  8: (()+0x77f58d) [0x7f428474758d]
>  9: (()+0xb5220) [0x7f4280dad220]
>  10: (()+0x7dc5) [0x7f4281e89dc5]
>  11: (clone()+0x6d) [0x7f428051528d]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>
>    -25> 2016-04-22 10:45:29.811045 7f4283f9cb00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
>    -24> 2016-04-22 10:45:29.811247 7f4283f9cb00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
>    -22> 2016-04-22 10:45:29.811282 7f4283f9cb00 -1 WARNING: experimental feature 'bluestore' is enabled
> Please be aware that this feature is experimental, untested,
> unsupported, and may result in data corruption, data loss,
> and/or irreparable damage to your cluster. Do not use
> feature with important data.
>
>    -21> 2016-04-22 10:45:29.833866 7f4283f9cb00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
>     -5> 2016-04-22 10:45:29.869880 7f4283f9cb00 -1 bluestore(/var/local/osd0) _read_fsid unparsable uuid
>      0> 2016-04-22 10:45:31.345324 7f427c226700 -1 *** Caught signal (Aborted) **
>  in thread 7f427c226700 thread_name:ceph-osd
>
>  ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>  1: (()+0x8fb342) [0x7f42848c3342]
>  2: (()+0xf100) [0x7f4281e91100]
>  3: (gsignal()+0x37) [0x7f42804545f7]
>  4: (abort()+0x148) [0x7f4280455ce8]
>  5: (rte_log()+0) [0x7f42836d346a]
>  6: (rte_eal_init()+0xecb) [0x7f42836d4f8b]
>  7: (()+0x77eee5) [0x7f4284746ee5]
>  8: (()+0x77f58d) [0x7f428474758d]
>  9: (()+0xb5220) [0x7f4280dad220]
>  10: (()+0x7dc5) [0x7f4281e89dc5]
>  11: (clone()+0x6d) [0x7f428051528d]
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
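P.S. Since the EAL error means the ixgbe PMD plugin (dlopen()'d from the dpdk-pmds directory at rte_eal_init() time) references rte_eth_devices that nothing loaded into the process provides, you can confirm the unresolved symbol before rebuilding. A minimal sketch, assuming the library path from the log above (adjust for your system):

```shell
# List the plugin's dynamic symbols; a 'U' entry means the PMD expects
# the symbol to be provided by a library already loaded in the process
# (normally librte_ethdev):
nm -D /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1 | grep rte_eth_devices

# Report symbols that stay unresolved even after pulling in the
# plugin's declared library dependencies:
ldd -r /usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.1
```

If `ldd -r` reports rte_eth_devices as undefined, the installed PMD plugins don't match the DPDK libraries the process links against; rebuilding DPDK consistently, or moving the unneeded PMD plugins out of /usr/lib64/dpdk-pmds (SPDK doesn't require any PMD driver), should let rte_eal_init() proceed.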