On Fri, Apr 22, 2016 at 8:43 AM, 李 天祥 <lanceflee@xxxxxxx> wrote:
>
> Thx!
>
> BTW,
>
> We have been wondering whether it is a waste to put only one OSD on a whole
> NVMe SSD, especially a high-end one such as the Intel DC P3700.
>
> We also read some of the SPDK code and noticed that it is bound to a single
> NVMe namespace.
>
> Theoretically, an NVMe SSD can be split into multiple namespaces by
> partitioning its LBAs. Each namespace has its own submission/completion
> queues, so it should work well.
>
> For now, though, taking the P3700 as an example, it only supports NVMe v1.0,
> and the Namespace Management command set is not supported yet. Creating
> namespaces should become easier someday.
>
> And "os/bluestore/NVMEDevices.cc" only recognizes one namespace per SSD too.
>
> Hope to see multi-namespace support to make full use of these SSDs.

Yep, from my current view, I hope bluestore can do device sharding as mentioned
recently on the ceph-devel ML, so NVMEDevice can be directly divided into
multiple partitions to make full use of the drive.

And you can check the newest PR (https://github.com/ceph/ceph/pull/8503), since
I made a performance regression for the Jewel release...

>
> --
> Best Regards!
>
>
> On 4/22/16, 20:02, "Haomai Wang" <haomai@xxxxxxxx> wrote:
>
>> On Fri, Apr 22, 2016 at 5:52 AM, Li Tianxiang <lanceflee@xxxxxxx> wrote:
>>>
>>> Sorry,
>>> I mean that we have two NVMe SSDs per node, one per OSD. But only one OSD
>>> comes up.
>>>
>>> It seems that the hugepages are all taken by one SPDK instance, leaving
>>> nothing for the other.
>>
>> Yep, this is a problem...
>>
>> We have two options now: one is to use DPDK multi-process support so the
>> two ceph-osd processes are aware of each other, which needs some code. The
>> other is to wait for the "fedora osd process" feature, which is more
>> suitable...
>>
>>>
>>> --
>>> Best Regards!
>>>
>>>
>>> On 4/22/16, 17:28, "李 天祥" <ceph-devel-owner@xxxxxxxxxxxxxxx on behalf of lanceflee@xxxxxxx> wrote:
>>>
>>>> 1. First of all: $ yum remove dpdk-devel dpdk -y
>>>> 2. Get the dpdk-2.2.0 source code and compile it as a shared library, as follows:
>>>>    a. vim config/common_linuxapp, set "CONFIG_RTE_BUILD_SHARED_LIB=y"
>>>>    b. make install T=x86_64-native-linuxapp-gcc DESTDIR=/usr
>>>> 3. Build Ceph: ./configure --with-spdk
>>>
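
For readers following the multi-namespace discussion above: a minimal sketch of
how per-controller namespaces show up through SPDK's public NVMe API, assuming
an SPDK release with the spdk_-prefixed function names and an already-attached
controller. This is not the actual NVMEDevice code; list_namespaces is a
hypothetical helper used only for illustration.

#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

/* Hypothetical helper: walk every namespace an attached controller reports.
 * A sharded NVMEDevice could hand each active namespace to a different OSD. */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t num_ns = spdk_nvme_ctrlr_get_num_ns(ctrlr);

        for (uint32_t nsid = 1; nsid <= num_ns; nsid++) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                /* Drives like the P3700 currently expose only NSID 1 as active. */
                if (ns == NULL || !spdk_nvme_ns_is_active(ns))
                        continue;

                printf("namespace %" PRIu32 ": %" PRIu64 " bytes, sector size %" PRIu32 "\n",
                       spdk_nvme_ns_get_id(ns),
                       spdk_nvme_ns_get_size(ns),
                       spdk_nvme_ns_get_sector_size(ns));
        }
}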
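
On the hugepage contention mentioned above, a rough sketch of the kind of DPDK
EAL arguments involved: --socket-mem caps how much hugepage memory a process
grabs and --file-prefix keeps two instances' runtime files separate. This is a
generic DPDK illustration, not bluestore's actual init path; init_eal_for_osd
and all option values are made up for the example.

#include <stdio.h>
#include <rte_eal.h>

/* Illustration only: give each ceph-osd-like process a capped hugepage
 * allocation and a distinct runtime prefix, so one instance cannot
 * consume every hugepage on the node. */
int init_eal_for_osd(const char *prefix)
{
        char file_prefix[64];
        snprintf(file_prefix, sizeof(file_prefix), "--file-prefix=%s", prefix);

        char *argv[] = {
                "ceph-osd",
                "-c", "0x3",             /* core mask for this instance */
                "-n", "4",               /* memory channels */
                "--socket-mem", "512",   /* cap hugepage memory (MB) per socket */
                file_prefix,             /* keep per-process hugepage files apart */
        };
        int argc = sizeof(argv) / sizeof(argv[0]);

        return rte_eal_init(argc, argv);
}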