Re: SPDK for BlueStore rocksDB


 



Jorge,

I'd suggest starting with a regular (non-SPDK) configuration and deploying a test cluster. Then do some benchmarking against it and check whether the NVMe drive is the actual bottleneck. I doubt it is, though. I did some experiments a while ago and didn't see any benefit from SPDK in my case - probably because the bottlenecks were somewhere else.
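For the benchmarking step, a quick fio run against the raw NVMe gives a rough idea of its ceiling before you layer Ceph on top. Something like the below (device name is just a placeholder, and it writes to the raw device, so only run it while the drive holds no data):

    fio --name=nvme-randwrite --filename=/dev/nvme0n1 --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting

Compare that against the aggregate DB/WAL traffic your 7~10 HDD OSDs can realistically generate; that should tell you whether the shared NVMe is really the limiting factor.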


Hope this helps,

Igor


On 1/24/2018 12:46 PM, Jorge Pinilla López wrote:

Hey, sorry if the question doesn't really make a lot of sense; I am talking from almost complete ignorance of the topic, but there is not a lot of info about it.

I am planning to create a cluster with 7~10 NL-SAS HDDs and 1 NVMe drive per host.

The NVMe would be used as the RocksDB and journal device for each OSD (HDD block device), but I am worried about it becoming a bottleneck, as all the OSDs would share the same device.
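Roughly what I have in mind per OSD is something like the following (device names are just placeholders), with one partition of the NVMe dedicated to each OSD's DB:

    # one NVMe partition per OSD for RocksDB (placeholder devices)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2

Please correct me if that is not the right way to lay it out.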

I've been reading about SPDK for NVMe devices, and I have seen that the BlueStore configuration supports it (http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#spdk-usage), but I would like to know the current status of SPDK and whether I could use it for the RocksDB device only and not for the block device.
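From that page, the documented approach seems to be to unbind the NVMe from the kernel driver with SPDK's setup.sh script and then point the OSD at it by serial number in ceph.conf, roughly like this (the serial number below is the docs' example, not my device):

    [osd]
    bluestore_block_path = spdk:55cd2e404bd73932

What I cannot tell from the docs is whether bluestore_block_db_path accepts an spdk: path as well, which is what my layout would need.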

I've also been reading about RDMA and I would like to know whether I could use it in this scenario; all I have found was setups using whole NVMe devices for both block and RocksDB.
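If it is only about the messenger, the closest thing I have found is the async+rdma messenger type, which as far as I can tell would be configured roughly like this (the device name is a placeholder, and I have no idea how production-ready it is):

    [global]
    ms_type = async+rdma
    ms_async_rdma_device_name = mlx4_0

but that is the network transport between daemons, not something tied to the NVMe itself, so I may be mixing up two different things.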

I would really appreciate it if someone could introduce me to this topic; it's really interesting but also confusing at the same time.

Thanks a lot!

--

Jorge Pinilla López
jorpilo@xxxxxxxxx
Computer engineering student
Intern in the systems area (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

