On Sun, Oct 16, 2022 at 2:57 AM Eshcar Hillel <eshcarh@xxxxxxxxxx> wrote:
>
> Hi Ceph Devs,
>
> I want to run Ceph on the KStore storage backend.
>
> I realize KStore is an experimental backend, and the current default is BlueStore. However, I found work that compared the performance of the two backends, and I would like to repeat these tests.
>
> The dev deployment instructions at https://docs.ceph.com/en/latest/dev/dev_cluster_deployment/ explain how to start a development cluster using either the --bluestore or --kstore flags. Would this deployment allow me to run a proper benchmark test?

Well, that developer documentation is for vstart, which spins up a whole cluster on a single machine. It's good for developers, but it certainly doesn't produce meaningful performance numbers. (If you want to poke at kstore with it anyway, there's a rough vstart sketch in the P.S. below.)

> In CMakeCache.txt I found the parameter WITH_BLUESTORE:BOOL=ON, but no equivalent param for KStore. I need to make some minor changes to the code, so I need to build from source and cannot use the packages from the distribution. Do I need to check out an older version than master to be able to run KStore?

I believe that flag is just some cruft left over from when BlueStore was new and not everybody doing development wanted to wait to build it. KStore is very small code-wise, so I guess it's built unconditionally; you shouldn't need an older branch than master.

> In addition, I would like to run the experiment on our own version of RocksDB. The README at https://github.com/ceph/ceph/blob/main/README.md indicates that cmake builds RocksDB from source but allows opting in to a system library (provided it meets the minimum version required by Ceph) via WITH_SYSTEM_ROCKSDB:BOOL=ON. Where can I define the path to the package, or to our RocksDB code?

You're going to need to dig into the build settings for this one; there's a rough starting point in the P.S. below.

More broadly though, you should be aware KStore isn't so much "experimental" as "an idea-testing toy", so it's not something people have spent any effort on.
-Greg
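
P.S. For the vstart piece, something along these lines should bring up a small single-node kstore cluster from the build directory. This is a sketch from memory, so double-check the exact flag spellings against the doc you linked or ../src/vstart.sh --help:

    cd build
    # 1 monitor, 3 OSDs, no MDS; -n creates a new cluster, -d enables debug output
    MON=1 OSD=3 MDS=0 ../src/vstart.sh -n -d --kstore
    # tear it down again when you're done
    ../src/stop.sh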
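
For benchmarking you will also want to watch the build type: as far as I remember, do_cmake.sh configures a Debug build by default, which is far too slow to measure anything useful. Roughly:

    # configure an optimized build with debug symbols (a sketch; adjust to taste)
    ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
    cd build
    ninja    # or whatever build tool your cmake generator set up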
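
For your own RocksDB, the generic CMake approach is to install the fork to a prefix and point the Ceph build at it. I don't remember offhand exactly which cache variables the RocksDB find module honours, so treat the prefix trick below as a starting point and be ready to grep around cmake/modules/ in the source tree; /opt/myrocksdb is just a made-up example path:

    # build and install your RocksDB fork to some prefix, e.g. /opt/myrocksdb (hypothetical)
    ./do_cmake.sh -DWITH_SYSTEM_ROCKSDB=ON -DCMAKE_PREFIX_PATH=/opt/myrocksdb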