crimson-osd updates

hi,

i want to update you on the current status of crimson-osd.

where we are:
we have ported/adapted the messenger, mon client, config, logging,
and perf counter to the seastar framework. based on them, we are now
able to boot[0] crimson-osd. by "boot", i mean that crimson-osd can
be discovered by the monitor after it starts. so we still don't have
anything to test with, aside from the messenger.
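
for anyone not familiar with seastar, here is a minimal sketch
(hypothetical, not the actual crimson-osd code) of what "running on
the seastar framework" means: the components above are started inside
the seastar reactor and chained as futures, instead of each spawning
threads of its own.

#include <iostream>
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>

int main(int argc, char** argv) {
  seastar::app_template app;
  return app.run(argc, argv, [] {
    // the messenger, mon client, config, etc. would be initialized
    // here as a chain of futures; this stub just resolves immediately
    std::cout << "booted on seastar\n";
    return seastar::make_ready_future<>();
  });
}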

the next steps:
to implement the i/o path to serve read requests with the memstore,
using the 1:1 threading model. the reason we want to prioritize this
is to understand the behavior of crimson-osd as soon as possible with
minimal effort. so the focus of the current stage of this project
will be:
- the 1:1 mapping implementation. the 1:1 mapping is easier to design
and to debug, and there are a lot of valuable insights in the 1:1
versus m:n discussion. it seems the performance of the 1:1 model is
not likely to suffer from any obvious problems, so we think it can be
used to present the best performance we can achieve with a
shared-nothing architecture (see the sketch after this list).
- read path only. but we need to be sure we don't skip any critical
step in the i/o path that could impact its performance in the future;
this will help us get an authentic benchmark result.
- memstore only. access to the memstore is practically very fast and
non-blocking, so we can read and write it without worrying about how
to confine bluestore to an alien world, or how to cater to its need
for facilities that are only available there (the memstore read is
also shown in the sketch below).
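
to make the 1:1, shared-nothing model and the non-blocking memstore
access above more concrete, here is a minimal sketch. it is not the
actual crimson code: the mem_store type, its read() method, and the
hash-based object placement are made up for illustration. each
seastar shard (one reactor per core) owns a private object map; a
read is routed to the owning shard with invoke_on(), and since the
lookup is a plain in-memory map access, it completes without ever
blocking the reactor.

#include <map>
#include <string>
#include <iostream>
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>
#include <seastar/core/smp.hh>

// per-shard, in-memory object store. no locks are needed, as only
// the owning shard ever touches it (shared-nothing)
struct mem_store {
  std::map<std::string, std::string> objects;
  seastar::future<std::string> read(const std::string& oid) {
    // an in-memory lookup never blocks the reactor, so the result
    // can be returned as an already-resolved future
    return seastar::make_ready_future<std::string>(objects[oid]);
  }
  seastar::future<> stop() { return seastar::make_ready_future<>(); }
};

seastar::sharded<mem_store> store;

seastar::future<std::string> handle_read(std::string oid) {
  // 1:1 mapping: route the request to the shard owning the object;
  // no cross-core sharing, no locking
  auto shard = std::hash<std::string>{}(oid) % seastar::smp::count;
  return store.invoke_on(shard, [oid] (mem_store& s) {
    return s.read(oid);
  });
}

int main(int argc, char** argv) {
  seastar::app_template app;
  return app.run(argc, argv, [] {
    return store.start().then([] {
      return handle_read("rbd_data.1");
    }).then([] (std::string data) {
      std::cout << "read " << data.size() << " bytes\n";
      return store.stop();
    });
  });
}

note that the only cross-core operation here is the message to the
owning shard; everything after that is shard-local, which is exactly
the property we hope makes the read path fast.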

once the read path is ready, and we have a good understanding of its
performance and design, we will be better prepared for implementing
the write path and for adding more features back to crimson-osd.

as to the benchmarking methodology: since we want an apples-to-apples
comparison between crimson-osd and the existing osd, we could start
$(nproc) instances of crimson-osd side by side and profile them. on
the same box, the same number of existing-osd instances will be used
as the control group.

if i am missing anything or am plainly wrong, please point it out.

thanks,

--
[0] https://github.com/ceph/ceph/pull/25304

-- 
Regards
Kefu Chai
