Re: crimson-osd updates

Hi All,


I just wanted to chime in and say that I think if nothing else, a 1:1 mapping model should be much easier to implement and will give us insight into whether or not we are on the right track.  I don't know if I think it's right in the long run, but I think it's very reasonable to prototype it and see how well it works.  If the prototype shows significant promise we may have more questions to answer, but I strongly support Kefu and Radek in this approach if for no other reason than I think we will learn quite a bit from it.  Good luck Seastar team!


Mark


On 1/11/19 8:45 AM, Matt Benjamin wrote:
Hi Kefu,

This seems like great progress, and I strongly feel that
concentrating on the 1:1 model is a very solid approach.

Matt

On Fri, Jan 11, 2019 at 9:38 AM kefu chai <tchaikov@xxxxxxxxx> wrote:
hi,

i want to update you on the current status of crimson-osd.

where we are:
we have ported/adapted the messenger, mon client, config, logging, and
perf counter subsystems to the seastar framework. on top of them, we
are able to boot[0] crimson-osd now. by "boot", i mean that
crimson-osd can be discovered by the monitor after it starts. so we
still don't have anything to test with, aside from the messenger.
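
to give an idea of what running on seastar looks like, here is a
minimal sketch of a seastar application entry point. the mon_client
type and its connect() call are made-up placeholders for
illustration, not the actual crimson code:

#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>

// hypothetical stand-in for crimson's mon client; the real class
// lives in the crimson tree and has a different interface.
struct mon_client {
    seastar::future<> connect() {
        // would asynchronously connect to the monitor and authenticate
        return seastar::make_ready_future<>();
    }
};

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        // from here on, everything runs inside the seastar reactor:
        // no blocking calls, all i/o is expressed as futures.
        static mon_client monc;
        return monc.connect().then([] {
            // at this point the osd would be visible to the monitor
            return seastar::make_ready_future<>();
        });
    });
}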

the next steps:
to implement the i/o path to serve read requests from the memstore
using the 1:1 threading model. the reason we want to prioritize this
is to understand the behavior of crimson-osd as soon as possible with
minimal effort. so the focus of the current stage of this project
will be:
- the 1:1 mapping implementation. 1:1 mapping is easier to design
and debug, and there are a lot of valuable insights in the 1:1 versus
m:n discussion. it seems the performance of the 1:1 model is not
likely to suffer from any obvious problems, so we think it can be
used to demonstrate the best performance we can achieve with a
shared-nothing architecture. (see the sketch after this list.)
- read path only. but we need to be sure we don't skip any critical
step that could impact the performance of the i/o path in the future;
this will help us get an authentic benchmark result.
- memstore only. access to the memstore is practically very fast and
non-blocking, so we can read and write it without worrying about how
to confine bluestore in an alien world, or how to cater to its need
for facilities that are only available in that alien world.
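
to make the 1:1 model concrete, here is a rough sketch of how the
read path dispatch could look: each core owns a shard-local memstore,
and a request for a given pg is routed to the core that owns that pg.
all the names here (memstore_shard, pg_to_core, handle_read) are
invented for illustration and do not reflect the actual crimson code:

#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>
#include <seastar/core/smp.hh>
#include <map>
#include <string>

// hypothetical shard-local object store: a plain std::map suffices
// because each core only ever touches its own instance.
class memstore_shard {
    std::map<std::string, std::string> objects;
public:
    seastar::future<std::string> read(std::string oid) {
        // purely in-memory, so the read completes immediately
        return seastar::make_ready_future<std::string>(objects[oid]);
    }
};

// one store instance per core, managed by seastar
// (store.start() must have been called on all shards beforehand)
seastar::sharded<memstore_shard> store;

// 1:1 mapping: a pg is pinned to exactly one core, so all of its
// state is owned by a single shard and needs no locking.
unsigned pg_to_core(unsigned pg_id) {
    return pg_id % seastar::smp::count;
}

seastar::future<std::string> handle_read(unsigned pg_id, std::string oid) {
    // hop to the owning core; the continuation runs shared-nothing
    return store.invoke_on(pg_to_core(pg_id),
                           [oid = std::move(oid)] (memstore_shard& s) {
        return s.read(std::move(oid));
    });
}

the point of the sketch is the routing discipline: because a pg never
migrates between cores, no state is shared and no locks are needed,
which is exactly what the shared-nothing argument for the 1:1 model
rests on.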

once we have the read path ready, and a good understanding of its
performance and design, we will be better prepared for implementing
the write path and for adding more features back to crimson-osd.

as to the benchmarking methodology: since we want an apples-to-apples
comparison between crimson-osd and the existing osd, we could start
$(nproc) instances of crimson-osd side by side and profile them. on
the same box, the same number of existing-osd instances would be used
as the control group.

if i am missing anything or am obviously wrong, please point it out.

thanks,

--
[0] https://github.com/ceph/ceph/pull/25304

--
Regards
Kefu Chai




