Re: crimson-osd vs legacy-osd: should the perf difference be already noticeable?

On Tue, Jan 14, 2020 at 12:13 PM Roman Penyaev <rpenyaev@xxxxxxx> wrote:
> Am I right that you are talking about several connections between
> a primary OSD and a single client instance?  I picture each
> connection being handled by a dedicated CPU thread (or whatever the
> thread that does the scheduling is called) on the OSD side.  Then a
> request to a PG would go to one of the connections by a simple
> modulo operation (something like PG_id % Number_of_conns), so all
> requests to a PG from all clients would eventually be handled by one
> of the CPU threads.  Something like that?

Yup, basically a set of PGs would get its own crimson-msgr instance
so that clients can talk directly to the proper CPU core, without a
crossbar or, more generally, any data/message passing between CPU
cores on the hot paths.
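For illustration, a minimal C++ sketch of that mapping idea (the names
pg_shard_mapping and num_reactor_shards are mine, not actual crimson
code): each PG is pinned to one reactor shard/connection by a modulo,
so every request for that PG ends up on the same core.

    // Hypothetical sketch only; real crimson-osd/Seastar sharding is
    // more involved than this.
    #include <cstdint>
    #include <cstdio>

    struct pg_shard_mapping {
      unsigned num_reactor_shards;  // one crimson-msgr instance per shard

      // Pick the connection/shard that owns a given PG, so all requests
      // for that PG land on the same core and no cross-core message
      // passing is needed on the hot path.
      unsigned shard_for_pg(uint64_t pg_id) const {
        return static_cast<unsigned>(pg_id % num_reactor_shards);
      }
    };

    int main() {
      pg_shard_mapping map{4};  // e.g. an OSD running 4 reactor shards
      for (uint64_t pg = 0; pg < 8; ++pg)
        std::printf("pg %llu -> shard %u\n",
                    static_cast<unsigned long long>(pg),
                    map.shard_for_pg(pg));
    }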

> May I take a look at the link with the numbers?  And which
> persistent object store exactly were you mentioning?

+Mark.
This was the testing Mark initially mentioned: ceph-osd + BlueStore
was compared with ceph-osd + MemStore during random reads.
I can't find the spreadsheet, but I asked Mark about it today.

Regards,
Radek