Re: crimson-osd vs legacy-osd: should the perf difference be already noticeable?

Hello Roman,

I consider *the difference* between BlueStore and MemStore in ceph-osd
a rough boundary on how much is achievable for this single component.
It's rather unlikely that SeaStore can beat MemStore. ;-)
Still, it's a very, very rough estimate because of the dependencies
between components. For example, a syscall in the messenger could well
decrease IPC in ObjectStore too. So, please take it with a grain of salt.

Regards,
Radek

On Wed, Jan 15, 2020 at 12:05 PM Roman Penyaev <rpenyaev@xxxxxxx> wrote:
>
> On 2020-01-14 23:07, Mark Nelson wrote:
> > On 1/14/20 2:05 PM, Radoslaw Zarzynski wrote:
> >> On Tue, Jan 14, 2020 at 12:13 PM Roman Penyaev <rpenyaev@xxxxxxx>
> >> wrote:
> >>> Am I right that you are talking about several connections between
> >>> a primary osd and a single client instance?  At least I'm picturing
> >>> that each connection represents a software cpu (or whatever the
> >>> thread that does the scheduling is called?) on the osd side.  Then
> >>> I can imagine
> >>> that a request to a PG goes to one of the connections by simple
> >>> modulo operation (something like PG_id % Number_of_conns).  So all
> >>> requests to a PG from all clients will be eventually handled by one
> >>> of the cpu threads.  Something like that?
> >> Yup, basically a set of PGs would get its own crimson-msgr instance
> >> to let clients talk directly with the proper CPU core – without
> >> crossbar or, in general, any data / message passing between CPU cores
> >> on hot paths.
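
[Editorial note: a toy sketch of the modulo-based PG-to-core mapping
being discussed above. The names and the per-core count are illustrative
only, not crimson's actual API.]

```python
# Toy model of mapping PGs to per-core crimson-msgr instances by a
# simple modulo, as described above. Illustrative names, not real API.

NUM_CORES = 4  # assume one messenger instance / reactor per core

def shard_for_pg(pg_id: int, num_cores: int = NUM_CORES) -> int:
    """All requests for a given PG land on the same core/shard."""
    return pg_id % num_cores

# Every client computes the same shard for a given PG, so all traffic
# for that PG arrives on one core -- no cross-core message passing on
# the hot path.
print([shard_for_pg(pg) for pg in (0, 1, 5, 9)])  # -> [0, 1, 1, 1]
```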
> >>
> >>> May I take a look at the link with the numbers? And which
> >>> persistent object store exactly were you mentioning?
> >> +Mark.
> >> This was the testing Mark initially mentioned: ceph-osd + BlueStore
> >> was compared with ceph-osd + MemStore during random reads.
> >> I can't find the spreadsheet but I asked Mark today.
> >>
> >
> > Found it:
> >
> >
> > https://docs.google.com/spreadsheets/d/1kfzbvtdhUvrzjn9eW0r6Fqrm8gE6X0bmZnECfrLSt8g/edit?usp=sharing
> >
>
> Thanks for sharing.  Do I understand correctly that this ~270k cycles/io
> for writes can be treated as the best estimate for the objectstore?
> A kind of ideal boundary we should strive for (comparing on the same
> hardware, of course)?  Since memstore is log-less, this estimate is,
> of course, hardly reachable, but it can be treated as a perfect,
> unattainable reference.
>
> --
> Roman
>
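[Editorial note: for readers unfamiliar with the cycles/io metric quoted
above, it is typically derived as a back-of-envelope ratio of cycles
spent to IOs completed. The figures below are made up for illustration
and are not the measurements from the spreadsheet.]

```python
# Back-of-envelope cycles-per-IO estimate, as commonly derived from
# CPU frequency, utilization, and observed IOPS. Numbers are
# illustrative only.

def cycles_per_io(cpu_hz: float, cpu_utilization: float, iops: float) -> float:
    """Cycles consumed per I/O = cycles actually spent / IOs completed."""
    return (cpu_hz * cpu_utilization) / iops

# e.g. a hypothetical 3 GHz core at 90% utilization doing 10k write IOPS:
est = cycles_per_io(3.0e9, 0.9, 10_000)
print(f"{est:.0f} cycles/io")  # -> 270000 cycles/io
```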
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx