Re: performance between ceph-osd and crimson-osd

Hi Igor,

Thank you for answering.
I understand what you are saying, but I still wonder why crimson-osd
(alienstore) performs so poorly.
I also tested the performance on SeaStore, but the ObjectStores used with
crimson-osd (alienstore, seastore) still show extremely poor performance.
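
For reference, a sketch of the kind of vstart invocation used for the SeaStore
test (the --crimson and --seastore flags are assumptions from my local
checkout, please double-check them against ../src/vstart.sh --help):

  $ cd build
  $ MDS=0 MGR=1 MON=1 OSD=1 RGW=0 ../src/vstart.sh -n -x \
        --without-dashboard --crimson --seastore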

Do you have any idea what might be causing this? Please let me know.

Shin

On Thu, Aug 19, 2021 at 8:08 PM, Igor Fedotov <ifedotov@xxxxxxx> wrote:

> Hi Shin,
>
> a side note - your disk choice might be inappropriate for Ceph - AFAIK
> this is a consumer model, which apparently lacks power loss protection.
>
> This causes a dramatic performance drop when each write request is
> followed by an fsync - and that is the mode RocksDB/BlueFS operate in.
> Hence expect pretty low performance numbers for BlueStore, especially
> when small writes are benchmarked.
>
> There has been plenty of discussion of this topic on the web (e.g. on the
> ceph-users mailing list or at
>
> https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=toggle_view_desktop#CAPACITORS.21
> ).
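>
> A quick way to check whether your drive is affected: benchmark sync random
> writes directly on the device with fio. Just a rough sketch - the device
> path below is an example and the test writes to it, so only point it at a
> scratch disk:
>
> # 4k random writes with an fsync after every write - roughly the pattern
> # RocksDB/BlueFS generates; drives without power loss protection typically
> # show very low numbers here
> $ fio --name=sync-write-test --filename=/dev/nvme0n1 --direct=1 \
>       --ioengine=libaio --rw=randwrite --bs=4k --iodepth=1 \
>       --fsync=1 --runtime=30 --time_based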
>
>
> Thanks,
>
> Igor
>
> On 8/19/2021 10:32 AM, 신희원 (Student, Dept. of Computer Science and Engineering) wrote:
> > Hi Mark.
> > Thanks for answering.
> >
> > I tested the performance on a Samsung NVMe SSD 960 PRO 512GB with a 2-way
> > E5-2650 (2.2GHz, 12-core) CPU setup, not an HDD.
> > Could it be a problem that I measured performance on a Ceph cluster
> > deployed with 'vstart'?
> > Please tell me how to deploy Ceph so I can get similar results!
> >
> > Thank you
> >
> > Shin
> >
> > On Thu, Aug 19, 2021 at 3:55 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> >
> >> On 8/18/21 9:22 PM, 신희원 (Student, Dept. of Computer Science and Engineering) wrote:
> >>> Hi,
> >>>
> >>> I measured the performance of ceph-osd and crimson-osd with the same
> >>> single-core affinity.
> >>> I measured IOPS and latency with rados bench write; crimson-osd delivered
> >>> about 3x lower performance than ceph-osd (ceph-osd (BlueStore): 228 IOPS,
> >>> crimson-osd (AlienStore): 73 IOPS).
> >>> -> " $ rados bench -p rbd 10 write --no-cleanup "
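> >>>
> >>> (A sketch of one way to do the single-core pinning - assuming taskset on
> >>> the already-running OSD processes; the single-OSD pidof lookup is just
> >>> illustrative:)
> >>> $ taskset -acp 0 $(pidof ceph-osd)      # pin all ceph-osd threads to core 0
> >>> $ taskset -acp 0 $(pidof crimson-osd)   # pin all crimson-osd threads to core 0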
> >>>
> >>> Also, crimson-osd's CPU utilization is almost 100%.
> >>> I think this is the reason for the performance degradation.
> >>>
> >>> crimson-osd is known for lower CPU consumption than ceph-osd,
> >>> so I wonder why it uses more CPU in this experiment.
> >>>
> >>> Please let me know how to fix this.
> >>>
> >>> Shin
> >>
> >> Those are pretty low numbers in general.  Is that on HDD?  You are
> >> correct that crimson right now is only using a single reactor core.
> >> That means that tests done with cyanstore will (almost!) always be
> >> limited to ~100% CPU usage.  With alienstore you'll still have bluestore
> >> worker threads, but we are often still limited by work being done in the
> >> reactor thread.  Here's the most recent performance data we've got
> >> internally (from a ~July build of crimson-osd vs ceph-osd) on NVMe
> >> drives using fio:
> >>
> >>
> >>
> >>
> https://docs.google.com/spreadsheets/d/1AXj9h0yDc2ztFWuptqcTrNU2Ui3wMyAn6QUft3CPdcc/edit?usp=sharing
> >>
> >>
> >> The gist of it is that on the read path, crimson+cyanstore is
> >> significantly more efficient than crimson+alienstore and any classic
> >> setup.  We are slower in terms of absolute performance, but that's
> >> expected to be the case until the multi-reactor work is done.  The
> >> thinking right now is that we probably have some optimization we can do
> >> in alienstore, and of course in bluestore as well (we have ongoing work
> >> there).  On the write path things are a little murkier.  Cyanstore for
> >> some reason is more efficient with very small and very large datasets
> >> but not in the middle-sized case.  alienstore/bluestore and classic
> >> memstore efficiency seems to drop overall as the dataset size grows.  In
> >> fact, classic memstore is significantly less efficient on the write path
> >> than bluestore is, and this isn't the first dataset to show this.
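> >>
> >> If you want to see where the CPU is going in your own setup, a quick
> >> sketch is to watch per-thread usage of the crimson-osd process (pidstat
> >> comes from the sysstat package; the single-OSD pidof lookup is just for
> >> illustration):
> >>
> >> # per-thread CPU usage, refreshed every second - look for the seastar
> >> # reactor thread sitting near 100% while the alienstore/bluestore worker
> >> # threads stay comparatively idle
> >> $ pidstat -t -p $(pidof crimson-osd) 1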
> >>
> >>
> >> I guess all of this is a roundabout way of saying that you are testing
> >> very "wild west" code right now.  73 IOPS is pretty abysmal, but if this
> >> is on HDD you might be the only person who has ever tried
> >> crimson+alienstore+HDD so far.  We might be more sensitive to backend
> >> latency on HDD with crimson+alienstore, but that's just a guess.
> >>
> >>
> >> Mark
> >>
> >>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



