Re: Ceph Tech Talk - High-Performance Production Databases on Ceph

I was fascinated as well. This is how it should be done ☺

We are in the middle of ordering hardware, and I noticed they use single-socket systems for the OSDs due to latency issues. I have only seen dual-socket systems in the OSD setups here. Is this something you should do with new SSD clusters?
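For what it's worth, my understanding of the single-socket argument (not from the talk itself) is that on a dual-socket box an OSD thread can end up on the socket away from its NIC or NVMe device, paying cross-socket interconnect and remote-DRAM costs, which starts to matter once the whole I/O path is measured in microseconds. A back-of-envelope sketch with purely hypothetical numbers:

```python
# Rough NUMA penalty estimate. All figures below are illustrative
# assumptions, not measurements from the talk or from any real cluster.
LOCAL_ACCESS_NS = 80    # assumed local DRAM access latency
REMOTE_ACCESS_NS = 140  # assumed remote (cross-socket) DRAM access latency
MISSES_PER_IO = 100     # assumed cache-missing memory accesses per OSD write

def numa_penalty_us(remote_fraction):
    """Extra latency per I/O if `remote_fraction` of the cache-missing
    accesses have to cross the socket interconnect."""
    extra_ns = MISSES_PER_IO * remote_fraction * (REMOTE_ACCESS_NS - LOCAL_ACCESS_NS)
    return extra_ns / 1000.0

print(numa_penalty_us(0.0))  # pinned to the right socket: no penalty
print(numa_penalty_us(0.5))  # half the misses go remote
```

Even with these made-up numbers, a few microseconds of avoidable penalty is a large fraction of a single-digit-microsecond write path, which would explain preferring single-socket nodes (or careful NUMA pinning) for all-SSD OSDs.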

Regards,
Josef


On Sat, 30 Jan 2016 09:43 Nick Fisk <nick@xxxxxxxxxx> wrote:
Yes, thank you very much. I've just finished going through this and found it very interesting. The dynamic nature of the infrastructure from top to bottom is fascinating, especially the use of OSPF per container.

One question, though: are those latency numbers for writes on Ceph correct? 9 µs is very fast. Or is it an artifact of the 1-in-100 buffered nature of the test?
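A 9 µs average would be arithmetically plausible if only every 100th write actually syncs (e.g. fio-style tests where `fsync=100` leaves 99 of 100 writes in the page cache). A rough sketch, with both latency figures being hypothetical rather than taken from the talk:

```python
# Blended latency when only 1 in `sync_every` writes is synced to the cluster.
# Both latency figures are assumptions for illustration only.
BUFFERED_LAT_US = 5.0  # hypothetical page-cache write latency
SYNC_LAT_US = 400.0    # hypothetical rbd sync round-trip latency

def mean_latency_us(buffered_us, sync_us, sync_every=100):
    """Average per-write latency: (sync_every - 1) buffered writes
    plus one synced write, divided by the total number of writes."""
    return ((sync_every - 1) * buffered_us + sync_us) / sync_every

print(mean_latency_us(BUFFERED_LAT_US, SYNC_LAT_US))  # ~8.95 us
```

So a single-digit-microsecond average can coexist with a much slower actual sync path; the buffered writes dominate the mean.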

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Gregory Farnum
> Sent: 29 January 2016 21:25
> To: Patrick McGarry <pmcgarry@xxxxxxxxxx>
> Cc: Ceph Devel <ceph-devel@xxxxxxxxxxxxxxx>; Ceph-User <ceph-users@xxxxxxxx>
> Subject: Re: Ceph Tech Talk - High-Performance Production
> Databases on Ceph
>
> This is super cool — thanks, Thorvald, for the realistic picture of how
> databases behave on rbd!
>
> On Thu, Jan 28, 2016 at 11:56 AM, Patrick McGarry <pmcgarry@xxxxxxxxxx>
> wrote:
> > Hey cephers,
> >
> > Here are the links to both the video and the slides from the Ceph Tech
> > Talk today. Thanks again to Thorvald and Medallia for stepping forward
> > to present.
> >
> > Video: https://youtu.be/OqlC7S3cUKs
> >
> > Slides:
> > http://www.slideshare.net/Inktank_Ceph/2016jan28-high-performance-production-databases-on-ceph-57620014
> >
> >
> > --
> >
> > Best Regards,
> >
> > Patrick McGarry
> > Director Ceph Community || Red Hat
> > http://ceph.com  ||  http://community.redhat.com
> > @scuttlemonkey || @ceph
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> > in the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
