Basically, sticking to a single socket lets you avoid a lot of the NUMA
issues that can crop up on dual-socket machines, so long as you still
have enough overall CPU power. Ben England and Joe Mario here at Red
Hat have been looking into some of these issues, using perf c2c to observe
things like remote cache line contention under Ceph workloads.
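If you do end up on a dual-socket box anyway, one partial mitigation is
to keep each OSD on the CPUs of a single NUMA node. A minimal sketch in
Python, assuming Linux sysfs and a placeholder OSD pid (nothing official
or Ceph-specific about it):

import os

def node_cpus(node):
    # Parse the CPU list for one NUMA node from sysfs, e.g. "0-9,20-29".
    with open("/sys/devices/system/node/node%d/cpulist" % node) as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

def pin_to_node(pid, node):
    # Restrict an already-running process (e.g. an OSD) to one node's CPUs.
    os.sched_setaffinity(pid, node_cpus(node))

# pin_to_node(12345, 0)   # 12345 is a placeholder OSD pid

Same idea as running the OSD under numactl, just done after the fact.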
I think a very interesting high-performance setup could be tightly
packed nodes with 40GbE, high-clocked E3 processors, 2-3 2.5" NVMe
devices per node, and optionally a single system disk. That would match
the form factor of a lot of blade or sled chassis, so long as you have
NVMe-capable single-socket sleds. You lose per-node disk density but
partially make up for it with the cheap, high-clock-speed processors and
the overall node density.
Mark
On 02/03/2016 03:01 PM, Josef Johansson wrote:
I was fascinated as well. This is how it should be done ☺
We are in the middle of ordering and I noticed that they use
single-socket systems for the OSDs due to latency issues. I have only
seen dual-socket systems in the OSD setups here. Is this something you
should do with new SSD clusters?
Regards,
Josef
On Sat, 30 Jan 2016 09:43 Nick Fisk <nick@xxxxxxxxxx> wrote:
Yes, thank you very much. I've just finished going through this and
found it very interesting. The dynamic nature of the infrastructure
from top to bottom is fascinating, especially the use of OSPF per
container.
One question, though: are those latency numbers for writes on Ceph
correct? 9us is very fast; is it perhaps something to do with the 1/100
buffered nature of the test?
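As a rough sanity check (the latencies below are assumed, just to show
the arithmetic, not measurements from the talk): if only 1 write in 100
goes through the synchronous Ceph/rbd write path and the other 99 are
absorbed by the page cache, the reported average can easily land around
9us:

# Back-of-the-envelope with assumed latencies.
buffered_us, synced_us = 5.0, 400.0   # page-cache hit vs. full rbd write
avg_us = 0.99 * buffered_us + 0.01 * synced_us
print(avg_us)   # ~8.95us, dominated by the buffered writes

In other words, a ~9us average would say very little about the actual
synchronous write latency.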
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Gregory Farnum
> Sent: 29 January 2016 21:25
> To: Patrick McGarry <pmcgarry@xxxxxxxxxx>
> Cc: Ceph Devel <ceph-devel@xxxxxxxxxxxxxxx>; Ceph-User <ceph-users@xxxxxxxx>
> Subject: Re: Ceph Tech Talk - High-Performance Production Databases on Ceph
>
> This is super cool — thanks, Thorvald, for the realistic picture of how
> databases behave on rbd!
>
> On Thu, Jan 28, 2016 at 11:56 AM, Patrick McGarry <pmcgarry@xxxxxxxxxx> wrote:
> > Hey cephers,
> >
> > Here are the links to both the video and the slides from the Ceph Tech
> > Talk today. Thanks again to Thorvald and Medallia for stepping forward
> > to present.
> >
> > Video: https://youtu.be/OqlC7S3cUKs
> >
> > Slides:
> > http://www.slideshare.net/Inktank_Ceph/2016jan28-high-performance-production-databases-on-ceph-57620014
> >
> >
> > --
> >
> > Best Regards,
> >
> > Patrick McGarry
> > Director Ceph Community || Red Hat
> > http://ceph.com || http://community.redhat.com
> > @scuttlemonkey || @ceph
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> > in the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com