Re: Ceph SSD array with Intel DC S3500's

On Mon, 6 Oct 2014 14:59:02 +1300 Andrew Thrift wrote:

> Hi Mark,
> 
> Would you see any benefit in using an Intel P3700 NVMe drive as a journal
> for, say, 6x Intel S3700 OSDs?
> 
I don't want to sound facetious, but buy some, find out and tell us. ^o^

Seriously, common sense might suggest it would be advantageous, but all
the recent posts about using one SSD as a journal for another SSD showed
it to be slower than just having the journal on the same OSD SSD.

YMMV.
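
For what it's worth, a minimal sketch of the two layouts, assuming the
ceph-disk workflow and purely example device names (/dev/sdb for the OSD
SSD, /dev/nvme0n1 for a P3700), neither of which comes from this thread:

# Journal colocated on the OSD SSD; ceph-disk carves a journal
# partition out of the same device by default:
ceph-disk prepare /dev/sdb

# Journal on a separate device (e.g. the NVMe drive):
ceph-disk prepare /dev/sdb /dev/nvme0n1

# Journal partition size, set in ceph.conf before preparing the OSDs:
[osd]
osd journal size = 10240    ; in MB, i.e. a 10 GB journal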

Christian

> 
> 
> On Fri, Oct 3, 2014 at 6:58 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx>
> wrote:
> 
> > On 10/02/2014 12:48 PM, Adam Boyhan wrote:
> >
> >> Hey everyone, loving Ceph so far!
> >>
> >
> > Hi!
> >
> >
> >
> >> We are looking to roll out a Ceph cluster with all SSDs.  Our
> >> application is around 30% writes and 70% reads, random IO.  The plan is
> >> to start with roughly 8 servers with eight 800GB Intel DC S3500s per
> >> server.  I wanted to get some input on the use of the DC S3500. Seeing
> >> that we are primarily a read environment, I was thinking we could
> >> easily get away with the S3500 instead of the S3700, but I am unsure.
> >> Obviously the price point of the S3500 is very attractive, but if they
> >> start failing on us too soon, it might not be worth the savings.  My
> >> largest concern is the journaling of Ceph, so maybe I could use the
> >> S3500s for the bulk of the data and utilize an S3700 for the
> >> journaling?
> >>
> >
> > I'd suggest that if you are using SSDs for OSDs anyway, you are better
> > off just putting the journal on the same SSD, so you don't increase
> > the number of devices whose failure can take out an OSD.  In terms of
> > the S3500 vs the S3700, it's all a numbers game.  Figure out how much
> > data you expect to write, how many drives you have, and what the
> > expected write endurance of each drive is, factor in replication,
> > journaling, etc., and work out what you need! :)
> >
> > The S3500 may be just fine, but it depends entirely on your write
> > workload.
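
As a rough illustration of that numbers game, here is a back-of-the-envelope
sketch in Python; every input below (daily write volume, replication factor,
journal write doubling, drive count, endurance rating) is an assumed example,
not a figure from this thread or a vendor spec:

# Back-of-the-envelope SSD endurance estimate.
# All inputs are assumptions -- substitute your own numbers.
client_writes_tb_per_day = 2.0    # client data written to the cluster per day, TB
replication = 3                   # each client write is stored on 3 OSDs
journal_factor = 2                # colocated journal: every write hits the SSD twice
num_ssds = 64                     # e.g. 8 servers x 8 drives
rated_endurance_tb = 450.0        # assumed total-bytes-written rating per drive

tb_per_ssd_per_day = client_writes_tb_per_day * replication * journal_factor / num_ssds
years_to_rated_endurance = rated_endurance_tb / tb_per_ssd_per_day / 365.0

print("%.2f TB/day per SSD -> ~%.1f years to rated endurance"
      % (tb_per_ssd_per_day, years_to_rated_endurance))

If that estimate lands uncomfortably close to the planned service life of
the cluster, the higher-endurance drive starts to look like the cheaper
option.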
> >
> >
> >> I appreciate the input!
> >>
> >> Thanks All!
> >>
> >>


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



