Re: SSD selection

On Sun, 1 Mar 2015 22:47:48 -0600 Tony Harris wrote:

> On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> > On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote:
> >
> > > On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer <chibi@xxxxxxx>
> > > wrote:
> > >
> > > >
> > > > Again, ultimately you will need to sit down, compile and
> > > > compare the numbers.
> > > >
> > > > Start with this:
> > > > http://ark.intel.com/products/family/83425/Data-Center-SSDs
> > > >
> > > > Pay close attention to the 3610 SSDs; while slightly more expensive,
> > > > they offer 10 times the endurance.
> > > >
> > >
> > > Unfortunately, $300 vs. $100 isn't really "slightly more expensive" ;)
> > >  Although I did notice that the 3710s can be had for ~$210.
> > >
> > >
> > I'm not sure where you get those prices from or what you're comparing
> > with what, but if you look at the OEM prices in the URL up there (which
> > track quite closely with what you can find when looking at shopping
> > prices), a comparison of closely matched capabilities goes like this:
> >
> > http://ark.intel.com/compare/71913,86640,75680,75679
> >
> >
> I'll be honest, the pricing on Intel's website is far from reality.  I
> haven't been able to find any OEMs, and retail pricing on the 200GB 3610
> is ~$231 (the $300 must have been a different model in the line).
> Although $231 does add up really quickly if I need to get 6 of them :(
> 
> 
Using a Google Shopping search (which isn't ideal, but for simplicity's
sake) I see the 100GB DC S3700 from 170 USD and the 160GB DC S3500 from
150 USD, which is a pretty good match for the OEM prices of 180 and 160
USD respectively on the Intel site.

> > You really wouldn't want less than 200MB/s, even in your setup which I
> > take to be 2Gb/s from what you wrote below.
> 
> 
> 
> > Note that the 100GB 3700 is going to perform way better and last
> > immensely longer than the 160GB 3500 while being only moderately more
> > expensive, while the 200GB 3610 is faster (IOPS), lasts 10 times as
> > long AND is cheaper than the 240GB 3500.
> >
> > It is pretty much those numbers that made me use 4 100GB 3700s instead
> > of 3500s (240GB); much more bang for the buck, and it still fit my
> > budget and could handle 80% of the network bandwidth.
> >
> 
> So the 3710s would be an OK solution?  

No, because they start at 200GB and carry a 300 USD price tag. The 3710s
do not replace the 3700s, they extend the selection upwards (mostly in
size).  

> I have seen the 3700s for right about $200, which, although it doesn't
> seem a lot cheaper, does shave about $200 off (shipping costs included)
> when getting 6...
> 
See above, Google Shopping. The lowballer is Walmart, of all places:

http://www.walmart.com/ip/26972768?wmlspartner=wlpa&selectedSellerId=0


> 
> >
> > >
> > > >
> > > > Guesstimate the amount of data written to your cluster per day,
> > > > break that down to the load a journal SSD will see, and then
> > > > multiply by at least 5 to be on the safe side. Then see which SSD
> > > > will fit your expected usage pattern.
> > > >
> > >
> > > Luckily I don't think there will be a ton of data written per day.
> > > The majority of servers whose VHDs will be stored in our cluster
> > > don't have a lot of frequent activity - aside from a few Windows
> > > servers that have DB servers in them (and even they don't write a
> > > ton of data per day, really).
> > >
> >
> > Being able to put even a coarse number on this will tell you if you can
> > skimp on the endurance and have your cluster last like 5 years, or if
> > getting a higher-endurance SSD is going to be cheaper.
> >
> 
> Any suggestions on how I can get a really accurate number on this?  I
> mean, I could probably get some good numbers from the database servers
> in terms of their writes in a given day, but when it comes to other
> processes running in the background, I'm not sure how much these might
> really affect this number.
>

If you have existing servers that run Linux and have been up for a
reasonably long time (months), iostat will give you a very good idea.
No idea about Windows, but I bet those stats exist someplace, too.
 
For example, a Ceph storage node, up 74 days, with OS and journals on the
first 4 drives and OSD HDDs on the other 8:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read kB_wrtn 
sda               9.82        29.88       187.87  191341125 1203171718
sdb               9.79        29.57       194.22  189367432 1243850846
sdc               9.77        29.83       188.89  191061000 1209676622
sdd               8.77        29.57       175.40  189399240 1123294410
sde               5.24       354.19        55.68 2268306443  356604748
sdi               5.02       335.61        63.60 2149338787  407307544
sdj               4.96       350.33        52.43 2243590803  335751320
sdl               5.04       374.62        48.49 2399170183  310559488
sdf               4.85       354.52        50.43 2270401571  322947192
sdh               4.77       332.38        50.60 2128622471  324065888
sdg               6.26       403.97        65.42 2587109283  418931316
sdk               5.86       385.36        55.61 2467921295  356120140
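
If you want to turn those averages into a rough endurance estimate,
something like this back-of-the-envelope Python sketch will do. The rates
are the kB_wrtn/s of the journal drives from the output above, the 5x
margin and 5-year horizon are the rules of thumb from earlier in the
thread, and the ~1.8 PB written rating I'm assuming for the 100GB DC S3700
(10 drive writes per day over 5 years) is from memory, so check the data
sheet before relying on it:

# Rough endurance check from iostat's since-boot average kB_wrtn/s.
journal_kb_wrtn_per_s = {"sda": 187.87, "sdb": 194.22, "sdc": 188.89, "sdd": 175.40}
safety_factor = 5      # "multiply by at least 5 to be on the safe side"
years = 5              # desired service life of the cluster
endurance_tb = 1825    # assumed ~1.8 PBW rating of a 100GB DC S3700, verify on the data sheet

for dev, kbps in journal_kb_wrtn_per_s.items():
    tb_per_year = kbps * 1024 * 86400 * 365 / 1e12    # kB/s -> TB written per year
    projected = tb_per_year * years * safety_factor
    verdict = "plenty of headroom" if projected < endurance_tb else "look at a higher-endurance model"
    print("%s: ~%.1f TB/year, ~%.0f TB over %d years with the margin -> %s"
          % (dev, tb_per_year, projected, years, verdict))

Even with the generous margin, the drives above come out at a tiny
fraction of the assumed rating, which is exactly the "coarse number" that
tells you whether the cheaper line is good enough.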

> 
> >
> >
> > >
> > So it's 2x1Gb/s then?
> >
> 
> client side 2x1, cluster side 3x1.
> 
So 500MB/s with a tailwind on a sunny day.

Meaning that something that can do about 400MB/s will do nicely, as you're
only ever going to get near that when doing massive backfilling AND client
writes.
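
Or in actual numbers, assuming the 2x1Gb/s client and 3x1Gb/s cluster
links from above (just a sketch, protocol overhead not included):

total_gbps = 2 + 3                # 2x1GbE client side + 3x1GbE cluster side
mb_per_s = total_gbps * 1000 / 8  # ~625 MB/s on paper, call it ~500 real world
print("theoretical ceiling: ~%d MB/s" % mb_per_s)
# Two journal SSDs at ~200MB/s sequential write each cover ~400MB/s of that,
# which you'll only approach during heavy backfilling plus client writes.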

> 
> >
> > At that speed a single SSD from the list above would do, if you
> > a) are aware of the risk that this SSD failing will kill all OSDs on
> > that node, and
> > b) don't expect your cluster to be upgraded.
> >
> 
> I'd really prefer 2 per node from our discussions so far - it's all a
> matter of cost, but I also don't want to jump to a poor decision just
> because it can't be afforded immediately.  I'd rather gradually upgrade
> nodes as they can be afforded than jump into something cheap now only to
> have to pay a bigger price later.
> 
Yup, 2 is clearly better; I'd go with 2 100GB DC S3700s.

> 
> >
> > > Well, I'd like to steer away from the consumer models if possible
> > > since they (AFAIK) don't contain caps to finish writes should a
> > > power loss occur, unless there is one that does?
> > >
> > Not that I'm aware of.
> >
> > Also note that while Andrei is happy with his 520s (especially
> > compared to the Samsungs), I have various 5x0 Intel SSDs in use as
> > well, and while they are quite nice, the 3700s are so much faster
> > (consistently) in comparison that one can't believe it ain't butter.
> > ^o^
> >
> 
> I'll have to see if I can get funding.  I've already donated enough to
> get the (albeit used) servers and NIC cards, and I just can't personally
> afford to donate another $1,000-1,200, but hopefully I'll soon have it
> nailed down which exact model I would like, and maybe I can get them to
> pay for at least half of them...  God, working for a school can be
> taxing at times.
> 
It's not just schools, but yeah. ^.^

Christian
> -Tony
> 
> 
> 
> >
> > Christian
> >
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



