Re: SSD selection

On Sun, Mar 1, 2015 at 11:19 PM, Christian Balzer <chibi@xxxxxxx> wrote:

> >
> I'll be honest, the pricing on Intel's website is far from reality.  I
> haven't been able to find any OEMs, and retail pricing on the 200GB 3610
> is ~$231 (the $300 must have been a different model in the line).
> Although $231 does add up real quick if I need to get 6 of them :(
>
>
Using a Google Shopping search (which isn't ideal, but for simplicity's
sake) I see the 100GB DC S3700 from 170 USD and the 160GB DC S3500 from
150 USD, which are a pretty good match for the OEM prices on the Intel site
of 180 and 160 respectively.


If I have to buy them personally, that'll work well.  If I can get work to buy them, then I kinda have to limit myself to the vendors we have marked as suppliers, as it's a pain to get a new company in the mix.

 
> > You really wouldn't want less than 200MB/s, even in your setup, which I
> > take to be 2Gb/s from what you wrote below.
>
>
>
> > Note that the 100GB 3700 is going to perform way better and last
> > immensely longer than the 160GB 3500 while being moderately more
> > expensive, while the 200GB 3610 is faster (IOPS), lasts 10 times
> > longer AND is cheaper than the 240GB 3500.
> >
> > It is pretty much those numbers that made me use 4 100GB 3700s instead
> > of 3500s (240GB) - much more bang for the buck, and it still fit my
> > budget and could deal with 80% of the network bandwidth.
> >
>
> So the 3710s would be an OK solution?

No, because they start at 200GB with a 300 USD price tag. The 3710s
do not replace the 3700s, they extend the selection upwards (in size
mostly).

I thought I had corrected that - I was thinking of the 3700s and typed 3710 :)
 

>I have seen the 3700s for right
> about $200, which doesn't seem a lot cheaper, but when getting 6
> it does shave off about $200 once shipping costs are included as well...
>
See above, Google Shopping. The lowballer is Walmart, of all places:

http://www.walmart.com/ip/26972768?wmlspartner=wlpa&selectedSellerId=0


>
> >
> > >
> > > >
> > > > Guesstimate the amount of data written to your cluster per day,
> > > > break that down to the load a journal SSD will see and then
> > > > multiply by at least 5 to be on the safe side. Then see which SSD
> > > > will fit your expected usage pattern.
> > > >
> > >
> > > Luckily I don't think there will be a ton of data written per day.
> > > The majority of servers whose VHDs will be stored in our cluster
> > > don't have a lot of frequent activity - aside from a few Windows
> > > servers that have DB servers on them (and even they don't write a
> > > ton of data per day, really).
> > >
> >
> > Being able to put even a coarse number on this will tell you if you can
> > skimp on the endurance and have your cluster last like 5 years, or if
> > getting a higher-endurance SSD is going to be cheaper.
> >
>
> Any suggestions on how I can get a really accurate number on this?  I
> mean, I could probably get some good numbers from the database servers
> in terms of their writes in a given day, but when it comes to other
> processes running in the background I'm not sure how much those might
> really affect this number.
>

If you have existing servers that run Linux and have been up for a
reasonably long time (months), iostat will give you a very good idea.
No idea about Windows, but I bet those stats exist someplace, too.

I can't say months, but at least a month, maybe two - trying to remember when our last extended power outage was - I can find out later.
 

For example, a Ceph storage node, up 74 days, with OS and journals on the
first 4 drives and OSD HDDs on the other 8:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read kB_wrtn
sda               9.82        29.88       187.87  191341125 1203171718
sdb               9.79        29.57       194.22  189367432 1243850846
sdc               9.77        29.83       188.89  191061000 1209676622
sdd               8.77        29.57       175.40  189399240 1123294410
sde               5.24       354.19        55.68 2268306443  356604748
sdi               5.02       335.61        63.60 2149338787  407307544
sdj               4.96       350.33        52.43 2243590803  335751320
sdl               5.04       374.62        48.49 2399170183  310559488
sdf               4.85       354.52        50.43 2270401571  322947192
sdh               4.77       332.38        50.60 2128622471  324065888
sdg               6.26       403.97        65.42 2587109283  418931316
sdk               5.86       385.36        55.61 2467921295  356120140
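
As a rough illustration, here is a minimal sketch of how a kB_wrtn/s figure
like the ones above can be turned into an endurance estimate, following the
"multiply by at least 5" rule of thumb quoted earlier. The write rate and the
endurance rating below are illustrative assumptions, not measurements from any
particular cluster:

# Rough endurance estimate from an iostat kB_wrtn/s value.
# Both input numbers are assumptions for illustration only.
kb_wrtn_per_sec = 190.0          # e.g. kB_wrtn/s of one journal device above
safety_factor = 5                # "multiply by at least 5" rule of thumb

gb_per_day = kb_wrtn_per_sec * 86400 / 1024 / 1024
tb_per_year = gb_per_day * 365 / 1024 * safety_factor

# The 100GB DC S3700 is rated for roughly 1.8PB of writes (10 drive writes
# per day over 5 years); treat this as an approximate figure.
endurance_tb = 1800.0

print("~%.1f GB/day, ~%.1f TB/year with safety factor" % (gb_per_day, tb_per_year))
print("estimated journal SSD lifetime: ~%.0f years" % (endurance_tb / tb_per_year))

The point is simply to put a number on it, so the endurance ratings of the
candidate drives can be compared directly.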

I do have some Linux VMs that have been up for a while - can't say how many months since the last extended power outage offhand (granted, I'll know once I look at the uptime) - but hopefully it will at least give me an idea.


>
> >
> >
> > >
> > So it's 2x1Gb/s then?
> >
>
> client side 2x1, cluster side 3x1.
>
So 500MB/s with a tailwind on a sunny day.

Meaning that something that can do about 400MB/s will do nicely, as you're
only ever going to get near that when doing massive backfilling AND client
writes.
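
For reference, a back-of-the-envelope sketch of where that ceiling comes from,
assuming 2x1GbE client-facing and 3x1GbE cluster-facing links and treating
usable throughput as roughly the raw line rate:

# Rough network ceiling for journal writes on one node.
# Link counts are from the thread; everything else is raw line rate.
mb_per_gbit = 1000 / 8.0                    # ~125 MB/s per 1Gb/s link

client_mbs = 2 * mb_per_gbit                # ~250 MB/s of client writes
cluster_mbs = 3 * mb_per_gbit               # ~375 MB/s of replication/backfill

print("theoretical ceiling: ~%d MB/s" % (client_mbs + cluster_mbs))

In practice you won't sustain the full ~625 MB/s, which is where the 400-500MB/s
figure comes from; split across 2 journal SSDs per node, that's roughly 200MB/s
each, matching the "you really wouldn't want less than 200MB/s" remark earlier.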

Yeah.  Eventually one day I'll get them to move to 10Gig, but it won't be until it comes way down in price.
 

>
> >
> > At that speed a single SSD from the list above would do, if you're
> > a) aware of the risk that this SSD failing will kill all OSDs on that
> > node and
> > b) don't expect your cluster to be upgraded
> >
>
> I'd really prefer 2 per node from our discussions so far - it's all a
> matter of cost, but I also don't want to jump to a poor decision just
> because the better option can't be afforded immediately.  I'd rather
> gradually upgrade nodes as it can be afforded than jump into something
> cheap now, only to have to pay a bigger price later.
>
Yup, 2 is clearly better; I'd go with 2 100GB DC S3700s.


I'll have to see what I can get done this week.  Thanks for the input - it's really clarified the SSD usage a lot!

-Tony 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
