Re: List of SSDs

> Honestly, we are scared to try the same tests with the m600s. When we
> first put them in, we had them more full, but we backed them off to
> reduce the load on them.

I see.
Did you tune anything at the Linux layer, such as:

 vm.vfs_cache_pressure

You don't need to go into specifics, since I'm just asking out of curiosity.
I guess there is a lot of tuning in your cluster environment.
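
Just for reference, if you did touch it, that is normally only a one-line
sysctl change, along these lines (the value and the file name here are only
an example, not a recommendation):

 sysctl -w vm.vfs_cache_pressure=50
 # persist across reboots (file name is only illustrative):
 echo "vm.vfs_cache_pressure = 50" > /etc/sysctl.d/90-vfs-cache.conf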

Rgds,
Shinobu

----- Original Message -----
From: "Robert LeBlanc" <robert@xxxxxxxxxxxxx>
To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
Cc: "Christian Balzer" <chibi@xxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Saturday, February 27, 2016 9:46:36 AM
Subject: Re:  List of SSDs


Honestly, we are scared to try the same tests with the m600s. When we
first put them in, we had them more full, but we backed them off to
reduce the load on them. Based on that, I don't expect them to fare any
better. We'd love to get more IOPs out of our clusters, given what the
s3610s are capable of. We constantly tune the cluster and try to
contribute code back to Ceph that helps under high load and congestion.
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Feb 26, 2016 at 5:41 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
> Thank you for your very valuable output.
> "s3610s write iops high-load" is very interesting to me.
> Have you ever run the same set of tests on the m600s that you ran on the s3610s?
>
>> These clusters normally service 12K IOPs with bursts up to 22K IOPs all RBD. I've seen a peak of 64K IOPs from client traffic.
>
> That's a pretty good result, isn't it?
> I guess you've been tuning your cluster?
>
> Rgds,
> Shinobu
>
> ----- Original Message -----
> From: "Robert LeBlanc" <robert@xxxxxxxxxxxxx>
> To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
> Cc: "Christian Balzer" <chibi@xxxxxxx>, ceph-users@xxxxxxxxxxxxxx
> Sent: Saturday, February 27, 2016 8:52:34 AM
> Subject: Re:  List of SSDs
>
> A picture is worth a thousand words:
>
>
> The red lines are the m600s' IO time (dotted) and IOPs (solid); our
> baseline s3610s are in green and our test set of s3610s in blue.
>
> We used OSD weighting to control how many PGs each SSD took. The m600s are
> 1TB while the s3610s are 800GB, and we only have the m600s about half
> full. So we weighted the s3610s individually until their usage was within
> about 40GB of the m600s. We then did the same weighting to match the m600s'
> percentage of used space, and again at 80% usage. This graph steps from 50%
> to 70% and finally very close to 80%.
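>
> To give a rough idea of what that looked like, it was essentially repeated
> rounds of checking per-OSD utilization and nudging CRUSH weights, along
> these lines (the OSD ID and weight value below are only illustrative):
>
>  ceph osd df                           # per-OSD weight and utilization
>  ceph osd crush reweight osd.12 0.80   # shift some PGs off one SSD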
>
> We currently have two production clusters; a third of about the same size
> will be built in the next month.
>
> 16 nodes, each with 3x 1TB m600 drives and 9x 4TB HGST HDDs, a single
> E5-2640v2, 64 GB RAM, dual 40 Gigabit Ethernet ports, and direct-attached
> SATA. These clusters normally service 12K IOPs with bursts up to 22K IOPs,
> all RBD. I've seen a peak of 64K IOPs from client traffic.
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
> On Fri, Feb 26, 2016 at 4:05 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
>
>> Hello,
>>
>> > We started having high wait times on the M600s so we got 6 S3610s, 6
>> > M500dcs, and 6 500 GB M600s (they have the SLC to MLC conversion that we
>> > thought might work better).
>>
>> Is it working better as you were expecting?
>>
>> > We have graphite gathering stats on the admin sockets for Ceph and the
>> > standard system stats.
>>
>> Very cool!
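>>
>> Just so I understand: is that basically polling each OSD's admin socket
>> with something like the command below (assuming the default socket path)
>> and feeding the counters into graphite?
>>
>>  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump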
>>
>> > We weighted the drives so they had the same byte usage and let them run
>> > for a week or so, then made them the same percentage of used space, let
>> > them run a couple of weeks, then set them to 80% full and let them run a
>> > couple of weeks.
>>
>> Almost exactly the same *byte* usage? I'm pretty interested in how you
>> achieved that.
>>
>> > We compared IOPS and IO time of the drives to get our comparison.
>>
>> What is your feeling about the comparison?
>>
>> > This was done on live production clusters and not synthetic benchmarks.
>>
>> How large is your production Ceph cluster?
>>
>> Rgds,
>> Shinobu
>>
>> >
>> > Hello,
>> >
>> > On Wed, 24 Feb 2016 22:56:15 -0700 Robert LeBlanc wrote:
>> >
>> > > We are moving to the Intel S3610; from our testing it is a good
>> > > balance between price, performance and longevity. But as with all
>> > > things, do your testing ahead of time. This will be our third model of
>> > > SSD for our cluster. The S3500s didn't have enough life, and
>> > > performance tapers off as they get full. The Micron M600s looked good
>> > > in the Sébastien Han journal tests, but after being in use for a while
>> > > they go downhill pretty badly. We also tested Micron M500dc drives and
>> > > they were on par with the S3610s, but they are more expensive and
>> > > closer to EoL. The S3700s didn't have quite the same performance as
>> > > the S3610s, but they will last forever, are very stable in terms of
>> > > performance, and have the best power loss protection.
>> > >
>> > That's interesting. How did you come to that conclusion, and how did you
>> > test it?
>> > Also, which models did you compare?
>> >
>> >
>> > > Short answer is test them for yourself to make sure they will work. You
>> > > are pretty safe with the Intel S3xxx drives. The Micron M500dc is also
>> > > pretty safe based on my experience. It had also been mentioned that
>> > > someone has had good experience with a Samsung DC Pro (has to have both
>> > > DC and Pro in the name), but we weren't able to get any quickly enough
>> > > to test, so I can't vouch for them.
>> > >
>> > I have some Samsung DC Pro EVOs in production (non-Ceph, see that
>> > non-barrier thread).
>> > They do have issues with LSI occasionally; I haven't gotten around to
>> > making that FS non-barrier to see if it fixes things.
>> >
>> > The EVOs are also similar to the Intel DC S3500s, meaning that they are
>> > not really suitable for Ceph due to their limited endurance.
>> >
>> > Never tested the "real" DC Pro ones, but they are likely to be OK.
>> >
>> > Christian
>> >
>> > > Sent from a mobile device, please excuse any typos.
>> > > On Feb 24, 2016 6:37 PM, "Shinobu Kinjo" <skinjo@xxxxxxxxxx> wrote:
>> > >
>> > > > Hello,
>> > > >
>> > > > There has been a bunch of discussion about using SSDs.
>> > > > Does anyone have a list of SSDs describing which are highly
>> > > > recommended and which are not?
>> > > >
>> > > > Rgds,
>> > > > Shinobu
>> > > > _______________________________________________
>> > > > ceph-users mailing list
>> > > > ceph-users@xxxxxxxxxxxxxx
>> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> > > >
>> >
>> >
>> > --
>> > Christian Balzer        Network/Systems Engineer
>> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
>> > http://www.gol.com/
>> >
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


