On 09/07/2015 11:34 AM, Quentin Hartman wrote:
> FWIW, I am not confused about the various types of SSDs that Samsung
> offers. I knew exactly what I was getting when I ordered them. Based
> on their specs and my WAG at how much writing I would be doing, they
> should have lasted about 6 years. It turns out my estimates were
> wrong, but even adjusting for actual use I should have gotten about
> 18 months out of these drives. Instead they are dying at 9 months,
> with about half of their theoretical life left.
>
> A list of hardware that is known to work well would be incredibly
> valuable to people getting started. It doesn't have to be exhaustive,
> nor does it have to provide all the guidance someone could want. A
> simple "these things have worked for others" would be sufficient. If
> nothing else, it would help people justify more expensive gear when
> the people who approve purchases say "X seems just as good and is
> cheaper, why can't we get that?".
So I have my opinions on different drives, but I think we need to be
really careful not to appear to endorse or pick on specific vendors.
The more we can stick to high-level statements like the following, the
better:

- Drives should have high write endurance
- Drives should perform well with O_DSYNC writes (a quick test is
  sketched below)
- Drives should support power loss protection for data in motion

Once those criteria are established, I think it's reasonable to point
out that certain drives meet (or do not meet) them, and to get
feedback from the community on whether a vendor's marketing actually
reflects reality. It would also be really nice to see more information
available about the actual hardware (capacitors, flash cells, etc.)
used in the drives. I've had to show vendors photos of the innards of
specific drives to get accurate information about certain drive
capabilities. Having a database of such things available to the
community would be really helpful.
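
As a concrete illustration of the O_DSYNC point, a single-threaded
sync-write fio job approximates the Ceph journal workload. This is
only a sketch - /dev/sdX is a placeholder, and the run destroys data
on the target device:

  # 4k synchronous writes at queue depth 1: the journal write pattern.
  # WARNING: writes raw to the device; use a scratch drive only.
  fio --name=journal-test --filename=/dev/sdX \
      --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives with real power loss protection typically sustain thousands of
IOPS on this test, while consumer drives often drop to a few hundred
or worse.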
> To that point, I think something perhaps even more important than a
> list of known "good" hardware would be a list of known "bad"
> hardware,
I'm rather hesitant to do that unless it's been specifically confirmed
by the vendor. It's too easy to point fingers (see the recent kernel
TRIM bug situation).
> and perhaps some more experience about what kind of write volume
> people should reasonably expect. Setting aside for a moment the early
> death problem the recent Samsung drives clearly have (I wonder if
> it's a side effect of the "3D NAND" tech?), I wouldn't have bought
> them had my estimates told me I'd only get 18 months out of them.
> That would also have given me the information I needed to justify
> DC-class drives costing four times as much to the people who approve
> purchases. Without that critical piece of information, I'm left
> trying to justify thousands of extra dollars with only "because
> they're better".
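>
> (The endurance arithmetic itself is simple - rated endurance divided
> by actual write volume; the hard part is knowing the real volume. As
> a purely illustrative example with made-up numbers: a drive rated for
> 150 TBW at 70 GB/day of writes gives 150,000 / 70 ≈ 2,100 days,
> roughly 6 years; if write amplification pushes the real volume to
> ~280 GB/day, that drops to about 18 months.)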
>
> Also, I talked to a Samsung rep last week and he told me the 845 DC
> line has been discontinued. Samsung's DC-class drives are now the
> PM863. They are theoretically on the market, but I've not been able
> to find them in stock anywhere.
>
> QH
On Mon, Sep 7, 2015 at 4:22 AM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
It is not just a question of which SSD.
It's the combination of distribution (kernel version), disk
controller and firmware, SSD revision and firmware.
There are several ways to select hardware:
1) The most traditional way, where you build your BoM around a single
vendor - you buy servers including SSDs and HBAs as a single unit and
then scream at the vendor when it doesn't work. I have had good
experiences with vendors in this scenario.
2) Based on Hardware Compatibility Lists - which usually means you
can't use the latest hardware. For example, LSI doesn't list most SSDs
as compatible, or only lists really old firmware versions. Unusable;
nobody will really help you.
3) You get a sample and test it, and hope you get the same hardware
when you order in bulk later. We went this route and got nothing but
trouble when Kingston changed their SSDs completely without changing
the part number.
Would we recommend the S3700/S3710 for Ceph? Absolutely. But there
are still people who have trouble with them in combination with LSI
controllers.
Can we recommend the Samsung 845 DC PRO, then? I can say it worked
nicely with my hardware, but surely some people have had trouble with
it.
For all of those reasons I "vote" against creating such a list - it
could get someone in trouble.
Jan
On 07 Sep 2015, at 11:14, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:
There is
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
On the other hand, I'm not sure SSD vendors would be happy to see
their devices listed as performing like total crap (for journaling)...
but yes, I vote for having some official page if possible!
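
(The core of the test in that post is just synchronous direct writes;
a minimal version - with /dev/sdX standing in for a scratch device,
since it destroys data - looks something like:

  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync

A journal-worthy SSD sustains this at tens of MB/s; many consumer
drives collapse to a few MB/s or less.)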
On 7 September 2015 at 11:12, Eino Tuominen <eino@xxxxxx> wrote:
Hello,
Should we (somebody, please?) gather up a comprehensive list of
suitable SSD devices to use as Ceph journals? This seems to be a FAQ,
and it would be nice if all the knowledge and user experiences from
several different threads could be referenced easily in the future. I
took a look at wiki.ceph.org and there was nothing on this.
--
Eino Tuominen
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
Of Jan Schermer
Sent: 7 September 2015 11:44
To: Christian Balzer
Cc: ceph-users; Межов Игорь Александрович
Subject: Re: which SSD / experiences with Samsung 843T vs. Intel s3700
Re: Samsungs - I feel some of you are mixing up and confusing
different Samsung drives.
There is a DC line of Samsung drives meant for datacenter use. Those
come in EVO (read-mostly, "write once read many") and PRO
(write-mostly) variants. You don't want to go anywhere near the EVO
line with Ceph.
Then there are "regular" EVO and PRO drives - those are not meant for
server use, so don't use them either.
The main difference is that the "DC" line should provide reliable and
stable performance over time with no surprises, while the desktop
drives can just pause to perform garbage collection, and they have a
completely different cache setup. If you torture a desktop drive hard
enough it will protect itself (slow down to a crawl).
So the only usable drives for us are the "DC PRO" line and nothing
else.
Jan
> On 05 Sep 2015, at 04:36, Christian Balzer <chibi@xxxxxxx> wrote:
>
>
> Hello,
>
> On Fri, 4 Sep 2015 22:37:06 +0000 Межов Игорь Александрович wrote:
>
>> Hi!
>>
>>
>> We have worked with the Intel DC S3700 200GB. Due to budget
>> restrictions, one SSD hosts the system volume and the journals for
>> 12 OSDs (1:12). 6 nodes, 120TB raw space.
>>
> Meaning you're limited to ~360MB/s of writes per node at best - the
> 200GB S3700 is rated at roughly 365MB/s sequential write, and every
> journal write on the node funnels through that one SSD.
> But yes, I do understand budget constraints. ^o^
>
>> The cluster serves as RBD storage for ~100 VMs.
>>
>> Not a single failure in a year - all devices are healthy.
>>
>> The remaining endurance (per SMART) is ~92%.
>>
>>
> I use 1:2 or 1:3 journals and haven't made a dent in my 200GB S3700
> yet.
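>
> (For anyone wanting to check wear on their own drives: smartctl
> shows it. On Intel DC drives, SMART attribute 233,
> Media_Wearout_Indicator, starts at 100 and counts down; the device
> name below is a placeholder, and attribute names vary by vendor:
>
>   smartctl -A /dev/sdX | grep -i wearout
> )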
>
>>
>> Now we're trying out the DC S3710 for journals.
>
> As I wrote a few days ago, unless you go for the 400GB version, the
> 200GB S3710 is actually slower (for journal purposes) than the S3700,
> as sequential write speed is the key factor here.
>
> Christian
> --
> Christian Balzer Network/Systems Engineer
> chibi@xxxxxxx   Global OnLine Japan/Fusion Communications
> http://www.gol.com/
--
Andrija Panić
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com