Re: which SSD / experiences with Samsung 843T vs. Intel s3700


 



Hi Andrija,

Your feedback is greatly appreciated.

 

Regards,

James

 

From: Andrija Panic [mailto:andrija.panic@xxxxxxxxx]
Sent: Friday, September 04, 2015 12:39 PM
To: James (Fei) Liu-SSI
Cc: Quentin Hartman; ceph-users
Subject: Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

 

James, 

 

There are simple fio or even dd tests on Linux which you can run to see how well an SSD will perform as a Ceph journal device (Ceph writes to the journal SSD with the O_DIRECT and D_SYNC flags). The Samsung 850 performs extremely badly here, as do many other vendors' drives (D_SYNC kills their performance...).

 

If you are not using the D_SYNC flag, then the Samsung can achieve some nice numbers...

dd if=/dev/zero of=/dev/sda bs=4k count=100000 oflag=direct,dsync (where /dev/sda is the raw drive; replace it with a file path, e.g. /root/ddfile, to test through a mounted filesystem)
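For a safer first pass you can run the same comparison against a scratch file instead of a raw device - a minimal sketch (file name and write counts here are illustrative, not from the thread):

```shell
# oflag=dsync forces every 4k write to stable storage before the next one
# starts -- the journal-like pattern slow drives choke on. The dd line above
# adds oflag=direct as well, to also bypass the page cache.
TESTFILE="./ddfile.$$"

echo "--- buffered 4k writes (cache-friendly) ---"
dd if=/dev/zero of="$TESTFILE" bs=4k count=200 2>&1 | tail -n1

echo "--- per-write dsync 4k writes (journal-like) ---"
dd if=/dev/zero of="$TESTFILE" bs=4k count=200 oflag=dsync 2>&1 | tail -n1

rm -f "$TESTFILE"
```

On a drive that handles synchronous writes badly, the second number will be a small fraction of the first.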

 

Thanks

 

On 4 September 2015 at 21:31, James (Fei) Liu-SSI <james.liu@xxxxxxxxxxxxxxx> wrote:

Andrija,

In your email thread, "18.000 (4Kb) IOPS constant write speed" stands for 18K IOPS with a 4k block size, right? However, you can only achieve 200 IOPS with the Samsung 850 Pro, right?

 

Theoretically, the Samsung 850 Pro can get up to 100,000 IOPS for 4k random reads under certain workloads, so something seems strange here.

 

Regards,

James

 

 

From: Andrija Panic [mailto:andrija.panic@xxxxxxxxx]
Sent: Friday, September 04, 2015 12:21 PM
To: Quentin Hartman
Cc: James (Fei) Liu-SSI; ceph-users


Subject: Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

 

Quentin, 

 

try fio or dd with the O_DIRECT and D_SYNC flags, and you will see less than 1 MB/s - that is common for most "home" drives - see the post below to understand why...

We removed all Samsung 850 Pro 256GB drives from our new Ceph installation and replaced them with the Intel S3500 (18.000 (4Kb) IOPS constant write speed with O_DIRECT, D_SYNC, in comparison to 200 IOPS for the Samsung 850 Pro - you can imagine the difference...).

 

Best

 

On 4 September 2015 at 21:09, Quentin Hartman <qhartman@xxxxxxxxxxxxxxxxxxx> wrote:

Mine are also mostly 850 Pros. I have a few 840s, and a few 850 EVOs in there just because I couldn't find 14 Pros at the time we were ordering hardware. I have 14 nodes, each with a single 128 or 120GB SSD that serves as the boot drive and the journal for 3 OSDs. And similarly, mine just started disappearing a few weeks ago. I've now had four fail (three 850 Pro, one 840 Pro). I expect the rest to fail any day.

 

As it turns out I had a phone conversation with the support rep who has been helping me with RMA's today and he's putting together a report with my pertinent information in it to forward on to someone.

 

FWIW, I tried to get your 845's for this deploy, but couldn't find them anywhere, and since the 850's looked about as durable on paper I figured they would do ok. Seems not to be the case.

 

QH

 

On Fri, Sep 4, 2015 at 12:53 PM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

Hi James,

 

I had 3 Ceph nodes as follows: 12 OSDs (HDDs) and 2 SSDs (6 journal partitions on each SSD) per node - the SSDs just vanished with no warning, no smartctl errors, nothing... so 2 SSDs in each of 3 servers vanished within 2-3 weeks, after 3-4 months of being in production (VMs/KVM/CloudStack)

Mine were also Samsung 850 PRO 128GB.

 

Best,

Andrija 

 

On 4 September 2015 at 19:27, James (Fei) Liu-SSI <james.liu@xxxxxxxxxxxxxxx> wrote:

Hi Quentin and Andrija,

Thanks so much for reporting the problems with Samsung.

 

Would it be possible to get to know the configuration of your system?  What kind of workload are you running?  You use the Samsung SSDs as separate journaling disks, right?

 

Thanks so much.

 

James

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Quentin Hartman
Sent: Thursday, September 03, 2015 1:06 PM
To: Andrija Panic
Cc: ceph-users
Subject: Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

 

Yeah, we've ordered some S3700's to replace them already. Should be here early next week. Hopefully they arrive before we have multiple nodes die at once and can no longer rebalance successfully.

 

Most of the drives I have are the 850 Pro 128GB (specifically MZ7KE128HMGA)

There are a couple 120GB 850 EVOs in there too, but ironically, none of them have pooped out yet. 

 

On Thu, Sep 3, 2015 at 1:58 PM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

I really advise removing the bastards before they die... no rebalancing happening, just a temporary OSD down while replacing journals...

What size and model are your Samsungs?

On Sep 3, 2015 7:10 PM, "Quentin Hartman" <qhartman@xxxxxxxxxxxxxxxxxxx> wrote:

We also just started having our 850 Pros die one after the other after about 9 months of service. 3 down, 11 to go... No warning at all, the drive is fine, and then it's not even visible to the machine. According to the stats in hdparm and the calcs I did they should have had years of life left, so it seems that ceph journals definitely do something they do not like, which is not reflected in their stats.

 

QH

 

On Wed, Aug 26, 2015 at 7:15 AM, 10 minus <t10tennn@xxxxxxxxx> wrote:

Hi ,

We got a good deal on the 843T and we are using them in our OpenStack setup... as journals.
They have been running for the last six months... no issues.

When we compared them with Intel SSDs (I think it was the S3700), they were a shade slower for our workload and considerably cheaper.

We did not run any synthetic benchmark since we had a specific use case.

The performance was better than our old setup so it was good enough.

hth

 

On Tue, Aug 25, 2015 at 12:07 PM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

We have some 850 pro 256gb ssds if anyone interested to buy:)

And also there was a new 850 Pro firmware that broke people's disks, which was revoked later, etc. ... I'm sticking with only vacuum cleaners from Samsung for now, maybe... :)

On Aug 25, 2015 12:02 PM, "Voloshanenko Igor" <igor.voloshanenko@xxxxxxxxx> wrote:

To be honest, the Samsung 850 PRO is not a 24/7 series drive... it's more of a desktop+ series, but anyway - the results from these drives are very, very bad in any real-life scenario...

 

Possibly the 845 PRO is better, but we don't want to experiment anymore... so we chose the S3500 240G. Yes, it's cheaper than the S3700 (about 2x), and not as durable for writes, but we think it's better to replace 1 SSD per year than to pay double the price now.

 

2015-08-25 12:59 GMT+03:00 Andrija Panic <andrija.panic@xxxxxxxxx>:

And I should mention that in another Ceph installation we had the Samsung 850 Pro 128GB and all 6 SSDs died within a 2-month period - they simply disappeared from the system, so it was not wear-out...

Never again will we buy Samsung :)

On Aug 25, 2015 11:57 AM, "Andrija Panic" <andrija.panic@xxxxxxxxx> wrote:

First read please:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

We are getting 200 IOPS, in comparison to 18.000 IOPS for the Intel S3500 - those are sustained performance numbers, meaning the drive's cache is bypassed and the test runs for a longer period of time...
Also, if checking with fio, you will get better latencies on the Intel S3500 (the model tested in our case) along with 20x better IOPS results...
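For reference, the fio side of that test (following the approach in the blog post linked above) boils down to a job file along these lines - a sketch, with the target path and runtime as placeholder values you would adapt:

```ini
; Journal-style write test: single-threaded, 4k blocks, queue depth 1,
; with O_DIRECT plus synchronous writes -- mirrors the dd test in this thread.
[journal-test]
; point at a test file, or a raw device (destructive!)
filename=/path/to/testfile
rw=write
bs=4k
direct=1
sync=1
iodepth=1
numjobs=1
runtime=60
time_based=1
group_reporting=1
```

Drives with a proper power-loss-protected write path (like the S3500/S3700) sustain high IOPS here; consumer drives typically collapse to a few hundred.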

We observed the original issue as high speed at the beginning of e.g. a file transfer inside a VM, which then halts to zero... We moved the journals back to HDDs and performance was acceptable... now we are upgrading to the Intel S3500...

Best

Any details on that?

On Tue, 25 Aug 2015 11:42:47 +0200, Andrija Panic
<andrija.panic@xxxxxxxxx> wrote:

> Make sure you test whatever you decide. We just learned this the hard way
> with the Samsung 850 Pro, which is total crap, more than you could imagine...
>
> Andrija
> On Aug 25, 2015 11:25 AM, "Jan Schermer" <jan@xxxxxxxxxxx> wrote:
>
> > I would recommend Samsung 845 DC PRO (not EVO, not just PRO).
> > Very cheap, better than Intel 3610 for sure (and I think it beats even
> > 3700).
> >
> > Jan
> >
> > > On 25 Aug 2015, at 11:23, Christopher Kunz <chrislist@xxxxxxxxxxx>
> > wrote:
> > >
> > > On 25.08.15 at 11:18, Götz Reinicke - IT Koordinator wrote:
> > >> Hi,
> > >>
> > >> most of the times I do get the recommendation from resellers to go with
> > >> the intel s3700 for the journalling.
> > >>
> > > Check out the Intel s3610. 3 drive writes per day for 5 years. Plus, it
> > > is cheaper than S3700.
> > >
> > > Regards,
> > >
> > > --ck
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@xxxxxxxxxxxxxx
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >



--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczewski@xxxxxxxxxxxx



 



 



 



 



 

--

 

Andrija Panić

 



 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
