Re: Experiences with the Samsung SM/PM883 disk?

We've been using the PM863 (which is now EOL and replaced by the PM883) for years. Stable disks with consistently good performance; we feel they strike the right balance between price, speed, and capacity.

--
Mark Schouten <mark@xxxxxxxx>
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 



----- Original Message -----


From: Paul Emmerich (paul.emmerich@xxxxxxxx)
Date: 22-03-2019 20:35
To: Jacob DeGlopper (jacob@xxxxxxxx)
Cc: Ceph Users (ceph-users@xxxxxxxxxxxxxx)
Subject: Re:  Experiences with the Samsung SM/PM883 disk?


We've now got 48 of the 960 GB version of the PM883a in production;
they consistently deliver latencies below 1 ms so far, but they are
only loaded with an average of 150 write IOPS in that cluster.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Mar 5, 2019 at 11:07 AM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> Well, I tried a different disk and basically got the same results as Jacob.
> So I just had a bad disk there (the server was fine, as other disks work in it).
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Sun, Feb 24, 2019 at 8:55 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> >
> > That sounds more like the result I expected; maybe there's something
> > wrong with my disk or server (other disks perform fine, though).
> >
> > Paul
> >
> > On Fri, Feb 22, 2019 at 8:25 PM Jacob DeGlopper <jacob@xxxxxxxx> wrote:
> > >
> > > What are you connecting it to?  We just got the exact same drive for
> > > testing, and I'm seeing much higher performance connected to an
> > > onboard 6 Gb/s SATA port on a Supermicro X9 board.
> > >
> > > [root@centos7 jacob]# smartctl -a /dev/sda
> > >
> > > Device Model:     Samsung SSD 883 DCT 960GB
> > > Firmware Version: HXT7104Q
> > > SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
> > >
> > > [root@centos7 jacob]# fio --filename=/dev/sda --direct=1 --sync=1 \
> > >     --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 \
> > >     --time_based --group_reporting --name=journal-test
> > >
> > > write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(3728MiB/60001msec)
> > >
> > > 8 processes:
> > >
> > > write: IOPS=58.1k, BW=227MiB/s (238MB/s)(13.3GiB/60003msec)
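> > >
> > > The 8-process figure presumably comes from the same command with
> > > --numjobs=8 (the exact invocation isn't shown), i.e. something like:
> > >
> > > [root@centos7 jacob]# fio --filename=/dev/sda --direct=1 --sync=1 \
> > >     --rw=write --bs=4k --numjobs=8 --iodepth=1 --runtime=60 \
> > >     --time_based --group_reporting --name=journal-test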
> > >
> > >
> > > On 2/22/19 8:47 AM, Paul Emmerich wrote:
> > > > Hi,
> > > >
> > > > it looks like the beloved Samsung SM/PM863a is no longer available and
> > > > the replacement is the new SM/PM883.
> > > >
> > > > We got a 960 GB PM883 (MZ7LH960HAJR-00005) here and I ran the usual
> > > > fio benchmark... and got horrible results :(
> > > >
> > > > fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
> > > >     --numjobs=1 --iodepth=1 --runtime=60 --time_based \
> > > >     --group_reporting --name=journal-test
> > > >
> > > >   1 thread  - 1150 iops
> > > >   4 threads - 2305 iops
> > > >   8 threads - 4200 iops
> > > > 16 threads - 7230 iops
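> > > >
> > > > (The 4/8/16-thread rows are presumably the same command re-run
> > > > with --numjobs=4/8/16; the exact invocations aren't shown.)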
> > > >
> > > > Now that's a factor of 15 or so slower than the PM863a.
> > > >
> > > > Someone here reports better results with an 883:
> > > > https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
> > > >
> > > > Maybe there's a performance difference between the SM and PM
> > > > variants of these new disks? (That wasn't the case with the 863a.)
> > > >
> > > > Does anyone else have these new 883 disks yet?
> > > > Any experience reports?
> > > >
> > > > Paul



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



