Re: Experiences with the Samsung SM/PM883 disk?

That sounds more like the result I expected; maybe there's something
wrong with my disk or server (other disks perform fine, though).

Paul
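
For reference, the IOPS figures from my original test (quoted below) fall well short of linear scaling as threads are added. A quick back-of-the-envelope check in plain Python (numbers copied from my message; nothing here touches the disk):

```python
# IOPS measured on the PM883 at each fio numjobs setting,
# taken from the quoted message below
results = {1: 1150, 4: 2305, 8: 4200, 16: 7230}

base = results[1]
for jobs in sorted(results):
    iops = results[jobs]
    # efficiency relative to perfect linear scaling from the 1-job result
    efficiency = iops / (base * jobs)
    print(f"{jobs:2d} jobs: {iops:5d} IOPS ({efficiency:.0%} of linear)")
```

So by 16 jobs the drive delivers under 40% of what linear scaling from the single-job number would predict, on top of the low single-job baseline.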

On Fri, Feb 22, 2019 at 8:25 PM Jacob DeGlopper <jacob@xxxxxxxx> wrote:
>
> What are you connecting it to?  We just got the exact same drive for
> testing, and I'm seeing much higher performance, connected to a
> 6 Gb/s SATA port on a Supermicro X9 motherboard.
>
> [root@centos7 jacob]# smartctl -a /dev/sda
>
> Device Model:     Samsung SSD 883 DCT 960GB
> Firmware Version: HXT7104Q
> SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
>
> [root@centos7 jacob]# fio --filename=/dev/sda --direct=1 --sync=1
> --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
> --group_reporting --name=journal-test
>
> write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(3728MiB/60001msec)
>
> 8 processes:
>
> write: IOPS=58.1k, BW=227MiB/s (238MB/s)(13.3GiB/60003msec)
>
>
> On 2/22/19 8:47 AM, Paul Emmerich wrote:
> > Hi,
> >
> > it looks like the beloved Samsung SM/PM863a is no longer available and
> > the replacement is the new SM/PM883.
> >
> > We got an 960GB PM883 (MZ7LH960HAJR-00005) here and I ran the usual
> > fio benchmark... and got horrible results :(
> >
> > fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k
> > --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting
> > --name=journal-test
> >
> >   1 thread  - 1150 iops
> >   4 threads - 2305 iops
> >   8 threads - 4200 iops
> > 16 threads - 7230 iops
> >
> > Now that's a factor of 15 or so slower than the PM863a.
> >
> > Someone here reports better results with an 883:
> > https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
> >
> > Maybe the SM and PM variants of these new disks differ in
> > performance? (That wasn't the case for the 863a.)
> >
> > Does anyone else have these new 883 disks yet?
> > Any experience reports?
> >
> > Paul
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


