Re: SSD test results with Plextor M6 Pro, HyperX Fury, Kingston V300, ADATA SP90


 



Those are interesting numbers - can you rerun the test with the write cache enabled this time? I wonder how big the drop will be…
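[Editor's note: a minimal sketch of toggling the volatile write cache for the rerun Jan asks for, using hdparm. /dev/sdX is a placeholder device name, not one from the original thread; the script only touches the drive if that device actually exists.]

```shell
# Sketch: enable the drive's volatile write cache before rerunning the benchmark.
# /dev/sdX is a hypothetical placeholder - substitute the SSD under test.
DEV=/dev/sdX
if [ -b "$DEV" ]; then
    hdparm -W1 "$DEV"   # enable the write cache (hdparm -W0 disables it)
    hdparm -W  "$DEV"   # read the setting back to confirm it took effect
    status="enabled write cache on $DEV"
else
    status="skipped: /dev/sdX is not a block device on this machine"
fi
echo "$status"
```

Note that -W toggles the drive's own volatile cache, so a power loss can drop cached writes unless the SSD has power-loss protection, which is exactly why the cached-vs-uncached comparison is interesting for journal use.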

thanks

Jan

> On 18 Jun 2015, at 17:48, Jelle de Jong <jelledejong@xxxxxxxxxxxxx> wrote:
> 
> Hello everybody,
> 
> I thought I would share the benchmarks from these four SSDs I tested
> (see attachment).
> 
> I still have a few questions:
> 
> #1     *    Data Set Management TRIM supported (limit 1 block)
>    vs
>       *    Data Set Management TRIM supported (limit 8 blocks)
> and how this affects Ceph, and also how I can test whether TRIM is
> actually working and not corrupting data.
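[Editor's note: on question #1, my understanding is that the "limit N blocks" figure is the number of 512-byte range-list blocks the drive accepts per DATA SET MANAGEMENT command, so a higher limit lets the kernel discard more ranges per TRIM command; it affects efficiency, not correctness. A common way to test that TRIM works without silently corrupting data is to write a known pattern, discard it, and read it back. Below is a sketch of that check; it is DESTRUCTIVE, /dev/sdX is a placeholder for a scratch device, and it assumes util-linux blkdiscard and hdparm are available.]

```shell
# Sketch: verify TRIM on a SCRATCH device (destroys the first MiB of data!).
# /dev/sdX is a hypothetical placeholder - never point this at a disk in use.
DEV=/dev/sdX
if [ -b "$DEV" ]; then
    hdparm -I "$DEV" | grep -i trim                 # shows TRIM support and the block limit
    dd if=/dev/urandom of="$DEV" bs=1M count=1 oflag=direct   # write a known random pattern
    blkdiscard --offset 0 --length 1048576 "$DEV"             # TRIM the first MiB
    # On drives reporting "Deterministic read ZEROs after TRIM" the readback is all zeroes;
    # other drives may legitimately return stale data even though TRIM worked.
    result="post-TRIM first MiB bytes: $(dd if="$DEV" bs=1M count=1 iflag=direct 2>/dev/null \
        | od -An -tx1 | tr -d ' \n' | sort -u)"
else
    result="skipped: $DEV is not a block device here"
fi
echo "$result"
```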
> 
> #2 are there other things I should test to compare SSDs for Ceph journals?
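[Editor's note: on question #2, the test most often cited for journal suitability is small synchronous direct writes, since the journal does O_DSYNC appends - many consumer SSDs that look fast in normal benchmarks collapse under this pattern. A sketch with fio, assuming fio is installed and /dev/sdX is a placeholder scratch device:]

```shell
# Sketch: journal-style workload - single job, 4k synchronous direct sequential writes.
# /dev/sdX is a hypothetical placeholder scratch device (this overwrites it!).
DEV=/dev/sdX
if command -v fio >/dev/null 2>&1 && [ -b "$DEV" ]; then
    fio --name=journal-test --filename="$DEV" --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting
    note="ran fio journal-style test on $DEV"
else
    note="skipped: need fio installed and a scratch block device"
fi
echo "$note"
```

Comparing the sustained IOPS from this run against the drive's headline numbers is usually far more telling for journal use than throughput benchmarks.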
> 
> #3 are the power-loss protection mechanisms on SSDs relevant for Ceph
> when it is configured so that a full node can die, and a power loss of
> all nodes at the same time should not be possible (or has an extremely
> low probability)?
> 
> #4 how to benchmark the OSD (disk+ssd-journal) combination so I can
> compare them.
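[Editor's note: on question #4, two stock tools exist once the OSDs are running: `ceph tell osd.N bench`, which exercises a single OSD's disk+journal path, and `rados bench`, which measures the cluster as a whole. A sketch, assuming a running cluster and a throwaway pool named "bench" (a hypothetical name, create it first):]

```shell
# Sketch: benchmark OSDs in place. Assumes a reachable Ceph cluster and a
# disposable pool called "bench" (hypothetical - create/delete it yourself).
if command -v rados >/dev/null 2>&1 && [ -e /etc/ceph/ceph.conf ]; then
    ceph tell osd.0 bench                       # raw write bench on one OSD (disk+journal)
    rados bench -p bench 60 write --no-cleanup  # 60 s cluster-level write benchmark
    rados bench -p bench 60 seq                 # sequential reads of the objects just written
    state="ran OSD and rados benchmarks"
else
    state="skipped: ceph/rados tools or cluster config not present on this host"
fi
echo "$state"
```

Running `ceph tell osd.N bench` per OSD is handy for spotting one slow disk+journal pair; `rados bench` shows what the combination delivers end to end.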
> 
> I have some other benchmark questions, but I will send a separate mail
> for them.
> 
> Kind regards,
> 
> Jelle de Jong
> <setup-ceph01-ceph-ssd-benchmark.txt>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
