Re: Ceph cluster with SSDs

On Mon, 21 Aug 2017 01:48:49 +0000 Adrian Saul wrote:

> > SSD make details: Samsung SSD 850 EVO 2.5" SATA III 4TB (MZ-75E4T0B/AM)
> 
> The performance difference between these and the SM or PM863 range is night and day.  I would not use these for anything where you care about performance, particularly IOPS or latency.
> Their write latency is highly variable and even at best is still 5x higher than what the SM863 range does.  When we compared them we could not get them below 6ms and they frequently spiked to much higher values (25-30ms).  With the SM863s it was a constant sub-1ms and didn't fluctuate.  I believe it was the garbage collection on the Evos that caused the issue.  Here was the difference in average latencies from a pool made of half Evos and half SM863s:
> 
> Write latency - Evo 7.64ms - SM863 0.55ms
> Read latency  - Evo 2.56ms - SM863 0.16ms
> 
Yup, you get this unpredictable (and thus unsuitable) behaviour and
generally higher latency with nearly all consumer SSDs.
And yes, it's typically GC related.
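
A quick way to see this on a live cluster is to watch the per-OSD
latencies while there's write load on the pool, e.g.:

  watch -n1 ceph osd perf

(commit/apply latency per OSD in ms; the exact column names depend on
your release). On the Evos that number tends to jump all over the place
once GC kicks in, while the DC drives stay flat.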

The reason they're so slow with sync writes is, with near certainty, that
their large DRAM cache is useless here, as said cache isn't protected
against power failures and thus needs to be bypassed.
Other consumer SSDs (IIRC Intel 510s amongst them) used to blatantly lie
about sync writes and thus appeared fast while putting your data at
significant risk.
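
If you want to verify this for a given drive before buying a pile of
them, the usual single-threaded sync write test with fio shows it
immediately. Something along these lines (destructive to whatever is on
the target, so use a spare disk; /dev/sdX is a placeholder):

  fio --name=sync-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

A proper DC/enterprise SATA SSD will sustain this at sub-millisecond
latency and tens of thousands of IOPS, while consumer drives that have
to bypass their cache usually land an order of magnitude or more below
that, with exactly the kind of spikes Adrian describes.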

Christian

> Add to that Christian's remarks on the write endurance and they are only good for desktops that won't exercise them that much.  You are far better off investing in DC/Enterprise grade devices.
> 
> 
> 
> 
> >
> > On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
> > <swamireddy@xxxxxxxxx> wrote:  
> > > Yes, it's in production and I used the PG count as per the PG
> > > calculator @ ceph.com.
> > >
> > > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet <ceph@xxxxxxxxxx> wrote:  
> > >> Which SSDs are used? Are they in production? If so, how is your PG count?
> > >>
> > >> On 17 August 2017 20:04:25 CEST, M Ranga Swami Reddy
> > >> <swamireddy@xxxxxxxxx> wrote:
> > >>>
> > >>> Hello,
> > >>> I am using a Ceph cluster with HDDs and SSDs, with a separate
> > >>> pool for each.
> > >>> Now, when I ran "ceph osd bench", the HDD OSDs show around 500
> > >>> MB/s and the SSD OSDs show around 280 MB/s.
> > >>>
> > >>> Ideally, I expected the SSD OSDs to be at least 40%
> > >>> higher than the HDD OSD bench results.
> > >>>
> > >>> Did I miss anything here? Any hint is appreciated.
> > >>>
> > >>> Thanks
> > >>> Swami


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


