Re: Ceph cluster with SSDs

On Wed, 23 Aug 2017 16:48:12 +0530 M Ranga Swami Reddy wrote:

> On Mon, Aug 21, 2017 at 5:37 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> > On Mon, 21 Aug 2017 17:13:10 +0530 M Ranga Swami Reddy wrote:
> >  
> >> Thank you.
> >> Here I have NVMes from Intel, but as Intel does not support these
> >> NVMes, we decided not to use them as journals.  
> >
> > You again fail to provide with specific model numbers...  
> 
> NVMe - Intel DC P3608  - 1.6TB

3 DWPD, so you could put this in front (as journal) of 30 or so of those
Samsungs and it still would last longer.
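
A quick sanity check on that claim, as a sketch (endurance figures are
the rated specs for the drives named in this thread, assuming comparable
5-year warranty periods):

  # Endurance comparison: one Intel DC P3608 as journal in front of
  # a stack of Samsung 850 EVOs.
  p3608_tb, p3608_dwpd = 1.6, 3.0    # Intel DC P3608 1.6TB, 3 DWPD
  evo_tb, evo_dwpd = 4.0, 0.04       # Samsung 850 EVO 4TB, 0.04 DWPD

  p3608_daily = p3608_tb * p3608_dwpd  # 4.8 TB of writes per day
  evo_daily = evo_tb * evo_dwpd        # 0.16 TB of writes per day each

  # A journal device absorbs the full write stream of every OSD behind
  # it, so one P3608 keeps up with p3608_daily/evo_daily EVOs.
  print(f"one P3608 matches ~{p3608_daily / evo_daily:.0f} EVOs")  # ~30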

Christian

> 
> Thanks
> Swami
> 
> > No support from Intel suggests that these may be consumer models again.
> >
> > Samsung also makes DC grade SSDs and NVMEs, as Adrian pointed out.
> >  
> >> Btw, if we split this SSD across multiple OSDs (for example: 1 SSD
> >> with 2 or 4 OSDs), would this help the performance numbers?
> >>  
> > Of course not; if anything, it will make it worse due to the overhead
> > outside the SSD itself.
> >
> > Christian
> >  
> >> On Sun, Aug 20, 2017 at 9:33 AM, Christian Balzer <chibi@xxxxxxx> wrote:  
> >> >
> >> > Hello,
> >> >
> >> > On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote:
> >> >  
> >> >> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage -
> >> >> MZ-75E4T0B/AM | Samsung
> >> >>  
> >> > And there's your answer.
> >> >
> >> > A bit of googling in the archives here would have shown you that these are
> >> > TOTALLY unsuitable for use with Ceph.
> >> > Not only because of the horrid speed when used with/for Ceph journaling
> >> > (direct/sync I/O) but also their abysmal endurance of 0.04 DWPD over 5
> >> > years.
> >> > Or in other words 160GB/day, which after the Ceph journal double writes
> >> > and FS journals, other overhead and write amplification in general
> >> > probably means less than an effective 40GB/day.
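
For reference, the arithmetic behind those numbers (the overall ~4x
write amplification from journal double writes plus FS overhead is an
assumed, illustrative factor):

  capacity_gb = 4000                   # Samsung 850 EVO 4TB
  dwpd = 0.04                          # rated drive writes per day
  raw_budget = capacity_gb * dwpd      # 160 GB/day of raw NAND writes

  # Each client write lands in the Ceph journal and then the data
  # partition (2x), plus FS journaling and general write amplification;
  # ~4x overall is an assumption for illustration.
  write_amplification = 4.0
  print(f"~{raw_budget / write_amplification:.0f} GB/day effective")  # ~40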
> >> >
> >> > In contrast the lowest endurance DC grade SSDs tend to be 0.3 DWPD and
> >> > more commonly 1 DWPD.
> >> > And I'm not buying anything below 3 DWPD for use with Ceph.
> >> >
> >> > Your only chance to improve the speed here is to take the journals off
> >> > them and put them onto fast and durable enough NVMes like the Intel DC
> >> > P3700 or, at worst, P3600 types.
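
For anyone following along, offloading a FileStore journal typically
looks like the sketch below. The device path, OSD id and systemd unit
name are placeholders/assumptions; --flush-journal and --mkjournal are
the FileStore-era ceph-osd flags.

  import subprocess

  osd_id = "0"                      # placeholder OSD id
  nvme_part = "/dev/nvme0n1p1"      # assumed pre-created journal partition
  journal = f"/var/lib/ceph/osd/ceph-{osd_id}/journal"

  def run(*cmd):
      print(" ".join(cmd))
      subprocess.check_call(cmd)

  run("systemctl", "stop", f"ceph-osd@{osd_id}")    # stop the OSD
  run("ceph-osd", "-i", osd_id, "--flush-journal")  # drain the old journal
  run("rm", "-f", journal)                          # remove old journal file
  run("ln", "-s", nvme_part, journal)               # point at the NVMe
  run("ceph-osd", "-i", osd_id, "--mkjournal")      # init the new journal
  run("systemctl", "start", f"ceph-osd@{osd_id}")   # bring the OSD back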
> >> >
> >> > That still leaves you with their crappy endurance, only twice as high
> >> > as before with the journals offloaded.
> >> >
> >> > Christian
> >> >  
> >> >> On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
> >> >> <swamireddy@xxxxxxxxx> wrote:  
> >> >> > Yes, it's in production, and we used the PG count as per the PG calculator @ ceph.com.
> >> >> >
> >> >> > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet <ceph@xxxxxxxxxx> wrote:  
> >> >> >> Which SSDs are used? Are they in production? If so, what is your PG count?
> >> >> >>
> >> >> >> Am 17. August 2017 20:04:25 MESZ schrieb M Ranga Swami Reddy
> >> >> >> <swamireddy@xxxxxxxxx>:  
> >> >> >>>
> >> >> >>> Hello,
> >> >> >>> I am using a Ceph cluster with both HDDs and SSDs, with a
> >> >> >>> separate pool for each.
> >> >> >>> Now, when I run "ceph osd bench", the HDD OSDs show around
> >> >> >>> 500 MB/s while the SSD OSDs show around 280 MB/s.
> >> >> >>>
> >> >> >>> Ideally, I expected the SSD OSDs to be at least 40% faster
> >> >> >>> than the HDD OSDs in the bench.
> >> >> >>>
> >> >> >>> Did I miss anything here? Any hint is appreciated.
> >> >> >>>
> >> >> >>> Thanks
> >> >> >>> Swami
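
In case it helps reproduce those numbers, a minimal sketch of driving
the per-OSD write benchmark and extracting throughput (assuming a
release where "ceph tell osd.N bench -f json" reports a bytes_per_sec
field; field names can vary between versions):

  import json
  import subprocess

  def osd_bench_mbps(osd_id):
      # Runs the built-in per-OSD write benchmark (default ~1GB).
      out = subprocess.check_output(
          ["ceph", "tell", f"osd.{osd_id}", "bench", "-f", "json"])
      return json.loads(out)["bytes_per_sec"] / 1e6

  for osd in (0, 1):  # placeholder ids: one HDD-backed, one SSD-backed
      print(f"osd.{osd}: {osd_bench_mbps(osd):.0f} MB/s")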
> >> >
> >> >
> >>  
> >
> >
> 


-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


