Re: SSD Journal


> On 13 July 2016 at 11:34, Ashley Merrick <ashley@xxxxxxxxxxxxxx> wrote:
> 
> 
> Hello,
> 
> Looking at using 2 x 960GB SSDs (SM863).
> 
> The reason for the larger size is that I was thinking I would be better off with them in RAID 1, so there is enough space for the OS and all the journals.
> 
> Instead, am I better off using 2 x 200GB S3700s, with 5 disks per SSD?
> 

Both the Samsung SM and Intel DC (3510/3710) SSDs are good. If you can, put the OS on its own device. Maybe a SATA-DOM, for example?

Wido
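As a rough sanity check on whether 5 journals fit on a 200GB SSD, the Ceph docs give the sizing rule of thumb `osd journal size >= 2 * expected throughput * filestore max sync interval`. The sketch below applies that formula; the 150 MB/s disk throughput and 10 GB partition size are illustrative assumptions, not figures from this thread:

```python
# Rough FileStore journal sizing, per the Ceph docs rule of thumb:
#   osd journal size >= 2 * expected throughput * filestore max sync interval
# Throughput and partition figures below are assumptions for illustration.

def journal_size_gb(throughput_mb_s, sync_interval_s=5, safety=2):
    """Minimum journal size in GB for one OSD behind a spinning disk."""
    return safety * throughput_mb_s * sync_interval_s / 1024.0

per_osd = journal_size_gb(150)  # one SATA OSD at ~150 MB/s sequential

# Even with generous 10 GB journal partitions, five journals use only
# a quarter of a 200 GB S3700:
total = 5 * 10

print(per_osd, total)
```

So on sizing alone the smaller S3700s are comfortably sufficient; the larger SM863s buy endurance headroom rather than needed capacity.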

> Thanks,
> Ashley
> 
> -----Original Message-----
> From: Christian Balzer [mailto:chibi@xxxxxxx] 
> Sent: 13 July 2016 01:12
> To: ceph-users@xxxxxxxxxxxxxx
> Cc: Wido den Hollander <wido@xxxxxxxx>; Ashley Merrick <ashley@xxxxxxxxxxxxxx>
> Subject: Re:  SSD Journal
> 
> 
> Hello,
> 
> On Tue, 12 Jul 2016 19:14:14 +0200 (CEST) Wido den Hollander wrote:
> 
> > 
> > > On 12 July 2016 at 15:31, Ashley Merrick <ashley@xxxxxxxxxxxxxx> wrote:
> > > 
> > > 
> > > Hello,
> > > 
> > > Looking at the final stages of planning / setup for a Ceph cluster.
> > > 
> > > Per storage node, looking at:
> > > 
> > > 2 x SSD OS / Journal
> > > 10 x SATA Disk
> > > 
> > > There will be a small RAID 1 partition for the OS, but I'm not sure which is best:
> > > 
> > > 5 x Journals per SSD
> > 
> > The best solution. It will give you the most performance for the OSDs. RAID-1 will just burn through write cycles on the SSDs.
> > 
> > SSDs don't fail that often.
> >
> What Wido wrote, but let us know what SSDs you're planning to use.
> 
> Because the detailed version of that sentence should read: 
> "Well known and tested DC-level SSDs whose size/endurance levels are matched to the workload rarely fail, especially unexpectedly."
>  
> > Wido
> > 
> > > 10 x Journals on a RAID 1 of two SSDs
> > > 
> > > Is the "performance" increase from splitting 5 journals across each SSD worth the "issue" caused when one SSD goes down?
> > > 
> As always, assume at least a full node as the failure domain you need to be able to handle.
> 
> Christian
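To put that trade-off in numbers: with 5 journals per SSD, one journal-SSD failure takes down 5 of the node's 10 OSDs, which is well within a failure domain already sized for losing the whole node. A quick sketch (the 6-node cluster size is an assumption):

```python
# Blast radius of a single journal-SSD failure (figures are illustrative).
osds_per_node = 10
journals_per_ssd = 5
nodes = 6  # assumed cluster size, not from the thread

osds_lost = journals_per_ssd                         # OSDs down with the SSD
node_fraction = osds_lost / osds_per_node            # half of one node
cluster_fraction = osds_lost / (osds_per_node * nodes)  # share of the cluster

print(node_fraction, cluster_fraction)
```

If the cluster can already absorb a full node failure, losing half a node's OSDs to one SSD is survivable, which is why the 5-per-SSD layout wins over RAID-1 journals.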
> 
> > > Thanks,
> > > Ashley
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@xxxxxxxxxxxxxx
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
> http://www.gol.com/


