Journal SSD durability

On Tue, 13 May 2014 14:46:23 +0200 Xabier Elkano wrote:

> On 13/05/14 14:23, Christian Balzer wrote:
> > On Tue, 13 May 2014 12:07:12 +0200 Xabier Elkano wrote:
> >
> >> On 13/05/14 11:31, Christian Balzer wrote:
> >>> Hello,
> >>>
> >>> No actual question, just some food for thought and something that
> >>> later generations can scour from the ML archive.
> >>>
> >>> I'm planning another Ceph storage cluster, this time a "classic" Ceph
> >>> design, 3 storage nodes with 8 HDDs for OSDs and 4 SSDs for OS and
> >>> journal.
> >> Christian, do you have many clusters in production? Are there any
> >> advantages to many clusters vs. different pools per cluster? What is
> >> the right way to go: maintain one big cluster or different clusters?
> > Nope, I'm certainly a Ceph newb in many ways. That will be my third.
> >
> > The reasons for having different clusters can be locality (one is not
> > at our main DC) and also special use cases (speed vs. size vs. cost vs.
> > density, etc).
> >
> > Pools can pretty much cover a lot of the reasons why one would have
> > different clusters, and I think the lower administrative overhead makes
> > them quite attractive.
> >
> >>> When juggling the budget for it, the 12 DC3700 200GB SSDs of my first
> >>> draft stood out like the proverbial sore thumb at nearly 1/6th of the
> >>> total budget.
> >>> I really like those SSDs with their smooth performance and durability
> >>> of 1TB/day writes (over 5 years, same for all the other numbers
> >>> below), but wondered if that was really needed. 
> >>>
> >>> This cluster is supposed to provide the storage for VMs (Vservers
> >>> really) that are currently on 3 DRBD cluster pairs.
> >>> Not particularly write-intensive, all of them total just about
> >>> 20GB/day. With 2 journals per SSD that's 5GB/day of writes, well
> >>> within the Intel specification of 20GB/day for their 530 drives
> >>> (180GB version).
> >>>
> >>> However, the uneven IOPS of the 530 and potential future changes in
> >>> write patterns make this 300% safety margin still too slim for my
> >>> liking.
> >>>
> >>> Alas, a DC3500 240GB SSD will perform well enough at half the price
> >>> of the DC3700 and gives me enough breathing room at about 80GB/day of
> >>> writes, so this is what I will order in the end.
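
A quick back-of-the-envelope sketch of the journal write math quoted above,
in plain Python. The endurance figures (20GB/day for the 530 180GB,
80GB/day for the DC S3500 240GB, 1TB/day for the DC S3700 200GB, all over
5 years) are the ones quoted in this thread; the 3x replication factor is
an assumption and not stated explicitly here.

  # Journal writes per SSD and resulting endurance headroom (sketch).
  client_writes_gb_per_day = 20      # total across all VMs, as stated above
  replication = 3                    # ASSUMED pool size, not stated in the thread
  ssds = 3 * 4                       # 3 nodes x 4 journal SSDs

  per_ssd = client_writes_gb_per_day * replication / ssds
  print(f"journal writes per SSD: {per_ssd:.0f} GB/day")    # ~5 GB/day

  # Headroom against the endurance ratings quoted in the thread:
  for model, rating in [("Intel 530 180GB", 20),             # ~4x, the '300%' margin
                        ("Intel DC S3500 240GB", 80),
                        ("Intel DC S3700 200GB", 1000)]:
      print(f"{model}: {rating / per_ssd:.0f}x headroom")
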
> >> Did you consider the DC3700 100G at a similar price?
> > The 3500 is already potentially slower than the actual HDDs when doing
> > sequential writes, and the 100GB 3700 most definitely is.
> >
> > Christian.
> What type of disks are you going to use for the OSDs? The 3700 100G can
> handle 200MB/s in sequential writes. Is this not enough for journaling 2
> SAS disks?
> 
Toshiba DT01ACA300, which according to the link below and my own testing
can do sustained sequential writes of 140MB/s.

http://www.tomshardware.com/charts/hdd-charts-2013/-04-Write-Throughput-Average-h2benchw-3.16,2904.html
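
To spell out why I consider the S3500 (and even more so the 100GB S3700) a
potential sequential write bottleneck: with 2 journals per SSD, each SSD
has to absorb the combined write stream of 2 of those HDDs. A rough sketch;
the 200MB/s figure is the one Xabier quoted, while the sequential write
numbers for the S3500 240GB and S3700 200GB are assumed from memory of the
datasheets, so double-check them.

  # Per-SSD sequential write budget with 2 HDD journals per SSD (sketch).
  hdd_seq_write_mb_s = 140                 # Toshiba DT01ACA300, see link above
  needed = 2 * hdd_seq_write_mb_s          # 280 MB/s per journal SSD

  candidates = {
      "Intel DC S3700 100GB": 200,         # figure quoted in this thread
      "Intel DC S3500 240GB": 260,         # ASSUMED datasheet value
      "Intel DC S3700 200GB": 365,         # ASSUMED datasheet value
  }
  for model, seq_write in candidates.items():
      verdict = "keeps up" if seq_write >= needed else "bottleneck"
      print(f"{model}: {seq_write} MB/s vs {needed} MB/s needed -> {verdict}")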

In the cluster I'm talking about in the other thread I basically (and
knowingly) crippled sequential writes by even having an SSD in front of the
storage device (I don't need that sequential speed).
So with this new one I will try to have as few bottlenecks as possible, to
make sure I get a very good understanding of how fast one can make things
when planning for future large-scale deployments.

Mind, the higher IOPS and WAY higher endurance of the 3700 may have me
reconsider my choice, but given that this cluster will actually have 2GB/s
(Byte, not bit) of network bandwidth (front and backend Infiniband) and
1.12GB/s of HDD bandwidth per storage node, only the 200GB DC3700 would
really be "good enough". ^o^
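
Putting numbers on that "good enough": the per-node write ceiling is simply
the smallest of the HDD, journal SSD and network aggregates. The 365MB/s
sequential write figure for the 200GB DC3700 is again an assumed datasheet
value, so take the exact ceiling with a grain of salt.

  # Per-node write ceiling: min of HDD, journal SSD and network bandwidth (sketch).
  hdd_total_mb_s = 8 * 140           # 1120 MB/s, i.e. the 1.12GB/s above
  network_mb_s = 2000                # 2GB/s Infiniband, Byte not bit
  journal_total_mb_s = 4 * 365       # 4x DC S3700 200GB, ASSUMED ~365 MB/s each

  ceiling = min(hdd_total_mb_s, journal_total_mb_s, network_mb_s)
  print(f"node write ceiling: ~{ceiling} MB/s (the HDDs remain the limit)")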

Christian

> Xabier
> >
> >>> Christian
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users at lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

