Re: Real world benefit from SSD Journals for a more read than write cluster

Hello,

On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote:

> Hi again,
> 
> time is passing, so is my budget :-/ and I have to recheck the options
> for a "starter" cluster. An expansion next year, maybe for an OpenStack
> installation or for more performance if demand rises, is possible. The
> "starter" could always be used as a test or slow dark archive.
> 
> At the beginning I was at 16 SATA OSDs with 4 SSDs for journals per
> node, but now I'm looking at 12 SATA OSDs without SSD journals. Less
> performance, less capacity, I know. But that's OK!
> 
Leave the space to upgrade these nodes with SSDs in the future.
If your cluster grows large enough (more than 20 nodes) even a single
P3700 might do the trick and will need only a PCIe slot.

> There should be 6, maybe 8 nodes with the 12 OSDs each, with a repl. of 2.
> 
Danger, Will Robinson.
This is essentially a RAID5 and you're plain asking for a double disk
failure to happen.

See this recent thread:
"calculating maximum number of disk and node failure that can be handled
by cluster with out data loss"
for some discussion and a Python script, which you will need to modify
for 2-disk replication.

With a RAID5 failure calculator you're at 1 data loss event per 3.5
years...
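To get a feel for the numbers with 2 replicas, here is a back-of-the-envelope sketch (this is NOT the script from that thread; the 96-disk count, 4% AFR and 72h recovery window are my assumptions for illustration, so plug in your own):

```python
# Rough MTTDL estimate for a 2-replica pool. All inputs are assumptions.

HOURS_PER_YEAR = 8760.0

def loss_events_per_year(n_disks, afr, recovery_hours):
    """Expected double-failure (data loss) events per year.

    Approximation: first-disk failures per year (n_disks * afr), times
    the chance that any of the remaining disks also fails within the
    recovery window. With typical PG counts, a second failed OSD almost
    certainly shares data with the first, so we count every second
    failure as a loss. Ignores correlated failures, which make reality
    worse, not better.
    """
    first_failures = n_disks * afr
    second_in_window = (n_disks - 1) * afr * (recovery_hours / HOURS_PER_YEAR)
    return first_failures * second_in_window

def mttdl_years(n_disks, afr, recovery_hours):
    """Mean time to data loss, in years."""
    return 1.0 / loss_events_per_year(n_disks, afr, recovery_hours)

if __name__ == "__main__":
    # Assumed: 8 nodes x 12 OSDs, 4% AFR, 72h to re-replicate a lost OSD
    print("MTTDL: %.1f years" % mttdl_years(96, 0.04, 72))
```

With these (fairly optimistic) inputs it comes out to single-digit years; longer rebuild times or a higher AFR push it toward figures like the 3.5 years above.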

> The workload I expect is mostly writes of maybe some GB of Office files
> per day and some TB of larger video files from a few users per week.
> 
> At the end of this year we calculate to have roughly 60 to 80 TB of
> larger video files in that cluster, which are accessed from time to time.
> 
> Any suggestion on the drop of ssd journals?
> 
You will miss them when the cluster does write, be it from clients or when
re-balancing a lost OSD.

Christian
> 	Thanks as always for your feedback. Götz
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



