Re: Recommendations for building 1PB RadosGW with Erasure Code


 



Just to add, check out this excellent paper by Mark:

http://www.spinics.net/lists/ceph-users/attachments/pdf6QGsF7Xi1G.pdf

Unfortunately his test hardware at the time didn't have enough horsepower to give an accurate view of the CPU required for EC pools across all the tests, but you should still get a fairly good idea of the hardware requirements from it.



> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Nick Fisk
> Sent: 16 February 2016 08:12
> To: 'Tyler Bishop' <tyler.bishop@xxxxxxxxxxxxxxxxx>; 'Василий Ангапов'
> <angapov@xxxxxxxxx>
> Cc: 'ceph-users' <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: Recommendations for building 1PB RadosGW with
> Erasure Code
> 
> 
> 
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> > Of Tyler Bishop
> > Sent: 16 February 2016 04:20
> > To: Василий Ангапов <angapov@xxxxxxxxx>
> > Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> > Subject: Re: Recommendations for building 1PB RadosGW with
> > Erasure Code
> >
> > You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
> >
> > We run 4 systems at 56x6TB with dual E5-2660 v2 and 256GB RAM.
> > Performance is excellent.
> 
> Only thing I will say to the OP is that if you only need 1PB, then 4-5 of
> these will likely give you enough capacity. Personally I would prefer to spread
> the capacity across more nodes. If you are doing anything serious with Ceph,
> it's normally a good idea to try and make each node no more than 10% of total
> capacity. Also, with EC pools you will be limited in the K+M combinations you
> can achieve with a smaller number of nodes.
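> 
> As a rough worked example, assuming a host failure domain so each of the K+M
> chunks lands on a different node: with only 5 nodes you are capped at K+M=5,
> i.e. something like k=3 m=2 at 1.67x raw overhead, whereas with 10 or more
> nodes you could run k=8 m=2 at 1.25x overhead and still survive two host
> failures.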
> 
> >
> > I would recommend a cache tier for sure if your data is busy for reads.
> >
> > Tyler Bishop
> > Chief Technical Officer
> > 513-299-7108 x10
> >
> >
> >
> > Tyler.Bishop@xxxxxxxxxxxxxxxxx
> >
> >
> > If you are not the intended recipient of this transmission you are
> > notified that disclosing, copying, distributing or taking any action
> > in reliance on the contents of this information is strictly prohibited.
> >
> > ----- Original Message -----
> > From: "Василий Ангапов" <angapov@xxxxxxxxx>
> > To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> > Sent: Friday, February 12, 2016 7:44:07 AM
> > Subject: Recommendations for building 1PB RadosGW with
> > Erasure Code
> >
> > Hello,
> >
> > We are planning to build a 1PB Ceph cluster for RadosGW with Erasure
> > Code. It will be used for storing online videos.
> > We do not expect outstanding write performance; something like 200-
> > 300MB/s of sequential write will be quite enough, but data safety is
> > very important.
> > What are the most popular hardware and software recommendations?
> > 1) What EC profile is best to use? What values of K/M do you recommend?
> 
> The higher the total k+m, the more CPU you will need, and sequential
> performance will degrade slightly as the I/Os going to the disks become
> smaller. However, larger values allow you to be more creative with failure
> scenarios and "replication" efficiency.
> 
> > 2) Do I need to use Cache Tier for RadosGW or is it only needed for
> > RBD? Is it still an overall good practice to use Cache Tier for RadosGW?
> 
> It's only strictly needed for RBD, but depending on the workload RadosGW may
> still benefit. If you are mostly doing large I/Os, the gains will be a lot
> smaller.
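> 
> If you do try it, a minimal writeback cache tier in front of an EC pool looks
> roughly like this (pool names are placeholders; you would want to tune the
> hit_set and target_max_bytes settings for your workload):
> 
>   ceph osd tier add ecpool cachepool
>   ceph osd tier cache-mode cachepool writeback
>   ceph osd tier set-overlay ecpool cachepool
>   ceph osd pool set cachepool hit_set_type bloom
>   ceph osd pool set cachepool target_max_bytes 1099511627776   # ~1TB cap, example only
> 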
> > 3) What hardware is recommended for EC? I assume higher-clocked CPUs
> > are needed? What about RAM?
> 
> Total GHz is more important (i.e. GHz x cores). Go with the most cost- and
> power-efficient CPUs you can get. Aim for somewhere around 1GHz per disk.
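> 
> As a rough worked example, a 56-drive node at ~1GHz per disk wants roughly
> 56GHz of aggregate CPU, which is about what a pair of 14-core 2.0GHz parts
> (2 x 14 x 2.0 = 56GHz) gives you.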
> 
> > 4) What SSDs for Ceph journals are the best?
> 
> Intel S3700, or P3700 if you can stretch to it.
> 
> By all means explore other options, but you can't go wrong buying these.
> Think of the "you can't get fired for buying Cisco" quote!
> 
> >
> > Thanks a lot!
> >
> > Regards, Vasily.
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



