On 10/27/2014 04:30 PM, Mike wrote:
> Hello,
> My company is planning to build a big Ceph cluster for archiving and
> storing data.
> By requirement from the customer - 70% of capacity is SATA, 30% SSD.
> On the first day data is stored on SSD storage, and the next day it is
> moved to SATA storage.

How are you planning on moving this data? Do you expect Ceph to do this?
(A rough sketch of doing it from the application side is in the P.S. below.)

What kind of access to Ceph are you planning on using? RBD? Raw RADOS?
The RADOS Gateway (S3/Swift)?

> For now we have decided to use a SuperMicro SKU with 72 bays for HDDs =
> 22 SSD + 50 SATA drives per server.

Those are some serious machines. It will require a LOT of CPU power to
run 72 OSDs in each of them. Probably 4 CPUs per machine.

> Our racks can hold 10 of these servers, and with 50 such racks in the
> Ceph cluster = 36,000 OSDs.

36,000 OSDs shouldn't really be the problem, but you are thinking at a
really big scale here.

> With 4 TB SATA drives, replica = 2 and a nearfull ratio of 0.8 we have
> 40 petabytes of useful capacity.
>
> Is this too big, or a normal use case for Ceph?

No, it's not too big for Ceph. This is what it was designed for. But a
setup like this shouldn't be taken lightly.

Think about the network connectivity required to connect all these
machines and the other decisions that have to be made.

--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
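
P.S. If the SSD-to-SATA move has to happen outside of Ceph, i.e. your
application drives it over raw RADOS, a minimal sketch with python-rados
could look like the one below. The pool names 'ssd-pool' and 'sata-pool'
are only placeholders here; it assumes two pools mapped to the SSD and
SATA OSDs through separate CRUSH rules, and objects small enough to read
in one piece. (Ceph's cache tiering can also demote data between pools,
but whether it fits a strict "day one on SSD, day two on SATA" policy is
another question.)

    import rados

    # Connect using the local ceph.conf and the default keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Hypothetical pool names; each pool would be tied to the SSD or SATA
    # OSDs by its own CRUSH rule.
    ssd = cluster.open_ioctx('ssd-pool')
    sata = cluster.open_ioctx('sata-pool')

    # Copy every object to the SATA pool, then drop it from the SSD pool.
    # Real code would filter on object age and handle partial failures.
    for obj in ssd.list_objects():
        size, _mtime = ssd.stat(obj.key)
        data = ssd.read(obj.key, size)
        sata.write_full(obj.key, data)
        ssd.remove_object(obj.key)

    ssd.close()
    sata.close()
    cluster.shutdown()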
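
As for the 40 PB figure, a quick back-of-the-envelope check (again just a
sketch, assuming only the 50 SATA drives per server count towards the
archive capacity):

    # 50 racks x 10 servers x 50 SATA drives of 4 TB each,
    # replica count 2, stopping at the 0.8 nearfull ratio.
    sata_osds = 50 * 10 * 50            # 25,000 SATA OSDs
    raw_pb = sata_osds * 4 / 1000.0     # 100 PB raw
    usable_pb = raw_pb / 2 * 0.8
    print(usable_pb)                    # -> 40.0

So the 40 PB checks out for the SATA portion; the 22 SSDs per server come
on top of that.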