Re: ceph data store size

On Wed, 1 Aug 2012, Wido den Hollander wrote:
> On 08/01/2012 10:06 AM, Robert Hajime Lanning wrote:
> > Anyone using ceph on multi petabyte data stores?
> 
> I don't think so.

DreamHost is!

> > For example, a cluster of 12 systems with a combined storage of 2.3P.
> > 
> 
> With 12 systems, do you mean 12 servers with a lot of disks? That's
> roughly 200TB per server.
> 
> On that scale I don't think Ceph would work; it would mean you would need
> a HUGE amount of CPU and memory on those boxes to run the OSDs.
> 
> If you want to scale to 2.3P you should be looking in the direction of
> hundreds of small machines, each storing a couple of TB.
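
For concreteness, a minimal ceph.conf sketch of that many-small-hosts
layout, with one modest OSD per disk repeated across many hosts. The
hostnames, device paths, and journal size below are hypothetical, not
from this thread:

    [global]
        auth supported = cephx

    [osd]
        osd data = /var/lib/ceph/osd/$name
        osd journal = /var/lib/ceph/osd/$name/journal
        osd journal size = 1000   # MB

    # one small OSD per disk, repeated across hundreds of hosts
    [osd.0]
        host = node001
        devs = /dev/sdb

    [osd.1]
        host = node001
        devs = /dev/sdc

    [osd.2]
        host = node002
        devs = /dev/sdb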

You can also put ceph-osd daemons in front of large disk arrays, but that 
is a configuration we have less experience with to date.  That said, some 
users are actively evaluating it now, so we'll know soon how well it works 
out...
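
Roughly, fronting an array would mean a few large OSDs per head node,
each backed by a RAID volume exported by the array. A hypothetical
sketch (the hostnames and device paths are illustrative only):

    # a few large ceph-osd daemons, each backed by one RAID volume
    # exported by the disk array
    [osd.0]
        host = array-head1
        devs = /dev/mapper/array-vol0

    [osd.1]
        host = array-head1
        devs = /dev/mapper/array-vol1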

sage

