Re: Ceph with high disk densities?

On Mon, Oct 7, 2013 at 9:15 AM, Scott Devoid <devoid@xxxxxxx> wrote:
> I brought this up within the context of the RAID discussion, but it did not
> garner any responses. [1]
>
> In our small test deployments (160 HDDs and OSDs across 20 machines), our
> performance is quickly bounded by CPU and memory overhead. These are 2U
> machines with 2x 6-core Nehalem CPUs, and running 8 OSDs per machine
> consumed 25% of the total CPU time. This was a cuttlefish deployment.

That sounds about right. One of Ceph's design goals was to use the CPU
power that is generally available in storage boxes to make your
storage better — it is not targeted as a low-power way to aggregate
the spare drives in your compute servers.
That said, we are pretty much always on the lookout for ways to reduce
CPU requirements, so you may see this go down a respectable amount in
the future.

> This seems like a rather high CPU overhead, particularly when we are looking
> to hit a density target of 10-15 4TB drives per U within 1.5 years. Does anyone
> have suggestions for hitting this requirement? Are there ways to reduce CPU
> and memory overhead per OSD?

There are a few tradeoffs you can make to reduce memory usage (I
believe the big one is maintaining a shorter PG log, which lets nodes
catch up without going through a full backfill), and there is also a
relationship between CPU/memory usage and PG count — but of course the
cost of reducing PGs is a less even storage distribution.
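For what it's worth, the PG log length is controlled by the
osd_min_pg_log_entries / osd_max_pg_log_entries options; the values below
are purely illustrative, not recommendations, so test against your own
workload before shrinking them:

```ini
[osd]
; Shorter PG logs lower per-OSD memory, at the cost of failed or lagging
; OSDs more often needing a full backfill instead of log-based recovery.
; Illustrative values only -- tune for your cluster.
osd min pg log entries = 500
osd max pg log entries = 1000
```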

> My one suggestion was to do some form of RAID to join multiple drives and
> present them to a single OSD. A 2-drive RAID-0 would halve the OSD overhead
> while doubling the failure rate and doubling the rebalance overhead. It is
> not clear to me if that is better or not.

I expect that some form of RAID will be necessary on the hyper-dense
systems that vendors are starting to come up with, yes. Nobody has
enough experience with a running system yet to know if that's a good
tradeoff to make.
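To make the tradeoff Scott describes concrete, here is a
back-of-the-envelope model (all numbers are my own illustrative
assumptions, not measurements from any cluster): a 2-drive RAID-0 OSD
fails if either member fails, so it roughly doubles the per-OSD failure
rate, and each failure rebalances twice the capacity.

```python
# Back-of-the-envelope model of the 2-drive RAID-0 tradeoff.
# All inputs (drive count, size, AFR) are illustrative assumptions.

def expected_rebalance_tb_per_year(num_drives, drive_tb, drive_afr,
                                   drives_per_osd):
    """Expected data moved per year due to OSD failures.

    A RAID-0 OSD fails when any member drive fails, so its annual
    failure rate is approximately drives_per_osd * drive_afr (union
    bound, fine for small rates), and each failure rebalances the
    whole OSD's capacity.
    """
    num_osds = num_drives // drives_per_osd
    osd_afr = drives_per_osd * drive_afr
    osd_capacity_tb = drives_per_osd * drive_tb
    return num_osds * osd_afr * osd_capacity_tb

# 160 x 4TB drives at an assumed 4% annual failure rate:
single = expected_rebalance_tb_per_year(160, 4, 0.04, drives_per_osd=1)
raid0 = expected_rebalance_tb_per_year(160, 4, 0.04, drives_per_osd=2)
print(single, raid0)  # RAID-0 halves the OSD count but doubles
                      # expected rebalance traffic (25.6 vs 51.2 TB/yr)
```

The OSD count (and hence the per-OSD CPU/memory overhead) halves, while
the expected rebalance volume doubles; whether that trade is worth it
depends on how CPU-bound versus recovery-bound a given cluster is.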
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com