Re: Using locally replicated OSDs to reduce Ceph replication


 



On Wed, Apr 17, 2013 at 06:23:43PM -0400, Gregory Farnum wrote:
> On Wed, Apr 17, 2013 at 3:02 PM, I wrote:
> > In particular, has anyone tried making a big RAID set (of any type) and
> > carving out space (logical volumes, zvols, etc.) to become virtual OSDs?
> > Any architectural gotchas with this idea?
> 
> I believe there are some people running with this architecture;
> there's just less knowledge about how it behaves in the long term. It
> should be fine subject to the standard issues with RAID5/6 small
> writes, which OSDs do a lot of (and I don't know why you'd bother
> using a mirroring RAID instead of Ceph replication!).
> I can say that there would be little point to carving up the arrays
> into multiple OSDs; other than that, have fun. :)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com

So you think it'd be worth trying just running a few really large OSDs in
a configuration like that?  I wasn't sure that would scale as well, but I'm
still pretty new to Ceph.

About mirroring/RAID vs. Ceph replication: I was under the impression that
with that many replicas, writes would generate a lot of extra network
traffic, which might not be optimal.  True enough about RAID 5/6 small
writes, though.
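
For reference, my understanding is that the replica count driving that
write traffic is configurable, both cluster-wide and per pool, so a setup
with RAID-backed OSDs could presumably run with fewer Ceph-level copies.
A rough sketch of what I mean (pool name "data" is just illustrative):

```ini
# ceph.conf fragment (sketch, not tested here)
[global]
osd pool default size = 2        ; two copies instead of the usual three
osd pool default min size = 1    ; still serve I/O with only one copy up
```

Or at runtime for an existing pool: `ceph osd pool set data size 2`.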

Just gotta try it and see I guess.

Thanks for the feedback!
Steve
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



