Re: Using locally replicated OSDs to reduce Ceph replication

On Wed, Apr 17, 2013 at 4:02 PM, Steve Barber <steve.barber@xxxxxxxx> wrote:
> On Wed, Apr 17, 2013 at 06:23:43PM -0400, Gregory Farnum wrote:
>> On Wed, Apr 17, 2013 at 3:02 PM, I wrote:
>> > In particular, has anyone tried making a big RAID set (of any type) and
>> > carving out space (logical volumes, zvols, etc.) to become virtual OSDs?
>> > Any architectural gotchas with this idea?
>>
>> I believe there are some people running with this architecture;
>> there's just less knowledge about how it behaves in the long term. It
>> should be fine subject to the standard issues with RAID5/6 small
>> writes, which OSDs do a lot of (and I don't know why you'd bother
>> using a mirroring RAID instead of Ceph replication!).
>> I can say that there would be little point to carving up the arrays
>> into multiple OSDs; other than that, have fun. :)
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
> So you think it'd be worth trying just running a few really large OSDs in
> a configuration like that?  I wasn't sure that would scale as well, but I'm
> still pretty new to Ceph.

The scaling ought to be fine, though you might need to go through more
config tuning to scale it up for that level of "disk" underneath.
The bigger concern is that losing an OSD means losing a huge chunk of
data at once, but if multiple OSDs all share the same RAID array you
can't really lose them incrementally anyway.
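
(For concreteness, here is a rough sketch of the kind of tuning that
could mean in a FileStore-era ceph.conf. The option names are real,
but the values are only placeholders and would need benchmarking
against your own arrays:)

    [osd]
        # values below are illustrative only
        # each OSD fronts a whole array, so give it more worker threads
        osd op threads = 8
        filestore op threads = 4
        # a larger journal (in MB) to absorb bursts of small writes
        osd journal size = 10240
        # throttle recovery so losing one "big" OSD doesn't swamp the rest
        osd max backfills = 2
        osd recovery max active = 2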

> About mirroring/RAID vs. Ceph replication, I was under the impression that
> there would be a lot of extra network traffic generated by writes with so
> many replicas, which might not be optimal.  True enough about RAID 5/6 small
> writes.

Well, yeah, by sticking RAID underneath you get more reliability
without having to traverse a network. But you've still got a ceiling
on how many physical nodes are storing the data; RAID inside a box
doesn't help you if the whole box goes away.
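
(As a sketch of what leaning on the RAID could look like in practice:
the pool name "rbd" is just an example, and this assumes the default
CRUSH rule's "chooseleaf ... type host" step, so the remaining copies
still land on different physical hosts:)

    # "rbd" is only an example pool name; adjust to taste
    # let the RAID array absorb single-disk failures,
    # and keep 2 Ceph copies to survive a whole-host failure
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1
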
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com



