Re: architecture questions - OSD layout

Thanks for the input, guys.  I think I will end up going with some sort
of RAID grouping, simply because if we decide in the future to use this
and scale to, say, 4000 or so drives like our current system, I don't
want two drive failures out of 4000 to cause downtime or I/O errors on
clients. At the moment we often have 3 or 4 drives out or rebuilding at
the same time across those 4000.
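
For what it's worth, here's a rough back-of-envelope sketch of the
difference (the 10-drive RAID6 group size is just an assumption for
illustration, not a decided layout):

# One cosd per raw disk vs. one cosd per RAID6 group, looking at two
# simultaneous drive failures. The 4000-drive figure is from above;
# the 10-drive (8+2) group size is assumed.
total_drives = 4000
group_size = 10

# Per-disk layout: every drive failure is a whole-OSD failure, so two
# failed drives always mean two OSDs down at once.
per_disk_osds = total_drives

# RAID6 layout: a group only becomes a failed OSD after a *third*
# failure inside that same group.
grouped_osds = total_drives // group_size
p_same_group = float(group_size - 1) / (total_drives - 1)

print("per-disk: %d OSDs, any 2 drive failures = 2 OSDs down"
      % per_disk_osds)
print("RAID6:    %d OSDs, P(2 failures share a group) = %.2f%%,"
      " and that group still survives" % (grouped_osds, 100 * p_same_group))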

The flapping issue is also good to know about. The OSD hardware will
probably be quad-core, 8-thread with 16 or 32 GB of RAM, so we'll maybe
run 16-20 cosd processes across ~200 drives.
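
As a quick sanity check against the ~1 GHz-per-cosd figure Greg
mentions below, the per-cosd share of a node like that works out as
follows (the 2.4 GHz clock is just an assumed number for a
quad-core/8-thread part):

# Per-cosd share of one proposed node. Clock speed is assumed; the
# rest comes from the numbers above (16-32 GB RAM, 16-20 cosds,
# ~200 drives).
cores = 4
clock_ghz = 2.4              # assumption -- plug in the real clock
ram_gb = 32                  # upper end of the 16-32 GB range
cosds = 18                   # middle of the 16-20 range
drives = 200

print("drives per cosd : %.1f" % (float(drives) / cosds))
print("GHz per cosd    : %.2f (vs. the ~1.0 guideline below)"
      % (cores * clock_ghz / cosds))
print("GB RAM per cosd : %.1f" % (float(ram_gb) / cosds))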

Still, I'll try a few configs and see what I get. I'd be happy to post
the numbers I gather if anyone is interested, and of course I'll report
any problems through the proper channels. Right now I'm having a few
systems built, and we'll start testing with 84 drives.
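
One way to lay out a small test matrix for the 84-drive boxes (the
group sizes below are just illustrative picks that divide 84 evenly;
1 means one cosd per raw disk):

# Candidate RAID group sizes for one 84-drive test box and the
# resulting cosd count; purely illustrative picks.
drives = 84
for group in (1, 4, 6, 7, 12, 14):
    print("groups of %2d drives -> %2d cosd processes"
          % (group, drives // group))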

On Thu, Jul 21, 2011 at 10:34 AM, Gregory Farnum
<gregory.farnum@xxxxxxxxxxxxx> wrote:
> On Thu, Jul 21, 2011 at 9:25 AM, Colin Patrick McCabe
> <colin.mccabe@xxxxxxxxxxxxx> wrote:
>> On the other hand, there is some fixed overhead to having multiple
>> cosd processes running. RAIDing a few disks might be a good option if
>> the fixed overhead of having a cosd per disk is too much.
> This in particular is important. In general the cosd processes won't
> use much CPU or memory, but in some situations (like recovery) they
> can spike in a correlated fashion that ends up causing OSD flapping.
> These behavior characteristics aren't well-modeled yet simply because
> there's bug fixing still going on in those code paths (and some
> optimization), but it's something to be aware of. I think you want
> something like 1GHz of a modern core per cosd to handle this, even
> though a stable system will often do just fine running 4 cosds on an
> Atom. (Presumably we can bring these requirements down once we move on
> from stabilization to performance.)
> -Greg
>