Question on OSD node failure recovery

The default rules are sane for small clusters with few failure domains.
Anyone running anything larger than a single rack should customize their rules.

It's a good idea to figure this out early.  Changes to your CRUSH rules can
result in a large percentage of data moving around, which can leave your
cluster effectively unusable until the migration completes.

It is possible to make changes after the cluster already holds a lot of data,
but from what I've been able to figure out, it involves a fair amount of manual
work to migrate data into new pools that use the new rules.
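Roughly, that path looks like the sketch below.  The pool name, rule name, and
PG counts are just placeholders, and rados cppool has its own caveats
(snapshots, in-flight writes), so treat this as an outline rather than a
recipe:

    # add a rack-level rule (root=default, failure domain=rack)
    ceph osd crush rule create-simple replicated_rack default rack

    # find the new rule's id, then create a pool that uses it
    ceph osd crush rule dump
    ceph osd pool create rbd_new 2048 2048
    ceph osd pool set rbd_new crush_ruleset <ruleset-id>

    # copy the data across and repoint clients at the new pool
    rados cppool rbd rbd_new

Alternatively, pointing an existing pool at the new rule with the same
crush_ruleset setting will trigger the data movement described above.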




On Thu, Aug 21, 2014 at 6:23 AM, Sean Noonan <Sean.Noonan at twosigma.com>
wrote:

> Ceph uses CRUSH (http://ceph.com/docs/master/rados/operations/crush-map/)
> to determine object placement.  The default generated crush maps are sane,
> in that they will put replicas in placement groups into separate failure
> domains.  You do not need to worry about this simple failure case, but you
> should consider the network and disk i/o consequences of re-replicating
> large amounts of data.
>
> Sean
> ________________________________________
> From: ceph-users [ceph-users-bounces at lists.ceph.com] on behalf of
> LaBarre, James  (CTR)      A6IT [James.LaBarre at Cigna.com]
> Sent: Thursday, August 21, 2014 9:17 AM
> To: ceph-users at ceph.com
> Subject: [ceph-users] Question on OSD node failure recovery
>
> I understand the concept with Ceph being able to recover from the failure
> of an OSD (presumably with a single OSD being on a single disk), but I'm
> wondering what the scenario is if an OSD server node containing multiple
> disks should fail.  Presuming you have a server containing 8-10 disks, your
> duplicated placement groups could end up on the same system.  From diagrams
> I've seen they show duplicates going to separate nodes, but is this in fact
> how it handles it?
>
>

