Re: Advice to create an EC pool with 75% raw capacity usable

Hello

Thanks for your reply
Sorry, I'm a relative newcomer to the Ceph world.

A 6+3 or 8+4 profile with host as the failure domain is acceptable in terms of usable space.
We have 5 more hosts (currently ZFS NAS boxes; we will migrate their data to Ceph to free these hosts up and expand the cluster).

On our first Ceph cluster, 6 hosts with 8 SSD OSDs each, we use this configuration, which is fine for us in terms of performance and space:
rule erasure_ruleset {
  ruleset 2
  type erasure
  step take default
  step choose indep 4 type host   # pick 4 distinct hosts
  step choose indep 3 type osd    # then 3 OSDs on each host -> 4 x 3 = 12 chunks for k=8, m=4
  step emit
}
and the output of ceph osd erasure-code-profile get cephfs:
crush-failure-domain=host
crush-root=default
directory=/usr/lib64/ceph/erasure-code
jerasure-per-chunk-alignment=false
k=8
m=4
packetsize=4096
plugin=jerasure
technique=reed_sol_van
w=8

Do you think this is good for the 15-node HDD cluster? We lose 1/3, but that's acceptable:
it's cold storage used only with CephFS, and we store only big files there.
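
Rough numbers for the new cluster, just to check my reasoning (15 hosts x 12 x 18 TB, about 3240 TB raw; this ignores the free headroom Ceph needs to stay under the nearfull ratio):

raw=$((15 * 12 * 18))                       # ~3240 TB raw
echo "8+4  : $((raw * 8 / 12)) TB usable"   # loses 1/3 to parity
echo "6+3  : $((raw * 6 / 9)) TB usable"    # same 1/3 overhead, smaller stripe
echo "12+3 : $((raw * 12 / 15)) TB usable"  # only 1/5 overhead, but every write touches all 15 hosts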

We cannot afford a fully replicated cluster, and we need maximum uptime...
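
For what it's worth, here is roughly how I would create the profile and the CephFS data pool on the new cluster if we keep 8+4 (or 6+3, just changing k and m). The profile/pool names and the PG count are only placeholders, I have not run this yet:

ceph osd erasure-code-profile set ec_8_4_hdd \
    k=8 m=4 \
    plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=host crush-device-class=hdd
ceph osd pool create cephfs_data_ec 2048 2048 erasure ec_8_4_hdd
ceph osd pool set cephfs_data_ec allow_ec_overwrites true    # needed to use an EC pool as a CephFS data pool
ceph fs add_data_pool <fsname> cephfs_data_ec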

Regards

----- Original Message -----
> From: "Anthony D'Atri" <aad@xxxxxxxxxxxxxx>
> To: "Bailey Allison" <ballison@xxxxxxxxxxxx>
> Cc: "Danny Webb" <Danny.Webb@xxxxxxxxxxxxxxx>, "Christophe BAILLON" <cb@xxxxxxx>, "ceph-users" <ceph-users@xxxxxxx>
> Sent: Thursday, September 8, 2022 02:27:50
> Subject: Re:  Re: Advice  to create a EC pool with 75% raw capacity usable

> 12+3 means that all writes touch every node.  Notably, backfill / recovery are
> writes.  I suggest that 6+3 would suffice wrt capacity but offer a better
> experience.   When possible, there is benefit in having at least one more
> failure domain than replicas, otherwise every OSD down has an outsize impact on
> capacity.
> 
> 
> 
>> On Sep 7, 2022, at 5:22 PM, Bailey Allison <ballison@xxxxxxxxxxxx> wrote:
>> 
>> Just to add onto Danny, I think a K+M of 12+3 with the failure domain set
>> at the host level would give you what you want? It would actually be
>> 80% usable rather than 75% (12/15 = 80%, 3/15 = 20%), but you could lose 2
>> full hosts and still have access to the cluster.
>> 
>> The only downside is that you could technically lose just, for example, 4
>> individual OSDs across 4 hosts, and if they share the same PG(s) you might
>> not have a very good time, but such is life with erasure coded pools.
>> 
>> 
>> -----Original Message-----
>> From: Danny Webb <Danny.Webb@xxxxxxxxxxxxxxx>
>> Sent: September 7, 2022 11:51 AM
>> To: Christophe BAILLON <cb@xxxxxxx>; ceph-users <ceph-users@xxxxxxx>
>> Subject:  Re: Advice to create a EC pool with 75% raw capacity
>> usable
>> 
>> Seeing as you want to be able to lose 2 hosts and still have a writable
>> cluster, your minimum m is at least 3 (k+1 is usually required for a
>> writable cluster).  Your failure domain will have to be set to host, and
>> your k+m can't be larger than your number of failure domains (so no number
>> larger than 15, and ideally smaller than that so the cluster can rebalance
>> in the event of a hard failure).  I'm not entirely sure what the performance
>> impact of creating an overly large stripe is; you'll have to test that out.
>> ________________________________
>> From: Christophe BAILLON <cb@xxxxxxx>
>> Sent: 07 September 2022 15:25
>> To: ceph-users <ceph-users@xxxxxxx>
>> Subject:  Advice to create a EC pool with 75% raw capacity
>> usable
>> 
>> Hello,
>> 
>> I need advice on the creation of an EC profile and the associated CRUSH rule
>> for a cluster of 15 nodes, each with 12 x 18 TB disks, with the objective of
>> being able to lose 2 hosts or 4 disks.
>> I would like to have the most space available; a 75% ratio would be ideal.
>> 
>> If you can give me some examples or some good links, that would be nice.
>> 
>> Regards
>> 
>> Danny Webb
>> Principal OpenStack Engineer
>> The Hut Group <http://www.thehutgroup.com/>
>> Email: Danny.Webb@xxxxxxxxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



