Re: Theory about min_size and its implications

> so size 4 / min_size 2 would be a lot better (of course)

More copies (or parity) are always more reliable, but one quickly gets into diminishing returns.

In your scenario you might look into stretch mode, which currently would require 4 replicas.  In the future maybe it could support EC with a carefully-chosen profile.
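For reference, entering stretch mode looks roughly like the sketch below (a sketch only, not a runbook: it assumes a release with stretch mode support, uses `room` as the dividing CRUSH bucket to match your layout, and the monitor names and the `stretch_rule` CRUSH rule are placeholders you would define yourself):

```shell
# Stretch mode requires the connectivity election strategy for monitors
ceph mon set election_strategy connectivity

# Tell each monitor which room it lives in (names are placeholders)
ceph mon set_location mon.a room=room1
ceph mon set_location mon.b room=room2
ceph mon set_location mon.t room=arbiter   # tiebreaker, ideally at a third site

# Enter stretch mode: mon.t is the tiebreaker monitor, and stretch_rule
# is a CRUSH rule you created beforehand that places 2 replicas per room
ceph mon enable_stretch_mode mon.t stretch_rule room
```

The tiebreaker monitor at a third location is what lets the surviving room keep quorum when one room goes dark.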

> we have to stay at 3/2 for now though, because our OSDs are filled 60% in sum
> 
> maybe someone can answer additional questions:
> 
> - what is the best practice to avoid a full OSD scenario, where ceph tries to recreate all 3 replicas in one of the two rooms when the other room is down? because 3 replicas don't fit in one room obviously. 

If your CRUSH rule limits replicas to 2 per room, then you’ll have roughly half of your PGs undersized and inactive, and the other half undersized and active.  In such a situation you could temporarily set min_size=1 on a given pool, with associated risks.
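For context, a rule that pins 2 replicas per room typically looks something like this (an illustrative sketch; the rule name and id are made up):

```
rule replicated_two_rooms {
    id 1
    type replicated
    step take default
    step choose firstn 2 type room          # pick both rooms
    step chooseleaf firstn 2 type host      # up to 2 OSDs (on distinct hosts) per room
    step emit
}
```

And the temporary escape hatch mentioned above would be `ceph osd pool set <pool> min_size 1`, reverted as soon as the other room is back, since running with a single surviving replica risks data loss on the next failure.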

Do you have any ability to spread the nodes across *3* rooms?

> - when does it make sense to migrate a pool to Erasure Coding, when doesn't it make sense, and does Erasure Coding NEED a caching tier as some article on the internet stated? I think using a caching tier helps with migrating but other than that I don't understand why we'd need a caching tier.

Cache tiers are deprecated; AIUI, RHCS even compiles them out of its packages.  They can be tricky to get right.

As far as when EC makes sense, there are lots of factors, including:
* Your media type
* How important write throughput/latency are
* Your workload.  If e.g. your workload is RGW or CephFS with predominantly tiny objects, EC can result in increased space amplification on bucket pools.
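To make the small-object point concrete, here is a back-of-the-envelope calculation (a sketch; it assumes a 4 KiB BlueStore min_alloc_size, and the helper name is made up):

```python
import math

def ec_space(obj_size, k, m, alloc=4096):
    """Approximate bytes consumed by one object under EC k+m,
    with each chunk padded up to the allocation unit."""
    chunk = math.ceil(obj_size / k)                         # per-chunk data size
    padded = max(alloc, math.ceil(chunk / alloc) * alloc)   # round up to alloc unit
    return (k + m) * padded                                 # k data + m parity chunks

# A 4 KiB object under EC 4+2: all 6 chunks still cost a full allocation unit,
# so it consumes 24 KiB -- worse than the 12 KiB that 3-way replication would use.
print(ec_space(4096, 4, 2))          # 24576
print(3 * 4096)                      # 12288

# A 4 MiB object under EC 4+2 gets the nominal 1.5x overhead: 6 MiB vs 12 MiB.
print(ec_space(4 * 1024 * 1024, 4, 2))
```

In other words, for objects smaller than roughly k times the allocation unit, EC's per-chunk padding can make it *less* space-efficient than replication.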

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



