Hi Nick,

What about subdividing your hosts using containers? For instance, four containers per host on your four hosts gives you 16 "hosts". When you add more physical hosts you move containers around and reduce the number of containers per host, but you don't need to change the rulesets.

Cheers

On 05/01/2015 17:58, Nick Fisk wrote:
> Hi All,
>
> Would anybody have an idea a) if it's possible and b) if it's a good idea
> to have more EC chunks than the total number of hosts?
>
> For instance, if I wanted to have k=6 m=2 but only across 4 hosts, and I wanted to be able to withstand 1 host failure plus 1 disk failure (on any host), would a CRUSH map rule be able to achieve that?
>
> I.e., it would instruct data to be split evenly across hosts first and then across OSDs?
>
> If I set the erasure profile failure domain to OSD and the CRUSH map to chooseleaf host, will this effectively achieve what I have described?
>
> I would be interested in doing this for two reasons: one is better capacity than k=2 m=2, and the other is that when I expand this cluster in the near future to 8 hosts I won't have to worry about re-creating the pool. I fully understand I would forfeit the ability to withstand losing 2 hosts, but I would think this to be quite an unlikely event, having only 2 hosts to start with.
>
> Many thanks,
> Nick

--
Loïc Dachary, Artisan Logiciel Libre
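
For reference, a minimal sketch of the kind of rule Nick describes, splitting chunks across hosts first and then across OSDs. The rule name, the ruleset number, and the "default" root are assumptions; it places 8 chunks as 2 OSDs on each of 4 hosts. Note that a whole-host failure already removes 2 chunks, i.e. all of the redundancy of a k=6 m=2 profile, so an additional disk failure on a surviving host would not be survivable with this layout.

    # Hypothetical rule name; assumes the CRUSH root is called "default".
    # Pick 4 distinct hosts, then 2 distinct OSDs within each of them,
    # yielding the 8 placements needed for k=6 m=2.
    rule ec_k6m2_4hosts {
            ruleset 1
            type erasure
            min_size 3
            max_size 8
            step set_chooseleaf_tries 5
            step take default
            step choose indep 4 type host
            step chooseleaf indep 2 type osd
            step emit
    }

The edited map would be compiled with crushtool, injected with something like "ceph osd setcrushmap -i <file>", and the pool pointed at the rule with "ceph osd pool set <pool> crush_ruleset <n>".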