Hi Loic,

That's an interesting idea. I suppose the same could probably be achieved by just creating more "CRUSH host buckets" under each physical host and then treating the actual physical host as a chassis (Chassis-1 contains Host-1-A, Host-1-B, etc.).

I was thinking about this some more and I don't think my original idea of k=6 m=2 will allow me to sustain a host + disk failure, as that would involve 3 lost chunks in total (assuming the 2 chunks on the failed host count as failures). I believe k=5 m=3 would be a better match.

Nick

-----Original Message-----
From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
Sent: 05 January 2015 17:38
To: Nick Fisk; ceph-users@xxxxxxxx
Subject: Re: Erasure Encoding Chunks > Number of Hosts

Hi Nick,

What about subdividing your hosts using containers? For instance, four containers per host on your four hosts gives you 16 hosts. When you add more hosts you move containers around and reduce the number of containers per host, but you don't need to change the rulesets.

Cheers

On 05/01/2015 17:58, Nick Fisk wrote:
> Hi All,
>
> Would anybody have an idea of a) whether it's possible and b) whether it's a good idea to have more EC chunks than the total number of hosts?
>
> For instance, if I wanted k=6 m=2 across only 4 hosts, and I wanted to be able to withstand 1 host failure plus 1 disk failure (on any host), would a CRUSH map rule be able to achieve that?
>
> I.e. it would instruct data to be split evenly across hosts first and then across OSDs?
>
> If I set the erasure profile failure domain to OSD and the crushmap to chooseleaf host, will this effectively achieve what I have described?
>
> I would be interested in doing this for two reasons: one is better capacity than k=2 m=2, and the other is that when I expand this cluster to 8 hosts in the near future I won't have to worry about re-creating the pool. I fully understand I would forfeit the ability to withstand losing 2 hosts, but I would think that quite an unlikely event with only 4 hosts to start with.
>
> Many thanks,
> Nick

--
Loïc Dachary, Artisan Logiciel Libre

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
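
For reference, a rough sketch of the kind of CRUSH rule discussed above, assuming a k=5 m=3 profile (8 chunks), 4 hosts under the default root, and arbitrary profile/rule names (ec53, ecpool_4host); treat it as an illustration, not a tested ruleset. It picks 4 hosts and then 2 OSDs under each, so a whole-host failure costs at most 2 chunks and one further disk failure still stays within m=3:

    # Hypothetical profile (parameter names as in Firefly/Giant-era releases):
    #   ceph osd erasure-code-profile set ec53 k=5 m=3 ruleset-failure-domain=osd
    rule ecpool_4host {
            ruleset 1
            type erasure
            min_size 8
            max_size 8
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            # pick 4 distinct hosts, then 2 OSDs inside each host
            step choose indep 4 type host
            step choose indep 2 type osd
            step emit
    }

The rule can be added by decompiling the CRUSH map with crushtool -d, editing it, recompiling with crushtool -c and injecting it with ceph osd setcrushmap -i. Once the cluster grows to 8 hosts, the stock one-chunk-per-host placement (step chooseleaf indep 0 type host) should suffice and the pool's rule can be switched without re-creating the pool.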