On Thu, Aug 24, 2017 at 6:44 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
>> min_size 1
> STOP THE MADNESS. Search the ML to realize why you should never use a
> min_size of 1.

This is a (completely understandable) misunderstanding. The "min_size"
in a crush rule is a different thing from the min_size in a pool.

In a crush rule, the min_size/max_size parameters just declare the
range of pool sizes to which the rule may be applied. This is a legacy
thing from when we had "rule sets" with multiple rules: pools were
assigned a rule set, and each rule's min_size/max_size were used to
decide which rule in the set should apply to a given pool, based on
that pool's size.

I think we still have the field for backward compatibility with
systems that might have had interesting ruleset configurations, but
now is probably a good time to look at trying to find a way to rip it
out, or at least hide it.

John

>
> I'm curious as well as to what this sort of configuration will do for how
> many copies are stored between DCs.
>
> On Thu, Aug 24, 2017 at 1:03 PM Sinan Polat <sinan@xxxxxxxx> wrote:
>>
>> Hi,
>>
>> In a Multi Datacenter Cluster I have the following rulesets:
>>
>> ------------------
>> rule ams5_ssd {
>>         ruleset 1
>>         type replicated
>>         min_size 1
>>         max_size 10
>>         step take ams5-ssd
>>         step chooseleaf firstn 2 type host
>>         step emit
>>         step take ams6-ssd
>>         step chooseleaf firstn -2 type host
>>         step emit
>> }
>> rule ams6_ssd {
>>         ruleset 2
>>         type replicated
>>         min_size 1
>>         max_size 10
>>         step take ams6-ssd
>>         step chooseleaf firstn 2 type host
>>         step emit
>>         step take ams5-ssd
>>         step chooseleaf firstn -2 type host
>>         step emit
>> }
>> ------------------
>>
>> The replication size is set to 3.
>>
>> When for example ruleset 1 is used, how is the replication being done?
>> Does it store 2 replicas in ams5-ssd and 1 replica in ams6-ssd? Or does
>> it store 3 replicas in ams5-ssd and 3 replicas in ams6-ssd?
>>
>> Thanks!
>>
>> Sinan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
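
On the firstn arithmetic behind Sinan's question: in a replicated rule,
a positive argument to "chooseleaf firstn" selects that many buckets,
and a negative argument selects the pool's size minus that many. So
with size = 3, rule ams5_ssd picks 2 hosts under ams5-ssd and
3 - 2 = 1 host under ams6-ssd: 2 replicas in one DC and 1 in the other,
not 3 + 3.

A sketch of how to verify this with crushtool's test mode (the file
name is just an example, and "--rule 1" assumes ruleset 1 as in the
rules above):

    ceph osd getcrushmap -o crushmap.bin      # dump the cluster's compiled crush map
    crushtool -i crushmap.bin --test \
        --rule 1 --num-rep 3 --show-mappings  # print the OSDs the rule selects per sample input

Each output line lists the OSDs chosen for one sample input; with the
rules above you would expect the first two OSDs to sit under ams5-ssd
hosts and the third under an ams6-ssd host.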