Updating CRUSH Tunables to Jewel from Hammer

I have a staging cluster with 4 HDDs and one SSD in each host.  I have an EC profile that specifically targets HDDs for placement, plus several replicated pools that write to either HDD or SSD.  This has all worked well for a while.  When I updated the CRUSH tunables on the cluster from hammer to jewel, the EC pool suddenly started placing its data on the SSDs and filling them up.  Setting the tunables back to hammer reverts the change and all is well again.

The odd part is that it isn't mixing data across the HDDs and SSDs; it moves the data entirely onto the SSDs and off the HDDs.  Has anyone else experienced this, or does anyone know why it chooses to place the EC PGs on the wrong device class?
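For reference, this is roughly what I'm doing to flip between the profiles and to see where the PGs land afterwards (I'm assuming here that the pool name matches the rule name below):

# Show which tunable profile is currently in effect
ceph osd crush show-tunables

# Switching profiles; this is what triggers the misplacement
# (it also triggers a rebalance, so beware on a busy cluster)
ceph osd crush tunables jewel
ceph osd crush tunables hammer

# Check which OSDs the EC pool's PGs map to afterwards
ceph pg ls-by-pool local-stage.rgw.buckets.data
ceph osd df tree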

This is the rule in question [1].


[1]
{
    "rule_id": 2,
    "rule_name": "local-stage.rgw.buckets.data",
    "ruleset": 2,
    "type": 3,
    "min_size": 3,
    "max_size": 5,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -24,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_indep",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
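To compare the mappings this rule produces under the two profiles offline, rather than rebalancing the live cluster again, something like the following should work with crushtool.  I'm assuming --num-rep 5 matches the EC profile's k+m (the rule's max_size is 5); chooseleaf_stable is the tunable the jewel profile adds on top of hammer:

# Grab the compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# The map as pulled (currently hammer tunables): simulate rule 2
crushtool -i crushmap.bin --test --rule 2 --num-rep 5 --show-mappings

# Flip on the tunable that jewel adds on top of hammer, then re-test
crushtool -i crushmap.bin --set-chooseleaf-stable 1 -o crushmap.jewel
crushtool -i crushmap.jewel --test --rule 2 --num-rep 5 --show-mappings

If the second run maps PGs to SSD OSDs while the first stays on HDDs, that would at least confirm it's this tunable that changes the placement for the default~hdd shadow tree.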