OSD storage not balancing properly when crush map uses multiple device classes

Hi,

We are experimenting with manually created crush maps that pick one SSD
as the primary and two HDDs as replicas for each placement group. Since
CRUSH makes the first OSD emitted by the rule the primary, and reads are
normally served from the primary, this gives us great read performance;
and since all our HDDs have the DB & WAL on NVMe drives, write
performance is pretty good too, while keeping costs manageable for
hundreds of TB of storage.
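
For context, the rule gets attached to a pool in the usual way; the pool
name and PG counts below are just placeholders:

    # create a 3-way replicated pool governed by the 1ssd_2hdd rule
    ceph osd pool create hybrid_pool 1024 1024 replicated 1ssd_2hdd
    ceph osd pool set hybrid_pool size 3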

We have 16 nodes with ~300 HDDs and four separate nodes with 64 x 7.6 TB SSDs.

However, we're noticing that the usage on the SSDs isn't balanced at all:
it ranges from 26% to 52% per OSD, for no reason we can see (the balancer
is active and seems to be happy).
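
For reference, those numbers come from the standard tooling (sketch only;
we haven't pasted the full output here):

    # per-OSD utilisation (the %USE column), grouped by the CRUSH tree
    ceph osd df tree
    # check that the balancer is on and which mode it is running in
    ceph balancer status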


I suspect this might have to do with the placement groups now mixing
device classes (i.e., each PG uses 1x SSD and 2x HDD). Is there anything
we can do to achieve balanced SSD usage automatically?
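
In case it helps with diagnosis, here is a rough way to count how many
PGs have each SSD OSD as their primary (column layout assumed from
pgs_brief; adjust as needed). Even PG counts with uneven %USE would point
at PG size variance rather than placement:

    # with this rule the primary of every PG in the pool should be an SSD;
    # ACTING_PRIMARY is the last column of `ceph pg dump pgs_brief`
    ceph pg dump pgs_brief 2>/dev/null | tail -n +2 \
        | awk '{print $NF}' | sort -n | uniq -c | sort -rn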

I've included the crush rule below, just in case we screwed up something
there instead :-)


Cheers,

Erik


{
    "rule_id": 11,
    "rule_name": "1ssd_2hdd",
    "ruleset": 11,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -52,
            "item_name": "default~ssd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 1,
            "type": "host"
        },
        {
            "op": "emit"
        },
        {
            "op": "take",
            "item": -24,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": -1,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
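
For readability, here is the same rule transcribed into decompiled
crushmap syntax (double-check against what crushtool -d actually prints
for our map):

    rule 1ssd_2hdd {
        id 11
        type replicated
        min_size 1
        max_size 10
        # pick one SSD host and emit -> the first OSD becomes the primary
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        # fill the remaining (pool size - 1) replicas from HDD hosts
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
    }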

-- 
Erik Lindahl <erik.lindahl@xxxxxxxxx>
Science for Life Laboratory, Box 1031, 17121 Solna, Sweden


