Adding an SSD-only pool to an existing Ceph cluster

We have a Ceph cluster with 64 OSDs (3 TB SATA disks) spread over 10 servers, and we run an OpenStack cluster on top of it.

We are planning to move the images of the running VM instances from the physical machines to CephFS. Our plan is to add 10 SSDs (one in each server), create a pool that is backed only by these SSDs, and expose that pool at a specific directory in CephFS.
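
For the CephFS part, our understanding is that the new pool would be added as an additional data pool and then assigned to a directory via its file layout. A rough sketch of what we have in mind (the pool name "ssd-pool" and the path /cephfs/vm-images are just placeholders):

--- cut ---
# add the (yet to be created) SSD pool as an additional CephFS data pool
# (older releases may want the pool id here instead of the name)
ceph mds add_data_pool ssd-pool

# point a directory at the SSD pool via its layout; new files created
# below this directory should then be stored in the SSD pool
# (needs a client recent enough to support the layout vxattrs)
setfattr -n ceph.dir.layout.pool -v ssd-pool /cephfs/vm-images
--- cut ---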

References perused: Sebastien Han's blog post on separating SATA and SSD OSDs, and the CRUSH section of the Ceph documentation.


The difference between Sebastien's approach and the one in the Ceph documentation is that Sebastien has mixed SAS/SSD servers, while the Ceph documentation assumes servers that are either all SAS or all SSD.

We have tried to replicate both approaches by manually editing the CRUSH map like so:

Option 1)

Create new "virtual" SSD only servers (where we have a h0 physical server, we'd set a h0-ssd for the ssd) in the CRUSH map, together with a related server/rack/datacenter/root hierarchy

--- cut ---
host s1-ssd {
        id -15          # do not change unnecessarily
        # weight 0.500
        alg straw
        hash 0  # rjenkins1
        item osd.36 weight 0.500
}

rack cla-r71-ssd {
        id -24          # do not change unnecessarily
        # weight 2.500
        alg straw
        hash 0  # rjenkins1
        item s0-ssd weight 0.000
        item s1-ssd weight 0.500
[…]
        item h5-ssd weight 0.000
}
root ssd {
        id -25          # do not change unnecessarily
        # weight 2.500
        alg straw
        hash 0  # rjenkins1
        item cla-r71-ssd weight 2.500
}

rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

--- cut ---
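
If we went this route, we assume the new SSD OSDs would have to be placed under the *-ssd hosts explicitly (and kept there across restarts), and a RADOS pool created on top of ruleset 3. Something like the following, where the pool name, PG count and weights are only illustrative:

--- cut ---
# ceph.conf, [osd] section: keep the SSD OSDs from being moved back into
# the default hierarchy when the OSD daemons start
osd crush update on start = false

# place an SSD OSD under its virtual host in the ssd root
# (use "ceph osd crush set" instead if the OSD is already in the map)
ceph osd crush add osd.36 0.5 root=ssd rack=cla-r71-ssd host=s1-ssd

# create the pool and point it at the ssd rule (ruleset 3 above)
ceph osd pool create ssd-pool 512 512
ceph osd pool set ssd-pool crush_ruleset 3
--- cut ---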

Option 2)
Create two buckets (sata and ssd) at the root of the CRUSH map and list the SSD OSDs in the ssd bucket manually (only the ssd bucket is shown here):

--- cut ---
pool ssd {
        id -14          # do not change unnecessarily
        # weight 2.500
        alg straw
        hash 0  # rjenkins1
        item osd.36 weight 0.500
        item osd.65 weight 0.500
        item osd.66 weight 0.500
        item osd.67 weight 0.500
        item osd.68 weight 0.500
        item osd.69 weight 0.500
}

--- cut ---
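
Since there is no host level underneath such a flat bucket, we assume the matching rule would have to choose leaves of type osd directly, roughly like this (ruleset 4 is just an unused number we picked):

--- cut ---
rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        # note: with a flat bucket like this, nothing prevents two replicas
        # from ending up on SSDs in the same physical server
        step chooseleaf firstn 0 type osd
        step emit
}
--- cut ---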


We extracted the CRUSH map, decompiled it, made our changes, recompiled it and injected it back. Neither attempt really seemed to work: in both cases the cluster immediately went into reshuffling mode, probably due to the changed layout (OSD -> host -> rack -> root).
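
For completeness, this is the edit cycle we used (file names are ours):

--- cut ---
ceph osd getcrushmap -o crushmap.bin        # extract
crushtool -d crushmap.bin -o crushmap.txt   # decompile
# ... edit crushmap.txt as shown above ...
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject
--- cut ---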

We reverted to the original CRUSH map and the cluster has been quiet since then.

Now the question: What is the best way to handle our use case?

Add 10 SSD drives and create a separate pool on them without upsetting the current pools: we don't want the "regular"/existing data to migrate towards the SSD pool, and we want no disruption of service.
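
One thing we are wondering: if we understand crushtool correctly, we could dry-run a modified map before injecting it and check that the existing rules still map objects to the same OSDs, along these lines (rule number and replica count are just examples):

--- cut ---
# map a range of test objects with the existing replicated rule on the
# old and the new map; identical output should mean the existing pools
# would not reshuffle
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings > before.txt
crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings > after.txt
diff before.txt after.txt
--- cut ---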

thanks
Jens-Christian
 
-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fischer@xxxxxxxxx

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
