Re: adding SSD only pool to existing ceph cluster

On Mon, Sep 2, 2013 at 5:09 AM, Jens-Christian Fischer
<jens-christian.fischer@xxxxxxxxx> wrote:
> We have a ceph cluster with 64 OSD (3 TB SATA) disks on 10 servers, and run
> an OpenStack cluster.
>
> We are planning to move the images of the running VM instances from the
> physical machines to CephFS. Our plan is to add 10 SSDs (one in each server)
> and create a pool that is backed only by these SSDs and mount that pool in a
> specific location in CephFS.
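For the CephFS part of the plan, once such a pool exists it has to be
registered as a CephFS data pool and a directory pointed at it. A rough
sketch, assuming a pool named "ssd" and a client mount at /mnt/cephfs
(both placeholders; the exact invocation depends on your Ceph version):

--- cut ---
# allow CephFS to store file data in the new pool
ceph mds add_data_pool <pool-id>

# direct a directory (and files created below it) at that pool via the
# directory layout xattr; older releases may want the numeric pool id
setfattr -n ceph.dir.layout.pool -v ssd /mnt/cephfs/instances
--- cut ---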
>
> References perused:
>
> http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
>
> The difference between Sebastien's approach and the Ceph documentation's
> is that Sebastien has mixed SAS/SSD servers, while the Ceph documentation
> assumes servers with only one drive type.
>
> We have tried to replicate both approaches by manually editing the CRUSH map
> like so:
>
> Option 1)
>
> Create new "virtual" SSD-only servers in the CRUSH map (where we have a
> physical server h0, we'd add an h0-ssd for its SSD), together with a
> related server/rack/datacenter/root hierarchy:
>
> --- cut ---
> host s1-ssd {
>         id -15          # do not change unnecessarily
>         # weight 0.500
>         alg straw
>         hash 0  # rjenkins1
>         item osd.36 weight 0.500
> }
>
>
> rack cla-r71-ssd {
>         id -24          # do not change unnecessarily
>         # weight 2.500
>         alg straw
>         hash 0  # rjenkins1
>         item s0-ssd weight 0.000
>         item s1-ssd weight 0.500
> […]
>         item h5-ssd weight 0.000
> }
> root ssd {
>         id -25          # do not change unnecessarily
>         # weight 2.500
>         alg straw
>         hash 0  # rjenkins1
>         item cla-r71-ssd weight 2.500
> }
>
> rule ssd {
>         ruleset 3
>         type replicated
>         min_size 1
>         max_size 10
>         step take ssd
>         step chooseleaf firstn 0 type host
>         step emit
> }
>
> --- cut ---
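A rule like that only takes effect for pools that reference ruleset 3, so
after injecting the map a new RADOS pool still has to be created and
pointed at it; a minimal sketch (pool name and PG counts are placeholders):

--- cut ---
ceph osd pool create ssd 256 256
ceph osd pool set ssd crush_ruleset 3
--- cut ---

Existing pools keep their current ruleset, so only data written to the new
pool should land under the ssd root.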
>
> Option 2)
> Create two top-level CRUSH buckets (SATA and SSD) and list the relevant
> OSDs manually in each; the SSD one looks like this:
>
> --- cut ---
> pool ssd {
>         id -14          # do not change unnecessarily
>         # weight 2.500
>         alg straw
>         hash 0  # rjenkins1
>         item osd.36 weight 0.500
>         item osd.65 weight 0.500
>         item osd.66 weight 0.500
>         item osd.67 weight 0.500
>         item osd.68 weight 0.500
>         item osd.69 weight 0.500
> }
>
> --- cut ---
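One caveat with this flat layout: the OSDs sit directly under the bucket
with no host level, so a rule using it has to select leaves of type osd,
and CRUSH can then no longer guarantee that replicas end up on different
hosts. A sketch of such a rule (the ruleset number is a placeholder):

--- cut ---
rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step choose firstn 0 type osd
        step emit
}
--- cut ---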
>
>
> We extracted the CRUSH map, decompiled, changed, compiled and injected it.
> Neither attempt really seemed to work (™): we saw the cluster go into
> reshuffling mode immediately, probably due to the changed layout
> (OSD -> Host -> Rack -> Root) in both cases.
>
> We reverted to the original CRUSH map and the cluster has been quiet since
> then.
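For what it's worth, an edited map can be dry-run before the cluster ever
sees it: crushtool can simulate placements for a given rule, which helps
catch a rule that unexpectedly re-homes existing data. A sketch of the
usual round trip (file names and numbers are arbitrary):

--- cut ---
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt ...
crushtool -c crushmap.txt -o crushmap.new

# simulate placements for ruleset 3 with 2 replicas before injecting
crushtool -i crushmap.new --test --rule 3 --num-rep 2 --show-utilization

ceph osd setcrushmap -i crushmap.new
--- cut ---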
>
> Now the question: What is the best way to handle our use case?
>
> Add 10 SSD drives, create a separate pool with them, and don't upset the
> current pools (we don't want the "regular/existing" data to migrate towards
> the SSD pool, and no disruption of service)?

If you saw your existing data migrate, that means you changed its
hierarchy somehow. It sounds like maybe you reorganized your existing
nodes slightly, and that would certainly do it (although simply adding
single-node higher levels would not). It's also possible that you
introduced your SSD devices/hosts in a way that your existing data
pool rules believed they should make use of them (if, for instance,
your data pool rule starts out at root and you added your SSDs
underneath there). What you'll want to do is add a whole new root for
your SSD nodes, and then make the SSD pool rule (and only that rule)
start out there.
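Side by side, that looks roughly like the following in the decompiled map;
the rule names, ruleset numbers and the root name "default" are
illustrative and may differ in your cluster. The important part is that
the existing rule keeps taking the original root, so current pools have no
reason to move any data:

--- cut ---
# existing rule: unchanged, still starts at the original root
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# new rule: the only one that starts at the separate ssd root
rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
--- cut ---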
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




