Re: Clusters and pools

On 07/27/2012 03:25 AM, Ryan Nicholson wrote:
Thanks, Wido, for your help. Here's another question for the group:

I've created 2 new RADOS pools:

SCSI and
Large

I've also REMOVED the 'data' pool. <-- not sure if this is an issue.

How do I set up mount points? I wish to mount SCSI as CephFS, or Large as CephFS. My goal, from the client's perspective, is to treat these two entities as separate. In production only RBD's will be on SCSI and the CephFS data would live on Large, but I do want to learn how to configure this before I take the system out of testing.

I wasn't sure if there is an option for mount.ceph that allows me to select the pool; say, x.x.x.x:6789:??:/ or otherwise?


I'm not completely sure about this one, but the "data" pool is used by the filesystem.

There is only one tree for the filesystem, but sub-directories can be given separate pools.
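
As far as I know there is no pool option for mount.ceph; you mount the one filesystem, optionally a subdirectory of it. Roughly like this (the monitor address and paths are placeholders, auth options omitted):

$ mount -t ceph x.x.x.x:6789:/ /mnt/ceph
$ mount -t ceph x.x.x.x:6789:/some/subdir /mnt/something

The second form mounts only a subdirectory of the tree.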

So, create the "data" pool again and create directories called "scsi" and "large".

With the cephfs tool you should then be able to specify that all the data in the "scsi" directory goes into the SCSI pool and all the data in the "large" directory goes into the "Large" pool.

I haven't done this myself yet, so I'm not completely sure about the syntax for the cephfs tool.
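
From memory it would be something along these lines; the mountpoint, PG count and pool IDs are placeholders, and the exact set_layout options are an assumption, so check the cephfs man page:

$ ceph osd pool create data 128
$ mkdir /mnt/ceph/scsi /mnt/ceph/large
$ ceph osd dump | grep pool          # note the numeric IDs of the SCSI and Large pools
$ cephfs /mnt/ceph/scsi set_layout --pool <id of SCSI>
$ cephfs /mnt/ceph/large set_layout --pool <id of Large>

New files written under each directory should then end up in the corresponding pool.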

Wido

Thanks!

Ryan


-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Wido den Hollander
Sent: Wednesday, July 25, 2012 10:02 AM
To: Ryan Nicholson
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: Clusters and pools



On 07/25/2012 06:34 AM, Ryan Nicholson wrote:
I'm running a cluster based on 4 hosts that each have 3 fast SCSI OSD's and 1 very large SATA OSD, meaning 12 fast OSD's and 4 slow OSD's in total. I wish to segregate these into 2 pools that operate independently. The goal is to use the faster disks as an area to hold RBD-based VM's, and the larger area to host RBD-based large volumes (to start), possibly becoming just a big CephFS area once the fs side of things is considered more stable.

Now, I've been given a couple of options and am still unsettled. Which is best?
- Create 2 independent clusters: one with the 12 SCSI OSD's and the other with just the 4 large OSD's, on the same hosts. This seems more complex from a scripting and boot-time standpoint, but easier for my head.
- Create a single cluster and use CRUSH rules to separate the two. This one STILL has me lost, as I'm having trouble understanding the crushmap syntax, the crushmap import/export commands, and the mkpool and related commands from the docs, in order to say "RBD's, you come from this faster pool" and "CephFS, you come from this slower pool". I really would like to entertain this path, however, as it lets Ceph handle the entire situation, and it seems more elegant.

I'm also open to other options as well.

The "easiest" way to approach this:

Set up the cluster with the 12 fast OSD's first and leave the other 4 out of the configuration.

Get everything up and running and play with it.

Then, add the 4 remaining OSD's to the cluster (a rough sketch of these steps follows the list):
1. Add them to ceph.conf
2. Increment max_osd
3. Add them to the keyring
4. Format the OSD's
5. Start the OSD's
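
As a rough sketch of those steps, with made-up OSD numbers, hostnames and paths:

In ceph.conf, for each new OSD:

[osd.12]
        host = hostA
        osd data = /srv/osd.12
        osd journal = /srv/osd.12/journal

Then:

$ ceph osd setmaxosd 16
$ ceph-osd -i 12 --mkfs --mkkey
$ ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /srv/osd.12/keyring
$ service ceph start osd.12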

Now they should show up in your "ceph -s" output, but no data will go to them.

The next step is to export your current crushmap:

$ ceph osd getcrushmap -o crushmap
$ crushtool -d crushmap -o crushmap.txt

You should now add 4 new hosts to the crushmap, something like "hostA-slow", and add one OSD under each of them.

Now you can add a new rack called "slowrbd", for example, and afterwards add a new pool and a new rule.
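
For illustration, a fragment of what that could look like in crushmap.txt; the IDs, weights and names here are made up, and only one of the four slow hosts is shown:

host hostA-slow {
        id -10                  # pick an id that is not in use yet
        alg straw
        hash 0  # rjenkins1
        item osd.12 weight 1.000
}

rack slowrbd {
        id -20
        alg straw
        hash 0  # rjenkins1
        item hostA-slow weight 1.000
        # hostB-slow, hostC-slow and hostD-slow go here as well
}

rule slowrbd {
        ruleset 3               # any unused ruleset number
        type replicated
        min_size 1
        max_size 10
        step take slowrbd
        step chooseleaf firstn 0 type host
        step emit
}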

Compile crushmap.txt back again to "crushmap" and load it into the cluster.
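
In the same style as the export above, that would be:

$ crushtool -c crushmap.txt -o crushmap
$ ceph osd setcrushmap -i crushmap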

You can now create a new pool with a specific CRUSH rule.
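
For example (the pool name, PG count and ruleset number are placeholders):

$ ceph osd pool create slow 128
$ ceph osd pool set slow crush_ruleset 3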

All the data in that pool will go onto those 4 slower OSD's.

Wido


Thanks!

Ryan Nicholson





