Braindump: multiple clusters on the same hardware

You can run multiple Ceph clusters on the same hardware. Each
cluster gets completely separate monitors, OSDs (including separate
data disks and journals that are not shared between clusters), MDSs,
and so on.

This provides a higher level of isolation than e.g. just using
multiple RADOS pools and CRUSH rulesets.

Multiple-cluster support is exposed in all the Ceph commands, such
as ``ceph``, via the ``--cluster=`` option. For example, to see the
status of the cluster ``uswest1a``, run ``ceph --cluster=uswest1a
-s``. This reads ``/etc/ceph/uswest1a.conf`` instead of the default
``ceph.conf``, and uses the monitor addresses and keyrings specified
in that file. Inside the configuration file, the cluster name is
available as the ``$cluster`` variable. Cluster names must consist
of letters a-z and digits 0-9 only.
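
As a rough sketch, a minimal ``/etc/ceph/uswest1a.conf`` might look
like the following. The fsid, hostname, and address below are made up
for the example; note the non-default monitor port, which lets this
cluster's monitor coexist with a default cluster's monitor on the
same host:

    [global]
        ; unique per cluster; see the fsid note below
        fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
        ; $cluster expands to "uswest1a", keeping logs and
        ; data directories separate from other clusters
        log file = /var/log/ceph/$cluster-$name.log

    [mon.a]
        host = node1
        ; non-default port so a second cluster's monitor
        ; can run alongside a default one on this host
        mon addr = 192.168.0.10:6790
        mon data = /var/lib/ceph/mon/$cluster-a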

This also means that when preparing disks for OSD hotplugging, you may
want to specify a non-default cluster with ``ceph-disk-prepare
--cluster=NAME /dev/sdb``.
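
For example, assuming a cluster named ``uswest1a`` and a blank disk
at ``/dev/sdb`` (both illustrative):

    # prepare /dev/sdb as an OSD data disk for the
    # uswest1a cluster instead of the default "ceph"
    ceph-disk-prepare --cluster=uswest1a /dev/sdb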

To guard against typos in IP addresses and port numbers, make sure
each ``CLUSTER.conf`` file specifies an ``fsid``. You should also have
a distinct ``mon.`` key for each cluster. Following the documented
installation procedures ensures both are randomly generated for each
cluster.
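
If you are generating them by hand, a sketch using the stock tools
(the keyring path is illustrative):

    # fresh fsid to paste into the [global] section
    uuidgen

    # fresh, randomly generated mon. key for this cluster
    ceph-authtool --create-keyring /tmp/uswest1a.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'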

Current status: multi-cluster support has not been fully QA'ed, and
there are known issues (e.g. http://tracker.newdream.net/issues/3253
and http://tracker.newdream.net/issues/3277).