Re: Multiple CephFS creation

Hi,

To create a second filesystem you have to use different pools anyway.
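
If you haven't created them yet, the rough procedure looks like this (the pool names, PG counts and filesystem name below are just placeholders; on releases that still guard this feature you also have to enable multiple filesystems explicitly):

---snip---
# placeholder pool names and PG counts, adjust to your cluster
ceph osd pool create cephfs2_metadata 32
ceph osd pool create cephfs2_data 32
# multiple filesystems have to be enabled explicitly
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph fs new cephfs2 cephfs2_metadata cephfs2_data
---snip---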

If you already have one CephFS up and running then you should also have at least one standby daemon, right? If you create a new FS and that standby daemon is not configured for any specific rank or filesystem, it will be picked up by the second filesystem. You then have two active MDS daemons, each holding rank 0 (active) of its own filesystem:

---snip---
ceph:~ # ceph fs status
cephfs - 1 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | host6 | Reqs:    0 /s |   10  |   13  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 92.0G |
|   cephfs_data   |   data   | 5053M | 92.0G |
+-----------------+----------+-------+-------+
cephfs2 - 0 clients
=======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | host5 | Reqs:    0 /s |   10  |   13  |
+------+--------+-------+---------------+-------+-------+
+------------------+----------+-------+-------+
|       Pool       |   type   |  used | avail |
+------------------+----------+-------+-------+
| cephfs2_metadata | metadata | 1536k | 92.0G |
|   cephfs2_data   |   data   |    0  | 92.0G |
+------------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
---snip---

For the standby daemon you have to be aware of this:

By default, if none of these settings are used, all MDS daemons which do not hold a rank will be used as 'standbys' for any rank.
[...]
When a daemon has entered the standby replay state, it will only be used as a standby for the rank that it is following. If another rank fails, this standby replay daemon will not be used as a replacement, even if no other standbys are available.

Some of the settings mentioned there are, for example:

mds_standby_for_rank
mds_standby_for_name
mds_standby_for_fscid
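
Those go into the MDS section of ceph.conf. A minimal sketch, assuming a (hypothetical) third daemon mds.host7 that should only act as standby for host6:

---snip---
# ceph.conf on the node running the hypothetical standby mds.host7
[mds.host7]
# only stand by for the daemon named host6
mds_standby_for_name = host6
# alternatively, follow a specific rank of a specific filesystem
# (fscid 1 is just an example value):
#mds_standby_for_rank  = 0
#mds_standby_for_fscid = 1
---snip---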

The easiest way is to have one standby daemon per CephFS and let them handle the failover.

Regards,
Eugen


Quoting Jarett DeAngelis <jarett@xxxxxxxxxxxx>:

Hi guys,

This is documented as an experimental feature, but it doesn’t explain how to ensure that affinity for a given MDS sticks to the second filesystem you create. Has anyone had success implementing a second CephFS? In my case it will be based on a completely different pool from my first one.

Thanks.
J

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
