Three MDSs total, all serving the primary/original fs:
[root@cephmon-03 ]# ceph fs status
cephfs1 - 8 clients
=======
+------+--------+-------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+-------------+---------------+-------+-------+
| 0 | active | cephmon-03 | Reqs: 0 /s | 160 | 163 |
| 1 | active | cephmon-02 | Reqs: 0 /s | 10 | 13 |
| 2 | active | cephmon-01 | Reqs: 0 /s | 10 | 13 |
+------+--------+-------------+---------------+-------+-------+
+---------------------+----------+-------+-------+
| Pool | type | used | avail |
+---------------------+----------+-------+-------+
| stp.cephfs_metadata | metadata | 344M | 24.8T |
| stp.cephfs_data | data | 11.1T | 24.8T |
+---------------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)
[root@cephmon-03 ]#
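Since all three daemons hold active ranks, there is no standby left, which is presumably what trips the warning. If a standby isn't actually wanted, my guess (untested on this cluster) is that telling cephfs1 not to expect one would clear it:

# untested guess: stop cephfs1 from expecting a standby
ceph fs set cephfs1 standby_count_wanted 0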
Sorry for the email formatting; I'm stuck with limited formatting options.
peter
On Mon, Mar 9, 2020 at 2:17 PM Nathan Fish <lordcirth@xxxxxxxxx> wrote:
How many MDSs do you have total, and how are they assigned? 'ceph fs status'.
On Mon, Mar 9, 2020 at 3:14 PM Peter Eisch <peter.eisch@xxxxxxxxxxxxxxx> wrote:
Hi, (nautilus, 14.2.8, whole cluster)
I doodled with adding a second cephfs, and the project got canceled. I removed the unused cephfs with "ceph fs rm dream --yes-i-really-mean-it", and that worked as expected. I have a lingering health warning, though, which won't clear.
The original cephfs1 volume exists and is healthy:
[root@cephmon-03]# ceph fs ls
name: cephfs1, metadata pool: stp.cephfs_metadata, data pools: [stp.cephfs_data ]
[root@cephmon-03]# ceph mds stat
cephfs1:3 {0=cephmon-03=up:active,1=cephmon-02=up:active,2=cephmon-01=up:active}
[root@cephmon-03]# ceph health detail
HEALTH_WARN insufficient standby MDS daemons available
MDS_INSUFFICIENT_STANDBY insufficient standby MDS daemons available
have 0; want 1 more
[root@cephmon-03]#
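If it helps narrow it down, I'm guessing the per-filesystem standby count is what's driving the "want 1 more"; something like this should show it (output not pasted here):

# assumption: standby_count_wanted shows up in the fs dump
ceph fs dump | grep standby_count_wanted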
I have not yet deleted the pools for 'dream', the second cephfs definition. (There is nothing in it.)
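When I do get around to dropping them, I assume the sequence would be roughly the following (pool names are placeholders, and mon_allow_pool_delete has to be enabled):

# pool names below are placeholders, not the real 'dream' pool names
ceph osd pool rm dream_metadata dream_metadata --yes-i-really-really-mean-it
ceph osd pool rm dream_data dream_data --yes-i-really-really-mean-it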
Before deleting the pools, is there a command to clear this warning?
Thanks!
peter
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx